arXiv:2308.09355 · Saurabh Kumar Singh, Pranav R. Shirhatti · Published 2023-08-18 · [http://arxiv.org/abs/2308.09355v1](http://arxiv.org/abs/2308.09355v1)
# The curious case of CO\({}_{2}\) dissociation on Cu(110)

###### Abstract

Dissociation of CO\({}_{2}\) on copper surfaces, a model system for understanding the elementary steps in the catalytic conversion of CO\({}_{2}\) to methanol, has been extensively studied in the past and is thought to be reasonably well understood from both experiment and theory. In contrast, our findings reported here suggest a different picture. Using molecular beam surface scattering methods, we measure the initial dissociation probabilities (\(S_{0}\)) of CO\({}_{2}\) on a flat, clean Cu(110) surface under ultra-high vacuum conditions. The observed \(S_{0}\) ranges from \(3.9\times 10^{-4}\) to \(1.8\times 10^{-2}\) at incidence energies of 0.64 eV to 1.59 eV, with a lower limit to the dissociation barrier estimated to be around 2.0 eV, much larger than previously understood. We discuss the possible reasons behind such large differences between our results and previous work. These findings are anticipated to be extremely important for obtaining a correct understanding of the elementary steps in CO\({}_{2}\) dissociation on Cu surfaces.

**Keywords: activated dissociation, CO\({}_{2}\), molecular beam, dissociation barrier, Cu(110)**

## Introduction

The environmental impact of CO\({}_{2}\) production from the use of fossil fuels is understood to be a key contributor to the global climate crisis [1]. While this problem is extremely complex and multi-faceted, one of the proposed strategies for dealing with it is Carbon Dioxide Capture, Utilization, and Storage [2]. In this context, the conversion of CO\({}_{2}\) to methanol (CH\({}_{3}\)OH) has been of particular interest. The methanol produced can be used directly as a fuel and also as a feedstock for the chemical industry, thereby enabling carbon recycling and reducing the damaging impact of increasing CO\({}_{2}\) emissions [2, 3]. CO\({}_{2}\) being a stable molecule from both a thermodynamic and a kinetic standpoint [4], its chemical transformation is challenging and requires the selection of proper co-reactants and catalysts in order to achieve sufficient efficiency. Copper/zinc oxide/alumina-supported catalysts, with H\({}_{2}\) and CO as co-reactants, are commonly used in industrial processes for CO\({}_{2}\) conversion to methanol. Based on previous work, CO\({}_{2}\) has been identified as the main source of carbon in methanol formation, and CO\({}_{2}\) dissociation on the catalyst surface is understood to be a key step in the overall reaction scheme [5, 6]. Understandably, the interaction of CO\({}_{2}\) with well-defined Cu single crystals has been used extensively as a model system to gain insight into the elementary steps of this catalytic process. Among the low-index planes of crystalline copper, Cu(110) has been of particular interest, as the catalytic activity is observed to follow the order Cu(110) \(>\) Cu(100) \(>\) Cu(111) [7]. The energy barriers for CO\({}_{2}\) dissociation on Cu(110) and Cu(100) single-crystal surfaces have been reported to be 0.64 eV and 0.96 eV, respectively [8, 9]. These measurements were performed using clean single-crystalline copper surfaces exposed to high pressures of CO\({}_{2}\). The O-atom coverage and the initial sticking probability (\(S_{0}\)) resulting from CO\({}_{2}\) dissociation, measured at different temperatures, were used to determine the dissociation barrier.
The dissociation barriers for Cu(110) and Cu(100) calculated using density functional theory (DFT) based methods, assuming CO\({}_{2}\) to be interacting with idealized flat single-crystal surfaces, agree very well with the above values [10, 11]. Interestingly, in the case of Cu(111), where the catalytic activity is much lower and, as a result, direct experimental results are not available, the situation is not as clear. The reported values of the dissociation barrier obtained using DFT-based methods show a rather large spread: 1.69 eV [12], 1.33 eV [10], and 0.93 eV [11]. Nonetheless, the overall trend in the computed dissociation barriers [13] is consistent with the experimental observations and with the general understanding that more open surfaces, such as Cu(110), will have higher activity than their close-packed counterparts. This is further supported by the fact that both experimental and computational studies on high-index planes of Cu crystals, where the step densities are expected to be much higher, find much lower dissociation barriers [10, 14, 15].

Given the above considerations, it is tempting to think that the model system of CO\({}_{2}\) interacting with well-defined Cu single crystals is well understood and can serve as a platform for building our understanding of realistic catalytic processes. However, a closer look at the existing literature shows that a few essential questions of fundamental importance have largely remained unanswered. These mainly concern the precise magnitude of the dissociation probabilities, their dependence on incidence energy, and the magnitude of the dissociation barrier on terrace and step sites. Such information is of crucial importance for validating the prevailing microscopic picture underlying CO\({}_{2}\) dissociation on Cu surfaces and the estimates obtained from theoretical/computational approaches. In light of this, we have carried out detailed measurements of the dissociation probabilities of CO\({}_{2}\) on Cu(110) using molecular beams under UHV conditions. Absolute dissociation probabilities, measured as a function of the incidence energy, are presented along with an estimate of the lower bound to the dissociation barrier. Strikingly, our results show that the dissociation barrier is higher by at least a factor of about 3 when compared to the currently accepted value. We present likely hypotheses explaining these large deviations, along with a discussion of the broader implications of our results for the prevailing understanding of CO\({}_{2}\) dissociation on Cu surfaces in general.

## Results and Discussion

The present studies of \(S_{0}\) on a Cu(110) surface were performed using molecular beam-surface scattering (see methods). Figure 1 (left) shows an example of the CO\({}_{2}\) and H\({}_{2}\) partial pressure changes observed in the UHV chamber housing the Cu(110) single crystal upon turning on the molecular beam. In this example, a molecular beam of 1.5% CO\({}_{2}\) seeded in H\({}_{2}\), with an estimated incidence energy (\(E_{\rm i}\)) of 1.40 eV, was used. Auger electron spectra (AES), recorded before and after exposing a clean Cu(110) surface to the molecular beam of 1.5% CO\({}_{2}\) in H\({}_{2}\) (250 ML dose, surface temperature \(T_{\rm s}\) = 300 K), are depicted by the red and green curves, respectively (right panel).
A clear increase in the AES signal at 503 eV was observed (inset) after exposure to the CO\({}_{2}\) beam, indicating a build-up of O-atom coverage resulting from dissociation of the incident CO\({}_{2}\) on the surface. Since O\({}_{2}\) is also known to react readily with the Cu(110) surface, with a reported \(S_{0}\) of 0.23 (\(E_{\rm i}<\) 50 meV) [16, 17, 18], we also checked for any O-atom coverage build-up caused by background gas in the course of our measurements. This was estimated by measuring the AES signal at a nearby location (3 mm away from the dosing region) on the crystal, not exposed to the CO\({}_{2}\) beam (blue), at the end of the last dosing cycle (figure 1, right). Throughout this study, we carefully monitored the background oxygen build-up in each set of measurements, and its value remained below 5% of the saturation O-atom coverage (see SI-1). All O-atom coverage curves shown subsequently have been corrected for this small background signal. Further, with a pure beam of CO\({}_{2}\) we observed no build-up of oxygen coverage, thereby ruling out any noticeable oxygen contamination in the incident molecular beam. Additionally, a small carbon coverage (272 eV), presumably due to background hydrocarbon adsorption, was observed at long dosing times of the order of \(2\times 10^{3}\) seconds. We estimate the maximum carbon coverage in such cases to be less than \(\sim 2\%\) of a ML (see SI-2). Given its small value, we assume it to be of little consequence for the measurements of CO\({}_{2}\) dissociation under consideration.

CO\({}_{2}\) dissociation on a clean Cu(110) surface will result in CO and O formation. Given that CO molecules are known to desorb from the Cu(110) surface at temperatures \(>200\) K [19], and that oxygen binds very strongly to the Cu(110) surface, with the adsorbed layer remaining intact even at much higher surface temperatures \(>770\) K [14, 20], we conclude that this O-atom coverage build-up results from dissociation of the incident CO\({}_{2}\). Finally, by measuring the surface O-atom coverage as a function of the incident CO\({}_{2}\) dose (see methods section and SI-3 to SI-5), we estimated the initial sticking probability (\(S_{0}\)) for CO\({}_{2}\) dissociation at different incidence energies.

Figure 2 (left) shows the AES signal measured as a function of CO\({}_{2}\) dose (\(E_{\text{i}}\) = 1.15 eV, incidence angle \(\theta_{\text{i}}\) = 0\({}^{\circ}\)), ranging from 0 ML (clean surface) to 1170 ML (saturation). A clear trend of increasing surface O-atom coverage with CO\({}_{2}\) dose can be seen. A quantitative analysis of this trend was obtained by plotting the ratio of the oxygen to Cu peak-to-peak signals (background subtracted) as a function of incident CO\({}_{2}\) dose, as depicted in figure 2 (right). Notably, the O/Cu ratio reaches a value of 0.205 \(\pm\) 0.005 at saturation coverage, which is the same as that obtained by dosing pure O\({}_{2}\) (until saturation) on the same surface, measured independently. Based on several previous studies using AES and low-energy electron diffraction, it is well established that the saturation O-atom coverage corresponds to 0.5 ML, owing to the 2\(\times\)1 structure of the O-covered Cu(110) surface [16, 21]. This firmly establishes that the O-atom coverage observed under our measurement conditions remained unaffected by the CO + O recombination reaction and by any unwanted reactions caused by the carrier gas (H\({}_{2}\)) or the background gas.
Further, given that the saturation coverage corresponds to 0.5 ML, we convert the ratio of AES signals to surface O-atom coverage (\(\Theta\)), as shown in figure 2 (right). Here, the surface atom density of Cu(110) was taken to be \(1.08\times 10^{15}\) atoms/cm\({}^{2}\) [22].

Figure 1: (left) Partial pressure changes in the UHV chamber, monitored using a mass spectrometer when the incident molecular beam was turned on. (right) Auger electron spectra of the Cu(110) surface measured after cleaning (red) and after CO\({}_{2}\) dosing of 250 ML on an annealed surface (green). The background oxygen coverage build-up, measured at a 3 mm distance away from the position of CO\({}_{2}\) dosing, is shown in blue. Peaks at 503 eV correspond to adsorbed oxygen, while peaks in the 700–920 eV region correspond to Cu. The inset shows a zoomed-in view of the oxygen peaks after dosing. The maximum background oxygen build-up was estimated to be \(<5\%\) of saturation coverage in all our measurements.

The surface O-atom coverage build-up as a function of the incident CO\({}_{2}\) dose was observed to be consistent with a simple first-order kinetics model, described by the equation \(\Theta=0.5\,(1-e^{-k\phi_{\mathrm{i}}})\), where \(\phi_{\mathrm{i}}\) corresponds to the incident CO\({}_{2}\) dose (time-integrated incident flux) and the saturation coverage is set to 0.5 ML. The slope of this function in the zero-coverage limit (\(0.5\times k\)) gives the initial dissociative sticking probability (\(S_{0}\)) of CO\({}_{2}\) on Cu(110).
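To make the fitting procedure concrete, the short sketch below extracts \(S_{0}\) from a dose-coverage curve using this first-order model. The coverage values used here are hypothetical, for illustration only; they are not the measured data.

```python
import numpy as np
from scipy.optimize import curve_fit

def coverage(phi, k, theta_sat=0.5):
    """First-order kinetics model: Theta = 0.5 * (1 - exp(-k * phi))."""
    return theta_sat * (1.0 - np.exp(-k * phi))

# Hypothetical dose (ML) / O-atom coverage (ML) pairs, for illustration only.
dose = np.array([0.0, 50.0, 100.0, 200.0, 400.0, 800.0, 1200.0])
theta = np.array([0.0, 0.10, 0.18, 0.29, 0.42, 0.48, 0.50])

(k_fit,), _ = curve_fit(coverage, dose, theta, p0=[1e-3])
s0 = 0.5 * k_fit  # initial slope dTheta/dphi at phi = 0
print(f"k = {k_fit:.3e} per ML, S0 = {s0:.3e}")
```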
**Incident translational energy dependence:** Dissociative chemisorption of CO\({}_{2}\) on Cu(110) was investigated over a range of translational energies spanning from 0.098 eV (100% CO\({}_{2}\)) to 1.59 eV (0.75% CO\({}_{2}\) in H\({}_{2}\)). Figure 3 shows the surface O-atom coverage (in ML) measured as a function of incident CO\({}_{2}\) dose (in ML) for seven different translational energies, along with the best-fit curves. These measurements were carried out with \(T_{\mathrm{s}}\) = 300 K and \(\theta_{\mathrm{i}}\) = 0\({}^{\circ}\). As seen in figure 3, all the curves (except \(E_{\mathrm{i}}\) = 0.098 eV) follow a similar pattern and approach the same saturation level of 0.5 ML O-atom coverage. For the lowest incidence energy of 0.098 eV, even upon dosing the surface with 1200 ML of CO\({}_{2}\), the O-atom coverage remained indistinguishable from the background. Hence we conclude that \(S_{0}\) at 0.098 eV is below the detection sensitivity of our measurements, approximately \(1.6\times 10^{-5}\), limited by the background oxygen coverage build-up in long dosing experiments. Most importantly, the initial slope increases with increasing \(E_{\mathrm{i}}\), which is a clear signature of translationally activated dissociation.

The \(S_{0}\) values derived from the initial slopes of the curves in figure 3 are plotted in figure 4 (left) against the translational energy associated with the normal component of the incident momentum, \(E_{\mathrm{n}}=E_{\mathrm{i}}\cos^{2}\theta_{\mathrm{i}}\). Black circles depict \(S_{0}\) values for measurements carried out at \(\theta_{\mathrm{i}}\) = 0\({}^{\circ}\), and the red triangle refers to that obtained at \(\theta_{\mathrm{i}}\) = 19\({}^{\circ}\) (\(E_{\mathrm{i}}\) = 1.59 eV, \(E_{\mathrm{n}}\) = 1.42 eV). The blue dashed curve depicts an empirical fit function in the form of an S-shaped curve (discussed below).

With increasing \(E_{\mathrm{n}}\) in the range of 0.64 to 1.59 eV, \(S_{0}\) increased from \(3.9\times 10^{-4}\) to \(1.8\times 10^{-2}\) (also see table 1). The measurement at \(\theta_{\mathrm{i}}\) = 19\({}^{\circ}\) (red triangle) is consistent with the trend seen for the measurements performed at normal incidence, suggesting that only the normal component of the momentum (and the associated translational energy) is relevant for overcoming the dissociation barrier. This indicates that a simple one-dimensional barrier model can be used to understand this system. In this case, the overall sticking probability can be expressed as:

\[S_{0}(E,T)=\sum_{v}F_{\mathrm{B}}(v,T)\cdot S_{0}(v) \tag{1}\]

where \(F_{\mathrm{B}}(v,T)\) is the population of the different vibrational states at a given vibrational temperature (\(T\)) of the incident beam, and \(S_{0}(v)\) is the vibrational-state-specific initial dissociation probability. Given that the nozzle in our experiments is at room temperature (300 K), to a good approximation \(F_{\mathrm{B}}(v=0,T)=1\), i.e. the population of the higher vibrational states can be considered much smaller than that of the ground state.

Figure 2: (left) Auger electron spectra of the Cu(110) surface measured at different incident doses of CO\({}_{2}\). (right) O-atom coverage build-up with increasing CO\({}_{2}\) dose. The coverage estimation was made using the AES peak ratio of O (503 eV) and Cu (776 eV). The red curve is the best fit using a first-order kinetics model.

Figure 3: A combined plot depicting the O-atom coverage build-up (in ML) on the Cu(110) surface as a function of incident CO\({}_{2}\) dose (in ML), measured for different translational energies. The inset shows a zoomed-in view of the initial kinetics. At the lowest energy, we are unable to observe any O-atom coverage build-up. Clearly, \(S_{0}\) increases with incident translational energy, indicating translationally activated dissociation.

Figure 4: (left) A plot of \(S_{0}\) obtained at different \(E_{\mathrm{n}}\) of CO\({}_{2}\). Black circles show the measurements at normal incidence and the red triangle corresponds to a measurement at \(\theta=19^{\circ}\). (right) The same points (on a linear scale) are shown along with the best fits using S-shaped curves with different values of \(A\) of 0.1 (blue), 0.5 (red), and 1 (green). Even for the lowest \(A\) (0.1), \(E_{0}\) comes out to be quite large, at 2.0 eV.

The expression for \(S_{0}(v)\) is given by an S-shaped curve (equation 2). Here, the saturation value of \(S_{0}\) is given by \(A\), \(E_{0}\) corresponds to the dissociation barrier height (also the midpoint of the curve), and \(W_{0}\) describes the distribution of the barrier heights.

\[S_{0}(v)=\frac{A}{2}\left\{1+\mathrm{erf}\left[\frac{E_{\mathrm{n}}-E_{0}}{W_{0}}\right]\right\} \tag{2}\]

Given that, even at the highest incidence energy used in our measurements, the reaction probability is far from reaching its maximum value (it is still increasing), a precise estimation of the best-fit parameters is not possible at this stage. Nonetheless, reasonably good estimates for the lower and upper limits of the barrier height can still be made (see figure 4, right).
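To make the barrier-model analysis concrete, the sketch below fits equation (2) to the measured \((E_{\mathrm{n}}, S_{0})\) points from table 1, with the saturation value \(A\) held fixed. Fitting \(\log_{10}S_{0}\) is a choice made here because the data span two orders of magnitude; it is not necessarily the weighting used in the actual analysis, and the initial guesses are our assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

# (E_n in eV, S0) from table 1; the 19 deg point uses
# E_n = E_i * cos^2(theta_i) = 1.59 * cos^2(19 deg) ~ 1.42 eV.
E_n = np.array([0.64, 0.73, 1.01, 1.15, 1.40, 1.43, 1.59])
S0 = np.array([3.9e-4, 5.6e-4, 1.5e-3, 2.2e-3, 8.1e-3, 1.3e-2, 1.8e-2])

def log_s_curve(E, E0, W0, A=0.1):
    """log10 of equation (2): S0 = (A/2) * (1 + erf((E_n - E0) / W0))."""
    s = 0.5 * A * (1.0 + erf((E - E0) / W0))
    return np.log10(np.clip(s, 1e-12, None))

(E0_fit, W0_fit), _ = curve_fit(log_s_curve, E_n, np.log10(S0), p0=[2.0, 0.8])
print(f"A fixed at 0.1: E0 = {E0_fit:.2f} eV, W0 = {W0_fit:.2f} eV")
# The paper reports E0 ~ 2.0 eV for A = 0.1, rising to 2.6 eV (A = 0.5)
# and 2.9 eV (A = 1); change the fixed A to reproduce that trend.
```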
If one considers the maximum value of \(S_{0}\) to be similar to that observed here, the activation barrier comes out to be around 1.4 eV. However, given that \(S_{0}\) is still increasing even at the highest \(E_{\mathrm{n}}\) used, a more realistic estimate of the maximum \(S_{0}\) would be around 0.1. The resulting \(E_{0}\) in this case is 2.0 eV (blue dashed curve, figure 4, right). Further, assuming the maximum \(S_{0}\) to be 0.5 (red curve) or 1 (green curve), the estimated dissociation barriers come out to be 2.6 and 2.9 eV, respectively. In summary, even at incidence energies \(>1.5\) eV, the \(S_{0}\) values are low, of the order of \(10^{-2}\), and the dissociation barrier is estimated to be at least of the order of 2 eV. It is worth pointing out that, according to previous work [23], no dissociative chemisorption was observed at \(E_{\mathrm{n}}\) = 1.3 eV using a heated nozzle at 750 K. Even more striking is the fact that the dissociation barrier estimated from our observations is much larger than the 0.64 eV reported previously [8, 11]. In the following discussion, we present a few hypotheses that could possibly rationalize these large differences.

**Understanding the activated dissociation of CO\({}_{2}\):** First, we provide a detailed comparison of our results with those reported previously by Funk and co-workers [23]. As both studies were carried out using molecular beams under UHV conditions, a systematic comparison is relatively easy to make. The main objective of that study was to understand the physisorption dynamics of CO\({}_{2}\) on the Cu(110) surface. As a consequence, a cold surface below the desorption temperature of CO\({}_{2}\) (90 K) was used in their work, as opposed to the surface being at 300 K in ours. Under these conditions, they report an \(S_{0}\) for non-dissociative physisorption of CO\({}_{2}\) of 0.05 (\(E_{\mathrm{n}}\) = 1.3 eV), which is about a factor of 10 higher than the \(S_{0}\) for dissociative chemisorption at the same incidence energy (see figure 4, left). Since physisorbed CO\({}_{2}\) will stay on the surface in measurements made below its desorption temperature, and since its (non-dissociative) \(S_{0}\) is much higher, only a very small fraction of surface sites is expected to be available for dissociative chemisorption of the incoming CO\({}_{2}\). Under such conditions, it is very likely that the dissociative chemisorption signal was very small and remained below the detection threshold of their measurements, reported to be approximately 0.03 ML.

| Gas mixture composition | \(E_{\mathrm{n}}\) (eV) | Initial sticking probability |
| --- | --- | --- |
| 0.75% CO\({}_{2}\) in H\({}_{2}\) | 1.59 | \(1.8\times 10^{-2}\) |
| 0.75% CO\({}_{2}\) in H\({}_{2}\) (\(\theta_{i}=19^{\circ}\)) | 1.43 | \(1.3\times 10^{-2}\) |
| 1.5% CO\({}_{2}\) in H\({}_{2}\) | 1.40 | \(8.1\times 10^{-3}\) |
| 2.9% CO\({}_{2}\) in H\({}_{2}\) | 1.15 | \(2.2\times 10^{-3}\) |
| 4% CO\({}_{2}\) in H\({}_{2}\) | 1.01 | \(1.5\times 10^{-3}\) |
| 7.5% CO\({}_{2}\) in H\({}_{2}\) | 0.73 | \(5.6\times 10^{-4}\) |
| 9.2% CO\({}_{2}\) in H\({}_{2}\) | 0.64 | \(3.9\times 10^{-4}\) |
| 100% CO\({}_{2}\) | 0.098 | \(<1.6\times 10^{-5}\) |

Table 1: Gas mixture composition, \(E_{\mathrm{n}}\), and the observed \(S_{0}\) values as shown in figure 4. All measurements were performed at normal incidence except for the one shown in the second row. The random uncertainties in the \(S_{0}\) values were evaluated to be about 14% (see methods and table 2 for a discussion of uncertainties).

A simple first-order kinetics-based model was used to estimate the expected surface coverage, based on the \(S_{0}\) values available for non-dissociative physisorption (see SI-6) and dissociative chemisorption of CO\({}_{2}\) (present work). Since non-dissociative physisorption decreases with \(E_{\mathrm{n}}\) while dissociative chemisorption increases with \(E_{\mathrm{n}}\), it is useful to make this comparison at the highest energy (1.3 eV) used in their work. With the \(S_{0}\) for dissociative chemisorption taken as \(5.5\times 10^{-3}\) and that for non-dissociative physisorption as \(5\times 10^{-2}\), the kinetic model predicts the maximum surface coverage due to dissociative chemisorption to be less than 0.018 ML (see figure 5). This is well below the reported detection limit of 0.03 ML, possibly explaining the absence of any signature of dissociative chemisorption in their measurements.

Figure 5: Surface coverage of O-atoms, CO, and CO\({}_{2}\) calculated using the kinetic model for a low-temperature surface. The \(S_{0}\) values used for CO\({}_{2}\) physisorption and dissociative sticking are \(5\times 10^{-2}\) and \(5.5\times 10^{-3}\), respectively. The red dashed horizontal line shows the maximum surface coverage arising from CO\({}_{2}\) dissociation assuming an \(S_{0}\) of \(5.5\times 10^{-3}\) (present work). The black dashed horizontal line shows the detection sensitivity of the surface coverage due to CO\({}_{2}\) dissociation from previous work [23]. As can be seen from the green and black dashed curves, even if \(S_{0}\) is as high as \(1.0\times 10^{-2}\) (say, due to an additional contribution of vibrationally excited CO\({}_{2}\) in the incident beam), the resulting surface coverage will remain close to the detection threshold of 0.03 ML reported previously [23].
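A toy version of such a site-blocking kinetic model is sketched below: physisorption and dissociative chemisorption are treated as parallel first-order channels competing for the same free sites. This is a deliberately minimal sketch; the model in SI-6 may include details (CO co-adsorption, site requirements, desorption) not reproduced here, so the 0.018 ML figure above should be taken from the paper rather than from this toy calculation.

```python
s0_phys, s0_diss = 5e-2, 5.5e-3  # S0 values at E_n = 1.3 eV (see text)
dphi = 0.1                        # dose step in ML
theta_p = theta_d = 0.0           # physisorbed CO2 / dissociation-product coverage

for _ in range(20000):            # integrate up to ~2000 ML total dose
    free = max(0.0, 1.0 - theta_p - theta_d)  # fraction of unblocked sites
    theta_p += s0_phys * free * dphi
    theta_d += s0_diss * free * dphi

# In this simple limit theta_d saturates at the branching ratio
# s0_diss / (s0_phys + s0_diss) of the blocked layer.
print(f"dissociation-derived coverage: {theta_d:.3f} ML")
```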
It is worth pointing out that a heated nozzle (750 K, for \(E_{\rm i}\) = 1.3 eV) was used in the previous experiments [23], so an additional contribution to the dissociative sticking channel from vibrationally excited molecules in the incident beam is possible. The reported detection sensitivity (\(\Theta=0.03\) ML), together with the fact that they were unable to see any dissociation, also allows us to estimate an upper bound to the vibrational enhancement of dissociative chemisorption of CO\({}_{2}\) on Cu(110). Again using the same kinetics model, we estimate this upper limit to \(S_{0}\), with a hot nozzle at 750 K, to be \(1.0\times 10^{-2}\) at 1.3 eV. This is approximately a factor of 1.8 higher than that obtained with a room-temperature nozzle in our studies. This is of significance, as several studies have concluded that the transition state for CO\({}_{2}\) dissociation is reached via a bent configuration [10, 11], and in a molecular beam produced with a hot nozzle, a significant fraction of bending-mode-excited CO\({}_{2}\) will be present in the incident beam. At a nozzle temperature of 750 K, and assuming negligible vibrational relaxation in the supersonic jet expansion, the population of bending-mode-excited CO\({}_{2}\) will be about 21%. Based on this, we estimate the maximum \(S_{0}\) for purely bending-excited CO\({}_{2}\) to be approximately \(2\times 10^{-2}\). The corresponding vibrational efficacy [24], the ratio of translational to vibrational energy needed to reach the same \(S_{0}\), assuming that the bending-mode excitation is solely responsible for the enhanced reactivity, can be as large as 5 (upper limit) for this system.
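The quoted bending-mode population follows from simple Boltzmann statistics for the \(\nu_{2}\) bend of CO\({}_{2}\) (\(\approx\)667 cm\({}^{-1}\)), under the stated assumption of negligible vibrational relaxation in the expansion. The sketch below reproduces that estimate; counting a single quantum in one bending oscillator is our reading, as the degeneracy treatment is not spelled out in the text.

```python
import numpy as np

def v1_fraction(nu_cm, T):
    """Fraction of molecules with exactly one quantum in a harmonic mode."""
    theta_vib = 1.4388 * nu_cm  # hc*nu/kB in kelvin (c2 = 1.4388 cm K)
    x = np.exp(-theta_vib / T)
    return x * (1.0 - x)        # P(v=1) for a single oscillator

T_nozzle = 750.0                # K, heated nozzle of ref. [23]
f_bend = v1_fraction(667.0, T_nozzle)
print(f"CO2 bend (v=1) fraction at {T_nozzle:.0f} K: {f_bend:.1%}")  # ~20%, cf. ~21% in the text
```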
Similar observations have been reported for CO\({}_{2}\)/Ni(100) by D'Evelyn and co-workers [25], where, for a given \(E_{\mathrm{n}}\), a clear enhancement in \(S_{0}\) was observed with increasing nozzle temperature. This was attributed to the significantly higher dissociation probability of incident molecules with vibrational excitation, in particular bending-mode excitation. Recent theoretical studies on the same system [26, 27] also agree with these conclusions.

We now turn our attention to a comparison of our results with those reported previously using high-pressure measurements [8]. Exposing the Cu(110) surface to CO\({}_{2}\) gas at 65 and 650 Torr, under well-controlled conditions such that any effect of contamination giving rise to spurious oxygen coverage is minimized, they observe an O-atom coverage build-up on the surface using AES and conclude that it results from CO\({}_{2}\) dissociation. The O-atom coverage was reported to increase with the duration of CO\({}_{2}\) exposure (dose) and with increasing surface temperature. They estimate the dissociation probability to be \(1\times 10^{-11}\) to \(1\times 10^{-9}\) for surface temperatures ranging from 430 K to 612 K, respectively. Based on the temperature-dependent dissociation rates, the activation barrier was estimated to be 0.64 eV in the low-coverage limit. More recent studies on the same system [11] also compare favourably. Using near-ambient-pressure X-ray photoelectron spectroscopy (NAXPS) at 300 K and exposure to 1 mbar of CO\({}_{2}\) gas, a reaction probability of \(4.4\times 10^{-11}\) per collision with the surface was reported. One of the peaks observed in NAXPS is attributed to molecularly chemisorbed CO\({}_{2}\), which is understood to be anionic in nature with a bent structure. Additionally, using DFT-based computational methods, they find an activation barrier of 0.64 eV, consistent with the values reported earlier.

At the same time, it is quite clear that these results are inconsistent with those obtained previously using molecular beams [23] and with the present work. If the dissociation barrier were indeed as low as 0.64 eV, much higher dissociation probabilities would have been observed in these measurements, especially at incidence energies as high as twice the dissociation barrier. One possible way to understand this would be that the reaction follows completely different pathways under high-pressure conditions as opposed to molecular beam conditions. In the former case, given that the translational energy is low, most molecules will undergo trapping on the surface, and consequently the reaction will proceed via a precursor-mediated pathway. The trapped molecules on a hot Cu(110) surface will thermalize, acquire the chemisorbed state (bent structure), and subsequently follow a low-activation-barrier pathway for dissociation. On the other hand, such a low-energy pathway might be inaccessible to CO\({}_{2}\) molecules incident directly from the gas phase, as in molecular beams. Here, the reaction will follow a direct dissociation pathway, which is highly translationally activated, as observed in our measurements. A deeper understanding of this issue can be obtained by means of trajectory calculations, as reported previously for CO\({}_{2}\) on Ni(100) and W(110) surfaces [26, 28].

Another possibility for explaining this large difference would be that fundamentally different surface structures are present under low- versus high-pressure conditions, leading to different reactivity. Such differences are commonly termed the _pressure gap_ and can lead to systematic differences between high-pressure and low-pressure studies. As an example, Eren and coworkers [29, 30] have studied the interaction of Cu(100) and Cu(111) surfaces with CO\({}_{2}\) and CO under high-pressure conditions. In the case of CO\({}_{2}\) at pressures beyond 20 Torr, they conclude that the surface breaks up into nano-sized clusters producing highly reactive kink and step sites. They also report that the saturation O-atom coverage under these conditions is higher than that observed at lower pressures, where no such restructuring occurs. It should be noted that the saturation surface O-atom coverage in our measurements and that reported under high-pressure conditions are the same (0.5 ML). This suggests that in both cases (high- and low-pressure studies) the surface structure is likely to be similar, and this alone cannot be the reason for the large deviations observed in the dissociation barriers.

Finally, large differences in the activation barriers can arise from different reaction sites, i.e. steps vs. terraces in high- vs. low-pressure studies, respectively. This situation is reminiscent of that reported for N\({}_{2}\) dissociation on Ru(0001) [31, 32]. In those studies, the dissociation barrier determined using high-pressure reactions was reported to be 0.4 eV, whereas that obtained using molecular beam studies was observed to be greater than 1 eV [33]. It was also reported that upon blocking the step sites by adsorbing a small fraction of Au atoms on the surface (1-2% ML), a remarkable drop in reactivity by a factor of \(10^{9}\) was observed. At the same time, the corresponding decrease observed in the molecular beam measurements was much smaller, only a factor of two. Based on these observations and the fact that the step density on their surface was estimated to be \(<\) 1%, it was concluded that under high-pressure conditions N\({}_{2}\) mainly dissociates at the step sites, which are much more active than terrace sites. This is followed by diffusion of N atoms to the terrace sites, thereby allowing the reaction to proceed further. In molecular beam experiments, on the other hand, N\({}_{2}\) mainly dissociates on the terrace sites, which are available in a much larger fraction. Given the above considerations, it is very likely that a similar situation prevails in the case of CO\({}_{2}\) on Cu(110). Even a small fraction of steps (\(\sim\)1%) with much higher reactivity could lead to a much lower activation barrier being observed in the high-pressure experiments compared to molecular beams. This is also consistent with previous studies where, using carefully prepared surfaces with higher step densities, more facile dissociation was observed [14, 15]. In the closely related system of CO\({}_{2}\) dissociation on the Cu(100) surface, recent studies [34] using APXPS and DFT-based computational methods inferred that steps play a very important role in the reaction under high-pressure conditions. In a recent DFT-based study by Jin and coworkers [35], the interaction of CO\({}_{2}\) with a very large range of metal surfaces, both flat and stepped, was examined. The general trends observed there also show much higher activity of the step sites for CO\({}_{2}\) dissociation.
We have made a preliminary attempt to probe this by measuring \(S_{0}\) on a clean sputtered surface and comparing it with an annealed surface (see SI-7). However, the changes observed are too small to be conclusive at the moment, and further systematic studies will be needed. It would also be very interesting to look into CO\({}_{2}\) dissociation on Cu(100) and Cu(111) surfaces using molecular beam techniques such as those presented here, so that the dissociation barrier corresponding to terrace sites can be measured unambiguously. Finally, we would like to point out that the dissociation barriers based on DFT studies are reported to be 1.69-0.97 eV, 0.93 eV, and 0.64 eV on Cu(111), Cu(100), and Cu(110), respectively. While the trend is consistent with that known from experiments, our present findings strongly suggest that these values are likely to be severely underestimated.

## Conclusion

In summary, using molecular beam methods we find that the CO\({}_{2}\) dissociation barrier on Cu(110) terrace sites is of the order of 2 eV, much higher than previously known. Among the different possible reasons, these observations suggest that step sites could have a large impact in driving dissociation under high-pressure conditions, resulting in a substantially lower activation barrier than that of the terrace sites probed using molecular beams. This points towards the need for a critical reevaluation of the microscopic picture associated with CO\({}_{2}\) dissociation on Cu surfaces. Specifically, one needs to carefully reexamine the barrier heights and the corresponding dissociation probabilities on step versus terrace sites of low-index copper surfaces. Our study also suggests that it will be very interesting to look into the reactivity of vibrationally excited CO\({}_{2}\) in order to understand the mode specificity of this reaction. The estimates of the vibrational promotion of CO\({}_{2}\) dissociation on Cu(110) provided here will need to be tested using hot-nozzle/infrared excitation methods. Should significant vibrational promotion be found absent, it would again point towards a possible role of steps in CO\({}_{2}\) dissociation on Cu(110) under high-pressure conditions, leading to a much lower dissociation barrier. Additionally, these results suggest that it is crucial to carefully evaluate the surface diffusion barriers, especially for step-to-terrace migration, in order to fully understand the microscopic details involved under high-pressure conditions. In the near future, we will be looking into some of these aspects in order to obtain a deeper understanding of the surface chemistry of CO\({}_{2}\) on Cu surfaces.

## Methods

Our experiments were conducted using a recently designed molecular beam-surface scattering apparatus. It consists of a source chamber and two differential pumping stages (Diff-1 and Diff-2), followed by a UHV chamber where the Cu(110) single crystal is placed. The source, Diff-1, Diff-2, and UHV chambers were pumped using turbomolecular pumps with nominal pumping speeds of 1200 l/s (HiPace 1200, Pfeiffer Vacuum), 400 l/s (HiPace 400, Pfeiffer Vacuum), 80 l/s (HiPace 80, Pfeiffer Vacuum), and 700 l/s (HiPace 700H, Pfeiffer Vacuum), respectively. The turbomolecular pumps for the source and Diff-1 were backed by a 35 m\({}^{3}\)/hour two-stage rotary vane pump (Duo 35, Pfeiffer Vacuum), and those for the Diff-2 and UHV stages were backed by a dry roots pump (ACP 15, Pfeiffer Vacuum).
A pulsed solenoid valve with an opening diameter of 1 mm (Parker 009-1643-900, driver IOTA ONE 060-0001-900), placed in the source chamber, was used as the molecular beam source. The supersonically expanded gas passed through a skimmer with a 1.5 mm opening diameter (Beam Dynamics) and two subsequent apertures (2 mm diameter) placed downstream at the entrances of the Diff-2 and UHV chambers. The overall source-to-sample distance was approximately 340 mm. The beam diameter at the target surface was measured to be 2.9 mm (see SI-4). The ultimate base pressure of the UHV chamber, monitored using a nude ion gauge (IMR 430, Pfeiffer Vacuum), ranged from (6 to 8) \(\times 10^{-10}\) mbar. This ion gauge was independently calibrated by \(S_{0}\) measurements of O\({}_{2}\) on a clean Cu(110) surface using the molecular beam reflection method [36] (see SI-3). An Ar-ion source (IS40, Prevac) was used to clean the copper surface by sputtering, and an Auger electron spectrometer (SMG600, OCI Vacuum Microengineering) was used to analyze the surface chemical composition. Additionally, a mass spectrometer (SRS RGA 200), calibrated against the ion gauge, was used to measure the residual gas composition as well as to estimate the incident beam flux. With the CO\({}_{2}\) molecular beam on, the pressures in the source, Diff-1, Diff-2, and UHV chambers were typically \((0.5-2)\times 10^{-4}\) mbar, \((1-3)\times 10^{-6}\) mbar, \((5-8)\times 10^{-7}\) mbar, and \((4-7)\times 10^{-9}\) mbar, respectively. The purity levels of the gases used in our measurements were specified to be \(>\)99.999% for H\({}_{2}\) and \(>\)99.99% for CO\({}_{2}\), and they were used without any further treatment.

A Cu(110) single crystal (99.9999% pure, 10 mm diameter, 2 mm thickness), cut to an accuracy better than 0.1\({}^{\circ}\) and polished to a roughness lower than 10 nm (MaTeck Material Technologie and Kristalle GmbH), was used as the target sample. It was mounted on a four-axis, differentially pumped manipulator using a pair of 0.25 mm diameter tungsten wires that enabled sample heating. The sample manipulator is equipped with electrical and thermocouple feed-throughs for heating the sample and monitoring its temperature using a K-type thermocouple. The CO\({}_{2}\) flux at the target surface ranged from 0.05 to 0.8 ML/s, where 1 ML corresponds to \(1.08\times 10^{15}\) atoms cm\({}^{-2}\). Throughout all measurements, the backing pressure was maintained at a constant value of 5 bar, and the nozzle pulsing rate was typically 10 Hz. The nozzle opening time was varied within the range of 300 to 400 \(\mu\)s.

The Cu(110) surface was cleaned according to well-established procedures reported previously [37, 38]. After a bakeout of the UHV chamber, the main contaminant on the copper surface was found to be carbon. This carbon was removed by heating the Cu(110) surface (700 K) in an oxygen environment (at \(2\times 10^{-8}\) mbar). Subsequently, the remaining oxygen contamination was removed (as verified by AES) by prolonged Ar-ion sputtering. Thereafter, for day-to-day operation, the sample surface was subjected to Ar-ion sputtering for a duration of 30 min (0.6 \(\mu\)A ion current) at 3 keV ion energy. Under these conditions, the impurity levels (mainly carbon and oxygen) were found to be below the detection threshold of AES (\(<\) 0.1% ML and 2% ML, respectively). Subsequently, the surface was annealed at 800 K for 20-30 min and allowed to cool down to 300-310 K before conducting the measurements.
With a base pressure of \(8\times 10^{-10}\) mbar, we observed that the impurity levels of carbon and oxygen (as measured by AES) remained \(<3\)% of a ML for a duration of four hours. We also measured the background build-up at the end of each experiment (see SI-1), and these were found to be negligibly small on the timescale of our measurements.

Molecular beams with different translational energies were prepared using different fractions of CO\({}_{2}\) seeded in H\({}_{2}\). The incidence translational energy of CO\({}_{2}\) in these gas mixtures was estimated using the following relation:

\[E_{\mathrm{i}}=\frac{X_{\mathrm{CO_{2}}}C_{P,\mathrm{CO_{2}}}+X_{\mathrm{H_{2}}}C_{P,\mathrm{H_{2}}}}{X_{\mathrm{CO_{2}}}M_{\mathrm{CO_{2}}}+X_{\mathrm{H_{2}}}M_{\mathrm{H_{2}}}}\,M_{\mathrm{CO_{2}}}\,(T_{\mathrm{N}}-T_{\mathrm{R}}) \tag{3}\]

Here, \(X_{\mathrm{CO_{2}}}\) and \(X_{\mathrm{H_{2}}}\) represent the mole fractions of CO\({}_{2}\) and H\({}_{2}\), \(C_{P,\mathrm{CO_{2}}}\) and \(C_{P,\mathrm{H_{2}}}\) their heat capacities, and \(M_{\mathrm{CO_{2}}}\) and \(M_{\mathrm{H_{2}}}\) their molar masses. \(T_{\mathrm{N}}\) corresponds to the nozzle temperature, while \(T_{\mathrm{R}}\) represents the rotational temperature of the molecular beam. Using \(T_{\mathrm{N}}\) = 300 K and assuming \(T_{\mathrm{R}}\) = 10 K (typical for molecular beams), we calculated the translational energy, \(E_{\mathrm{i}}\), of the CO\({}_{2}\) beam for a given mixture (see table 1). Based on previous work, we estimate these calculated beam energies to be accurate to within 10% of the values reported here.
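As a numerical sketch of equation (3): the snippet below assumes rigid-rotor heat capacities of \(7R/2\) for both CO\({}_{2}\) and H\({}_{2}\) (our simplification; the exact \(C_{P}\) values used in the analysis are not stated), which reproduces the tabulated beam energies to within a few percent.

```python
R = 8.314                      # J mol^-1 K^-1
EV_PER_J_MOL = 1.0 / 96485.0   # eV per (J/mol)

def beam_energy_ev(x_co2, T_N=300.0, T_R=10.0):
    """CO2 translational energy in a seeded beam, following equation (3)."""
    M_co2, M_h2 = 44.01e-3, 2.016e-3  # molar masses in kg/mol
    Cp_co2 = Cp_h2 = 3.5 * R          # assumed 7R/2 for both species
    x_h2 = 1.0 - x_co2
    mean_cp = x_co2 * Cp_co2 + x_h2 * Cp_h2
    mean_m = x_co2 * M_co2 + x_h2 * M_h2
    return (mean_cp / mean_m) * M_co2 * (T_N - T_R) * EV_PER_J_MOL

print(f"1.5% CO2 in H2: E_i ~ {beam_energy_ev(0.015):.2f} eV")  # table 1 lists 1.40 eV
```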
The initial dissociative sticking probability of CO\({}_{2}\) on Cu(110) was measured in the energy range of 0.098 eV to 1.59 eV. To determine the coverage of adsorbed oxygen resulting from dissociation, an Auger electron spectrometer was used with a 1 \(\mu\)A surface current and a 2.5 keV beam energy. Under these conditions, we could not observe any change in O-atom coverage caused by electron-stimulated desorption. The coverage was determined by measuring the ratio of peak-to-peak heights at electron energies of 503 eV (O) and 776 eV (Cu). The observed ratio of 0.205 \(\pm\) 0.005 indicated a saturated O-atom coverage of 0.5 ML on the Cu(110) surface. To estimate the incident CO\({}_{2}\) flux, a correlation between CO\({}_{2}\) and H\({}_{2}\) was established using measurements from a calibrated ion gauge and a mass spectrometer. The gas-dependent sensitivity factors of the ionization gauge, 1.42 for CO\({}_{2}\) and 0.46 for H\({}_{2}\), as well as the calibration factors for the ion gauge and mass spectrometer, were used to estimate the incident CO\({}_{2}\) flux (see SI-5). Additionally, we have excluded any significant contribution from potential contaminants in the incident beam, such as CO or H\({}_{2}\)O, which could possibly arise from the reverse water-gas shift reaction (see SI-8).

**Uncertainty estimates:** The random errors in the \(S_{0}\) estimation arose from the uncertainties in the surface O-atom coverage and in the incident beam flux estimation. The contributing factors are listed in table 2. Since the pumping speeds for CO\({}_{2}\) and oxygen are not directly available from the manufacturer's datasheet, we assumed them to be equal to those of gases with similar masses, argon (665 l/s) and nitrogen (685 l/s), respectively. These values are as per the manufacturer's specification, and deviations here contribute mainly to systematic errors, leaving the overall trends reported here unchanged.

| Source of error | \(\delta f/f\) (%) | Remarks |
| --- | --- | --- |
| Determination of absolute pressure (ion gauge calibration) | 10% | From \(S_{0}\) measurements of O\({}_{2}\) on Cu(110) (SI-3) |
| Calibration of the mass spectrometer against the ion gauge | 5% | From the uncertainty in the fit parameters describing the correlation between mass spectrometer and ion gauge signals (SI-5) |
| Beam shape estimation | 5% | Approximate estimate based on the measurements shown in SI-4, assuming the same shape for all beams |
| AES signal determination | 5% | From statistics of repeated AES measurements |
| Repeatability of sample positioning while dosing | 5% | Approximate estimate from the beam shape and an assumed \(\pm\)0.25 mm positioning error |
| Overall random uncertainty | 14% | Assuming independent errors |

Table 2: A breakdown of the different factors contributing to the random uncertainties in the \(S_{0}\) estimation. \(\delta f/f\) represents the relative 1\(\sigma\) errors, given as percentages.

## Supplementary information

* SI-1: Estimates for background oxygen coverage build-up
* SI-2: Estimates for background carbon coverage build-up
* SI-3: Ion gauge calibration
* SI-4: Molecular beam profile at the Cu(110) surface
* SI-5: Estimating the flux of the incident beam
* SI-6: Kinetic model for estimating surface coverage caused by non-dissociative physisorption and dissociative chemisorption
* SI-7: Influence of defects on \(S_{0}\)
* SI-8: Discussion on possible contamination in our incident beams

**Acknowledgments.** We acknowledge the support of intramural funds at TIFR-Hyderabad provided by the Department of Atomic Energy, Government of India, under Project Identification No. RTI 4007, and the Science and Engineering Research Board, Department of Science and Technology, India (Grant No. CRG/2022/002943). We thank Avinash Kumar for his help in setting up the molecular beam-surface scattering apparatus.

**Data availability.** All relevant data related to the current study are available from the corresponding author upon reasonable request.

**Author contributions.** SKS and PRS conceived and designed the study. SKS performed the measurements and analyzed the results with inputs from PRS. SKS and PRS discussed the results and prepared the manuscript.

**Conflict of interest.** The authors declare no conflict of interest.

## References

* [1] Intergovernmental Panel on Climate Change. _Climate Change 2021 - The Physical Science Basis: Working Group I Contribution to the Sixth Assessment Report of the Intergovernmental Panel on Climate Change_ 1 edn (Cambridge University Press, 2023). URL [https://www.cambridge.org/core/product/identifier/9781009157896/type/book](https://www.cambridge.org/core/product/identifier/9781009157896/type/book).
* [2] Jiang, X., Nie, X., Guo, X., Song, C. & Chen, J. G. Recent Advances in Carbon Dioxide Hydrogenation to Methanol via Heterogeneous Catalysis. _Chemical Reviews_ **120**, 7984-8034 (2020). URL [https://pubs.acs.org/doi/10.1021/acs.chemrev.9b00723](https://pubs.acs.org/doi/10.1021/acs.chemrev.9b00723).
* [3] Olah, G. A. Towards Oil Independence Through Renewable Methanol Chemistry. _Angewandte Chemie International Edition_ **52**, 104-107 (2013). URL [http://doi.wiley.com/10.1002/anie.201204995](http://doi.wiley.com/10.1002/anie.201204995).
* [4] Freund, H.-J. & Roberts, M. Surface chemistry of carbon dioxide. _Surface Science Reports_ **25**, 225-273 (1996). URL [https://linkinghub.elsevier.com/retrieve/pii/S0167572996000076](https://linkinghub.elsevier.com/retrieve/pii/S0167572996000076).
* [5] Chinchen, G., Denny, P., Parker, D., Spencer, M. & Whan, D. Mechanism of methanol synthesis from CO2/CO/H2 mixtures over copper/zinc oxide/alumina catalysts: Use of \({}^{14}\)C-labelled reactants. _Applied Catalysis_ **30**, 333-338 (1987). URL [https://linkinghub.elsevier.com/retrieve/pii/S0166983400841238](https://linkinghub.elsevier.com/retrieve/pii/S0166983400841238).
* [6] Chinchen, G. C., Spencer, M. S., Waugh, K. C. & Whan, D. A. Promotion of methanol synthesis and the water-gas shift reactions by adsorbed oxygen on supported copper catalysts. _Journal of the Chemical Society, Faraday Transactions 1: Physical Chemistry in Condensed Phases_ **83**, 2193 (1987). URL [http://xlink.rsc.org/?DOI=f19878302193](http://xlink.rsc.org/?DOI=f19878302193).
* [7] Yoshihara, J. & Campbell, C. T. Methanol Synthesis and Reverse Water-Gas Shift Kinetics over Cu(110) Model Catalysts: Structural Sensitivity. _Journal of Catalysis_ **161**, 776-782 (1996). URL [https://linkinghub.elsevier.com/retrieve/pii/S0021951796902407](https://linkinghub.elsevier.com/retrieve/pii/S0021951796902407).
* [8] Nakamura, J., Rodriguez, J. A. & Campbell, C. T. Does CO\({}_{2}\) dissociatively adsorb on Cu surfaces? _Journal of Physics: Condensed Matter_ **1**, SB149-SB160 (1989). URL [https://iopscience.iop.org/article/10.1088/0953-8984/1/SB/026](https://iopscience.iop.org/article/10.1088/0953-8984/1/SB/026).
* [9] Rasmussen, P., Taylor, P. & Chorkendorff, I. The interaction of carbon dioxide with Cu(100). _Surface Science_ **269-270**, 352-359 (1992). URL [https://linkinghub.elsevier.com/retrieve/pii/003960289291274F](https://linkinghub.elsevier.com/retrieve/pii/003960289291274F).
* [10] Muttaqien, F., Hamamoto, Y., Inagaki, K. & Morikawa, Y. Dissociative adsorption of CO\({}_{2}\) on flat, stepped, and kinked Cu surfaces. _The Journal of Chemical Physics_ **141**, 034702 (2014). URL [https://pubs.aip.org/jcp/article/141/3/034702/194222/Dissociative-adsorption-of-CO2-on-flat-stepped-and](https://pubs.aip.org/jcp/article/141/3/034702/194222/Dissociative-adsorption-of-CO2-on-flat-stepped-and).
* [11] Yang, T. _et al._ Surface Orientation and Pressure Dependence of CO\({}_{2}\) Activation on Cu Surfaces. _The Journal of Physical Chemistry C_ **124**, 27511-27518 (2020). URL [https://pubs.acs.org/doi/10.1021/acs.jpcc.0c08262](https://pubs.acs.org/doi/10.1021/acs.jpcc.0c08262).
* [12] Gokhale, A. A., Dumesic, J. A. & Mavrikakis, M. On the Mechanism of Low-Temperature Water Gas Shift Reaction on Copper. _Journal of the American Chemical Society_ **130**, 1402-1414 (2008). URL [https://pubs.acs.org/doi/10.1021/ja0768237](https://pubs.acs.org/doi/10.1021/ja0768237).
* [13] Wang, G.-C. _et al._ Cluster and periodic DFT calculations of adsorption and activation of CO\({}_{2}\) on the Cu(hkl) surfaces. _Surface Science_ **570**, 205-217 (2004). URL [https://linkinghub.elsevier.com/retrieve/pii/S0039602804010362](https://linkinghub.elsevier.com/retrieve/pii/S0039602804010362).
* [14] Fu, S. S. & Somorjai, G. A. Interactions of O\({}_{2}\), CO, CO\({}_{2}\), and D\({}_{2}\) with the stepped Cu(311) crystal face: Comparison to Cu(110). _Surface Science_ **262**, 68-76 (1992). URL [https://linkinghub.elsevier.com/retrieve/pii/003960289290460N](https://linkinghub.elsevier.com/retrieve/pii/003960289290460N).
* [15] Kim, J. _et al._ Revealing CO\({}_{2}\) dissociation pathways at vicinal copper (997) interfaces. _Nature Communications_ **14**, 3273 (2023). URL [https://www.nature.com/articles/s41467-023-38928-1](https://www.nature.com/articles/s41467-023-38928-1).
* [16] Gruzalski, G., Zehner, D. & Wendelken, J. An XPS study of oxygen adsorption on Cu(110). _Surface Science_ **159**, 353-368 (1985). URL [https://linkinghub.elsevier.com/retrieve/pii/0039602885904339](https://linkinghub.elsevier.com/retrieve/pii/0039602885904339).
* [17] Pudney, P. & Bowker, M. Activated dissociation of oxygen on Cu(110). _Chemical Physics Letters_ **171**, 373-376 (1990). URL [https://linkinghub.elsevier.com/retrieve/pii/000926149085380U](https://linkinghub.elsevier.com/retrieve/pii/000926149085380U).
* [18] Nesbitt, A., Lewin, A. K. & Hodgson, A. Adsorption of oxygen on Cu(110). _Journal of Physics: Condensed Matter_ **3**, S71-S76 (1991). URL [https://iopscience.iop.org/article/10.1088/0953-8984/3/S/011](https://iopscience.iop.org/article/10.1088/0953-8984/3/S/011).
* [19] Kunat, M., Boas, C., Becker, T., Burghaus, U. & Wöll, C. Adsorption dynamics of CO on Cu(110): a molecular beam study. _Surface Science_ **474**, 114-128 (2001). URL [https://linkinghub.elsevier.com/retrieve/pii/S0039602800010414](https://linkinghub.elsevier.com/retrieve/pii/S0039602800010414).
* [20] Lapujoulade, J., Le Cruer, Y., Lefort, M., Lejay, Y. & Maurel, E. A helium beam scattering study of the adsorption of oxygen on copper (110). _Surface Science_ **118**, 103-120 (1982). URL [https://linkinghub.elsevier.com/retrieve/pii/0039602882900176](https://linkinghub.elsevier.com/retrieve/pii/0039602882900176).
* [21] Gruzalski, G., Zehner, D. & Wendelken, J. Two adsorbate densities for Cu(110)c(6\(\times\)2)-O. _Surface Science Letters_ **147**, L623-L629 (1984). URL [https://linkinghub.elsevier.com/retrieve/pii/0167258484908417](https://linkinghub.elsevier.com/retrieve/pii/0167258484908417).
* [22] Zhai, R.-S. _et al._ Chemisorption and Reaction Characteristics of Methyl Radicals on Cu(110). _Langmuir_ **20**, 3623-3631 (2004). URL [https://pubs.acs.org/doi/10.1021/la036294u](https://pubs.acs.org/doi/10.1021/la036294u).
* [23] Funk, S. _et al._ Adsorption dynamics of CO\({}_{2}\) on Cu(110): A molecular beam study. _Surface Science_ **600**, 583-590 (2006). URL [https://linkinghub.elsevier.com/retrieve/pii/S0039602805012562](https://linkinghub.elsevier.com/retrieve/pii/S0039602805012562).
* [24] Chadwick, H. & Beck, R. D. Quantum state resolved gas-surface reaction dynamics experiments: a tutorial review. _Chemical Society Reviews_ **45**, 3576-3594 (2016). URL [http://xlink.rsc.org/?DOI=C5CS00476D](http://xlink.rsc.org/?DOI=C5CS00476D).
* [25] D'Evelyn, M., Hamza, A., Gdowski, G. & Madix, R. Dynamics of the dissociative adsorption of CO\({}_{2}\) on Ni(100). _Surface Science_ **167**, 451-473 (1986). URL [https://linkinghub.elsevier.com/retrieve/pii/003960288690717X](https://linkinghub.elsevier.com/retrieve/pii/003960288690717X).
* [26] Jiang, B. & Guo, H. Communication: Enhanced dissociative chemisorption of CO\({}_{2}\) via vibrational excitation. _The Journal of Chemical Physics_ **144**, 091101 (2016). URL [http://aip.scitation.org/doi/10.1063/1.4943002](http://aip.scitation.org/doi/10.1063/1.4943002).
* [27] Farjamnia, A. & Jackson, B. The dissociative chemisorption of CO\({}_{2}\) on Ni(100): A quantum dynamics study. _The Journal of Chemical Physics_ **146**, 074704 (2017). URL [https://pubs.aip.org/jcp/article/146/7/074704/195215/The-dissociative-chemisorption-of-CO2-on-Ni-100-A](https://pubs.aip.org/jcp/article/146/7/074704/195215/The-dissociative-chemisorption-of-CO2-on-Ni-100-A).
* [28] Yin, R. & Guo, H. Dynamics of CO\({}_{2}\) Dissociative Chemisorption on W(110). _The Journal of Physical Chemistry C_ **126**, 17935-17941 (2022). URL [https://pubs.acs.org/doi/10.1021/acs.jpcc.2c05664](https://pubs.acs.org/doi/10.1021/acs.jpcc.2c05664).
* [29] Eren, B., Weatherup, R. S., Liakakos, N., Somorjai, G. A. & Salmeron, M. Dissociative Carbon Dioxide Adsorption and Morphological Changes on Cu(100) and Cu(111) at Ambient Pressures. _Journal of the American Chemical Society_ **138**, 8207-8211 (2016). URL [https://pubs.acs.org/doi/10.1021/jacs.6b04039](https://pubs.acs.org/doi/10.1021/jacs.6b04039).
* [30] Eren, B. _et al._ Activation of Cu(111) surface by decomposition into nanoclusters driven by CO adsorption. _Science_ **351**, 475-478 (2016). URL [https://www.science.org/doi/10.1126/science.aad8868](https://www.science.org/doi/10.1126/science.aad8868).
* [31] Dahl, S. _et al._ Role of Steps in N\({}_{2}\) Activation on Ru(0001). _Physical Review Letters_ **83**, 1814-1817 (1999). URL [https://link.aps.org/doi/10.1103/PhysRevLett.83.1814](https://link.aps.org/doi/10.1103/PhysRevLett.83.1814).
* [32] Dahl, S., Törnqvist, E. & Chorkendorff, I. Dissociative adsorption of N\({}_{2}\) on Ru(0001): A surface reaction totally dominated by steps. _Journal of Catalysis_ **192**, 381-390 (2000). URL [https://linkinghub.elsevier.com/retrieve/pii/S0021951700928586](https://linkinghub.elsevier.com/retrieve/pii/S0021951700928586).
* [33] Diekhöner, L. _et al._ N\({}_{2}\) dissociative adsorption on Ru(0001): The role of energy loss. _The Journal of Chemical Physics_ **115**, 9028-9035 (2001). URL [https://pubs.aip.org/jcp/article/115/19/9028/184408/N2-dissociative-adsorption-on-Ru-0001-The-role-of](https://pubs.aip.org/jcp/article/115/19/9028/184408/N2-dissociative-adsorption-on-Ru-0001-The-role-of).
* [34] Hagman, B. _et al._ Steps Control the Dissociation of CO\({}_{2}\) on Cu(100). _Journal of the American Chemical Society_ **140**, 12974-12979 (2018). URL [https://pubs.acs.org/doi/10.1021/jacs.8b07906](https://pubs.acs.org/doi/10.1021/jacs.8b07906).
* [35] Jin, W., Wang, Y., Liu, T., Ding, C. & Guo, H. CO\({}_{2}\) chemisorption and dissociation on flat and stepped transition metal surfaces. _Applied Surface Science_ **599**, 154024 (2022). URL [https://linkinghub.elsevier.com/retrieve/pii/S0169433222015641](https://linkinghub.elsevier.com/retrieve/pii/S0169433222015641).
* [36] King, D. A. & Wells, M. G. Molecular beam investigation of adsorption kinetics on bulk metal targets: Nitrogen on tungsten. _Surface Science_ **29**, 454-482 (1972). URL [https://linkinghub.elsevier.com/retrieve/pii/0039602872902324](https://linkinghub.elsevier.com/retrieve/pii/0039602872902324).
* [37] Musket, R., McLean, W., Colmenares, C., Makowiecki, D. & Siekhaus, W. Preparation of atomically clean surfaces of selected elements: A review. _Applications of Surface Science_ **10**, 143-207 (1982). URL [https://linkinghub.elsevier.com/retrieve/pii/0378596382901428](https://linkinghub.elsevier.com/retrieve/pii/0378596382901428).
* [38] Bhardwaj, G., Singh, S. K. & Shirhatti, P. R. A compact and highly collimated atomic/molecular beam source. _Review of Scientific Instruments_ **94**, 043305 (2023). URL [https://pubs.aip.org/aip/rsi/article/2882257](https://pubs.aip.org/aip/rsi/article/2882257).
# The curious case of CO\({}_{2}\) dissociation on Cu(110): Supplementary information

Saurabh Kumar Singh and Pranav R. Shirhatti*

Tata Institute of Fundamental Research Hyderabad, 36/P Gopanpally, Hyderabad 500046, Telangana, India.

*Corresponding author(s). E-mail(s): [email protected]; Contributing authors: [email protected];

###### Contents

* 1 SI-1: Estimates for background oxygen coverage build-up
* 2 SI-2: Estimates for background carbon coverage build-up
* 3 SI-3: Ion gauge calibration
* 4 SI-4: Molecular beam profile at the Cu(110) surface
* 5 SI-5: Estimating the flux of the incident beam
* 6 SI-6: Kinetic model for estimating surface coverage caused by non-dissociative physisorption and dissociative chemisorption
* 7 SI-7: Influence of defects on \(S_{0}\)
* 8 SI-8: Discussion on possible contamination in our incident beams

## 1 SI-1: Estimates for background oxygen coverage build-up

To assess the contribution of the background O-atom coverage in our measurements, we use Auger electron spectra of the Cu(110) surface, measured after the final dosing cycle of each measurement (see Figure 1, left). These spectra were measured at a location 3 mm away (black spot) from the point of impact of the CO\({}_{2}\) molecular beam used for the \(S_{0}\) measurements (red spot), as depicted in Figure 1 (left). Our analysis indicated that the background O-atom coverage amounted to less than 5% of the saturation coverage (see Figure 1, right) and is expected to have a negligible effect on the \(S_{0}\) values reported in our work.

Figure 1: (left) Auger electron spectra (AES) measured to estimate the background O-atom coverage corresponding to each measurement at different incidence energies. A magnified view of the O-atom coverage build-up is shown in the inset. A schematic diagram indicating the different measurement positions is also shown (pink circle). The molecular beam was incident on the center of the target surface (red spot) for the \(S_{0}\) measurements. All the background measurements were conducted at a distance of 3 mm (black spot) from the center of the target surface at the end of the final dosing cycle. (right) The magnitude of the background O-atom coverage (in ML) for measurements carried out at different incidence energies. The maximum background oxygen coverage observed was about 0.025 ML, corresponding to 5% of the saturation coverage.

## 2 SI-2: Estimates for background carbon coverage build-up

We have quantified the maximum background carbon coverage build-up based on the AES sensitivity factors and also by comparison with an unclean surface having carbon as the major contaminant [1; 2]. The maximum background carbon coverage was observed to be \(<\) 1.2% of a ML and is expected to have a negligible effect on the \(S_{0}\) measurements reported in our work. An example of this carbon coverage estimation is shown in Figure 2.

Figure 2: (left) AES obtained from an unclean Cu(110) surface with carbon as the major contaminant. An AES signal ratio (C-272 eV)/(Cu-776 eV) of 7.09 (marked with an arrow) was observed. (right) Estimates of the background carbon coverage in the reported \(S_{0}\) measurements, assuming 7.09 to correspond to the carbon saturation coverage. In reality, the saturation carbon coverage is slightly higher (as per the AES sensitivity factors); consequently, these values represent an upper limit to the background carbon coverage.
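To make this estimate explicit, the short sketch below reproduces the conversion from a measured AES peak ratio to a carbon coverage upper bound. It assumes, as stated above, that the (C-272 eV)/(Cu-776 eV) ratio scales linearly with coverage and that the ratio of 7.09 corresponds to saturation; the example input ratio of 0.08 is an illustrative placeholder, not a measured value.

```python
# Upper-bound estimate of the background carbon coverage from AES peak ratios.
# Assumption: the (C-272 eV)/(Cu-776 eV) signal ratio scales linearly with
# coverage, with 7.09 taken as the saturation value (see Figure 2).

RATIO_AT_SATURATION = 7.09  # C/Cu AES ratio of the carbon-saturated surface

def carbon_coverage(ratio_c_cu: float) -> float:
    """Convert a measured C/Cu AES peak ratio to a coverage upper bound,
    expressed as a fraction of the carbon saturation coverage."""
    return ratio_c_cu / RATIO_AT_SATURATION

# Illustrative input: a measured ratio of 0.08 corresponds to
# 0.08 / 7.09 = 0.011, i.e., about 1.1% of the saturation coverage,
# consistent with the < 1.2% bound quoted above.
print(f"{carbon_coverage(0.08):.4f} of saturation coverage")
```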
## 3 SI-3: Ion gauge calibration

To accurately quantify the absolute reaction probability, an estimation of the incident CO\({}_{2}\) dose is essential. In our study, we use an ion gauge and a quadrupole mass spectrometer for precise dose calculations. Calibration of the ion gauge was carried out by measuring the \(S_{0}\) and O-atom coverage resulting from O\({}_{2}\) dissociation on a clean Cu(110) surface, using the beam reflection method [3]. A beam of pure O\({}_{2}\), with an estimated incidence energy of 0.085 eV, was used for this purpose. It is well established that the \(S_{0}\) and O-atom saturation coverage on a clean Cu(110) are \(0.35\pm 0.03\) (at 0.085 eV) and 0.5 monolayers (ML), respectively [4; 5]. These values were used for the calibration of the ion gauge as follows: the fraction of the pressure reduced immediately when a pure O\({}_{2}\) beam is incident on a clean Cu(110) surface gives the \(S_{0}\). The overall reduction of the time-integrated oxygen pressure (measured using the ion gauge), until saturation coverage is reached and no further sticking is possible, corresponds to the number of molecules equivalent to the saturation coverage of 0.5 ML. This allowed us to calibrate the ion gauge.

Figure 3 shows the result of one such measurement. First, we checked the repeatability of the pressure changes in our system for three on-off cycles of the pure O\({}_{2}\) molecular beam (left). Here, the target surface was moved away from the line of sight of the molecular beam. Subsequently, we positioned the target surface in line with the molecular beam. This adjustment caused a pressure change, attributed to an increase in hydrogen gas pressure, as confirmed by the mass spectrometer. The surface was still clean, as confirmed by AES measurements. With the target surface in place, we turned on the oxygen beam and monitored the background pressure in the UHV chamber using the ion gauge. By subtracting the flux of molecules not adsorbed from the total incident flux, we calculated the amount of oxygen adsorbed on the surface. The dose of adsorbed oxygen molecules, estimated using the uncalibrated ion gauge, was equivalent to 0.285 ML, equating to an atomic coverage of 0.57 ML on the Cu(110) surface (see Figure 3, right). Since it is well known that the saturation coverage should be 0.5 ML, we conclude that our ion gauge overestimates the pressure by 14%. This calibration, along with the gas-dependent sensitivity factors, was used throughout our analysis for incident flux estimation. As an additional confirmation, we also determined the initial sticking probability of O\({}_{2}\) to be 0.36, which is consistent with that reported previously [4].

Figure 3: (left) Temporal evolution of the oxygen pressure measured using the ion gauge, with the molecular beam on and off. The initial three cycles illustrate the repeatability of the pressure changes; here, the surface was not in the line of sight of the molecular beam. The fourth measurement cycle was conducted while exposing the Cu(110) surface to the beam. The fifth cycle was measured to check for any systematic drifts. The rapid increase in the pressure at 170 sec is caused by a jump in the H\({}_{2}\) pressure resulting from the movement of the sample manipulator and does not affect our measurements. (right) A zoomed view of the fourth cycle shows the decrease in O\({}_{2}\) pressure caused by sticking to the Cu(110) surface. The fractional decrease in the O\({}_{2}\) signal at initial times gives its \(S_{0}\), and the overall decrease corresponds to the total number of molecules adsorbed on the surface.
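The calibration arithmetic can be written out in a few lines. The sketch below simply reproduces the bookkeeping described above, using the numbers quoted in the text (a molecular O\({}_{2}\) dose of 0.285 ML from the uncalibrated gauge and the known atomic saturation coverage of 0.5 ML).

```python
# Ion gauge calibration against the known O-atom saturation coverage on Cu(110).
apparent_o2_dose_ml = 0.285                     # molecular O2 dose from the uncalibrated gauge
apparent_o_coverage = 2 * apparent_o2_dose_ml   # each O2 deposits two O atoms -> 0.57 ML
known_saturation_ml = 0.50                      # well-established O saturation coverage

gauge_factor = apparent_o_coverage / known_saturation_ml  # = 1.14
print(f"Ion gauge overestimates pressure by {100 * (gauge_factor - 1):.0f}%")  # -> 14%
```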
## 4 SI-4: Molecular beam profile at the Cu(110) surface

Another important element in determining the absolute incident dose accurately is the shape of the molecular beam on the target surface. We accomplished this by measuring the spatial profile of the chemisorbed oxygen resulting from CO\({}_{2}\) dissociation at the saturation coverage limit on the Cu(110) surface. The molecular beam profile was measured along both the vertical and horizontal axes (see Figure 4). We find that the cross-sectional area of our incident molecular beam corresponds to a circle with a diameter of 2.9 mm.

Figure 4: Beam profile of the CO\({}_{2}\) beam (1.40 eV) incident on the Cu(110) surface, measured by mapping the spatial profile of the O-atom coverage. The blue and red curves represent the beam profile along the vertical and horizontal directions, respectively.

## 5 SI-5: Estimating the flux of the incident beam

Following the ion gauge calibration and the determination of the beam profile on the target surface, we proceeded to establish a correlation between the ion gauge and the mass spectrometer measurements. In the case of a molecular beam mixture (CO\({}_{2}\) + H\({}_{2}\)), relying solely on the total pressure change measured using an ion gauge is not sufficient to calculate the incident dose of CO\({}_{2}\). To address this, we measured the individual partial pressures of H\({}_{2}\) and CO\({}_{2}\) using a mass spectrometer at the time of dosing. The mass spectrometer itself was calibrated against the ion gauge across a pressure range of \(1\times 10^{-9}\) to \(8\times 10^{-9}\) mbar. Measurements using pure H\({}_{2}\) and CO\({}_{2}\) individually revealed that the mass spectrometer underestimated the partial pressures of H\({}_{2}\) and CO\({}_{2}\) by factors of 1.78 (see Figure 5, upper panel) and 7.44 (see Figure 5, middle panel), respectively. It should be noted that these factors do not yet include the gas-dependent sensitivity factors of the ion gauge itself (these are included separately later). To further confirm whether these calibration factors hold true for a gas mixture, we measured the change in total pressure using the ion gauge upon turning on a molecular beam of 1.5% CO\({}_{2}\) in H\({}_{2}\) and, simultaneously, the respective partial pressures using the mass spectrometer. The sum of the individual partial pressure changes, adjusted using the calibration factors estimated above, was found to be in excellent agreement with the total pressure change measured with the ion gauge (see Figure 5, lower panel). The estimated uncertainty in these measurements remained below 5%.

Figure 5: (upper left) The dose of a pure H\({}_{2}\) beam measured using the mass spectrometer for various working pressure ranges. The upper right panel demonstrates the relationship between the measured ion gauge pressure and the corresponding measurement using the mass spectrometer. A linear fit to this data yielded a calibration factor of 1.78 for H\({}_{2}\). The middle panel shows the same for CO\({}_{2}\), resulting in a calibration factor of 7.44.

### Incident CO\({}_{2}\) dose estimation

The true partial pressure change of CO\({}_{2}\) upon turning on the incident molecular beam, measured using the mass spectrometer, was calculated as follows:

\[\text{True pressure change of CO}_{2}\text{:}\ \Delta P_{true}=\frac{CF_{CO_{2}}\times\Delta P_{obs}}{IG_{SF_{CO_{2}}}} \tag{1}\]

where \(CF_{CO_{2}}\), \(\Delta P_{obs}\), and \(IG_{SF_{CO_{2}}}\) stand for the calibration factor for CO\({}_{2}\) (7.44), the observed pressure change during dosing, and the ion gauge sensitivity factor for CO\({}_{2}\) (1.42), respectively. The corresponding flux and dose follow as:

\[\text{Number of molecules/sec:}\ N_{CO_{2}}=\frac{\Delta P_{true}\times\text{pumping speed}\times N_{a}}{R_{g}\times T} \tag{2}\]

\[\text{Number of molecules/sec/cm}^{2}\text{, Flux:}\ \phi_{CO_{2}}=\frac{N_{CO_{2}}}{\pi\times r^{2}} \tag{3}\]

\[\text{Dose in ML per sec}=\frac{\phi_{CO_{2}}}{1.08\times 10^{15}\ (\text{atoms/cm}^{2}\text{/ML})} \tag{4}\]

Here, \(N_{a}\) is the Avogadro number, \(R_{g}\) the gas constant, \(T\) the temperature, and \(r\) the radius of the beam spot on the target surface.
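Taken together, equations (1)-(4) define a short calculation chain from the observed partial pressure rise to the incident dose in ML/s. The sketch below is a minimal implementation of that chain; the pumping speed, temperature, and pressure rise in the example are illustrative placeholders rather than our actual experimental parameters, while the calibration factors and the beam radius (from the 2.9 mm diameter measured in SI-4) are taken from the text.

```python
import math

N_A = 6.022e23           # Avogadro number, molecules/mol
R_G = 83.14              # gas constant in mbar*L/(mol*K), consistent with mbar and L/s
CU110_DENSITY = 1.08e15  # Cu(110) surface atom density, atoms/cm^2/ML

CF_CO2 = 7.44            # mass spectrometer calibration factor for CO2
IG_SF_CO2 = 1.42         # ion gauge sensitivity factor for CO2

def co2_dose_ml_per_s(dP_obs_mbar, pumping_speed_l_s, T_kelvin, beam_radius_cm):
    """Equations (1)-(4): observed CO2 partial pressure rise -> dose in ML/s."""
    dP_true = CF_CO2 * dP_obs_mbar / IG_SF_CO2                      # Eq. (1)
    n_per_s = dP_true * pumping_speed_l_s * N_A / (R_G * T_kelvin)  # Eq. (2)
    flux = n_per_s / (math.pi * beam_radius_cm**2)                  # Eq. (3)
    return flux / CU110_DENSITY                                     # Eq. (4)

# Illustrative numbers only: a 2e-10 mbar CO2 rise, 200 L/s effective pumping
# speed, 300 K, and r = 0.145 cm (2.9 mm beam diameter, SI-4).
print(f"{co2_dose_ml_per_s(2e-10, 200, 300, 0.145):.2e} ML/s")
```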
## 6 SI-6: Kinetic model for estimating surface coverage caused by non-dissociative physisorption and dissociative chemisorption

To understand the reasons behind the conclusion drawn in the previous work [6] that CO\({}_{2}\) dissociation was absent, we constructed a kinetic model based on simple first-order kinetics. At a given incidence energy, CO\({}_{2}\) molecules undergo both trapping (physisorption) and dissociative adsorption. The experimental conditions of that work involved low surface temperatures (below 90 K), leading to the occupation of surface sites by CO\({}_{2}\) as well as by CO and O (the latter two resulting from dissociation). A single CO\({}_{2}\) molecule is assumed to occupy one surface site, while the CO and O fragments can also block one site each [7]. The maximum saturation coverage for physisorbed CO\({}_{2}\) can reach up to 1 ML, whereas for CO and O it is limited to 0.25 ML each. With this information, we formulated rate equations for each elementary process as shown below:

\[\text{CO}_{2(g)}+\text{*}\xrightarrow{k_{1}}\text{CO}_{2,phy} \tag{5}\]

\[\text{CO}_{2(g)}+2\text{*}\xrightarrow{k_{2}}\text{CO}_{phy}+\text{O}_{chem} \tag{6}\]

Here, \(k_{1}\) is the non-dissociative initial sticking probability of CO\({}_{2}\), \(k_{2}\) is the dissociative sticking probability of CO\({}_{2}\), and * denotes an available site on the Cu(110) surface.
The rate equation for the overall surface coverage at any given time \(t\), \(\Theta(t)\), can be expressed as follows:

\[\frac{d\Theta(t)}{dt}=(k_{1}+k_{2})(1-\Theta(t))[\text{CO}_{2(g)}] \tag{7}\]

\[\Theta(t)=1-e^{-(k_{1}+k_{2})[\text{CO}_{2(g)}]t} \tag{8}\]

\[\frac{d[\text{CO}_{2,phy}]}{dt}=k_{1}(1-\Theta(t))[\text{CO}_{2(g)}] \tag{9}\]

\[\frac{d[\text{CO}_{phy}]}{dt}=k_{2}(1-\Theta(t))[\text{CO}_{2(g)}] \tag{10}\]

\[\frac{d[\text{O}_{chem}]}{dt}=k_{2}(1-\Theta(t))[\text{CO}_{2(g)}] \tag{11}\]

Upon substituting the value of \(\Theta(t)\) into equations 9-11, and subsequently solving the resulting differential equations while applying the appropriate boundary conditions (CO\({}_{2}\) saturation coverage = 1 ML, CO and O-atom saturation coverage = 0.25 ML each, and starting with a clean surface), we obtain:

\[[\text{CO}_{2,phy}(t)]=\frac{k_{1}}{(k_{1}+k_{2})}\left(1-e^{-(k_{1}+k_{2})[\text{CO}_{2(g)}]t}\right) \tag{12}\]

\[[\text{CO}_{phy}(t)]=0.25\,\frac{k_{2}}{(k_{1}+k_{2})}\left(1-e^{-(k_{1}+k_{2})[\text{CO}_{2(g)}]t}\right) \tag{13}\]

\[[\text{O}_{chem}(t)]=0.25\,\frac{k_{2}}{(k_{1}+k_{2})}\left(1-e^{-(k_{1}+k_{2})[\text{CO}_{2(g)}]t}\right) \tag{14}\]

The derived solutions yield the time-dependent coverage build-up of physisorbed CO\({}_{2,phy}\) and CO\({}_{phy}\) and of chemisorbed O\({}_{chem}\) on the Cu(110) surface. With knowledge of the rate constants \(k_{1}\) and \(k_{2}\), we have estimated the final coverages of CO\({}_{2,phy}\), CO\({}_{phy}\), and O\({}_{chem}\) on the surface.
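For illustration, the closed-form solutions (12)-(14) can be evaluated directly to see how the competition between trapping and dissociation plays out in time. The sketch below does exactly this; the values of \(k_{1}\), \(k_{2}\), and the incident flux are illustrative assumptions (chosen so that trapping dominates, \(k_{1}\gg k_{2}\)), not fitted parameters.

```python
import numpy as np

def coverages(t, k1, k2, flux):
    """Evaluate Eqs. (12)-(14): time-dependent coverages (in ML) of
    physisorbed CO2, physisorbed CO, and chemisorbed O on Cu(110)."""
    decay = np.exp(-(k1 + k2) * flux * t)
    co2_phy = (k1 / (k1 + k2)) * (1 - decay)         # Eq. (12)
    co_phy = 0.25 * (k2 / (k1 + k2)) * (1 - decay)   # Eq. (13)
    o_chem = 0.25 * (k2 / (k1 + k2)) * (1 - decay)   # Eq. (14)
    return co2_phy, co_phy, o_chem

# Illustrative parameters: trapping dominates dissociation (k1 >> k2);
# the flux is expressed in ML/s. None of these values are fitted.
t = np.linspace(0, 600, 4)  # seconds
co2, co, o = coverages(t, k1=0.5, k2=1e-3, flux=1e-2)
for ti, a, b, c in zip(t, co2, co, o):
    print(f"t = {ti:5.0f} s: CO2_phy = {a:.3f} ML, "
          f"CO_phy = {b:.2e} ML, O_chem = {c:.2e} ML")
```

With \(k_{1}\gg k_{2}\), the physisorbed CO\({}_{2}\) layer saturates long before any appreciable O coverage builds up, which illustrates why dissociation products can easily be missed at low surface temperatures.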
## 7 SI-7: Influence of defects on \(S_{0}\)

We investigated the influence of surface defects on the \(S_{0}\) of CO\({}_{2}\) dissociation on Cu(110) by comparing measurements on a clean, annealed surface with those on a clean, Ar ion-sputtered (not annealed) surface (see Figure 6). Here, a CO\({}_{2}\) beam with \(E_{\rm i}\) = 1.15 eV was used. The clean Cu(110) surface was sputtered for 10 minutes at \(T_{s}\) = 475 K, with a surface current of 0.5 \(\mu\)A. We estimate the dose of Ar ions incident on the surface to be 1.2 ML. Under these conditions, we were unable to observe any change in the \(S_{0}\) values beyond the experimental uncertainties. Further systematic studies with longer sputtering times will be needed to understand this point better.

Figure 6: Dissociative adsorption kinetics of CO\({}_{2}\) on both clean and Ar ion-sputtered Cu(110) surfaces. Under our measurement conditions, we were unable to observe a significant change in the \(S_{0}\) values.

## 8 SI-8: Discussion on possible contamination in our incident beams

In this study, we inferred the dissociation of CO\({}_{2}\) on Cu(110) by measuring the signal of dissociatively chemisorbed oxygen using Auger electron spectroscopy. However, our measurements could potentially be affected by the presence of oxygen-containing gas molecules such as CO, H\({}_{2}\)O, and O\({}_{2}\) in the molecular beam, leading to an accumulation of oxygen on the Cu(110) surface. The presence of any O\({}_{2}\) contamination was ruled out by the fact that dosing with a pure CO\({}_{2}\) beam did not result in any measurable O-atom coverage. Further, we monitored the signals of CO\({}_{2}\), H\({}_{2}\), CO, and O\({}_{2}\) using a mass spectrometer during dosing. Notably, we did not observe any discernible change in the partial pressures of gases other than CO\({}_{2}\) and H\({}_{2}\), which are the components of the mixture. These observations allow us to largely rule out the presence of any contamination within the sensitivity of our mass spectrometer. Furthermore, if there were any significant oxygen coverage resulting from CO contamination in our beam (through CO dissociation), we would expect to observe carbon peaks as well. However, no such peaks were detected in our measurements, thereby ruling out CO contamination. Another potential source of contamination is H\({}_{2}\)O, which could arise from the reverse water gas shift (RWGS) reaction occurring within the gas mixture. If such a reaction were to occur, both H\({}_{2}\)O and CO would be formed, and we would anticipate a measurable change in the CO signal in the mass spectrometer. However, no significant changes in the CO\({}_{2}\) to CO partial pressure ratio were observed during dosing for all mixtures (within 3%). Even if we assume that the H\({}_{2}\)O formed is below our detection sensitivity, we estimate it to be at most of the order of 3% of the CO\({}_{2}\) fraction. The \(E_{\rm i}\) range spanned by such dilute admixtures of H\({}_{2}\)O would be only about 0.05 eV. Such minute energy changes cannot account for the strong dependence of the initial sticking probability on incidence energy seen in our measurements. Based on this analysis, we rule out any role of H\({}_{2}\)O dissociation in the results presented.
2305.17197
Entailment as Robust Self-Learner
Entailment has been recognized as an important metric for evaluating natural language understanding (NLU) models, and recent studies have found that entailment pretraining benefits weakly supervised fine-tuning. In this work, we design a prompting strategy that formulates a number of different NLU tasks as contextual entailment. This approach improves the zero-shot adaptation of pretrained entailment models. Secondly, we notice that self-training entailment-based models with unlabeled data can significantly improve the adaptation performance on downstream tasks. To achieve more stable improvement, we propose the Simple Pseudo-Label Editing (SimPLE) algorithm for better pseudo-labeling quality in self-training. We also found that both pretrained entailment-based models and the self-trained models are robust against adversarial evaluation data. Experiments on binary and multi-class classification tasks show that SimPLE leads to more robust self-training results, indicating that the self-trained entailment models are more efficient and trustworthy than large language models on language understanding tasks.
Jiaxin Ge, Hongyin Luo, Yoon Kim, James Glass
2023-05-26T18:41:23Z
http://arxiv.org/abs/2305.17197v1
# Entailment as Robust Self-Learner ###### Abstract Entailment has been recognized as an important metric for evaluating natural language understanding (NLU) models, and recent studies have found that entailment pretraining benefits weakly supervised fine-tuning. In this work, we design a prompting strategy that formulates a number of different NLU tasks as contextual entailment. This approach improves the zero-shot adaptation of pretrained entailment models. Secondly, we notice that self-training entailment-based models with unlabeled data can significantly improve the adaptation performance on downstream tasks. To achieve more stable improvement, we propose the **Simple** Pseudo-Label **E**diting (SimPLE) algorithm for better pseudo-labeling quality in self-training. We also found that both pretrained entailment-based models and the self-trained models are robust against adversarial evaluation data. Experiments on binary and multi-class classification tasks show that SimPLE leads to more robust self-training results, indicating that the self-trained entailment models are more efficient and trustworthy than large language models on language understanding tasks. ## 1 Introduction Although achieving state-of-the-art performance in different natural language understanding (NLU) tasks Devlin et al. (2018); Liu et al. (2019); Yang et al. (2019); Clark et al. (2020); He et al. (2020); Joshi et al. (2020), large-scale pretrained language models still highly depend on human-annotated, task-specific training corpora for fine-tuning because the self-supervised pretraining objective does not incorporate explicit task-related knowledge. As a result, state-of-the-art language models are still challenged by the lack of adequate fine-tuning data and difficult evaluation examples crafted by adversarial attacks or model-in-loop adversarial data annotations Wang et al. (2021); Bartolo et al. (2020); Zang et al. (2019); Garg and Ramakrishnan (2020); Li et al. (2020). On the other hand, entailment is recognized as a minimal requirement for NLU Condoravdi et al. (2003). Recent studies have found that entailment learning improves sentence representation Reimers and Gurevych (2019); Gao et al. (2021). However, these models still need fine-tuning with human-annotated training data to handle downstream NLU tasks. The authors of Wang et al. (2021) found that entailment-based models are also few-shot learners that outperform recent efforts on few-shot NLU. For example, LM-BFF Gao et al. (2020) proves that entailment learning can significantly improve the data efficiency and adaptation ability of language models. In this work, we further explore the zero-shot and unsupervised adaptation abilities of entailment-based models without any human-labeled training corpora on downstream tasks. We first study the zero-shot and unsupervised adaptation abilities of the entailment-based language models. Inspired by recent progress on prompt tuning, we formulate different NLU tasks as contextual entailment Routley and Meyer (1973) by constructing task-specific suppositions. The language models are trained to predict the truth value of the constructed suppositions. In zero-shot adaptation experiments, we find this approach significantly outperforms naively concatenating different inputs and labels, proving that the supposition construction method mitigates the distribution gap among different NLU tasks. We further explore the potential of the unsupervised adaptation ability of entailment-based models. 
We use the pretrained entailment models to predict the pseudo-labels of unlabeled, task-specific language data. We find that the entailment-based models can be improved with self-training Blum and Mitchell (1998) with the automatically annotated pseudo-labels (He et al., 2019). While the self-training strategy has been proven effective on different tasks and modalities (Zou et al., 2019; Zoph et al., 2020; Meng et al., 2020; Xie et al., 2020), a major challenge for self-training is the unstable performance caused by the noisy pseudo-labels. A number of solutions have been proposed to mitigate this issue. The most popular methods are training data selection (Li and Zhou, 2005; Lang et al., 2022) and pseudo-label editing (Shin et al., 2020; Mandal et al., 2020). Recent work also found that simple Dropout (Srivastava et al., 2014) approaches improve contrastive learning (Gao et al., 2021) and speech recognition (Khurana et al., 2021; Dawalatabad et al., 2022). To combine the benefits of data selection and label editing methods, we propose SimPLE, a **sim**ple **p**seudo-**l**abel editing algorithm with simple text augmentation, uncertainty-based data filtering, and majority-based pseudo-labeling. Experiments with different backbone models on binary, multi-class, regular, and adversarial NLU tasks show that our approach makes the following contributions, * Supposition-based task formulation improves the zero-shot adaptation and robustness against adversarial evaluation data of entailment models across different NLU tasks. * SimPLE improves the pseudo-labeling accuracy on confident and uncertain training samples, leading to significant improvement over all self-training and pretrained baselines. * Self-trained, 350M-parameter entailment models without human-generated labels outperform supervised language models with 137B parameters, proving the data and computation efficiency of entailment self-training. ## 2 Related Work **Language modeling.** Task-agnostic, large-scale language models can solve a number of natural language understanding (NLU) tasks (Brown et al., 2020; Raffel et al., 2020; Lewis et al., 2019; Wei et al., 2022, 2022). On the other hand, pretraining with annotated training corpora of different natural language tasks also benefits the generalize ability and zero-shot adaptation performance (Sanh et al., 2021). Recent studies have found that textual entailment (Bowman et al., 2015; Williams et al., 2018) is a powerful pretraining task. Entailment models are applied for sentence representation learning (Reimers and Gurevych, 2019; Gao et al., 2021), relation extraction (Obamuyide and Vlachos, 2018; Yin et al., 2019), and fact-checking (Thorne and Vlachos, 2018). The authors of Wang et al. (2021) showed that entailment models can benefit the few-shot learning performance of pretrained language models on NLU tasks. **Robustness in Self-training.** While most self-training studies are under the computer vision context (Zoph et al., 2020; Zou et al., 2019), efforts also exist for self-training the latest neural language models, including back translation (He et al., 2019), text augmentation (Xie et al., 2020; Chen et al., 2020), question-answer synthesis (Bartolo et al., 2021; Luo et al., 2022), and co-training (Lang et al., 2022). However, self-training methods suffer from noisy pseudo-labels. 
In computer vision, a straightforward solution is obtaining confident pseudo-labels by augmenting input images (Shin et al., 2020; Mandal et al., 2020; Sohn et al., 2020), including shifting, rotating, or adding noise to pixels. However, data augmentation is not as straightforward for natural language if no additional model is used. Instead, some model-level methods can be applied. Zou et al. (2019) proposed regularizing over pseudo-label confidence to avoid overfitting to simple cases, Gao et al. (2021); Khurana et al. (2021) applied dropout to improve the quality of training corpora. Li and Zhou (2005); Lang et al. (2022) applied a graph-based confidence estimation method for removing training samples with uncertain pseudo labels. **Difference with previous work.** Without any additional language model for text augmentation, we propose a model-level, augmented pseudo-labeling method that improves self-training performance for entailment models. Our method avoids dropping training data and performs more stably than dropout-based methods. Different from previous work on weakly-supervised language understanding with entailment models (Wang et al., 2021), we do not use any human-generated labels. Our models contain 1/500 trainable parameters compared to the models used in Lang et al. (2022); Sanh et al. (2021). ## 3 Entailment Self-training **Pretraining.** Recent studies have found that entailment-based language models can efficiently adapt to different natural language understanding (NLU) tasks with a limited number of human labeled training samples (Wang et al., 2021; Luo and Glass, 2023). In this work, we find that entailment models can be self-improved without any human-generated labels by constructing suppositions (prompts) that describe the given tasks. Most NLU tasks can be formulated as predicting the truth value of the constructed suppositions that wrap inputs and label descriptions, as shown in Table 1. By training the entailment model using the MNLI corpus given with the constructed suppositions, the model can be directly adapted to other tasks with relatively high accuracy. We will show that without entailment pretraining, similar performance can only be achieved by 400 times bigger language models. The entailment-based models can be further fine-tuned on unlabeled texts via self-training. We apply different adaptation strategies for binary and multi-class classification tasks. **Binary classification.** Supposition-based entailment models predict True, Neutral, and False scores for each supposition, corresponding to entail, neutral, and contradictory labels of the MNLI corpus. For binary classification, we ignore the neutral score and calculate only True and False probabilities, and the True/False predicted can be linked to corresponding labels according to the supposition. For example, the SST2 supposition in Table 1 being true means that {x} is a positive movie review. The predicted True/False values are used as pseudo-labels for self-training, **Multi-class classification.** In binary classification, the model is presented with a single supposition and asked to decide whether it's true or not. In multi-class classification, the model is presented with a context sentence and multiple labels and is asked to choose the correct label. To predict the correct answer from multiple options, we propose an entailment score ranking method. First, for each sentence to be classified, we construct a supposition for each label. 
For example, in an emotion classification task, given the sentence S, we construct the following suppositions: "I am happy is entailed by S", "I am sad is entailed by S", and "I am shocked is entailed by S". We calculate the entailment probability of each supposition with the entailment model and predict the label associated with the most entailed supposition. We propose a max-confidence tuning method for self-training. We select the class with the highest entailment score and then record its predicted pseudo-label for further self-training, and ignore other classes. The model does not need to classify each class correctly but merely learns to predict the truth value of its most confident supposition. ## 4 Simple Pseudo-label Editing We propose the simple pseudo-label editing (Sim-PLE) method, a three-step pipeline for generating robust pseudo labels, including augmented pseudo-labeling with dropout, uncertain data filtering, and majority-based relabeling. We introduce the details of each step in this section. ### Simple Augmentation for Pseudo-labeling Because of languages' discrete and sequential nature, changing a token in a sentence might completely invert its meaning. As a result, unlike straightforward and effective image augmentation processes like FixMatch (Sohn et al., 2020), additional augmentation models are usually needed for text augmentation. Recent studies have found that instead of data-level augmentation, the Dropout mechanism leads to decent embedding-level augmentation. Gao et al. (2021) applied dropout for contrastive sentence representation learning, and Khurana et al. (2021) selected confident pseudo-labels by measuring the consistency of a model with the same input data and random dropouts. As the first step of generating augmented pseudo labels, we run \(N\) independent evaluations with random dropout (dropout rate \(=0.1\)) for each input training sample \(x_{i}\) and obtain a set of \(N\) noisy pseudo-labels. \[Y_{i}=\{y_{j}=M_{j}^{*}(x_{i})\mid j\in[0,N)\} \tag{1}\] where \(j\) stands for the \(j\)-th independent evaluation with a dropout model \(M^{*}\). Meanwhile, we store a set of sequence representations \(E_{i}=\{e_{0},e_{1},\dots,e_{N-1}\}\) of \(x_{i}\) collected in each feed-forward process. After finishing this step, we collect a set of data, pseudo-label, and embeddings. \[C=\{(x_{i},y_{i}^{j},e_{i}^{j})\mid i\in[0,M),j\in[0,N)\} \tag{2}\] \begin{table} \begin{tabular}{l l l} \hline \hline **Task** & **Inputs** & **Supposition** \\ \hline MNLI & \(\{\text{p},\text{h}\}\) & h is entailed by p. \\ RTE & \(\{\text{p},\text{h}\}\) & h is entailed by p. \\ QNL1 & \(\{\text{t},\text{q}\}\) & The answer to q is entailed by t. \\ QQP & \(\{q_{1},q_{2}\}\) & \(q_{1}\)’s answer is entailed by \(q_{2}\)’s answer. \\ SST2 & \(\{\text{x}\}\) & The movie is good is entailed by x. \\ \hline \hline \end{tabular} \end{table} Table 1: The suppositions constructed based on the definitions of different GLUE tasks (Wang et al., 2018). where \(M\) stands for the number of unlabeled training samples, each associated with \(N\) pseudo-labels and corresponding hidden states. In total, the augmented method outputs \(M*N\) label-embedding pairs for further processing. ### Uncertainty Estimation Following Li and Zhou (2005) and Lang et al. (2022), we estimate the confidence of all pseudo-labels using the SETRED algorithm. The motivation of this algorithm is that training samples with similar embeddings are likely to have the same pseudo-labels. 
On the other hand, if a training sample is located near samples with different pseudo-labels in the embedding space, its own pseudo-label is likely to be uncertain. Using the output data-embedding-label set shown in Equation 2, we can calculate the nearest neighbors of each training sample and estimate the labeling consistency. To estimate the uncertainty of \(y_{u}\), the pseudo-label of training sample \(x_{u}\), we calculate the Euclidean distances between \(x_{u}\) and all other \(M*N-1\) samples using the calculated text embeddings. We construct a set of the top k nearest neighbors of \(x_{u}\), namely \(N(u)\). With the nearest neighbor set, an uncertain score of \((x_{u},y_{u})\) can be calculated as follows, \[J_{u}=\sum_{v\in N(u)}\mathbb{I}(y_{u}\neq y_{j})\ /\ (1+\|e_{u}-e_{v}\|_{2}) \tag{3}\] where \(\mathbb{I}\) is a binary indicator function, whose value is 1 when \(y_{u}\neq y_{v}\) and 0 otherwise. \(\|e_{u}-e_{v}\|_{2}\) stands for the Euclidean distance between the embeddings of \(x_{u}\) and \(x_{v}\). As a result, \(J_{u}\) would have a higher value when more near neighbors are associated with different pseudo-labels. To estimate the uncertainty of \(y_{u}\), we compare \(J_{u}\) with a null hypothesis where all pseudo-labels in \(C\) except \(y_{u}\) are randomly shuffled. After the shuffling, the entire data-label mapping set becomes uncertain. The expectation and variance of \(J_{u}\) after shuffling is \[\mathbb{E}_{v}[J_{u}]=(1-\hat{P}_{y_{u}})\sum_{v\in N(u)}1/(1+\|e_{u}-e_{v}\|_ {2})\] \[\sigma(J_{u})^{2}=\hat{P}_{y_{u}}(1-\hat{P}_{y_{u}})\sum_{v\in N(u)}1/(1+\|e_{ u}-e_{v}\|_{2})^{2}\] The uncertainty can be estimated by verifying the significance of the difference between \(J_{u}\) and the null hypothesis. An uncertainty score can be calculated as \[s(u)=\frac{J_{u}-\mathbb{E}_{v}[J_{u}]}{\sigma(J_{u})} \tag{4}\] With this method, we calculate uncertainty scores for all \(M*N\) training samples in \(C\) for further processing. ### Filtering and Relabeling After finishing estimating the uncertainty of each training sample, we sort all training samples in \(C\) by their uncertainty scores and remove the 20% most uncertain training samples. The remaining samples are used for relabeling based on majority voting. For example, a training sample \(x_{i}\) has Figure 1: Visualization of the SimPLE method. The figure shows the embedding space of natural sentences, and different colors represent different predicted labels. Each data sample is labeled with multiple random dropouts, and we use the SETRED algorithm to detect the uncertain pseudo-labels. The final label is voted by confident inferences. \(N\) pseudo-labels \([y^{i}_{0},y^{i}_{1},\dots,y^{i}_{N-1}]\) after the augmented labeling step, and \(n\) labels are removed based on the uncertainty scores. The final pseudo-label of \(x_{i}\) is decided by the voting result of the \(N-n\) remaining labels. If all generated pseudo-labels of a training sample are removed or there is a tie in the voting, we re-run the labeling process without dropout to get the final pseudo-label. Following this approach, we keep all training samples and, meanwhile, obtain a more robust pseudo-label set. ## 5 Experiments **Benchmarks.** We conduct experiments on popular natural language understanding tasks in the GLUE Wang et al. (2018) benchmark, including RTE Dagan et al. (2005), QNLI Rajpurkar et al. (2016), QQP, SST-2 Socher et al. (2013), and CoLA Warstadt et al. (2019). 
We also assess the robustness of the proposed method against adversarial evaluation sets in the AdvGLUE corpus Wang et al. (2021), including Adv-QNLI, Adv-QQP, Adv-RTE, and Adv-SST2. The data in AdvGLUE is created by adding word-level and sentence-level perturbations to the GLUE data, as well as human-crafted examples. For Multi-Classification, we use Copa Wang et al. (2019) (which consists of questions paired with two answer choices), Emotion Classification Saravia et al. (2018), Amazon Review Keung et al. (2020) and Ag-News Xiang Zhang (2015). More details are shown in Appendix A. **Hyper-parameters.** We train 350M RoBERTa Devlin et al. (2018) and DeBERTa He et al. (2020) models for the language understanding tasks, without using larger language models like GPT-3 Brown et al. (2020) or T0 Sanh et al. (2021) that are used for generating pseudo-labels in Lang et al. (2022). We also use the same hyper-parameters across all tasks, attempting to avoid the problems mentioned in Perez et al. (2021). In the entailment pretraining on the MNLI dataset Williams et al. (2018), we optimize both RoBERTa and DeBERTa models with the AdamW optimizer Loshchilov and Hutter (2018). For all tasks and both models, we set \(\varepsilon=10^{-6}\). In the entailment pretraining, we set the weight decay weight to \(10^{-5}\), and the learning rate for both models is 3e-6. During the self-training step, the learning rate of both models on all binary classification tasks is 4e-6 and is 1e-6 on multi-classification tasks, and the weight decay is constantly \(10^{-2}\). We run the entailment pretraining for 2 epochs and the self-training for 6 epochs. In confidence-based labeling, we drop 1/8 data with the lowest confidence. **Self-training details.** For each binary classification task, we randomly select \(N=2000\) unlabeled data examples. For each multi-classification task, we randomly select \(N=50\) unlabeled data examples. To estimate the uncertainty of the pseudo-labels in SETRED and SimPLE algorithms, we use the hidden states of the 4th layer from the top of both RoBERTa and DeBERTa language models as the supposition embeddings and measure the uncertainty with 9 neighbors. In SimPLE, we run 7 inferences for each training sample with different dropouts. We train and evaluate the models for each task with 10 independent runs on 2 V100 32G GPUs. Each experiment takes less than an hour. **Assessment.** We evaluate the performance of our algorithm by comparing the average classification accuracy against baseline methods and the robustness. We describe the term _Robustness_ as follows: in multiple independent experiments, a robust method should achieve high maximum, minimum, and average accuracy against with different backbone model and training data, on different natural language understanding tasks. ### GLUE and AdvGLUE Tasks The experiment results are shown in Table 2. We compare the adaptation performance of entailment-based language models and the improvement of different self-training approaches. **Compare with supervised baselines.** We compare our entailment self-training methods with few-shot fine-tuning baselines. The few-shot baselines, including PET Schick and Schutze (2021), LM-BFF Gao et al. (2020), P-tuning Liu et al. (2021), PPT Gu et al. (2021), and UPT Wang et al. (2022), are based on 350M BERT or RoBERTa backbones. Our pretrained DeBERTa entailment model outperforms the best few-shot baseline (LM-BFF) by 4.5%, and the RoBERTa entailment model outperforms LM-BFF by 1.5%. 
With self-training, our SimPLE method further improves the model's performance by a large margin. The RoBERTa performance is boosted by nearly 5% and the average performance of DeBERTa is over 86%, outperforming the best few-shot supervised baselines by 6.9%. On the other hand, we compare our model with fully supervised RoBERTa/DeBERTa models and robust training methods, including R3F (Aghajanyan et al., 2020), child tuning (CT) (Xu et al., 2021), and match tuning (MT) (Tong et al., 2022) models, on the AdvGLUE benchmark. We found that the fully-supervised DeBERTa model is the best baseline on the AdvGLUE benchmark. However, our RoBERTa entailment model outperforms all robust training baselines with the same pre-trained backbone by over 10%. With SimPLE self-training, the DeBERTa entailment model achieves the best performance on AdvGLUE, outperforming the fully-supervised DeBERTa model by 2.1% as well as all other baselines. We found that our pretrained entailment models outperform EFL, the few-shot fine-tuned entailment model based on RoBERTa-large proposed by Wang et al. (2021). The self-trained models further outperform EFL with larger margins. This indicates the strong adaptation ability introduced by the supposition-based NLU strategy. **Compare with large language models.** We found that both zero-shot pretrained and semi-supervised self-trained entailment models outperform the few-shot large language models on QNLI, QQP, and RTE tasks, and achieve significantly higher average accuracy on GLUE. This suggests that our method is computation-efficient - the models use 1/400 parameters, without human-generated task-specific labels, but achieve better performance than expensive large-scale language models on NLU tasks. **Compare with self-training baselines.** By averaging 10 independent evaluations across GLUE and AdvGLUE benchmarks and backbone models, we found that Dropout and SETRED improve baseline self-training performance on a similar level. On average, SETRED outperforms Dropout by 0.5% on 4 experiment settings. On the GLUE benchmark, the SimPLE method improves the model's performance by 1.5 to 2% on average. The highest improvement boost is on the QNLI tasks, where the SimPLE self-training method outperforms the baseline self-training by 9% and 6% on RoBERTa and DeBERTa respectively. Although the average improvement is not very high, we will show that SimPLE is significantly more robust. The results show that augmenting the pseudo-labels without removing uncertain training samples benefits self-training, which aligns with our hypothesis. In general, the experiments on binary classification NLU tasks proved the data and computation efficiency of entailment self-training over different strong baseline models. Furthermore, the SimPLE algorithm we propose in this work achieves the best average performance, significantly outperforms all baselines on some of the tasks, and meanwhile preserves the robustness of entailment models against adversarial benchmarks. ### Multi-class NLU Tasks The experiment results on Copa, Emotion, Amazon Review, and Ag News are shown in Table 3. In multi-classification tasks, we present the comparative results of the pretrained entailment-based language models and the 4 self-training approaches compared in the previous section with binary NLU tasks, including standard self-training, dropout-based re-labeling, SETRED, and SimPLE. 
**The effect of dropout-based augmentation.** By merely using dropout, the augmented self-training outperforms the standard normal self-training baseline which keeps all the pseudo-labels in general. This further validates the previous finding that by adding dropout, the models adopt noises that benefit the inference, generate augmented pseudo-labels and mitigate the overfitting problem. **The effect of SETRED.** By merely using SETRED, the self-training does not see a consistent improvement in performance and even falls behind the pretrained and standard self-trained models that preserve all pseudo labels in some tasks (like Amazon-Review). This fact suggests that removing uncertain pseudo-labels can lead the model to overfit confident training samples, thus hurting the self-fine-tuning performance. **The effect of SimPLE.** Table 3 shows that the SimPLE algorithm constantly outperforms all pretrained and self-trained baselines on both backbone models across all multi-class benchmarks, which aligns with the result on the binary NLU tasks. This fact further validates our hypothesis that augmenting the pseudo-labels of uncertain training samples can improve the performance of self-training. **Compare with Large Language Models.** We notice that our self-trained methods can outperform several large language models. On Emotion and AG News tasks, the pretrained entailment model without self-training can achieve a significant improvement over the GPT-3-175b model, which is 500 times large than the entailment model. This indicates that the entailment-based model is a more efficient and trustworthy option for many natural language understanding tasks. ## 6 Analysis **Robustness.** Besides the mean accuracy of all experiments, we also visualize the results of all independent evaluations of different self-training strategies in Figure 2. We found that SimPLE constantly outperforms other self-training baselines on the regular GLUE benchmark by comparing Figure 2: The results of 10 independent experiments with self-trained RoBERTa and DeBERTa models on GLUE (*BERTa Reg.) and AdvGLUE (*BERTa Adv.) benchmarks. mean, maximum, and minimum accuracy. Although DeBERTa performs similarly under different self-training strategies on QQP in terms of average accuracy, there exists a significant gap between the minimal performance of baseline and SimPLE. This indicates that SimPLE is more robust and safer compared with the regular self-training algorithm. The only exception is the DeBERTa model on SST2 - the mean performance of SimPLE is better, but it has a lower minimal performance than the baseline self-training method. Most models overfit to the training corpora and achieve high accuracy on regular evaluation sets, but perform poorly on adversarial benchmarks Wang et al. (2021). As a result, fully supervised models achieve less than 60% accuracy on AdvGLUE. We also investigate if SimPLE hurts the model's robustness against adversarial evaluation data. We found that, except RoBERTa on AdvQQP, other settings show that the entailment-based models are still robust after SimPLE self-training. As we compared in Table 2, all these results significantly outperform fully-supervised baselines. **Pseudo-labeling Accuracy.** We show the pseudo-labeling accuracy of RoBERTa and DeBERTa-based entailment models with different strategies in Figure 3 with 10 independent experiments. The results indicate that the DeBERTa models predict more accurate pseudo-labels in general. 
On the other hand, the pseudo-label sets produced by SimPLE with both models are significantly less noisy than the standard and dropout-based labeling methods without removing any uncertain data samples. SETRED achieves the highest labeling accuracy because it drops uncertain samples. The comparison suggests that SimPLE achieves the highest performance because it achieves high pseudo-labeling accuracy on uncertain training samples. **Case study.** We visualize the hidden states, pseudo-labels, and confidence of the training samples in the QNLI tasks calculated by the pretrained DeBERTa entailment model with the SimPLE algorithm in Figure 4. The embedding space is calculated with t-SNE Van der Maaten and Hinton (2008) using 252*7=1764 embeddings. Half of them are plotted in the figure. Each training sample is evaluated with 7 different dropouts, and the uncertainty is estimated with 9 neighbors. In Figure 4: Visualization of the embeddings, pseudo-labels, and uncertainty of QNLI suppositions calculated by the pretrained DeBERTa entailment model. Each data sample has 7 embeddings calculated with different dropouts. Black stands for uncertain points and other colors stand for different training examples. Figure 3: Pseudo-labeling accuracy of entailment models with standard (ST), dropout, SETRED, and SimPLE strategies. SETRED achieves higher accuracy because uncertain data samples are dropped. \begin{table} \begin{tabular}{l c c c c c} \hline \hline \multirow{2}{*}{**Method**} & \multicolumn{5}{c}{**Multi-Classification**} \\ \cline{2-6} & **Copa** & **EM** & **AR** & **News** & **Avg** \\ \hline \multicolumn{6}{c}{DEBERTa\(-\)1arge (350M)} \\ Pretrain & 77.0 & 51.93 & 37.01 & 73.40 & 59.84 \\ BaseST & 78.75 & 51.24 & 38.80 & 73.10 & 60.47 \\ Dropout & 78.25 & 53.69 & 38.19 & 73.16 & 60.82 \\ SETRED & 78.0 & 52.42 & 37.61 & 73.33 & 60.34 \\ SimPLE & **79.75** & **54.58** & **39.05** & **73.57** & **61.74** \\ \hline \multicolumn{6}{c}{RoBERTa-large (350M)} \\ Pretrain & 76.0 & 49.21 & 33.31 & 63.18 & 55.43 \\ BaseST & 76.67 & 50.94 & 37.38 & 64.64 & 57.41 \\ Dropout & 78.67 & 50.99 & 42.87 & 61.05 & 58.40 \\ SETRED & 78.0 & 50.53 & 27.16 & 63.24 & 54.73 \\ SimPLE & **79.0** & **51.79** & **44.06** & **65.60** & **60.11** \\ \hline \multicolumn{6}{c}{_Large Language Models_} \\ Zero-shot & 70.0 & 42.77 & - & 43.9† & - \\ Few-shot & 77.0† & - & - & 61.0† & - \\ \hline Class Num & 2 & 6 & 5 & 4 & - \\ \hline \hline \end{tabular} \end{table} Table 3: Multi-class NLU results with 3 independent runs. The Copa model selects from two candidate sentences, which is different from the previous binary NLU tasks. \(\diamondsuit\): T5-11b, \(\dagger\): GPT-Neo-6b, \(\ddagger\): GPT-3-175b. The performance of large language models are cited from Zhao et al. (2021) and Wang et al. (2023). Figure 4, different embeddings of the same training sample are labeled with the same color, while the uncertain cases are marked in black. + and - stand for the truth value of the suppositions. As shown in the figure, most uncertain cases appear around the uncertain circle. We also highlight two training samples with uncertain representations. This phenomenon indicates that the SimPLE algorithm can drop most embeddings of a data sample and edit the voting results of the dropout-based pseudo-labeling method, improving the pseudo-labeling accuracy from 76.5% to 79.2% in this experiment. We also show that the original pseudo-label set is unbalanced, with 67.1% of all predicted labels being "False". 
Although we do not provide any prior knowledge about the label distribution of the task (unknown without human annotation), the SimPLE method mitigates the bias through the uncertain candidate removal process. Figure 4 shows that most uncertain pseudo-labels estimated by SimPLE are "False", thus the remaining pseudo-labels are more balanced. ## 7 Conclusion We show that entailment-based language models can be adapted to different NLU tasks without supervision and achieve robust performance against noisy pseudo-labels and adversarial texts. We design a supposition-based prompting strategy to improve the zero-shot adaptation performance of entailment-based models. To improve the stability of self-training, we propose the SimPLE algorithm for augmented pseudo-labeling. Experiments on binary, multi-class, regular, and adversarial NLU tasks show that the SimPLE self-training strategy significantly outperforms a number of strong baselines, including 400 and 500 times larger language models on both zero-shot and weakly supervised settings, proving the effectivenss of entailment self-training for efficient and trustworthy natural language understanding systems. ## Limitations Our method utilized pretrained entailed models and adapted them to other domains under zero-shot and self-training settings. There are two limitations that we would like to improve in future work. Firstly, we use human-designed suppositions for each task, which is less automatic than a direct, zero-shot adaptation of the models. Secondly, the self-training on some multi-class classification tasks is not as high as on binary NLU tasks, indicating the challenge of applying entailment models to multi-choice tasks. We would like to overcome this in the next step. ## Ethics Statement We propose a method that can significantly reduce the financial and environmental cost of language model learning. By reducing the need for data collection and human labeling, our method can effectively protect user and data privacy by avoiding leaking any information while building the training corpora. We found that a medium-sized language model can achieve similar performance as the state-of-the-art large-scale language models, suggesting that we can cost less financially and environmentally during model training and evaluation for comparable performance. However, since we reduced the need for human-labeling efforts, the deployment of the system might decrease the number of data annotation jobs.
2304.08375
A study on a Q-Learning algorithm application to a manufacturing assembly problem
The development of machine learning algorithms has been gathering relevance to address the increasing modelling complexity of manufacturing decision-making problems. Reinforcement learning is a methodology with great potential due to the reduced need for previous training data, i.e., the system learns along time with actual operation. This study focuses on the implementation of a reinforcement learning algorithm in an assembly problem of a given object, aiming to identify the effectiveness of the proposed approach in the optimisation of the assembly process time. A model-free Q-Learning algorithm is applied, considering the learning of a matrix of Q-values (Q-table) from the successive interactions with the environment to suggest an assembly sequence solution. This implementation explores three scenarios with increasing complexity so that the impact of the Q-Learning\textsc's parameters and rewards is assessed to improve the reinforcement learning agent performance. The optimisation approach achieved very promising results by learning the optimal assembly sequence 98.3% of the times.
Miguel Neves, Miguel Vieira, Pedro Neto
2023-04-17T15:38:34Z
http://arxiv.org/abs/2304.08375v1
# A study on a Q-Learning algorithm application to a manufacturing assembly problem 1 ###### Abstract The development of machine learning algorithms has been gathering relevance to address the increasing modelling complexity of manufacturing decision-making problems. Reinforcement learning is a methodology with great potential due to the reduced need for previous training data, i.e., the system learns along time with actual operation. This study focuses on the implementation of a reinforcement learning algorithm in an assembly problem of a given object, aiming to identify the effectiveness of the proposed approach in the optimisation of the assembly process time. A model-free Q-Learning algorithm is applied, considering the learning of a matrix of Q-values (Q-table) from the successive interactions with the environment to suggest an assembly sequence solution. This implementation explores three scenarios with increasing complexity so that the impact of the Q-Learning's parameters and rewards is assessed to improve the reinforcement learning agent performance. The optimisation approach achieved very promising results by learning the optimal assembly sequence 98.3% of the times. ###### Abstract In this paper, we propose a novel approach to the development of traditional optimisation methods for optimization problems. The proposed approach is based on the proposed approach to the optimization of the objective function, which is a new approach to the optimization of the objective function. The proposed approach is based on the proposed approach to the optimization of the objective function, which is a new approach to the optimization of the objective function. The proposed approach is based on the proposed approach to the optimization of the objective function, which is a new approach to the optimization of the objective function. The proposed approach is based on the proposed approach to the optimization of the objective function, which is a new approach to the optimization of the objective function. The proposed approach is based on the proposed approach to the optimization of the objective function, which is a new approach to the optimization of the objective function. The proposed approach is based on the proposed approach to the optimization of the objective function, which is a new approach to the optimization of the objective function. The proposed approach is based on the proposed approach to the optimization of the objective function, which is a new approach to the optimization of the objective function. The proposed approach is based on the proposed approach to the optimization of the objective function, which is a new approach to the optimization of the objective function. The proposed approach is based on the proposed approach to the optimization of the objective function, which is a new approach to the optimization of the objective function. The proposed approach is based on the proposed approach to the optimization of the objective function, which is a new approach to the optimization of the objective function. The proposed approach is based on the proposed approach to the optimization of the objective function, which is a new approach to the optimization of the objective function. The proposed approach is based on the proposed approach to the optimization of the objective function, which is a new approach to the optimization of the objective function, which is a new approach to the optimization of the objective function. 
## 1 Introduction

Reinforcement learning (RL) is a machine learning methodology in which an agent learns, through successive interactions with its environment, a policy, i.e. a solution that maximises the total rewards. The main advantage of RL compared to other machine learning approaches is the smaller amount of data required in the learning phase, namely the current states and expected rewards, in a continuous learning process fuelled by the interaction between the agent and the environment.
Given its characteristics, RL methods are used in complex problems where there appears to be no obvious or easily programmable solution, such as game playing, robotics, control problems or operational research, and in problems where there is not enough labelled data, such as anomaly detection. Since most programming tasks are tedious and require years of expertise, RL algorithms can be applied to replace them with an intuitive process comprehensible even by an unskilled user.

### Reinforcement learning applications

One of the most successful applications of RL in recent years was the development of the Go playing program known as AlphaGo. This program achieved a 99.8% winning rate against other Go programs and defeated the European Go champion by 5 games to 0 [3]. The program was trained by supervised learning from human expert moves and by reinforcement learning from self-play. This application was further improved (AlphaGo Lee) and was capable of defeating the 9 dan player Lee Sedol, winner of 18 international titles, by 4 games out of 5 [4]. Also in the field of game playing, RL was employed in a set of 49 Atari games through the development of the DQN algorithm, which was able to outperform the best RL methods at the time in 43 of the 49 games. Furthermore, DQN performed at a level comparable to a professional player, achieving more than 75% of the human score in 29 of the 49 games tested [5]. In recent years, research on the applicability of RL has also been increasing in the fields of decision-making and system control. By applying Q-Learning to a stock optimisation problem, results up to 25% better than those of traditional stock management algorithms were achieved [6]. Wang and Usher studied the implementation of the Q-Learning algorithm for job agents establishing routing decisions in a job shop environment [7], discussing the effects of the Q-Learning application along with guidelines for future applications and recommendations for factor settings. Also, for the dynamic job shop scheduling problem (DJSS), [8] proposed the usage of RL with a Q-factor algorithm to improve the scheduling method's performance while considering random job arrivals and machine breakdowns; in a simulated environment, this method achieved high performance. In automotive paint shops, colour changeovers between consecutive production orders are a source of costs due to the process of cleaning painting robots. In order to minimise these costs, production orders can be re-sequenced and grouped in identical colours as a batch; Leng et al. propose the usage of the Deep Q-Network algorithm to solve this Colour-batching Re-sequencing Problem [9]. Huang et al. proved, through a simulation study, the effectiveness of Q-Learning in a maintenance problem where random machine failures are highly disruptive [10]. The usage of reinforcement learning was also proposed to tackle the profit optimisation problem of a single-product production affected by deterioration failures, requiring both maintenance and repairs [11]. The performance of manufacturing work cells that utilise gantries to load and unload materials and parts is highly dependent on the gantry movements in real operation. Ou et al. formulated the gantry scheduling problem as an RL problem through the usage of Q-Learning and demonstrated, from simulation results, the capability of effectively reducing system production losses in real-time operations [12].
In a later article, five different reward functions were devised based on different assumptions about the system. It was shown that the policy derived from systematic analyses of production loss significantly outperformed the other policies, leading to the conclusion that the level of understanding of the system, and how this understanding is transmitted to the reward function, greatly impacts the learning model's success [13]. In the area of production manufacturing, [14] proposed the usage of RL to improve assembly efficiency with a dual-arm robot and achieved higher performance when compared with other methods. In recent years, personalised production has emerged due to the increasing customer demand for more personalised products. This type of production, when compared with traditional production, has more uncertainty and variability; such problems can be tackled by the usage of multi-agent systems and reinforcement learning in a smart manufacturing environment, an approach shown to be competitive in a dynamic environment [15]. The automation of shoemaking production lines is also extremely challenging due to the versatility of manual product customisation. To tackle this problem, a cyber-physical system artificial intelligence architecture was devised for the complete manufacturing of soft fabric shoe tongues, using Deep-Q reinforcement learning as a means of achieving better control over the manufacturing process and convolutional and long short-term memory artificial neural networks to enhance action speed [16]. In a collaborative assembly process context, [17] proposed an approach based on Interactive Reinforcement Learning (IRL) to reduce the programming effort required from an expert. The learning approach proceeds in two steps. The first consists of modelling the simple tasks that constitute the assembly process, using a task-based formalism. These modelled simple tasks are then used by the robotic system, which proposes to the user a set of possible actions at each step of the assembly process via a graphical user interface (GUI). After the user selects an action, the robot performs it, progressing the assembly process while learning the assembly order. The framework also allows different users to teach different assembly processes to the robot. This approach is based on Q-Learning and IRL and was successfully applied on a UR10 robot in an assembly process comprising tasks such as picking, holding, mounting and receiving objects. Human-robot interactions have become abundant due to the increase of autonomous robotic operators; the consequence of such interaction is the introduction of a source of uncertainty and unpredictability from the human operator. Olliff et al. present a methodology for the implementation of an RL-based intelligent agent which allows a change in the robotic operator's behavioural policy in response to performance variation in the human operators [18]. Lastly, there has been increasing research on the use of digital twins in smart manufacturing environments. Xia et al. developed a control methodology named Digital Engine, capable of acquiring process knowledge, scheduling manufacturing tasks, identifying optimal actions and demonstrating control robustness [19]. Hu et al. proposed a new graph convolution layer named the Petri-net convolution (PNC) layer; when utilising DQN with a PNC network, better solutions are obtained for dynamic scheduling problems [20].
### Main challenges and objectives

Even though many advances were made in recent years, some major challenges were noted by [17] when applying RL algorithms. The first challenge arises in the learning phase, where the agent must trade off between taking actions known to be effective (exploitation) and actions not yet explored (exploration), which is known as the exploration-exploitation dilemma [2]. Another common challenge is the "curse of dimensionality" [18]: to ensure global optimality, data must be collected throughout the entire state-space, which is often infeasible in high-dimensional state-action spaces. The restrictions of expensive hardware and components' wear, with the economic and logistical consequences of the agent's interactions with the physical world, are known as the "curse of real-world samples". To reduce real-world interactions, a model can be used as a simulation system and the knowledge transferred to a real scenario. However, creating accurate models is very challenging and sometimes even impracticable; small errors due to under-modelling can accumulate and make the simulation diverge from the real behaviour, which is known as the "curse of under-modelling and model uncertainty". Finally, the desired behaviour in RL is often specified through the reward function, which is frequently easier than defining the behaviour itself; in practice, however, for some problems it may be astonishingly difficult. This RL challenge is often known as the "curse of goal specification". In this RL application, our approach aims to study the exploration-exploitation dilemma and the curse of goal specification, in which the agent's properties (e.g. learning rate or reward signal) can influence the results obtained, here tackled by assessing the impact of the algorithm's learning parameters on the solution's outcome. In addition, this work intends to support work planning by demonstrating the application of an RL method to obtain a feasible and time-efficient assembly sequence of a product. Therefore, the problem application amounts to the optimisation of an assembly problem for a given object, considering a time-efficient solution. The remainder of the paper is structured as follows: in Sect. 2, the problem formulation and the modelling theory of the Q-Learning algorithm are outlined; in Sect. 3, the case study is presented together with the discussion of the implementation results for a set of assembly scenarios; and, finally, Sect. 4 presents the conclusions and future work.

## 2 Model formulation

### Problem description

Due to the market diversification of on-demand product attributes, current production systems are required to deal with assembly flexibility, in which one or multiple resources are in charge of all steps of product customisation. Task sequencing represents one of the major components directly related to the efficiency of the assembly process. However, the assembly of a product is constrained by the product design and quality considerations, which often yields many possible assembly sequence steps. The problem considers the representation of an assembly job of a complex product containing different parts and tools, decomposed into a number of tasks which can be assembled in different sequences by an assigned resource. Along with achieving a feasible assembly process, improving the time efficiency is nontrivial due to the complexity of the sequences' immediate/general time dependence on previously completed tasks.
In order to cope with these issues, the goal of this work is to assess the effectiveness of an RL agent algorithm in the optimisation of task sequences based on real-time system states.

### Mathematical approach

To provide a baseline comparison, the problem is formulated as a benchmark mathematical optimisation MILP model, given the following mathematical notation:

\begin{tabular}{l l}
**Sets:** & \\
\(i\), \(i^{\prime}\), \(i^{\prime\prime}\) & Tasks \\
\(k\), \(k^{\prime}\) & Sequence steps \\
**Subsets:** & \\
\(P_{ii^{\prime}}\) & General precedence tasks feasibility \\
\(Q_{ii^{\prime}}\) & Immediate forbidden sequence of tasks \\
**Parameters:** & \\
\(\tau_{ik}\) & Average processing time of task \(i\) at sequence step \(k\) \\
\(\Delta_{ii^{\prime}}\) & Processing time variation of task \(i^{\prime}\) given the previously completed task \(i\) \\
**Variables:** & \\
\(Y_{ik}\) & Binary variable assigning task \(i\) to sequence step \(k\) \\
\(C_{ik}\) & Completion time of task \(i\) at sequence step \(k\) \\
\(C_{max}\) & Makespan \\
\end{tabular}

The problem is formulated as an assignment problem of a set of tasks \(i\in I\) to a set of sequence steps \(k\in K\), based on the concept of a general precedence model, setting in Eq. 1 the objective function as the makespan minimisation. The variable \(C_{ik}\) defines the completion time of task \(i\) at each step \(k\), given the assignment of task \(i\) to sequence step \(k\) set by variable \(Y_{ik}\). Considering that the last step \(k=K^{last}\) comprises a feasible assembly combining all possible tasks, the total completion time of the optimal assembly sequence is minimised.

\[\text{Minimise}\;C_{max},\quad C_{max}\geq C_{ik}\quad\forall i,\;k=K^{last} \tag{1}\]

_s.t._

\[\sum_{i}Y_{ik}=1\quad\forall k \tag{2}\]

\[\sum_{k}Y_{ik}=1\quad\forall i \tag{3}\]

\[Y_{ik}+Y_{i^{\prime}k^{\prime}}\leq 1\quad\forall i,i^{\prime}\in P_{ii^{\prime}},\;i\neq i^{\prime},\;k<k^{\prime} \tag{4}\]

\[C_{ik}\geq\tau_{ik}Y_{ik}+\left(C_{i^{\prime}k-1}+\Delta_{i^{\prime}i}Y_{i^{\prime}k-1}\right)\big\rvert_{k>1}+\left(\sum_{k^{\prime}<k-1}\;\sum_{i^{\prime\prime}\notin Q_{i^{\prime\prime}i^{\prime}},\,i^{\prime\prime}\neq i}\Delta_{i^{\prime\prime}i^{\prime}}Y_{i^{\prime\prime}k^{\prime}}\right)\big\rvert_{k>2}\quad\forall i,\;i^{\prime}\notin Q_{i^{\prime}i},\;\forall k \tag{5}\]

\[C_{ik}\geq 0,\quad Y_{ik}\in\{0,1\} \tag{6}\]

Regarding the formulation, Eqs. 2-3 consider the allocation constraints that assign one and only one task to each step, and Eq. 4 guarantees the general precedence rule between two tasks (under subset \(P_{ii^{\prime}}\)). Eq. 5 defines the completion time of every task according to the production step, ensuring the timing between two consecutive stages as defined by the allowed assembly sequences given by \(Q_{i^{\prime}i}\). Besides the processing time of each task per stage, \(\tau_{ik}\), the model considers the variation of the time duration according to previously completed tasks, given by \(\Delta_{ii^{\prime}}\). Finally, Eq. 6 ensures the non-negativity and integrality of the variables.

### RL modelling theory

Given the anatomy of RL algorithms, they can be classified into three categories of methods: value-function methods, policy search methods and actor-critic methods. The value-function methods, also known as critic-only methods, are based on the idea of initially discovering the optimal value function, by fitting a value function or a Q-function, and then deriving the optimal policy from it.
On the other hand, the policy search methods, also known as actor-only methods, search directly in the policy space by summing the rewards of sample trajectories, which is only possible if the search space is restricted. In the particular case of policy search algorithms known as policy gradient methods, one step of gradient ascent is applied to the expected reward objective. Lastly, the actor-critic methods are a combination of both, where the critic's function is to monitor the agent's performance by fitting a value function or a Q-function to determine when the policy must be changed by the actor. Moreover, the models can also be divided into model-free algorithms, which do not use models of the environment and are explicitly trial-and-error learners, and model-based algorithms, which use the model for planning or policy improvement. RL algorithms are commonly formalised as Markov decision processes (MDPs), where the agent observes the current state and decides the next suitable action. The reason for this formalism is the increased difficulty of computing over all the states and actions taken from the initial state; using MDPs, the system only needs to keep track of the last state and action. However, it is important to understand that this Markov assumption leads to a loss of information, which in some situations might be relevant, since rewards may be infrequent and delayed. To define a Markov decision process, \(M=\{S,A,T,r\}\), it is required to define a state space \(S\), an action space \(A\), a transition operator \(T\) and a reward function \(r\). The state space is the set of valid states \(s\) the system can occupy, \(s\in S\). The action space is the set of possible actions \(a\) the agent can take, such that \(a\in A\). In Markov decision processes, the transition probabilities are conditioned not only on the previous state but also on the previous action, \(p(s_{t+1}\mid s_{t},a_{t})\). Since the policy is the mapping of states to actions, \(\pi_{\theta}(a_{t}\mid s_{t})\), a graphical representation of a Markov decision process can be observed in Fig. 2.

Figure 2: Markov decision process (MDP).

The learning agent must evaluate whether it is being successful in the task. An agent can discern good from bad events based on the reward signal, which is analogous to the way humans learn when experiencing pain or pleasure. This is the primary source of improvement of the policy: if the action selected in a certain state returns a low reward, the policy may be changed so that, when faced with the same exact state, it selects a different and more rewarding action. The agent's goal is to maximise the accumulated reward over time through the chosen actions; however, an action with a high immediate reward might not be the optimal choice, since it may lead to a lower accumulated reward in the future. To tackle this issue, there are two important concepts: the value function and the quality function, usually known as the Q-function.
Given a policy, the value function is defined as the total expected reward from a given state \(s_{t}\):

\[V^{\pi}(s_{t})=\sum_{t^{\prime}=t}^{T}E_{\pi_{\theta}}[r(s_{t^{\prime}},a_{t^{\prime}})\mid s_{t}] \tag{7}\]

The Q-function is, on the other hand, the total expected reward from taking the action \(a_{t}\) in the state \(s_{t}\):

\[Q^{\pi}(s_{t},a_{t})=\sum_{t^{\prime}=t}^{T}E_{\pi_{\theta}}[r(s_{t^{\prime}},a_{t^{\prime}})\mid s_{t},a_{t}] \tag{8}\]

Actions must therefore be selected based on value judgements, because the agent's goal is to maximise the accumulated reward over time. While rewards are given directly by the environment, values must be estimated repeatedly from the sequence of observations.

#### 2.3.1 Q-Learning

In order to implement the algorithm, it is necessary to understand the Q-Learning parameters used and how the Q-table is updated. The value iteration update is done at each step through the Bellman equation, which consists of a weighted average of the old Q-value and the new information obtained, where \(\alpha\) corresponds to the learning rate, \(\gamma\) to the discount factor, \(r_{t}\) to the received reward when moving from state \(s_{t}\) to \(s_{t+1}\), \(Q^{new}(s_{t},a_{t})\) to the new Q-value of the state \(s_{t}\) and action \(a_{t}\), \(Q(s_{t},a_{t})\) to the old Q-value of the state \(s_{t}\) and action \(a_{t}\), and \(\max\limits_{a}Q(s_{t+1},a)\) to the estimate of the optimal future Q-value:

\[Q^{new}(s_{t},a_{t})\gets Q(s_{t},a_{t})+\alpha\times[r_{t}+\gamma\times\max\limits_{a}Q(s_{t+1},a)-Q(s_{t},a_{t})] \tag{9}\]

The learning rate parameter takes values between 0 and 1 and influences the extent to which the new information changes the current state value, which means that a lower learning rate leads to a longer learning time. However, it is important to note that a higher learning rate may lead to suboptimal results or even divergence in non-deterministic scenarios. The discount factor determines the importance of future rewards: the lower its value, the less meaningful the future rewards are, and if the discount factor is 0, only the current reward is considered. The selection of the action is made using an epsilon-greedy search, i.e. the agent selects a random action with probability \(\varepsilon\) and otherwise, with probability \(1-\varepsilon\), selects the action greedily by choosing the action with the highest Q-value. The value of epsilon (\(\varepsilon\)) decays based on a decay rate known as the epsilon decay. The algorithm can be summarised as follows:

```
Q(s,a) initialised arbitrarily
For each episode do
    state s initialised
    For each step do
        action a chosen from state s using the policy derived from Q (e.g. epsilon-greedy)
        action a taken
        reward r and state s' observed
        Q(s,a) = Q(s,a) + alpha * [r + gamma * max_a' Q(s',a') - Q(s,a)]
        s = s'
    end for
end for
```
**Algorithm** Q-Learning

## 3 Experiments and results

### Case study

The object chosen for the assembly problem is an aeroplane toy from the Yale-CMU-Berkeley Object and Benchmark Dataset [19, 20] (Fig. 3), whose assembly process is studied and optimised through the implementation of an RL methodology. In the following, the object components and assembly structure are thoroughly analysed. As a preliminary evaluation, the problem solution is derived from the mathematical optimisation model.
Then, the assembly problem is formulated using Q-Learning and implemented in two different scenarios where the agent must learn a feasible assembly sequence, i.e. an assembly sequence that respects all the task precedences. The scenarios incorporate the distinct time durations of each task in the decision-making algorithm, to foster the learning of an optimised, time-efficient assembly sequence. The algorithm's learning parameters are individually analysed in order to improve the agent's performance.

### Assembly analysis

The aeroplane object comprises 9 structural parts and 2 types of fasteners, which are displayed in Table 1 and Table 2.

Figure 3: Aeroplane from the Yale-CMU-Berkeley Object and Benchmark Dataset.

\begin{table} \begin{tabular}{l l l} \hline \hline Part & Aeroplane part's description & Number of parts \\ \hline A & Front wheels & 2 \\ B & Upper wing & 1 \\ C & Lower wing & 1 \\ D & Rear wheels & 2 \\ E & Cockpit window & 1 \\ F & Propeller & 1 \\ G & Propeller's support (engine) & 1 \\ H & Lower body of the aeroplane (lower fuselage) & 1 \\ I & Upper body of the aeroplane (upper fuselage) & 1 \\ J & Rear body of the aeroplane (tail wing) & 1 \\ K & Front wheel's support & 1 \\ \hline \hline \end{tabular} \end{table} Table 1: Aeroplane's parts.

After subdividing the aeroplane into parts and fasteners, the assembly process was subdivided into a total of 8 tasks. In Table 3, each task is associated with the corresponding parts and fastener required; note that some parts can be used in more than one task. For the assembly to be complete, every task must be executed without repetitions, so the number of different assembly sequences is \(n!=8!=40320\). However, the feasible number of assembly sequences is lower than the previously calculated one, because certain tasks require other tasks to be previously completed. Such precedence dependencies are displayed in Table 4. When taking the task dependencies into account, the feasible number of assembly sequences is 3360, which corresponds to only 8.3% of all assembly sequences. In summary, the assembly process is subdivided into 8 different tasks, or actions, and the assembly is complete when all 8 tasks have been executed. The MDP's states can therefore be defined by the updated assembly status after each task, where the initial state corresponds to the situation where none of the actions has been executed and the final state to the situation where all actions have been completed and the aeroplane is assembled. Each state is represented by a binary number indicating which tasks have been executed: if task \(n\) has been executed, the \(n^{th}\) digit of the binary number is set to 1; otherwise it is set to 0 (Fig. 4). The initial state is then represented as 00000000 and the final state as 11111111, which in decimal notation correspond to 0 and 255, respectively; thus, the number of states is 256. Again due to the task precedences, only 100 of these states are actually reachable. Finally, for this case, the tasks' average times \(\tau_{ik}\) were estimated (Table 5), as well as the increase/decrease variations on the average times with respect to the tasks previously done, \(\Delta_{ii^{\prime}}\) (Table 6). Table 6 also shows, in grey shaded cells, the immediate forbidden sequences, which add further restrictions to the general precedences.
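As a sanity check on these counts, the 3360 feasible sequences (8.3% of 40320) can be reproduced by brute-force enumeration over the general precedences of Table 4 (the immediate forbidden sequences of Table 6 are not considered here). The following is a minimal sketch in Python, whereas the paper's own implementation is in Matlab:

```
from itertools import permutations

# General precedences from Table 4: task -> set of tasks that must be done first
PRECEDENCE = {2: {1}, 3: {1}, 4: {1}, 5: {1, 4}, 6: {1}}

def is_feasible(seq):
    """A sequence is feasible if every task appears after all its precedences."""
    done = set()
    for task in seq:
        if not PRECEDENCE.get(task, set()) <= done:
            return False
        done.add(task)
    return True

total = feasible = 0
for seq in permutations(range(1, 9)):
    total += 1
    feasible += is_feasible(seq)

print(feasible, total, f"{100 * feasible / total:.1f}%")  # 3360 40320 8.3%
```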
\begin{table} \begin{tabular}{l l} \hline Task & Precedence task \\ \hline 1 & None \\ 2 & 1 \\ 3 & 1 \\ 4 & 1 \\ 5 & 1 and 4 \\ 6 & 1 \\ 7 & None \\ 8 & None \\ \hline \end{tabular} \end{table} Table 4: Precedence task's feasibility, \(P_{ii^{\prime}}\).

Figure 4: MDP's states and actions scheme.

\begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline Task & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 \\ \hline Average time [t.u.] & 10 & 7 & 8 & 6 & 12 & 8 & 11 & 9 \\ \hline \hline \end{tabular} \end{table} Table 5: Task's average time, \(\tau_{ik}\).

\begin{table} \begin{tabular}{l c c c c c c c c} \hline Task & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 \\ \hline Task 1 done & & & & & & & 0 & 0 \\ Task 2 done & & & -1 & -1.5 & 0 & -1 & 0 & 1 \\ Task 3 done & & 0 & & 0 & 0 & 0 & 0 & 0 \\ Task 4 done & & -0.5 & 0 & & & 0 & 0 & 0 \\ Task 5 done & & -1 & -0.5 & & & -2 & 1 & 0 \\ Task 6 done & & 0 & 0 & 0 & 0 & & 0 & 0 \\ Task 7 done & 0 & 0 & 0 & 0 & 0 & 0 & & 0 \\ Task 8 done & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \\ \hline \end{tabular} \end{table} Table 6: Tasks' variation with respect to the average time given completed tasks, \(\Delta_{ii^{\prime}}\) [t.u.], and immediate forbidden sequences, \(Q_{ii^{\prime}}\) (grey shaded cells).

### Mathematical optimisation solution

To first establish a baseline solution for comparing the Q-Learning algorithm's performance, the mathematical optimisation model defined in Section 2.2 is implemented in GAMS (ver. 29.1.1), using the CPLEX (ver. 12.8.0.0) solver, run on an Intel(R) Core(TM) i7-7700HQ @2.80GHz with 16GB RAM. Given the problem description and data defined in Section 3.2, the optimal solution obtains the assembly task sequence \(1\to 8\to 4\to 7\to 5\to 2\to 6\to 3\) with a makespan of 65 time units (t.u.). The solution complies with the given feasibility constraints, with an optimality gap of 0, in under 1 second. The model statistics are presented in Table 7.

\begin{table} \begin{tabular}{c c c c c c} \hline \#Total variables & \#Binary variables & \#Equations & Optimality gap & CPU (s) & Solution \\ \hline 129 & 64 & 550 & 0.0 & 0.34 & 65.0 \\ \hline \end{tabular} \end{table} Table 7: Mathematical model statistics.

### Q-Learning algorithm - Scenario I: Learning an assembly sequence based on estimated task average times and variances

To better understand the results of the Q-Learning implementation, carried out in Matlab R2020a, it is important to introduce the concept of an experiment. An experiment comprises the algorithm's learning phase and the result obtained at the end. The learning phase of an experiment takes place over several episodes, dictated by the maximum number of episodes per experiment. In this specific case, an episode starts with no tasks done and ends when all tasks have been successfully completed or when the maximum number of steps has been reached. In all scenarios the algorithm considers that, whenever the Q-Learning agent selects an impossible action, the current state does not change, which means that the sequence is penalised by requiring more than 8 steps to complete an episode. At the end of the experiment, the agent selects actions based solely on the Q-values, which means that, after learning, the agent always selects the same assembly sequence (expressed as the learned assembly sequence or the experiment's result).
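To make these experiment mechanics concrete, below is a minimal Python sketch of the tabular Q-Learning loop described in Section 2.3.1 (the study's implementation was in Matlab). States are the binary task-completion numbers of Fig. 4, the precedences and average times follow Tables 4-5, and the reward uses the shifted, scaled form of Eq. 10 below; the \(\Delta_{ii^{\prime}}\) variations and the exact epsilon-decay schedule are simplified here, so this is an illustrative sketch rather than the paper's exact code:

```
import random

# Data from Tables 4-5 (scenario I); Delta variations omitted for brevity
N_TASKS = 8
AVG_TIME = [10, 7, 8, 6, 12, 8, 11, 9]                    # tau for tasks 1..8
PRECEDENCE = {1: {0}, 2: {0}, 3: {0}, 4: {0, 3}, 5: {0}}  # 0-indexed prerequisites

ALPHA, GAMMA = 1.0, 1.0          # learning rate and discount factor (Table 9)
EPSILON, EPS_DECAY = 0.9, 1e-4   # exploration rate and an assumed linear decay
R_M, R_S, R_P = 20, 20, -1e6     # reward multiplier, shift and penalty
MAX_EPISODES, MAX_STEPS = 3000, 8

# Q-table: 256 bitmask states (bit n set = task n+1 done) x 8 actions
Q = [[0.0] * N_TASKS for _ in range(2 ** N_TASKS)]

def feasible(state, task):
    """A task is possible if not yet done and all its prerequisites are done."""
    if state >> task & 1:
        return False
    return all(state >> p & 1 for p in PRECEDENCE.get(task, set()))

eps = EPSILON
for episode in range(MAX_EPISODES):
    state = 0                                    # episode starts with no tasks done
    for step in range(MAX_STEPS):
        if random.random() < eps:                # epsilon-greedy selection
            action = random.randrange(N_TASKS)
        else:
            action = max(range(N_TASKS), key=lambda a: Q[state][a])
        if feasible(state, action):
            reward = R_M * (-AVG_TIME[action] + R_S)
            next_state = state | 1 << action
        else:
            reward = R_P                         # impossible action: state unchanged
            next_state = state
        # Bellman update (Eq. 9)
        Q[state][action] += ALPHA * (reward + GAMMA * max(Q[next_state]) - Q[state][action])
        state = next_state
        if state == 2 ** N_TASKS - 1:            # all tasks completed
            break
    eps = max(0.0, eps - EPS_DECAY)

# The experiment's result: the purely greedy sequence after learning
state, sequence = 0, []
for _ in range(N_TASKS):
    action = max(range(N_TASKS), key=lambda a: Q[state][a])
    if not feasible(state, action):
        break                                    # a failed (unfeasible) experiment
    sequence.append(action + 1)
    state |= 1 << action
print("Learned assembly sequence:", sequence)
```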
In order to obtain statistically relevant solutions, each set of parameters and rewards is replicated in 120 experiments. With the objective of making the agent able to compare the efficiency of the feasible assembly sequences, the rewards are tied to the processing time of each task. They are defined through Eq. 10, where \(R(s,a)\) is the reward for taking the action \(a\) in the state \(s\), \(r_{m}\) is the reward multiplier, \(r_{s}\) the reward shift, \(r_{p}\) the reward penalty and \(T(s,a)\) the predefined time it takes to complete the action \(a\) in the state \(s\), calculated by adding the respective variations to the task's average time:

\[\left\{\begin{array}{cc}R(s,a)=r_{m}\times(-T(s,a)+r_{s})&if\ possible\ action\\ R(s,a)=r_{p}&if\ impossible\ action\end{array}\right. \tag{10}\]

Since the RL algorithm's goal is to maximise the accumulated reward, the value matrix \(T(s,a)\) must be subtracted in order to learn the most time-efficient assembly sequence (i.e. minimise the assembly sequence time). The \(r_{s}\) shifts each reward by its value and, as a result, shifts the accumulated reward by eight times its value. The \(r_{m}\), on the other hand, multiplies the shifted reward; consequently, the accumulated reward is also multiplied by \(r_{m}\). If \(r_{m}\) and \(r_{s}\) have the values of 1 and 0, respectively, the accumulated reward is equal, in absolute value, to the duration of the assembly sequence. The accumulated rewards of all feasible assembly sequences for these values of \(r_{m}\) and \(r_{s}\) are displayed in Fig. 5.

Figure 5: Distribution by number (A) and percentage (B) of feasible assembly sequences' accumulated rewards.

As can be observed in Fig. 5 (B), the most common accumulated reward, corresponding to 18.4% of all feasible assembly sequences, is -69.5, which correlates with the corresponding total assembly time. An example of such an assembly sequence is \(8\to 1\to 3\to 4\to 7\to 2\to 6\to 5\), where the accumulated reward is \(\Sigma R(s,a)=-9-10-8-6-11-6.5-7-12=-69.5\). It is also possible to observe in Fig. 5 (A) that there are 50 feasible assembly sequences with the maximum accumulated reward of -65 (the optimal accumulated reward, matching the mathematical optimisation model's solution). In an initial sensitivity analysis, the rewards (\(r_{s}\) and \(r_{p}\)) were tested individually (Fig. 6) with the parameters in Table 6 and with the values 0, 1 and -10000 for \(r_{s}\), \(r_{m}\) and \(r_{p}\), respectively. The comparison of the performances of the various sets of parameters and rewards considers three indicators: the mean accumulated reward, normalised for an \(r_{m}\) of 1 and an \(r_{s}\) of 0; the percentage of times the agent learned one of the 50 optimal assembly sequences; and the percentage of times the agent failed to learn a feasible assembly sequence in the 120 experiments. When the agent failed to learn a feasible assembly sequence, the number of experiments was increased so that the mean would reflect 120 correctly learned assembly sequences; the reason for this rule was the high penalty that an incorrectly learned assembly places on the mean, which would make a fair comparison between sets unfeasible. When observing Fig. 6 (A), it can be concluded that a negative \(r_{s}\) value increases the likelihood of incorrectly learning an impossible assembly
sequence, because the agent is not able to differentiate between the penalties and the accumulated rewards shifted to increasingly more negative values. Also, a positive \(r_{s}\) may lead to better results, as seen in the mean increase for the value of 20. The reward penalty, as seen in Fig. 6 (B), understandably influences the percentage of fails, since an action with a higher \(r_{p}\) is more likely to be identified as an incorrect action, i.e. the bigger the penalty, the lower the percentage of fails.

Figure 6: Impact of the reward shift (A) and reward penalty (B) on the agent's performance.

With the objective of understanding the impact of \(r_{m}\) and of the maximum number of episodes, they were individually changed while maintaining the parameters and rewards of the previous sensitivity analysis, except for \(r_{s}\) and \(r_{p}\), which took the new values of 20 and -1000000, respectively. In the experiments where the maximum number of episodes was altered, the selected \(r_{m}\) was 5. In the \(r_{m}\) sensitivity analysis displayed in Fig. 7 (A), there are no statistically significant improvements in the agent's performance; nevertheless, the value chosen for \(r_{m}\) in subsequent sets of parameters and rewards is 20, as it accomplished the best results. With respect to the maximum number of episodes, presented in Fig. 7 (B), it is possible to conclude that increasing the maximum number of episodes per experiment leads to an increase in performance (visible both in the mean and in the percentage of optimal accumulated rewards) up to a certain value, at which it seems to plateau.

Figure 7: Impact of the reward multiplier (A) and maximum number of episodes per experiment (B) on the agent's performance.

It is important to remember, though, that a higher value for the maximum number of episodes corresponds to a larger number of times the experiment has to be repeated for the agent to learn. It is therefore essential to keep this number as low as possible, since in a real scenario it corresponds to the number of assemblies required for the learning process. Thus, for these parameters, and especially for the epsilon decay of 0.0001, the optimal maximum number of episodes is 3000. The optimal maximum number of episodes depends on the epsilon decay value, since a lower epsilon decay leads to a slower increase in greedy selection by the agent and, as such, requires more episodes in the learning phase. In order to identify the relationship between these two parameters, the evolution of the episodic accumulated reward in one of the experiments (with a maximum of 5000 episodes) is analysed (Fig. 8).

Figure 8: Evolution of the episode reward over the episodes for a value of epsilon decay of 0.0001.

When analysing Fig. 8, it is possible to observe that, after the identified optimal maximum number of episodes, the episodic accumulated reward does not greatly increase. This could explain why increasing the maximum number of episodes further does not lead to a significant change in the results of Fig. 7. Similar graphs were analysed for different values of epsilon decay in order to identify the episode at which the episodic accumulated reward plateaus during an experiment, selecting that value as the optimal maximum number of episodes per experiment for the given epsilon decay (Fig. 9 and Table 8).

Figure 9: Evolution of the episode Q0 (blue line) and episode reward (orange line) over the episodes for various values of epsilon decay.

\begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline Epsilon decay & 0.005 & 0.002 & 0.001 & 0.0005 & 0.0002 & 0.0001 & 0.00005 & 0.00003 \\ Max episodes & 90 & 175 & 350 & 800 & 1800 & 3000 & 6500 & 12000 \\ \hline \hline \end{tabular} \end{table} Table 8: Maximum number of episodes selected for each epsilon decay value.

From the values available in Table 8, a linear regression was devised using a logarithmic scale on both axes. As shown in Fig. 10, the data approximately fits the function \(y=0.50528\times x^{-0.95747}\), where \(y\) is the maximum number of episodes and \(x\) the epsilon decay.

Figure 10: Graph of maximum number of episodes over epsilon decay.
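For a quick use of this fitted relation, the sketch below evaluates the regression for two of the epsilon-decay values of Table 8 (Python, as an illustration of the fitted power law rather than the paper's code):

```
def max_episodes(eps_decay):
    # Power law fitted in Fig. 10 via log-log linear regression over Table 8
    return 0.50528 * eps_decay ** -0.95747

print(round(max_episodes(0.0001)))   # ~3414; Table 8 selects 3000
print(round(max_episodes(0.00005)))  # ~6634; Table 8 selects 6500
```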
Additional experiments were then run with the parameters shown in Table 9, apart from the epsilon decay and the maximum number of episodes, which were based on Table 8. The results are displayed in Fig. 11.

\begin{table} \begin{tabular}{l l} \hline Parameter & Value \\ \hline Learning rate & 1 \\ Discount factor & 1 \\ Epsilon & 0.9 \\ Max steps per episode & 8 \\ Reward shift & 20 \\ Reward multiplier & 20 \\ Reward penalty & -1000000 \\ \hline \end{tabular} \end{table} Table 9: Parameters for the pair epsilon decay and maximum number of episodes experiment.

Figure 11: Impact of the pairs of epsilon decay and maximum number of episodes on the agent's performance.

As can be observed in Fig. 11, the increase in the maximum number of episodes, accompanied by the respective decrease in the epsilon decay, leads to better results, with an increase in the mean and in the percentage of optimal rewards. It is also possible to notice that this increase only starts at around 1800 to 3000 maximum episodes.

### Q-Learning algorithm - Scenario II: Learning an assembly sequence based on measured task average times and estimated variances

In the second scenario, the task time input data was measured over 10 repetitions (Table 10), so that the average processing times are now confirmed (an approximation is used for simplification). As in scenario I, the tasks' average times have variations with respect to the corresponding precedence tasks (Table 11). It is important to notice that the task variability complexity has increased relative to scenario I, so that there is a larger variety of accumulated rewards and a lower number of assembly sequences with the largest accumulated reward, i.e. optimal assembly sequences.

\begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline Task & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 \\ \hline Measurement 1 & 6.36 & 9.10 & 8.59 & 6.75 & 10.62 & 9.87 & 12.36 & 10.06 \\ Measurement 2 & 6.28 & 9.15 & 10.30 & 8.22 & 10.52 & 11.31 & 12.39 & 8.50 \\ Measurement 3 & 4.71 & 8.00 & 8.70 & 7.90 & 9.82 & 11.56 & 12.88 & 8.03 \\ Measurement 4 & 5.38 & 8.15 & 10.49 & 8.29 & 9.25 & 11.00 & 10.49 & 9.09 \\ Measurement 5 & 5.80 & 8.30 & 8.75 & 7.25 & 9.44 & 8.95 & 11.22 & 8.90 \\ Measurement 6 & 6.71 & 8.52 & 8.97 & 7.40 & 10.04 & 12.16 & 11.04 & 9.37 \\ Measurement 7 & 5.73 & 8.17 & 9.00 & 7.95 & 9.07 & 9.24 & 11.94 & 7.70 \\ Measurement 8 & 7.31 & 8.05 & 8.29 & 6.30 & 10.76 & 10.01 & 10.34 & 8.70 \\ Measurement 9 & 5.16 & 7.62 & 8.33 & 7.75 & 9.95 & 9.56 & 10.14 & 8.15 \\ Measurement 10 & 5.89 & 6.55 & 8.50 & 7.00 & 9.30 & 10.09 & 11.98 & 9.71 \\ \hline Mean [t.u.] & 5.93 & 8.16 & 8.99 & 7.48 & 9.88 & 10.38 & 11.48 & 8.82 \\ Average time [t.u.] & 6 & 8 & 9 & 7.5 & 10 & 10.5 & 11.5 & 9 \\ \hline \hline \end{tabular} \end{table} Table 10: Task's time measurements [time units, t.u.].

The variability was also increased by introducing the tool changeover time. Since there are two different types of fasteners, the fastening device's tool must be switched during the assembly process. Regarding the fastening device, two main assumptions are made: the assembly process starts without any tool placed, and the tool changeover takes three time units to perform.
In order to introduce the tool changeover, the number of states must be increased. The states, in addition to the binary number that defines the completion of each task, can have three possible indexes (0, 1 and 2): index 0 indicates that the fastening device has no tool placed (start of the assembly), index 1 indicates that the screwdriver is applied, and index 2 indicates that the nut driver is applied. The new total number of states is 511, since in the first state the fastening device has no tool and in any other state it can have either the first or the second tool. Due to task dependencies and tool selections, the number of possible states is 149. With the respective changes to the average times and time variations, the new distribution of accumulated rewards of the feasible assembly sequences can be observed in Fig. 12.

\begin{table} \begin{tabular}{l c c c c c c c c} \hline Task & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 \\ \hline Task 1 done & & & & & & & 0 & 0 \\ Task 2 done & & & -2 & -3 & -0.5 & -2 & 0 & 1.5 \\ Task 3 done & & 0 & & 0 & 0 & 0 & 0 & 0 \\ Task 4 done & & -1 & 0 & & & -1.5 & 0 & 0 \\ Task 5 done & & -2 & -1 & & & -3 & 2 & 0 \\ Task 6 done & & -1 & -0.5 & 1 & -0.5 & & 0 & 0 \\ Task 7 done & 0 & 0 & 0 & 0 & 0 & 0 & & 0 \\ Task 8 done & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \\ \hline \end{tabular} \end{table} Table 11: Tasks' variation with respect to the average time given completed tasks, \(\Delta_{ii^{\prime}}\) [t.u.], and immediate forbidden sequences, \(Q_{ii^{\prime}}\) (grey shaded cells).

Figure 12: Distribution by number (A) and percentage (B) of feasible assembly sequences' accumulated rewards.

The new distribution has, as previously stated, a larger variance of accumulated rewards, and the highest (optimal) accumulated reward is shared by only 2 assembly sequences (Fig. 12 (B)), namely \(7\to 1\to 8\to 2\to 4\to 5\to 6\to 3\) and \(7\to 8\to 1\to 2\to 4\to 5\to 6\to 3\), with the value of -64. In this new scenario, it may be easier to understand the impact of the set's parameters on the agent's performance. The reward shift was varied over the values (0, 3, 6, 7, 8, 9, 10, 12, 15) while using the values of Table 9, apart from the epsilon decay and the maximum number of episodes, which had the values of 0.00005 and 6500, respectively. The results of the multiple sets of 120 experiments are displayed in Fig. 13.

Figure 13: Reward shift's impact on the agent's performance.

The mean of all 37 possible values of accumulated reward is -73 (for an \(r_{s}\) of 0 and an \(r_{m}\) of 1); therefore, if subdivided evenly, each task would have a reward of -9.125, which we define as the mean task reward. A value of \(r_{s}\) equal to the mean task reward shifts the accumulated rewards to a position where they are evenly separated into positive and negative. When analysing Fig. 13, it is possible to identify that the best accumulated reward occurs for a reward shift of 8. It is also important to notice that the percentage of fails decreases with the increase of the reward shift and is approximately 0 for values greater than or equal to 9. Thus, the optimal reward shift may be related to the mean task reward, but it may be relevant to confirm this relationship with a different scenario.
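As a quick numeric check of the roles of \(r_{s}\) and \(r_{m}\) in Eq. 10, the sketch below accumulates the reward of one feasible sequence under different settings, using the scenario I average times and ignoring the \(\Delta\) variations (so the values are illustrative, not the paper's exact ones):

```
# R(s, a) = r_m * (-T(s, a) + r_s) for possible actions (Eq. 10),
# with T(s, a) reduced here to the Table 5 average times.
TASK_TIME = {1: 10, 2: 7, 3: 8, 4: 6, 5: 12, 6: 8, 7: 11, 8: 9}

def accumulated_reward(sequence, r_m=1, r_s=0):
    return sum(r_m * (-TASK_TIME[task] + r_s) for task in sequence)

seq = [1, 8, 4, 7, 5, 2, 6, 3]                  # the MILP-optimal task order
print(accumulated_reward(seq))                  # -71: total time without variations
print(accumulated_reward(seq, r_s=20))          # -71 + 8 * 20 = 89 (shifted 8x r_s)
print(accumulated_reward(seq, r_m=20, r_s=20))  # 20 * 89 = 1780 (then scaled by r_m)
```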
A value of the reward shift slightly lower than the mean task reward may improve the mean accumulated reward, since a larger number of the accumulated rewards are negative. However, it also increases the percentage of fails, which is highly prejudicial in a real scenario. For that reason, the optimal value for the reward shift is 9. With this new \(r_{s}\) value, the learning rate and the discount factor were individually analysed (Fig. 14). Since in both cases the confidence interval is very small, even small differences in the mean are significant.

Figure 14: Learning rate's and discount factor's impact on the agent's performance.

Both for the learning rate and for the discount factor, it can be concluded that the optimal value is 1; however, in the discount factor case, the difference in the mean and in the percentage of optimal results is more accentuated. Then, with the same set of parameters as before, the maximum steps per episode were individually analysed (Fig. 15). It is possible to conclude that an early increase in the maximum number of steps per episode leads to a significant increase both in the mean and in the percentage of optimal rewards. This shows that additional steps beyond the number of assembly tasks slightly contribute to improving the agent's learning process, although further increases in this value do not significantly alter the results. The value 15 was selected for this parameter.

Figure 15: Maximum steps per episode's impact on the agent's performance.

Lastly, with all the other parameters decided, the reward multiplier's impact was studied for various values (1, 5, 9, 11, 13, 15, 19, 25, 30), and the experiment results are visible in Fig. 16. It can be observed that there is an optimal value for \(r_{m}\): the increase in the value of the reward multiplier initially leads to a steady increase both in the mean and in the percentage of optimal rewards, and for the smaller values of the reward multiplier the percentage of fails is nonzero. Based on the graph, the optimal reward multiplier value is defined as 13.

Figure 16: Reward multiplier's impact on the agent's performance.

To confirm the choice of 6500 for the maximum number of episodes, the agent's performances for various values of the maximum number of episodes are plotted in Fig. 17. As expected, a reduction in the maximum number of episodes leads to a lower percentage of optimal rewards. Therefore, in order to guarantee the threshold of 95%, the value of the maximum number of episodes is maintained. After the experiments of the previous scenarios, it is possible to conclude that the best set of parameters is the one expressed in Table 12, with which the agent, in 120 experiments with 6500 episodes, was able to learn one of the 2 optimal assembly sequences 115 times (\(\approx 95.8\%\)) (Fig. 17), one of the assembly sequences with the second best accumulated reward 4 times (\(\approx 3.3\%\)) and one with the fifth best accumulated reward once (\(\approx 0.8\%\)), while never failing to learn a feasible assembly sequence.

## 4 Conclusion

This study assessed the application of a model-free Q-Learning algorithm to the assembly sequencing of a given object, analysing the impact of the learning parameters and of the reward signal on the agent's performance over scenarios of increasing complexity. With a suitable choice of parameters and rewards, the agent consistently learned feasible, time-efficient assembly sequences, reaching the optimal assembly sequence in up to 98.3% of the experiments. These results support the effectiveness of reinforcement learning in manufacturing decision-making problems of this kind.
## Q-Learning algorithm - Scenario III: Learning an assembly sequence based on measured task average times and estimated variances with restricted actions

As previously discussed, the agent is capable of selecting impossible actions, which are penalised. However, this greatly increases the exploration required to achieve a correct assembly sequence. Therefore, in this final scenario, impossible actions were restricted instead of being penalised, in order to identify the possibility of reducing the number of episodes. The parameters used in the experiments are the ones displayed in Table 12, apart from the maximum number of episodes, which took the values (400, 500, 600, 700, 750, 780, 1600, 3200), the epsilon decay, which had the corresponding values calculated from the regression equation in Fig. 10, and the reward penalty, which is now nonexistent. The results are shown in Fig. 18.

Figure 18: Evolution of the mean accumulated reward and the percentage of optimal rewards in regards to the maximum number of episodes.

It can be observed that by restricting impossible actions the maximum number of episodes can be reduced to 780 episodes, which corresponds to only 21.7% of all possible assembly sequences, while maintaining a percentage of optimal rewards of 98.3%.

## 4 Conclusions

In this work, the challenges in the application of a reinforcement learning algorithm to an assembly sequence problem are studied, considering the implementation of a model-free Q-Learning algorithm. By formulating the problem as an MDP, it is shown that Q-Learning finds an optimal state-action policy that maximises the accumulated reward over a succession of given steps. This allows us to verify the application of a scalable method to address the optimisation of the assembly sequence of an object as a sequential decision process, where the action in one state influences the transition to the subsequent state. Despite their recent application in the literature, RL methods show straightforward versatility in complex problems where uncertainty plays a significant role, such as on-line industrial environments, where until now mostly traditional exact and non-exact optimisation approaches have been considered. Improving the efficiency of the assembly time of complex products is often impractical due to the complexity of assessing all task combinations.
This approach has the advantage of achieving suitable optimisation results, where the optimal assembly sequence was learned 98.3% of the time. To guarantee this threshold (over 95%), the algorithm requires 780 assemblies (episodes) to correctly learn the best assembly sequence, which corresponds to approximately 21.7% of the number of feasible assembly sequences. It is acknowledged that increasingly complex assembly processes might reveal an impractical use of the Q-Learning algorithm due to the "curse of dimensionality". In future work, the acquisition of the tasks' durations on-line during real assemblies will be considered.

## 5 Conflict of interest

The authors declare that there is no conflict of interest.

## 6 Declaration of Competing Interest

The authors report no declarations of interest.

## 7 Funding

This research was partially supported by project PRODUTECH4S&C (46102) by UE/FEDER through the program COMPETE 2020 and the Portuguese Foundation for Science and Technology (FCT): COBOTIS (PTDC/EME-EME/32595/2017) and UIDB/00285/2020.
2306.06521
Universal Language Modelling agent
Large Language Models are designed to understand complex human language. Yet, understanding of animal language has long intrigued researchers striving to bridge the communication gap between humans and other species. This research paper introduces a novel approach that draws inspiration from the linguistic concepts found in the Quran, a revealed Holy Arabic scripture dating back 1400 years. By exploring the linguistic structure of the Quran, specifically the components of ism, fil, and harf, we aim to unlock the underlying intentions and meanings embedded within animal conversations using audio data. To unravel the intricate complexities of animal language, we employ word embedding techniques to analyze each distinct frequency component. This methodology enables the identification of potential correlations and the extraction of meaningful insights from the data. Furthermore, we leverage a bioacoustics model to generate audio, which serves as a valuable resource for training natural language processing (NLP) techniques. This paper aims to find the intention* behind animal language rather than translating each word.
Anees Aslam
2023-06-10T21:09:16Z
http://arxiv.org/abs/2306.06521v1
# Universal Language Modelling agent

###### Abstract

Large Language Models are designed to understand complex human language. Yet, the complexity of animal language has long intrigued researchers striving to bridge the communication gap between humans and other species. This research paper introduces a novel approach that draws inspiration from the linguistic concepts found in the Quran, a revealed Holy Arabic scripture dating back 1400 years. By exploring the linguistic structure of the Quran, specifically the components of ism, fil, and harf, we aim to unlock the underlying intentions and meanings embedded within animal conversations using audio data. To unravel the intricate complexities of animal language, we employ word embedding techniques to analyse each distinct frequency component. This methodology enables the identification of potential correlations and the extraction of meaningful insights from the data. Furthermore, we leverage a bioacoustics model to generate audio, which serves as a valuable resource for training natural language processing (NLP) techniques. This paper aims to find the intention* behind animal language rather than translating each word.

## 1 Introduction

Understanding animal language has been a longstanding pursuit for researchers seeking to unravel the mysteries of animal communication and bridge the gap between humans and other species. While progress has been made in deciphering certain aspects of animal vocalizations, a comprehensive understanding of their intentions and the intricacies of their language remains elusive. This paper proposes a novel approach that draws inspiration from the linguistic patterns found in the Quran, a holy Arabic scripture revealed over 1400 years ago. By exploring the linguistic concepts presented in the Quran and their potential resonance with animal conversation, we aim to shed light on the underlying structure and intentions of animal communication.

Traditional studies of animal communication have largely focused on deciphering vocalizations, gestures, and other forms of non-verbal communication. However, these approaches often fall short of capturing the richness and complexity of animal language. Inspired by the Quran's linguistic framework, we introduce the concept of mapping animal communication patterns onto three linguistic components: ism, fil, and harf. These components draw parallels to the Quranic linguistic categories and provide a structured framework for analysing and interpreting animal communication.

To unravel the intentions encoded within animal communication, we leverage techniques commonly used in natural language processing. Word embedding, a powerful method in NLP, is applied to each of the independent frequency components, enabling the extraction of semantic relationships and underlying patterns. By utilizing word embedding techniques, we aim to decipher the subtle nuances and intentions encoded within the animal language.

In addition to linguistic analysis, we incorporate a bioacoustics model to generate audio data that reflects the patterns and characteristics of animal communication. This audio data serves as valuable input for training an advanced natural language processing technique. Employing deep learning algorithms, the NLP technique is fine-tuned using the generated animal audio and enriched with human feedback, fostering an iterative learning process that refines the understanding and interpretation of animal intentions. However, this approach is not without its challenges.
Validating the mapping of animal communication patterns to linguistic components inspired by the Quran requires extensive analysis and validation. The availability and quality of animal audio data, which are crucial for training the NLP technique, pose significant hurdles in terms of data collection and labelling. Furthermore, ethical considerations surrounding the treatment and welfare of animals during the research process demand careful attention and adherence to responsible research practices.

This paper presents an initial exploration of our proposed approach, emphasizing its potential to enhance our understanding of animal language. While the use of large language models has proven successful in understanding human language, the application of these models to decipher animal communication remains relatively unexplored. By combining insights from linguistics, bioacoustics, and natural language processing techniques, our approach holds promise for unravelling the intentions and intricacies of animal language. Through further research and validation, the aim is to unlock the mysteries of animal communication and forge a deeper connection with the non-human species that share our planet.

_"While traditional large language approaches are designed to understand complex human languages, the result of thousands of years of civilization, understanding animal intentions takes us back in time to the bare-bone elements of language. The Quran's miraculous way of communicating rich content with an understandable pattern acts as a bridge."_

Figure 1.1: The basic interpretation of the ULM model.

## Chapter 2 Pattern Identification
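As a concrete starting point for pattern identification, the following is a minimal sketch of the frequency-component embedding idea described in the introduction: each call is reduced to a sequence of coarse frequency tokens, and a word-embedding model is trained on those sequences. The librosa/gensim pipeline, the file names, and all parameter values are illustrative assumptions, not details taken from the paper.

```python
import numpy as np
import librosa
from gensim.models import Word2Vec

def tokenize_call(path, n_tokens=64):
    """Reduce one recorded call to a sequence of discrete frequency tokens."""
    y, sr = librosa.load(path, sr=None)
    spec = np.abs(librosa.stft(y, n_fft=1024, hop_length=256))
    dominant_bin = spec.argmax(axis=0)                    # dominant bin per frame
    token_ids = dominant_bin * n_tokens // spec.shape[0]  # coarse-grain the bins
    return [f"f{t}" for t in token_ids]

# Hypothetical corpus of animal call recordings.
corpus = [tokenize_call(p) for p in ["call_01.wav", "call_02.wav", "call_03.wav"]]

# Embed frequency tokens by the contexts in which they occur.
model = Word2Vec(sentences=corpus, vector_size=32, window=5, min_count=1)

# Tokens with nearby embeddings are frequency components used in similar
# contexts - candidate carriers of a shared intention.
probe = corpus[0][0]
print(model.wv.most_similar(probe, topn=3))
```

The same token sequences could later be regrouped under the ism/fil/harf framework once a validated mapping from acoustic patterns to those components is available.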
2310.05916
Interpreting CLIP's Image Representation via Text-Based Decomposition
We investigate the CLIP image encoder by analyzing how individual model components affect the final representation. We decompose the image representation as a sum across individual image patches, model layers, and attention heads, and use CLIP's text representation to interpret the summands. Interpreting the attention heads, we characterize each head's role by automatically finding text representations that span its output space, which reveals property-specific roles for many heads (e.g. location or shape). Next, interpreting the image patches, we uncover an emergent spatial localization within CLIP. Finally, we use this understanding to remove spurious features from CLIP and to create a strong zero-shot image segmenter. Our results indicate that a scalable understanding of transformer models is attainable and can be used to repair and improve models.
Yossi Gandelsman, Alexei A. Efros, Jacob Steinhardt
2023-10-09T17:59:04Z
http://arxiv.org/abs/2310.05916v4
# Interpreting CLIP's Image Representation via Text-Based Decomposition

###### Abstract

We investigate the CLIP image encoder by analyzing how individual model components affect the final representation. We decompose the image representation as a sum across individual image patches, model layers, and attention heads, and use CLIP's text representation to interpret the summands. Interpreting the attention heads, we characterize each head's role by automatically finding text representations that span its output space, which reveals property-specific roles for many heads (e.g. location or shape). Next, interpreting the image patches, we uncover an emergent spatial localization within CLIP. Finally, we use this understanding to remove spurious features from CLIP and to create a strong zero-shot image segmenter. Our results indicate that a scalable understanding of transformer models is attainable and can be used to repair and improve models.1

Footnote 1: Project page and code: [https://yossigandelsman.github.io/clip_decomposition/](https://yossigandelsman.github.io/clip_decomposition/)

## 1 Introduction

Recently, Radford et al. (2021) introduced CLIP, a class of neural networks that produce image representations from natural language supervision. As language is more expressive than previously used supervision signals (e.g. object categories) and CLIP is trained on a lot more data, its representations have proved useful on downstream tasks including classification (Zhou et al., 2022), segmentation (Luddecke and Ecker, 2022), and generation (Rombach et al., 2022). However, we have a limited understanding of what information is actually encoded in these representations.

To better understand CLIP, we design methods to study its internal structure, focusing on CLIP-ViT (Dosovitskiy et al., 2021). Our methods leverage several aspects of CLIP-ViT's architecture: First, the architecture uses _residual_ connections, so the output is a sum of individual layer outputs. Moreover, it uses _attention_, so the output is also a sum across individual locations in the image. Finally, the representation lives in a joint vision-language space, so we can label its directions with text. We use these properties to decompose the representation into text-explainable directions that are attributed to specific attention heads and image locations.

As a preliminary step, we use the residual structure to investigate which layers have a significant direct effect on the output. We find that ablating all layers but the last 4 attention layers has only a small effect on CLIP's zero-shot classification accuracy (Section 3). We conclude that the CLIP image representation is primarily constructed by these late attention layers.

We next investigate the late attention layers in detail, leveraging the language space to uncover interpretable structure. We propose an algorithm, TextSpan, that finds a basis for each attention head where each basis vector is labeled by a text description. The resulting bases reveal specialized roles for each head: for example, one head's top 3 basis directions are _A semicircular arch_, _An isosceles triangle_ and _An oval_, suggesting that it specializes in shapes (Figure 1(a)).

We present two applications of these identified head roles. First, we can reduce spurious correlations by removing heads associated with the spurious cue; we apply this on the Waterbirds dataset (Sagawa et al., 2019) to improve worst-group accuracy from \(48\%\) to \(73\%\).
Second, the representations of heads with a property-specific role can be used to retrieve images according to that property; we use this to perform retrieval based on discovered senses of similarity, such as color, location, and texture.

We next exploit the spatial structure provided by attention layers. Each attention head's output is a weighted sum across image locations, allowing us to decompose the output across these locations. We use this to visualize how much each location writes along a given text direction (Figure 1(b)). This yields a zero-shot image segmenter that outperforms existing CLIP-based zero-shot methods.

Finally, we consider the spatial structure jointly with the text basis obtained from TextSpan. For each direction in the basis, the spatial decomposition highlights which image regions affect that basis direction. We visualize this in Figure 1(c), and find that it validates our text labels: for instance, the regions with triangles are the primary contributors to a direction that is labeled as _isosceles triangle_.

In summary, we interpret CLIP's image representation by decomposing it into text-interpretable elements that are attributed to individual attention heads and image locations. We discover property-specific heads and emergent localization, and use our discoveries to reduce spurious cues and improve zero-shot segmentation, showing that understanding can improve downstream performance.

## 2 Related Work

**Vision model explainability.** A widely used class of explainability methods produces heatmaps to highlight parts in the input image that are most significant to the model output (Selvaraju et al., 2017; Sundararajan et al., 2017; Binder et al., 2016; Voita et al., 2019; Lundberg and Lee, 2017; Chefer et al., 2021). While these heatmaps are useful for explaining the relevance of specific image regions to the output, they do not show how attributes that lack spatial localization (e.g. object size or shape) affect the output. To address this, a few methods interpret models by finding counterfactual edits using generative models (Goetschalckx et al., 2019; Lang et al., 2021; Aberman et al., 2021). All these methods aim to explain the output of the model without interpreting its intermediate computation.

**Intermediate representation interpretability.** An alternate way to explain vision models is to study their inner workings. One approach is to invert intermediate features into the input image space (Dosovitskiy and Brox, 2015; Mahendran and Vedaldi, 2014; Goh et al., 2021). Another approach is to interpret individual neurons (Bau et al., 2020, 2019; Dravid et al., 2023) and connections between neurons (Olah et al., 2020). These approaches interpret models by relying only on visual outputs. Few methods use text to interpret intermediate representations in vision models. Hernandez et al. (2022) provide text descriptions for image regions in which a neuron is active. Yuksekgonul et al. (2023) project model features into a bank of text-based concepts. Most closely related to us, a few methods analyze CLIP's intermediate representations via text--Goh et al. (2021) find multimodal neurons in CLIP that respond to different renditions of the same subject in images. Materzynska et al. (2022) study entanglement in CLIP between images of words and natural images. We differ from these works by using CLIP's intrinsic language-image space and by exploiting decompositions in CLIP's architecture for interpreting intermediate representations.
**Contrastive vision-language models.** Contrastive vision-and-language models like CLIP (Radford et al., 2021) and ALIGN (Jia et al., 2021) showed promising zero-shot transfer capabilities for downstream tasks, including OCR, geo-localization and classification (Wortsman, 2023). Moreover, CLIP representations are used for segmentation (Luddecke and Ecker, 2022), querying 3D scenes (Kerr et al., 2023), and text-based image generation (Ramesh et al., 2021; Rombach et al., 2022). We aim to interpret what information is encoded in these representations.

Figure 1: **CLIP-ViT image representation decomposition.** By decomposing CLIP's image representation as a sum across individual image patches, model layers, and attention heads, we can (a) characterize each head's role by automatically finding text-interpretable directions that span its output space, (b) highlight the image regions that contribute to the similarity score between image and text, and (c) present what regions contribute towards a found text direction at a specific head.

## 3 Decomposing CLIP Image Representation into Layers

We start by presenting the CLIP model (Radford et al., 2021) and describe how the image representation of CLIP-ViT is computed. We show that this representation can be decomposed into direct contributions of individual layers of the image encoder ViT architecture. Through this decomposition, we find that the last few attention layers have most of the direct effects on this representation.

### CLIP-ViT Preliminaries

**Contrastive pre-training.** CLIP is trained to produce visual representations from images \(I\) coupled with text descriptions \(t\). It uses two encoders--a transformer-based text encoder \(M_{\text{text}}\) and an image encoder \(M_{\text{image}}\). Both \(M_{\text{text}}\) and \(M_{\text{image}}\) map to a shared vision-and-language latent space, allowing us to measure similarity between images and text via cosine similarity:

\[\mathrm{sim}(I,t)=\langle M_{\text{image}}(I),M_{\text{text}}(t)\rangle/(||M_{\text{image}}(I)||_{2}||M_{\text{text}}(t)||_{2}) \tag{1}\]

Given a batch of images and corresponding text descriptions \(\{(I_{i},t_{i})\}_{i\in\{1,\dots,k\}}\), CLIP is trained to maximize the similarity of the image representation \(M_{\text{image}}(I_{i})\) to its corresponding text representation \(M_{\text{text}}(t_{i})\), while minimizing \(\mathrm{sim}(I_{i},t_{j})\) for every \(i\neq j\) in the batch.

**Zero-shot classification.** CLIP can be used for zero-shot image classification. To classify an image given a fixed set of classes, each name of a class (e.g. "Chihuahua") is mapped to a fixed template (e.g. "An image of a {class}") and encoded by the CLIP text encoder. The prediction for a given image is the class whose text description has the highest similarity to the image representation.

**CLIP image representation.** Several architectures have been proposed for computing CLIP's image representation. We focus on the variant that incorporates ViT (Dosovitskiy et al., 2021) as a backbone. Here a vision transformer (ViT) is applied to the input image \(I\in\mathbb{R}^{H\times W\times 3}\) to obtain a \(d\)-dimensional representation \(\mathsf{ViT}(I)\). The CLIP image representation \(M_{\text{image}}(I)\) is a linear projection of this output to a \(d^{\prime}\)-dimensional representation in the joint vision-and-language space2. Formally, denoting the projection matrix by \(P\in\mathbb{R}^{d^{\prime}\times d}\):

Footnote 2: Both here and in Eq. 3, we ignore a layer-normalization term to simplify derivations. We address layer-normalization in detail in Section A.1.
\[M_{\text{image}}(I)=P\mathsf{ViT}(I) \tag{2}\]

Both the parameters of the ViT and the projection matrix \(P\) are learned during training.

**ViT architecture.** ViT is a residual network built from \(L\) layers, each of which contains a multi-head self-attention (MSA) followed by an MLP block. The input \(I\) is first split into \(N\) non-overlapping image patches. The patches are projected linearly into \(N\) \(d\)-dimensional vectors, and positional embeddings are added to them to create the _image tokens_ \(\{z_{i}^{0}\}_{i\in\{1,\dots,N\}}\). An additional learned token \(z_{0}^{0}\in\mathbb{R}^{d}\), named the _class token_, is also included and later used as the output token. Formally, the matrix \(Z^{0}\in\mathbb{R}^{d\times(N+1)}\), with the tokens \(z_{0}^{0},z_{1}^{0},...,z_{N}^{0}\) as columns, constitutes the initial state of the residual stream. It is updated for \(L\) iterations via these two residual steps:

\[\hat{Z}^{l}=\mathsf{MSA}^{l}(Z^{l-1})+Z^{l-1},\quad Z^{l}=\mathsf{MLP}^{l}(\hat{Z}^{l})+\hat{Z}^{l}. \tag{3}\]

We denote the first column in the residual stream \(Z^{l}\), corresponding to the class token, by \([Z^{l}]_{cls}\). The output of the ViT is therefore \([Z^{L}]_{cls}\).

### Decomposition into layers

The residual structure of ViT allows us to express its output as a sum of the direct contributions of individual layers of the model. Recall that the image representation \(M_{\text{image}}(I)\) is a linear projection of the ViT output. By unrolling Eq. 3 across layers, the image representation can be written as:

\[M_{\text{image}}(I)=P\mathsf{ViT}(I)=P\left[Z^{0}\right]_{cls}+\underbrace{\sum_{l=1}^{L}P\left[\mathsf{MSA}^{l}(Z^{l-1})\right]_{cls}}_{\text{MSA terms}}+\underbrace{\sum_{l=1}^{L}P\left[\mathsf{MLP}^{l}(\hat{Z}^{l})\right]_{cls}}_{\text{MLP terms}} \tag{4}\]

Eq. 4 decomposes the image representation into _direct contributions_ of MLPs, MSAs, and the input class token, allowing us to analyze each term separately. We ignore here the _indirect effects_ of the output of one layer on another downstream layer. We use this decomposition (and further decompositions) to analyze CLIP's representations in the next sections.

**Evaluating the direct contribution of layers.** As a preliminary investigation, we study which of the components in Eq. 4 significantly affect the final image representation, and find that the large majority of the direct effects come from the _late attention layers_. To study the direct effect of a component (or set of components), we use mean-ablation (Nanda et al., 2023), which replaces the component with its mean value across a dataset of images. Specifically, we measure the drop in zero-shot accuracy on a classification task before and after ablation. Components with larger direct effects should result in larger accuracy drops. In our experiments, we compute means for each component over the ImageNet (IN) validation set and evaluate the drop in IN classification accuracy. We analyze the OpenCLIP ViT-H-14, L-14, and B-16 models (Ilharco et al., 2021), which were trained on LAION-2B (Schuhmann et al., 2022).

**MLPs have a negligible direct effect.** Table 1 presents the results of simultaneously mean-ablating all the MLPs. The MLPs do not have a significant direct effect on the image representation, as ablating all of them leads to only a small drop in accuracy (1%-3%).
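The mean-ablation protocol is simple to restate in code. Below is a minimal sketch, assuming the per-layer class-token contributions of Eq. 4 (already multiplied by \(P\)) have been cached for every image; the array names and shapes are illustrative assumptions.

```python
import numpy as np

def zero_shot_accuracy(reps, classifier, labels):
    """Cosine-similarity zero-shot classification over image representations.
    reps: (n_images, d'); classifier: (n_classes, d') text embeddings."""
    reps = reps / np.linalg.norm(reps, axis=1, keepdims=True)
    preds = (reps @ classifier.T).argmax(axis=1)
    return (preds == labels).mean()

def assemble(z0, msa_terms, mlp_terms, ablate_mlps=False, ablate_msa_upto=0):
    """Re-assemble Eq. 4 from cached per-layer contributions, optionally
    mean-ablating components: replacing a term by its dataset mean removes
    its per-image signal while keeping the representation's scale."""
    total = z0.copy()                          # (n_images, d') class-token term
    for l, term in enumerate(msa_terms):       # each term: (n_images, d')
        total += term.mean(0, keepdims=True) if l < ablate_msa_upto else term
    for term in mlp_terms:
        total += term.mean(0, keepdims=True) if ablate_mlps else term
    return total
```

Sweeping `ablate_msa_upto` over the layer index reproduces the shape of the experiment summarized in Figure 2, and `ablate_mlps=True` corresponds to the Table 1 ablation.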
**Only the last MSAs have a significant direct effect.** We next evaluate the direct effect of different MSA layers. To do so, we mean-ablate all MSA layers up to some layer \(l\). Figure 2 presents the results: removing all the early MSA layers (up to the last 4) does not change the accuracy significantly. Mean-ablating these final MSAs, on the other hand, reduces the performance drastically. In summary, the direct effect on the output is concentrated in the last 4 MSA layers. We therefore focus only on these layers in our subsequent analysis, ignoring the MLPs and the early MSA layers.

\begin{table} \begin{tabular}{l|c|c} \hline \hline & Base accuracy & + MLPs ablation \\ \hline ViT-B-16 & 70.22 & 67.04 \\ ViT-L-14 & 75.25 & 74.12 \\ ViT-H-14 & 77.95 & 76.30 \\ \hline \hline \end{tabular} \end{table} Table 1: **MLPs mean-ablation.** We simultaneously replace all the direct effects of the MLPs with their average taken across ImageNet's validation set. This results in only a small reduction in zero-shot classification performance.

Figure 2: **MSAs accumulated mean-ablation.** We replace all the direct effects of the MSAs up to a given layer with their average taken across the ImageNet validation set. Only the replacement of the last few layers causes a large decrease in accuracy.

### Fine-grained decomposition into heads and positions

We present a more fine-grained decomposition of the MSA blocks that will be used in the next two sections. We focus on the output at the class token, as that is the only term appearing in Eq. 4. Following Elhage et al. (2021), we write the MSA output as a sum over \(H\) independent attention heads and the \(N\) input tokens:

\[\left[\mathsf{MSA}^{l}(Z^{l-1})\right]_{cls}=\sum_{h=1}^{H}\sum_{i=0}^{N}x_{i}^{l,h},\quad x_{i}^{l,h}=\alpha_{i}^{l,h}W_{VO}^{l,h}z_{i}^{l-1} \tag{5}\]

where \(W_{VO}^{l,h}\in\mathbb{R}^{d\times d}\) are transition matrices and \(\alpha_{i}^{l,h}\in\mathbb{R}\) are the attention weights from the class token to the \(i\)-th token (\(\sum_{i=0}^{N}\alpha_{i}^{l,h}=1\)). Therefore, the MSA output can be decomposed into direct effects of individual heads and tokens. Plugging the MSA output definition in Eq. 5 into the MSA term in Eq. 4, we obtain:

\[\sum_{l=1}^{L}P\left[\mathsf{MSA}^{l}(Z^{l-1})\right]_{cls}=\sum_{l=1}^{L}\sum_{h=1}^{H}\sum_{i=0}^{N}c_{i,l,h},\quad c_{i,l,h}=Px_{i}^{l,h} \tag{6}\]

In other words, the total direct effect of all attention blocks is the result of contracting the tensor \(c\) across all of its dimensions. By contracting along only some dimensions, we can decompose effects in a variety of useful ways. For instance, we can contract along the spatial dimension \(i\) to get a contribution for each head: \(c_{\text{head}}^{l,h}=\sum_{i=0}^{N}c_{i,l,h}\). Alternatively, we can contract along layers and heads to get a contribution from each image token: \(c_{\text{token}}^{i}=\sum_{l=1}^{L}\sum_{h=1}^{H}c_{i,l,h}\). The quantities \(c_{i,l,h}\), \(c_{\text{head}}^{l,h}\) and \(c_{\text{token}}^{i}\) all live in the \(d^{\prime}\)-dimensional joint text-image representation space, which allows us to interpret them via text. For instance, given a text description \(t\), the quantity \(\langle M_{\text{text}}(t),c_{\text{head}}^{l,h}\rangle\) intuitively measures the similarity of that head's output to description \(t\).

## 4 Decomposition into Attention Heads

Motivated by the findings in Section 3.2, we turn to understanding the late MSA layers in CLIP.
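These contractions are compact enough to spell out directly. Below is a minimal NumPy sketch of the summands of Eq. 6 and the two marginalizations \(c_{\text{head}}\) and \(c_{\text{token}}\); all tensor shapes are illustrative assumptions.

```python
import numpy as np

# c[i, l, h, :] holds the direct contribution c_{i,l,h} of image token i at
# head (l, h), already projected into the joint space by P (Eq. 6).
# Illustrative shapes: N+1 tokens, L layers, H heads, d' joint dimensions.
N1, L, H, d_joint = 257, 4, 16, 768
c = np.random.randn(N1, L, H, d_joint)

c_head = c.sum(axis=0)          # (L, H, d'): what each head writes in total
c_token = c.sum(axis=(1, 2))    # (N+1, d'):  what each image token writes
total = c.sum(axis=(0, 1, 2))   # (d',):      total MSA direct effect

# Scoring a head's output against a text direction is an inner product.
t_emb = np.random.randn(d_joint)                     # stand-in for M_text(t)
head_scores = np.einsum('lhd,d->lh', c_head, t_emb)  # <M_text(t), c_head^{l,h}>
```

The remainder of this section analyzes \(c_{\text{head}}^{l,h}\); Section 5 turns to \(c_{\text{token}}^{i}\).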
We use the decomposition into individual attention heads (Section 3.3), and present an algorithm for labeling the latent directions of each head with text descriptions. Examples of this labeling are depicted in Table 2 and Figure 4, with the labeling for all \(64\) late attention heads given in Section A.4. Our labeling reveals that some heads exhibit specific semantic roles, e.g. "counting" or "location", in which many latent directions in the head track different aspects of that role. We show how to exploit these labeled roles both for property-specific image retrieval and for reducing spurious correlations.

### Text-interpretable decomposition into heads

We decompose an MSA's output into text-related directions in the joint representation space. We rely on two key properties: First, the output of each MSA block is a sum of contributions of individual attention heads, as demonstrated in Section 3.3. Second, these contributions lie in the joint text-image representation space and so can be associated with text.

\begin{table} \begin{tabular}{l|l|l} \hline **L21.H11** (“Geo-locations”) & **L23.H10** (“Counting”) & **L22.H8** (“Letters”) \\ \hline Photo captured in the Arizona desert & Image with six subjects & A photo with the letter V \\ Picture taken in Alberta, Canada & Image with four people & A photo with the letter F \\ Photo taken in Rio de Janeiro, Brazil & An image of the number 3 & A photo with the letter D \\ Picture taken in Cyprus & An image of the number 10 & A photo with the letter T \\ Photo taken in Seoul, South Korea & The number fifteen & A photo with the letter X \\ \hline \hline **L22.H11** (“Colors”) & **L22.H6** (“Animals”) & **L22.H3** (“Objects”) \\ \hline A charcoal gray color & Curious wildlife & An image of legs \\ Sepia-toned photograph & Majestic soaring birds & A jacket \\ Minimalist white backdrop & An image with dogs & A helmet \\ High-contrast black and white & Image with a dragonfly & A scarf \\ Image with a red color & An image with cats & A table \\ \hline \hline **L23.H12** (“Textures”) & **L22.H1** (“Shapes”) & **L22.H2** (“Locations”) \\ \hline Artwork with pointillism technique & A semicircular arch & Urban park greenery \\ Artwork with woven basket design & An isosceles triangle & Cozy home interior \\ Artwork featuring barcode arrangement & An oval & Urban subway station \\ Image with houndstooth patterns & Rectangular object & Energetic street scene \\ Image with quilted fabric patterns & A sphere & Tranquil boating on a lake \\ \hline \end{tabular} \end{table} Table 2: **Top-5 text descriptions extracted per head by our algorithm.** Top 5 components returned by TextSpan applied to ViT-L, for several selected heads. See Section A.4 for results on all the heads.

Recall from Section 3.3 that the MSA terms of the image representation (Eq. 4) can be written as a sum over heads, \(\sum_{l,h}c_{\text{head}}^{l,h}\). To interpret a head's contribution \(c_{\text{head}}^{l,h}\), we will find a set of text descriptions that explain most of the variation in the head's output (the head "principal components"). To formalize this, we take input images \(I_{1},...,I_{K}\) with associated head outputs \(c_{1},...,c_{K}\) (for simplicity, we fix the layer \(l\) and head \(h\) and omit it from the notation). As \(c_{1},...,c_{K}\) are vectors in the joint text-image space, each text input \(t\) defines a direction \(M_{\text{text}}(t)\) in that space.
Given a collection of text directions \(\mathcal{T}\), let \(\operatorname{Proj}_{\mathcal{T}}\) denote the projection onto the span of \(\{M_{\text{text}}(t)\mid t\in\mathcal{T}\}\). We define the _variance explained by \(\mathcal{T}\)_ as the variance under this projection:

\[V_{\text{explained}}(\mathcal{T})=\frac{1}{K}\sum_{k=1}^{K}\|\operatorname{Proj}_{\mathcal{T}}(c_{k}-c_{\text{avg}})\|_{2}^{2},\text{ where }c_{\text{avg}}=\frac{1}{K}\sum_{k=1}^{K}c_{k}. \tag{7}\]

We aim to find a set of \(m\) descriptions \(\mathcal{T}\) for each head that maximizes \(V_{\text{explained}}(\mathcal{T})\). Unlike regular PCA, there is no closed-form solution to this optimization problem, so we take a greedy approach.

**Greedy algorithm for descriptive set mining.** To approximately maximize the explained variance in Eq. 7, we start with a large pool of candidate descriptions \(\{t_{i}\}_{i=1}^{M}\) and greedily select from it to obtain the set \(\mathcal{T}\). Our algorithm, TextSpan, is presented in Alg. 1. It starts by forming the matrix \(C\in\mathbb{R}^{K\times d^{\prime}}\) of outputs for head \((l,h)\), as well as the matrix \(R\in\mathbb{R}^{M\times d^{\prime}}\) of representations for the candidate descriptions, projected onto the span of \(C\). In each round, TextSpan computes the dot product between each row of \(R\) and the head outputs \(C\), and finds the row with the highest variance \(R[j^{*}]\) (the first "principal component"). It then projects that component away from all rows and repeats the process to find the next components. The projection step ensures that each new component adds variance that is orthogonal to the earlier components.

TextSpan requires an initial set of descriptions \(\{t_{i}\}_{i=1}^{M}\) that is diverse enough to capture the output space of each head. We use a set of sentences that were generated by prompting ChatGPT-3.5 to produce general image descriptions. After obtaining an initial set, we manually prompt ChatGPT to generate more examples of specific patterns we found (e.g. texts that describe more colors). This results in 3498 sentences. In our experiments, we also consider two simpler baselines--one-word descriptions comprising the most common words in English, and a set of random \(d^{\prime}\)-dimensional vectors that do not correspond to text (see Section A.3 for the ChatGPT prompt and more details about the baselines).

Figure 3: ImageNet classification accuracy for the image representation projected to TextSpan bases. We evaluate our algorithm for different initial description pools, and with different output sizes.

### Experiments

We apply TextSpan to find a basis of text descriptions for all heads in the last 4 MSA layers. We first verify that this set captures most of the model's behavior and that text descriptions track image properties. We then show that some heads are responsible for capturing specific image properties (see Figure 1(a)). We use this finding for two applications--reducing known spurious cues in downstream classification and property-specific image retrieval.

**Experimental setting.** We apply TextSpan to all the heads in the last 4 layers of CLIP ViT-L, which are responsible for most of the direct effects on the image representation (see Section 3.2). We consider a variety of output sizes \(m\in\{10,20,30,40,50,60\}\). We first verify that the resulting text representations capture the important variation in the image representation, as measured by zero-shot accuracy on ImageNet.
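Since Alg. 1 itself is not reproduced here, the following is a minimal NumPy sketch of TextSpan as just described; the least-squares projection onto the span of \(C\) and the variable names are implementation assumptions.

```python
import numpy as np

def textspan(C, R, m):
    """Greedily pick m rows of R (candidate text embeddings, M x d') that
    explain the most variance in the head outputs C (K x d'). Returns the
    indices of the selected descriptions, in selection order."""
    C = C - C.mean(axis=0)                   # center the head outputs
    # Project candidates onto the span of C's rows (pinv(C) @ C is that projector).
    R = R @ (np.linalg.pinv(C) @ C)
    selected = []
    for _ in range(m):
        var = (C @ R.T).var(axis=0)          # variance of C along each candidate
        j = int(var.argmax())
        selected.append(j)
        d = R[j] / (np.linalg.norm(R[j]) + 1e-8)
        R = R - np.outer(R @ d, d)           # project the found component away
    return selected
```

Run per head, with \(C\) collected over a set of images and \(R\) the embedded candidate pool, this yields head-specific text bases like those reported in Table 2; in the experiments below, the selected set \(\mathcal{T}(l,h)\) serves as that head's basis.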
We simultaneously replace each head's direct contribution \(c_{\text{head}}^{l,h}\) with its projection onto the text representations, \(\operatorname{Proj}_{\mathcal{T}(l,h)}c_{\text{head}}^{l,h}\) (where \(\mathcal{T}(l,h)\) is the obtained text set for head \((l,h)\)). We also mean-ablate all other terms in the representation (MLPs and the rest of the MSA layers). The results are shown in Fig. 3: 60 descriptions per head suffice to reach 72.77% accuracy (compared to 75.25% base accuracy). Moreover, using our ChatGPT-generated descriptions as the candidate pool yields higher zero-shot accuracy than either common words or random directions, for all the different sizes \(m\). In summary, we can approximate CLIP's representation by projecting each head output, a 768-dimensional vector, to a (head-specific) 60-dimensional text-interpretable subspace.

**Some attention heads capture specific image properties.** We report selected head descriptions from TextSpan (\(m=60\)) in Table 2, with full results in Appendix A.4. For some heads, the top descriptions center around a single image property like texture (L23H12), shape (L22H1), object count (L23H10), and color (L22H11). This suggests that these heads capture _specific image properties_. We qualitatively verify that the text tracks these image properties by retrieving the images with the largest similarity \(\langle M_{\text{text}}(t_{i}),c_{\text{head}}^{l,h}\rangle\) for the top extracted text descriptions \(t_{i}\). The results in Fig. 4 and 10 show that the returned images indeed match the text.

Figure 4: **Top-4 images for the top head description found by TextSpan.** We retrieve images with the highest similarity score between \(c_{\text{head}}^{l,h}\) and the top text representation found by TextSpan. They correspond to the provided text descriptions. See Figure 10 in the appendix for randomly selected heads.

**Reducing known spurious cues.** We can use our knowledge of head-specific roles to manually remove spurious correlations. For instance, if location is being used as a spurious feature, we can ablate heads that specialize in geolocation to hopefully reduce reliance on the incorrect feature. We validate this idea on the Waterbirds dataset (Sagawa et al., 2019), which combines waterbird and landbird photographs from the CUB dataset (Welinder et al., 2010) with image backgrounds (water/land background) from the Places dataset (Zhou et al., 2016). Here image background is a spurious cue, and models tend to misclassify waterbirds on land backgrounds (and vice versa). To reduce spurious cues, we manually annotated the role of each head using the text descriptions from TextSpan, mean-ablated the direct contributions of all "geolocation" and "image-location" heads, and then evaluated the zero-shot accuracy on Waterbirds, computing the worst accuracy across subgroups as in Sagawa et al. (2019). As a baseline, we also ablated 10 random heads and reported the top accuracy out of 5 trials. As shown in Table 3, the worst-group accuracy increases by a large margin--by 25.2% for ViT-L. This exemplifies that the head roles we found with TextSpan help us design representations with fewer spurious cues, without any additional training.

**Property-based image retrieval.** Since some heads specialize in image properties, we can use their representations to obtain a property-specific similarity metric.
To illustrate this, for a given head \((l,h)\), we compute the inner product \(\langle c_{\text{head}}^{l,h}(I),c_{\text{head}}^{l,h}(I^{\prime})\rangle\) between a base image \(I\) and all other images in the dataset, retrieving the images with the highest similarity. Figure 5 shows the resulting nearest neighbors for heads that capture different properties. The retrieved images are different for each head and match the head-specific properties. In the left example, if we use a head that captures color for retrieval, the nearest neighbors are images with black-and-white objects. If we use a head that counts objects, the nearest neighbors are images with two objects.

## 5 Decomposition into Image Tokens

Decomposing the image representation across heads enabled us to answer _what_ each head contributes to the output representation. We can alternately decompose the representation across image tokens to tell us _which image regions_ contribute to the output for a given text direction \(M_{\text{text}}(t)\). We find that these regions match the image parts that \(t\) describes, thereby yielding a zero-shot semantic image segmenter. We compare this segmenter to existing CLIP-based zero-shot methods and find that it is state-of-the-art. Finally, we decompose each head's direct contributions into per-head image tokens and use this to obtain fine-grained visualizations of the information flow from input images to output semantic representations.

**Decomposing MSA outputs into image tokens.** Applying the decomposition from Section 3.3, if we group the terms \(c_{i,l,h}\) by position \(i\) instead of head \((l,h)\), we obtain the identity \(M_{\text{image}}(I)=\sum_{i=0}^{N}c_{\text{token}}^{i}(I)\), where \(c_{\text{token}}^{i}(I)\) is the sum of the output at location \(i\) across all heads \((l,h)\). We empirically find that the contribution of the class token \(c_{\text{token}}^{0}\) has a negligible direct effect on zero-shot accuracy (see mean-ablation in A.2). Therefore, we focus on the \(N\) image tokens.

We use the decomposition into image tokens to generate a heatmap that measures how much the output from each image position contributes to writing in a given text direction. Given a text description \(t\), we obtain this heatmap by computing the score \(\langle c_{\text{token}}^{i}(I),M_{\text{text}}(t)\rangle\) for each position \(i\).

**Quantitative segmentation results.** We follow a standard protocol for evaluating heatmap-based explainability methods (Chefer et al., 2021). We first compute image heatmaps given descriptions of the image class (e.g. "An image of a \(\{\text{class}\}\)")3. We then binarize them (by applying a threshold) to obtain a foreground/background segmentation. We compare the segmentation quality to zero-shot segmentations produced by other explainability methods in the same manner.

Footnote 3: To normalize out bias terms, we subtract from the heatmap an averaged heatmap computed across all class descriptions in ImageNet.

Figure 5: **Top-4 nearest neighbors per head and image.** We retrieve the most similar images to an input image by computing the similarity of the direct contributions of individual heads. As some heads capture specific aspects of the image (e.g. colors/objects), retrieval according to this metric results in images that are most similar regarding these aspects. See additional results in the appendix (Fig. 11).
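The heatmap computation just described is a one-liner once the per-token contributions are cached. A minimal sketch follows, with array shapes, the patch grid, and the threshold as illustrative assumptions (the bias-removal step mirrors the footnote above).

```python
import numpy as np

# Cached per-token contributions for one image (N = 16x16 patches, d' = 768),
# and an embedded class description; both are illustrative stand-ins.
N_side, d_joint = 16, 768
c_token = np.random.randn(N_side * N_side, d_joint)
t_emb = np.random.randn(d_joint)

# One score per image token: how much that location writes along M_text(t).
heatmap = (c_token @ t_emb).reshape(N_side, N_side)

# Zero-shot foreground/background segmentation: subtract an averaged heatmap
# over all class descriptions (bias removal), then apply a threshold.
bias = np.zeros_like(heatmap)        # stand-in for the averaged class heatmap
mask = (heatmap - bias) > np.median(heatmap)
```

Upsampling `heatmap` from the patch grid to pixel resolution (e.g. bilinearly) gives the segmentation masks evaluated next.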
We evaluate the methods on ImageNet-segmentation (Guillaumin et al., 2014), which contains a subset of 4,276 images from the ImageNet validation set with annotated segmentations. Table 4 displays the results: our decomposition is more accurate than existing methods across all metrics. See Chefer et al. (2021) for details about the compared methods and metrics; additional qualitative comparisons are in Section A.5.

\begin{table} \begin{tabular}{l|c|c|c} \hline \hline & Pixel Acc. \(\uparrow\) & mIoU \(\uparrow\) & mAP \(\uparrow\) \\ \hline LRP (Binder et al., 2016) & 52.81 & 33.57 & 54.37 \\ partial-LRP (Voita et al., 2019) & 61.49 & 40.71 & 72.29 \\ rollout (Abnar \& Zuidema, 2020) & 60.63 & 40.64 & 74.47 \\ raw attention & 65.67 & 43.83 & 76.05 \\ GradCAM (Selvaraju et al., 2017) & 70.27 & 44.50 & 70.30 \\ Chefer et al. (2021) & 69.21 & 47.47 & 78.29 \\ Ours & **75.21** & **54.50** & **81.61** \\ \hline \hline \end{tabular} \end{table} Table 4: **Segmentation performance on ImageNet-segmentation.** The image tokens decomposition results in significantly more accurate zero-shot segmentation than previous methods.

**Joint decomposition into per-head image tokens.** Finally, we can jointly decompose the output of CLIP across both heads and locations. We use this decomposition to visualize what regions affect each of the basis directions found by TextSpan. Recall that \(c_{i,l,h}\) from Eq. 6 is the direct contribution of token \(i\) at head \((l,h)\) to the representation. For each image token \(i\), we take the inner products between \(c_{i,l,h}\) and a basis direction \(M_{\text{text}}(t)\) and obtain a _per-head_ similarity heatmap. This visualizes the flow of information from input images to the text-labeled basis directions. In Figure 6, we compute heatmaps for the two TextSpan basis elements that have the largest and smallest (most negative) coefficients when producing each head's output. The highlighted regions match the text description for that basis direction--for instance, L22H13 is a geolocation head, its highest-activating direction for the top image is "Photo taken in Paris, France", and the image tokens that contribute to this direction are those matching the Eiffel Tower.

Figure 6: **Joint decomposition examples.** For each head \((l,h)\), the left heatmap (green border) corresponds to the description that is most similar to \(c_{\text{head}}^{l,h}\) among the TextSpan output set. The right heatmap (red border) corresponds to the least similar text in this set (for \(m=60\)). See Figure 8 for more results.

## 6 Limitations and Discussion

We studied CLIP's image representation by analyzing how individual model components affect it. Our findings allowed us to reduce spurious cues in downstream classification and improve zero-shot segmentation. We present two limitations of our investigation and conclude with future directions.

**Indirect effects.** We analyzed only the direct effects of model components on the representation. Studying indirect effects (e.g. information flow from early layers to deeper ones) can provide additional insights into the internal structure of CLIP and unlock more downstream applications.

**Not all attention heads have clear roles.** The outputs of TextSpan show that not every head captures a single image property (see results in Section A.4). We consider three possible explanations for this: First, some heads may not correspond to coherent properties. Second, the initial pool of candidate descriptions may simply lack text for the property a head captures.
Third, some heads may collaborate and have a coherent role only when their outputs are addressed together. Uncovering the roles of more complex structures in CLIP can improve the performance of the described applications.

**Future work.** We believe that a similar analysis for other CLIP architectures (e.g. ResNet) can shed light on the differences between the output representations of different networks. Moreover, our insights may help to design better CLIP image encoder architectures and feature extractors for downstream tasks. We plan to explore these directions in future work.

**Acknowledgements.** We would like to thank Jean-Stanislas Denain for the insightful discussions and comments. We thank Jixahai Feng, Fred Zhang, and Erik Jones for helpful feedback on the manuscript. YG is supported by the Google Fellowship. AE is supported in part by DoD, including DARPA's MCS and ONR MURI, as well as funding from SAP. JS is supported by the NSF Awards No. 1804794 & 2031899.
2308.07393
Using Text Injection to Improve Recognition of Personal Identifiers in Speech
Accurate recognition of specific categories, such as persons' names, dates or other identifiers is critical in many Automatic Speech Recognition (ASR) applications. As these categories represent personal information, ethical use of this data including collection, transcription, training and evaluation demands special care. One way of ensuring the security and privacy of individuals is to redact or eliminate Personally Identifiable Information (PII) from collection altogether. However, this results in ASR models that tend to have lower recognition accuracy of these categories. We use text injection to improve the recognition of PII categories by including fake textual substitutes of PII categories in the training data. We demonstrate substantial improvement to Recall of Names and Dates in medical notes while improving overall WER. For alphanumeric and digit sequences we show improvements to Character Error Rate and Sentence Accuracy.
Yochai Blau, Rohan Agrawal, Lior Madmony, Gary Wang, Andrew Rosenberg, Zhehuai Chen, Zorik Gekhman, Genady Beryozkin, Parisa Haghani, Bhuvana Ramabhadran
2023-08-14T18:26:27Z
http://arxiv.org/abs/2308.07393v1
# Using Text Injection to Improve Recognition of Personal Identifiers in Speech

###### Abstract

Accurate recognition of specific categories, such as persons' names, dates or other identifiers is critical in many Automatic Speech Recognition (ASR) applications. As these categories represent personal information, ethical use of this data including collection, transcription, training and evaluation demands special care. One way of ensuring the security and privacy of individuals is to redact or eliminate Personally Identifiable Information (PII) from collection altogether. However, this results in ASR models that tend to have lower recognition accuracy of these categories. We use text injection to improve the recognition of PII categories by including fake textual substitutes of PII categories in the training data. We demonstrate substantial improvement to Recall of Names and Dates in medical notes while improving overall WER. For alphanumeric and digit sequences we show improvements to Character Error Rate and Sentence Accuracy.

Yochai Blau, Rohan Agrawal, Lior Madmony, Gary Wang, Andrew Rosenberg, Zhehuai Chen, Zorik Gekhman, Genady Beryozkin, Parisa Haghani, Bhuvana Ramabhadran Google {yochaib,rohanag,liomad,wgary,rosenberg,zhehuai,zorik,genady,parisah,bhuv}@google.com

**Index Terms**: conformers, E2E models, Medical ASR, Text-Injection Training, protecting PHI, de-identification, PII

## 1 Introduction

While a lot of speech is publicly broadcast or shared online to a vast audience, speech recognition applications frequently interact with more private communications such as dictation, call-center conversations or conversing with a digital assistant. Automatic speech recognition (ASR) systems are trained using transcribed speech, and operate best when their training data matches the context in which they are used. This raises the question of how to train ASR to work well on private communications, while being sensitive to the complexities of collecting and transcribing training material from private contexts. We engage with this question in the context of healthcare. Accurate transcription of medical speech is key to a growing set of applications, including dictation of clinical notes and voice assistance. However, ethical collection and transcription of training data for medical ASR requires care. Medical data is exceptionally sensitive and respecting patients' privacy in any collection effort is essential. De-identification, a process which includes the redaction of PII tokens, is a common practice to protect users' private data. In the process, the audio segment is replaced by silence and the text is either removed or replaced by a special markup tag. De-identification is used extensively for healthcare data and is often required by regulations such as the US HIPAA Act [1]. In contexts such as call-center applications, speech utterances are shorter and may be in response to prompts like "Please confirm your date of birth" or "Please say your user id". In this case, de-identification results in completely eliminating the utterance. In both domains, using such de-identified datasets for training ASR models causes recognition of these classes of PII terms to operate at a higher error rate than the surrounding speech.
Prior work has focused on replacing PII in speech with arbitrary synthesized speech from the same category [2] (for example, named entities, number sequences) or ensuring that these PII tokens/sequences are adequately covered by a language model, as done in [3]. Text injection has recently emerged as an effective way to leverage text-only data for ASR training in a single model [4, 5, 6, 7] without an LM. Techniques such as the one described in [4] have shown promise for zero-transcribed-speech adaptation. In [8], Wang & Kastner et al. demonstrated that text-injection can effectively address domain transfer on diverse SpeechStew corpora [9] with little to no in-domain speech. Motivated by this, we propose to generate a text dataset that contains fake terms instead of the redacted PII and use text-injection during model training. This eliminates the need for any sensitive transcribed training data and allows effective use of de-identified medical datasets. This paper addresses three specific data challenges when training a model for private contexts with the use of text-injection in ASR training.

* Domain adaptation: Transcribed speech from private contexts is much harder to come by than speech from public contexts. We show that text-injection along with a small amount of transcribed speech can effectively perform domain adaptation to Medical ASR, resulting in an 11.5-point reduction in WER and improvements of 1% to Names and 11% to Dates (Section 5.1).
* Redaction: Available in-domain data has PII redacted from the transcript, with the corresponding speech replaced by silence. To mitigate this, we train on generated, spoken text similar to the PII, which allows for substantial improvement to sensitive term recognition without using any individual's protected data; compared to training on redacted speech data, Names and Dates recall is improved by 8% and 13% (Section 5.1).
* Elimination: De-identification may eliminate utterances that contain sensitive terms. For example, short utterances containing alphanumeric and digit sequences are eliminated. We show that text-injection can substantially improve recognition of these classes of potentially sensitive information, with a full-sequence Sentence Accuracy improvement of 3.2% (and CER improvement of 1.4%) (Section 5.2).

## 2 Related Work

The complexity of recognizing personal identifiers in medical speech has recently been studied in [2]. This work shows that PII recognition accuracy declines when training an end-to-end model on de-identified data. The authors propose to generate artificial audio/text pairs with synthetic identifiers, and show that training on such data restores most of the performance degradation. Artificial audio/text pairs are generated in a two-step process: first, PII in the text is replaced with random data. Then, corresponding audio is generated or spliced with matching fragments. Generating or manipulating the audio is more demanding than replacing identifiers in the text, and also requires accurate word-level timestamps. In our work we take this concept one step further, by improving PII recognition with text replacements alone, and _without_ any audio generation or manipulation. Other work has studied medical domain ASR [3, 10, 11, 12], yet these have not reported findings on PII recognition accuracy and the effect of training on de-identified data. Regarding text-injection in ASR training, this work builds on the architecture described in [4]. This is described in detail in Section 3.
However, there is a set of related approaches and architectures that similarly use separate speech and text encoders that feed into a shared encoder [5, 6, 7]. Additionally, a similar thread of work that trains on speech and text data operates by first converting the text data to speech via TTS [13, 14, 15, 16, 17]. These approaches, in effect, use speech as an intermediate representation to perform text injection.

## 3 Text-Injection and ASR Details

The text-injection model used in this work includes a speech encoder, a text encoder with a learned duration model, a shared encoder, a decoder and an alignment decoder, following [4]. The speech encoder consists of 4 causal conformer layers. The shared encoder consists of 3 causal conformer layers and 10 non-causal conformer layers, each with a model dimension of 512. The text encoder contains 2 conformer layers and 4 lightweight convolutional upsampling layers [18]. HAT decoders [19] with \(v^{2}\) embeddings [20] are used in both the decoder and the alignment decoder, with the distinction that the former produces word-piece text outputs while the latter uses phonemes as model units to obtain speech-text alignments. The overall model contains 165M parameters, with an additional 58M in the text encoder which is only used during training. Text-injection training involves speech-text and text-only training paths in a curriculum fashion. Initially, speech-text training is used to minimize a consistency loss, a HAT decoder loss, and an alignment decoder loss for the duration model. The consistency loss ensures that we learn corresponding mappings from text to speech embeddings. The duration model is used to up-sample the text embeddings before they are fed to the shared encoder. After beginning training with the speech-text loss, we enable the text-only loss to be able to perform text-injection training. The text-only training step involves minimizing an aligned masked-language-model loss. Further details of the text injection architecture and training can be found in [4]. In all experiments, we train a text-injection model on supervised-only data for 10k steps first so that the speech and text encoders generate consistent features. We continue training with target domain text or targeted terms as described in the following sections. While training with text alone is feasible after the initial supervised training, we find that including supervised data during training helps to stabilize the model behavior.
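To make the two-path curriculum concrete, the following is a deliberately tiny, runnable PyTorch sketch of the idea. It is a stand-in under stated assumptions, not the paper's implementation: GRUs and a transposed convolution replace the conformer encoders and learned duration model, a frame-level cross-entropy replaces the HAT and aligned masked-LM losses, and a fixed upsampling rate replaces the learned alignment.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

UP = 4  # fixed text-to-frame upsampling rate (a learned duration model in the real system)

class TinyTextInjectionModel(nn.Module):
    def __init__(self, vocab=64, dim=32):
        super().__init__()
        self.speech_enc = nn.GRU(80, dim, batch_first=True)   # stand-in: conformer speech encoder
        self.text_emb = nn.Embedding(vocab, dim)
        self.text_enc = nn.GRU(dim, dim, batch_first=True)    # stand-in: conformer text encoder
        self.upsample = nn.ConvTranspose1d(dim, dim, UP, stride=UP)
        self.shared = nn.GRU(dim, dim, batch_first=True)      # stand-in: shared encoder
        self.head = nn.Linear(dim, vocab)                     # stand-in: HAT decoder

    def encode_speech(self, feats):                           # feats: (B, T, 80) log-mel frames
        return self.speech_enc(feats)[0]

    def encode_text(self, tokens):                            # tokens: (B, U) -> (B, U*UP, dim)
        h = self.text_enc(self.text_emb(tokens))[0]
        return self.upsample(h.transpose(1, 2)).transpose(1, 2)

def frame_loss(model, enc, tokens):
    logits = model.head(model.shared(enc)[0])                 # (B, T, vocab)
    targets = tokens.repeat_interleave(UP, dim=1)             # crude frame-level alignment
    return F.cross_entropy(logits.transpose(1, 2), targets)

model = TinyTextInjectionModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
B, U = 2, 5
tokens = torch.randint(0, 64, (B, U))
feats = torch.randn(B, U * UP, 80)

# Paired speech-text step: ASR loss plus a consistency loss tying the two encoders together.
s, t = model.encode_speech(feats), model.encode_text(tokens)
loss = frame_loss(model, s, tokens) + F.mse_loss(s, t)
opt.zero_grad(); loss.backward(); opt.step()

# Text-only injection step: unpaired text (e.g. notes with fake PII substituted in).
inj_tokens = torch.randint(0, 64, (B, U))
loss = frame_loss(model, model.encode_text(inj_tokens), inj_tokens)
opt.zero_grad(); loss.backward(); opt.step()
```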
## 4 Data Description

We are aware of the sensitive nature of speech recognition research, particularly on personal identifiers. Therefore, we ensure that this work abides by the Google AI Principles [21].

### Medical domain datasets

The _Medical Audio_ dataset consists of dictations of clinical notes by healthcare professionals from a variety of medical specialties. During the transcription process any potentially identifying information was redacted. The corresponding audio segment was replaced by silence and a special markup was placed in the transcript to indicate the type of the data that was removed. Examples of the markup tags include PATIENT_NAME, MEDICAL_PROFESSIONAL_NAME, AGE and DATE. The resulting dataset was certified to comply with the de-identification requirements of the US HIPAA Privacy Rule [1]. The _Medical Text_ dataset consists of the transcriptions from the _Medical Audio_ dataset (without the audio recordings). As mentioned above, these transcriptions include markup tags for PIIs such as names, addresses, dates, etc. that were redacted. In this dataset, we replace four types of markup tags (names, identifying numbers, dates and ages) with fake random data. For example, any name tag is replaced by a random name from a real-world distribution, each redacted digit is replaced with a random digit between 0-9, and so on. Note that replacing redacted PII information in text is quite easy and straightforward. However, splicing synthetic speech into speech recordings where the PII was silenced, in a natural manner, is challenging and error-prone, especially in the case of fast-paced speech recorded in a noisy environment as it is in the _Medical Audio_ dataset. The _Synthetic Notes_ dataset is a collection of fake dictations, recorded by clinicians in a noise-free environment, and consists of 14.5 hours of audio. The dataset consists of synthetic hospital visit summaries which include fake identifiers such as names, IDs, dates, etc. The dataset creation process started with the creation of fake hospital visit summaries by trained clinicians. Later, a separate group of clinicians simulated dictations of hospital notes based on these visit summaries, which were then independently transcribed. The _Synthetic Names_ dataset is a 14-hour-long dataset that consists of short phrases (average length 7.6s) that resemble phrases in the dictation of clinical notes and include references to patient or clinician names. First, medical specialists created 550 textual templates with placeholders for names, ensuring diversity of name formats. Each template was recorded in a quiet environment by 17 different speakers, with placeholders substituted by names, each time randomly drawn from a real-world distribution. All name occurrences are tagged, which allows us to compute a recall metric for names alone (along with more global metrics such as WER).

### Alphanumeric Sequence Identifier Datasets

For personal identifier text-injection train data, we use 100M sequences each of alphanumeric sequences and text sequences. The length of these sequences is sampled from a skewed normal distribution \(X(\mu=10,\,\sigma=5)+3\). For digit sequences, characters from 0-9 are sampled uniformly. For alphanumeric sequences, characters from the alphanumeric set 0-9, a-z are sampled uniformly. 10% of sequences are chosen at random, and character repeats of length 2, 3 or 4 are added. This is done because character or digit repetitions are a challenging use case for an ASR system.
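As a concrete illustration of the two text-generation procedures described above, here is a small Python sketch: filling redacted markup tags with fake surrogates, and sampling random identifier sequences. The tag names follow the examples given earlier, the name list and date format are placeholders, and a plain Gaussian approximates the skewed normal length distribution.

```python
import random

FAKE_NAMES = ["scarlet kathleen ibarra", "oliver barry matthews"]  # placeholder; real data samples a realistic name distribution

def fill_pii_tags(transcript: str) -> str:
    """Replace de-identification markup with fake surrogates, as in the Medical Text dataset."""
    out = transcript
    for tag in ("PATIENT_NAME", "MEDICAL_PROFESSIONAL_NAME"):
        while tag in out:
            out = out.replace(tag, random.choice(FAKE_NAMES), 1)
    while "AGE" in out:
        out = out.replace("AGE", str(random.randint(1, 99)), 1)
    while "DATE" in out:
        d = f"{random.randint(1, 12)}/{random.randint(1, 28)}/{random.randint(1940, 2023)}"
        out = out.replace("DATE", d, 1)
    return out

def sample_identifier(alphanumeric: bool) -> str:
    """Sample one random identifier sequence for text injection."""
    n = max(1, round(random.gauss(10, 5))) + 3            # proxy for the skewed normal X(10, 5) + 3
    chars = "0123456789" + ("abcdefghijklmnopqrstuvwxyz" if alphanumeric else "")
    seq = [random.choice(chars) for _ in range(n)]
    if random.random() < 0.10:                            # 10% of sequences get a repeat run of length 2-4
        i = random.randrange(len(seq))
        seq[i:i] = [seq[i]] * random.choice((1, 2, 3))
    return "".join(seq)

print(fill_pii_tags("PATIENT_NAME was seen on DATE at age AGE."))
print(sample_identifier(alphanumeric=True))
```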
The _spoken digit sequence_ test set consists of 2146 utterances (average length 4.3s) with randomly generated digit sequences, spoken by voice actors. The _spoken alphanumeric sequence_ test set consists of 1,992 utterances (average length 5.2s) with randomly generated alphanumeric sequences, spoken by voice actors. The voice actors in both sequence test sets have acted out hesitations and pauses while speaking these sequences. The following TTS datasets contain utterances synthesized by a commercial American English TTS system sampled from 6 voices. In this work, we use the TTS datasets as test data, and the spoken sequence data as either test or training material (Section 5.2). The _TTS digit sequence_ test set consists of 1991 utterances (average length 5.4s) with randomly generated digit sequences. The _TTS alphanumeric sequence_ test set consists of 2000 utterances (average length 5.7s) with randomly generated alphanumeric sequences. The _TTS digit sequence repetition_ test set consists of 3188 utterances (average length 5.4s) with randomly generated digit sequences which have, at a minimum, one consecutive digit repetition; 1001 utterances have 2, 3 and 4 consecutive repetitions and 185 utterances have more than 4 repetitions. The _TTS alphanumeric sequence repetition_ test set consists of 2179 utterances (average length 5.7s) with randomly generated alphanumeric sequences which have, at a minimum, one consecutive character repetition; 1001 utterances have 2, 3 consecutive repetitions and 177 utterances have more than 4 repetitions.

### General datasets

The base model (B1) is trained with close to 300k hours of multidomain utterances including YouTube, Telephony and Dictation for US English as described in [22]. Multi-condition training [23] and random 8kHz down-sampling [24] are applied on the training data. Importantly, any utterance that includes alphanumeric and digit sequences is eliminated from the transcribed training data, as it may represent personal identifiers. This data is only included in initial checkpoint training, described as A1/B1 in Section 5, and is not further used during finetuning. Along with the medical domain data described above, we also include 377k hours of YouTube _Captions_ data as described in [25] to maintain strong general purpose ASR performance. While this material may incidentally include medical content, no particular targeting, selection or filtering for in-domain material was performed.

## 5 Recognizing Redacted and Eliminated Terms

### Identifiers in medical speech

All models used the architecture described in Section 3, and use a base checkpoint, B1, trained on multidomain data (Section 4.3). The A1 model was trained on the (non-medical) _Captions_ dataset, in which medical terms are relatively uncommon. This shows clear domain-transfer effects; the WER on the medical-term-rich _Synthetic Notes_ dataset is high: 14.6% (Table 2). Adapting this model to the medical domain is done by training on a mixture of _Captions_ and _Medical Audio_ datasets with a 90%/10% ratio, resulting in model A2. The _Synthetic Notes_ WER drops significantly, and we observe higher recall for medical entities such as conditions and medications. Yet, for other entities such as names and dates the recall metrics degrade. This is a direct consequence of names, dates and other identifiers being redacted from the _Medical Audio_ dataset. To improve error rates on PII, model T1 is trained on the same mixture of speech data as for A2, with the _Medical Text_ dataset injected during training. This textual data contains identifiers such as names and dates, leading to an 8%/13% boost in names/dates recall, respectively. In Table 3 we show two cases where model T1, which has been exposed to medical text with (fake) identifiers, performs better in transcribing examples from the _Synthetic Names_ dataset. We also observe that the introduction of medical text further improves the overall WER from 3.3 to 3.1, demonstrating the value of additional in-domain text to address domain transfer as well as targeted terms. The training data used for models A1, A2 and T1 above includes a large general-domain multidomain and _Captions_ dataset. This dataset includes identifying entities such as names and dates to some extent. To isolate the effect of PII redaction, we train a cold-start model A3 only on the _Medical Audio_ dataset (where all identifiers are redacted). The results of this experiment are in Table 4.
All error metrics degrade, but particularly noticeable is the names recall, which drops to 1%, as the model has not seen _any_ names during training.

\begin{table} \begin{tabular}{|l|l|c|c|c|c|} \hline **Model** & **Training data** & **WER** & **Med. terms** & **Names** & **Dates** \\ \hline **B1** & Multidomain (MD) & 14.6 & 82.6\% & 60\% & 60\% \\ \hline **A1** & MD, Captions & 14.7 & 82.1\% & 59\% & 72\% \\ \hline **A2** & MD, Captions, Medical Audio & 3.3 & 97.4\% & 53\% & 64\% \\ \hline **T1** & MD, Captions, Medical Audio, Medical Text (injected) & 3.1 & 97.5\% & 61\% & 77\% \\ \hline \end{tabular} \end{table} Table 2: _Word error rate for Synthetic Notes and medical-term/name/date recall for Synthetic Names._

\begin{table} \begin{tabular}{|l|l|} \hline **Model** & **Transcription** \\ \hline **Truth** & Scarlet Kathleen Ibarra was admitted to the floors \\ \hline **A2** & Scarlet Califachen Barwa was admitted to the floors \\ \hline **T1** & Scarlet Kathleen Ibarra was admitted to the floors \\ \hline **Truth** & Oliver Barry Matthews is adamant that the patient had no past surgeries \\ \hline **A2** & All of our bearing math uses adamant that the patient had no past surgeries \\ \hline **T1** & Oliver Barry Matthews is adamant that the patient had no past surgeries \\ \hline \end{tabular} \end{table} Table 3: _Examples of name mistranscriptions from the Synthetic Names corpus. Note: All examples are fake descriptions of hypothetical scenarios._

Yet performance can be improved by injecting text which includes fake identifiers, even without a single audio example of such entities, as seen for model T2. While the metrics indicate that text injection alone cannot match the performance of T1, the recall of Names and Dates is improved by a factor of 1800% and 207%, respectively. Moreover, the introduction of medical text improves the WER performance of the model from 6.1 to 4.1.

### Recognizing Alphanumeric Sequence Identifiers

The architecture of the base ASR model B1 is the same RNN-T model with a cascade encoder and HAT decoder with \(v^{2}\) embeddings as in Section 5.1. The baseline text injection model, M1, was trained with the same paired text data as B1, and additional base text injection data (not related to the target domain) consisting of 100B anonymized sentences across different domains. Note that this text injection model is trained on arbitrary text to improve core ASR performance rather than targeting any particular domain or type of lexical content. We aim to improve recognition of alphanumeric and digit sequence identifiers. Thus, we perform a round of text injection training based on the initial model described above, specifically on the personal identifier text data (cf. Section 4.2). Results of these experiments, measured by character error rate (CER) and Sentence Accuracy (SACC), are reported in Tables 5 and 6. Character error rate is more fine grained, by being able to measure individual errors. However, when recognizing an identifier it is crucial to recognize the full sequence; this is measured by SACC. We use CER as a metric instead of word error rate for evaluation of personal identifier recognition because CER is a better fit for these sequences; an utterance may contain a single "word" like _A1B2C3_. Normalization helps to ignore formatting issues with respect to such sequences.
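For reference, here is a small self-contained Python sketch of how character error rate and sentence accuracy are computed; the lowercase/strip-whitespace normalization is only illustrative of the formatting normalization mentioned above, not the exact pipeline used.

```python
def normalize(s: str) -> str:
    # illustrative normalization: ignore case and whitespace formatting
    return "".join(s.lower().split())

def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def cer(refs, hyps):
    refs, hyps = [normalize(r) for r in refs], [normalize(h) for h in hyps]
    return sum(edit_distance(r, h) for r, h in zip(refs, hyps)) / sum(len(r) for r in refs)

def sentence_accuracy(refs, hyps):
    return sum(normalize(r) == normalize(h) for r, h in zip(refs, hyps)) / len(refs)

refs, hyps = ["a1b2c3", "0 7 7 3 4"], ["a1b2c3", "0 7 3 4"]
print(cer(refs, hyps), sentence_accuracy(refs, hyps))  # 1/11 and 0.5
```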
As can be seen from Table 5, even the addition of base text injection data which is not related to the target domain (M1) improves CER from 8.3 to 7.8, a reduction of 6% from B1. Additional training on domain-specific text injection data (M2) further reduces normalized CER to 6.9, a reduction of 14.1% from M1. Correspondingly, when considering SACC, after adding base text injection data (M1), we see an increase from 75.3 to 76.1, a modest increase of 1.1% from B1. SACC further increases to 78.5 after training on domain-specific text injection data (M2), an increase of 3.1% from M1. In M3, we add spoken sequence data (4138 utterances) to our training corpus. The motivation for this experiment is to show that adding a small amount of paired speech-text data on top of text-injection training can further help. This further reduces CER on the alphanumeric test sets, but gives mixed results on the digit test sets. Catastrophic forgetting is a concern when fine-tuning a pre-trained model on material from a narrow domain, as we are doing here with text-injection of sequences. To assess the impact of this, we evaluate performance on a Short Utterance test set representative of spoken search queries, and on TTS test sets focusing on rare proper names from 5 different domains (Table 7). While we see a minor regression between M1 and M2, the performance is still substantially better than B1, the base model with no text-injection. This suggests that this regression can be mitigated with tuning of the ratio and schedule of the text-injection training.

## 6 Conclusions

Due to the sensitive nature of medical speech and personal identifiers, substantial care must be taken in the collection, transcription and use of this data, often requiring the removal of personal identifiers from the data. Models trained on such data have higher error rates when recognizing such identifiers. In this paper, we demonstrate that text-injection is an effective approach to improving ASR performance of models trained on de-identified speech, including improved recognition of personal identifiers. Text-injection can leverage text where speech is unavailable, and synthetic text is simple to construct, frequently indistinguishable from real transcripts, and not associated with any person's private communication. In Medical ASR, where names and dates are redacted from speech and transcripts, we find core WER improves from 3.3 to 3.1, and recall of names and dates is improved by 8% and 13% respectively. For short utterances containing alphanumeric and digit sequences, which can be used as personal identifiers in a broad range of contexts, we improve SACC by an average of 3.2% while CER is reduced from 8.3% to 6.9%.
2305.19151
Host group degeneracy in gravitational lensing time delay determination of $H_0$
Massive elliptical galaxies, which serve as lenses in gravitational lensing time delay measurements of the Hubble parameter $H_0$, often reside in a host group. We consider degeneracies in the modeling of the group halo. When the group effect on imaging can be summarized by its flexion (the next order term beyond shear in the tidal expansion), the posterior likelihood map can develop disjoint local minima, associated with an approximate discrete symmetry of a dominant flexion term. Monte-Carlo Markov Chain (MCMC) algorithms that are not designed to explore a rich posterior landscape can miss some of the minima, introducing systematic bias. We study mock data and demonstrate that the bias in $H_0$ can exceed $10\%$, and pulls the inference value of $H_0$ above its truth value, for a reason that can be traced to the structure of a mismodeled flexion term. MCMC algorithms that are designed to cope with a rich posterior landscape can uncover the structure. If the group is X-ray bright enough, X-ray data may also help to resolve the degeneracy, by pinpointing the group's center of mass. Finally, we show that some implementations in the literature used an inaccurate kinematical prior, mis-modeling the group velocity dispersion by as much as $20\%$.
Luca Teodori, Kfir Blum
2023-05-30T15:54:55Z
http://arxiv.org/abs/2305.19151v2
# Host group degeneracy in gravitational lensing time delay determination of \(H_{0}\)

###### Abstract

Massive elliptical galaxies, which serve as lenses in gravitational lensing time delay measurements of the Hubble parameter \(H_{0}\), often reside in a host group. We consider degeneracies in the modeling of the group halo. When the group effect on imaging can be summarized by its flexion (the next order term beyond shear in the tidal expansion), the posterior likelihood map can develop disjoint local minima, associated with an approximate discrete symmetry of a dominant flexion term. Monte-Carlo Markov Chain (MCMC) algorithms that are not designed to explore a rich posterior landscape can miss some of the minima, introducing systematic bias. We study mock data and demonstrate that the bias in \(H_{0}\) can exceed 10%, and pulls the inference value of \(H_{0}\) above its truth value, for a reason that can be traced to the structure of a mismodeled flexion term. MCMC algorithms that are designed to cope with a rich posterior landscape can uncover the structure. If the group is X-ray bright enough, X-ray data may also help to resolve the degeneracy, by pinpointing the group's center of mass. Finally, we show that some implementations in the literature used an inaccurate kinematical prior, mis-modeling the group velocity dispersion by as much as \(20\%\).

###### Contents

* I Introduction
* II Host group as a core-MSD
* III Illustration with mock data
  * III.1 Mock setup description
  * III.2 External priors: tracer galaxy kinematics and theoretical input from N-body simulations
  * III.3 Results
* IV Origin of \(H_{0}\) bias in a displaced group
* V Summary
* A Deflection angle expansion, NFW profile
  * A.1 MSD
  * A.2 \(G\)-term degeneracy
  * A.3 Time delays
  * A.4 Effect of ellipticity
* B Kinematics constraints for host group
  * B.1 Power law
  * B.2 NFW
  * B.3 Kinematical and cosmological priors
  * B.4 Sample variance
* C Estimating the probability of an MCMC to fall into a displaced minimum
* D Full corner plots

## I Introduction

Gravitationally lensed quasars allow a determination of the Hubble parameter \(H_{0}\) [1, 2, 3, 4, 5], and the results of such measurements [6, 7, 8, 9, 10, 11] were widely considered as tests of the cosmological model [12, 13, 14]. However, systematic degeneracies are a limiting factor in the interpretation of lensing data [15, 16, 17, 18, 19, 20, 21, 22]. Relaxing some of the modeling assumptions made in [6, 7, 8, 9, 10, 11], a possible tension between the value of \(H_{0}\) inferred via lensing and via cosmic microwave background (CMB) and large-scale structure (LSS) [23, 24, 25] analyses may be attributed to a core feature in the density profile in or around the lenses [26, 27, 28, 19]. A core feature is an approximate mass-sheet degeneracy (MSD) [15]. It could be an intrinsic characteristic of the lens galaxy itself, on distances of dozens of kpc [26]. However, the effect could also come from larger scales. On intermediate scales, in between cosmology and lens internal structure, it is noteworthy that massive galaxies like the lenses of [6, 7, 8, 9, 10, 11] are often members of a group, and the dark matter halo of a host group may act to some extent as a core-MSD. Refs. [29, 30] studied the impact of host and line of sight (LOS) groups on lensing systems. Among the systems considered, PG1115+080, RXJ1131-1231, HE0435-1223, and WFI2033-4723 featured in the \(H_{0}\) campaign of [11].
Interestingly, PG1115+080 yields a high central value of \(H_{0}\) (\(81^{+8}_{-7}\) km/s/Mpc) [11]; at the same time, this lens resides in a massive group, inducing convergence \(\kappa_{\rm g}\approx 0.2\)[30]. The group convergence goes directly into the inference of \(H_{0}\), via \(\delta H_{0}/H_{0}^{\rm truth}\approx-\delta\kappa_{\rm g}\), where \(\delta\kappa_{\rm g}=\kappa_{\rm g}^{\rm truth}-\kappa_{\rm g}^{\rm model}\) is any error in the model determination of \(\kappa_{\rm g}\), and \(\delta H_{0}=H_{0}^{\rm truth}-H_{0}^{\rm model}\). Clearly, it is important to understand to what accuracy can lensing analyses determine \(\kappa_{\rm g}\). In this paper we explore the group effect. The outline and main results are as follows. In Sec. II we consider simple analytic estimates. We show that when the distance separating the group's centroid from the primary lens is large in comparison to the primary lens's Einstein angle, attempts to model the group halo effects based on imaging data suffer from an approximate MSD, where the mass sheet comes from the group halo itself. This version of the MSD persists even when second-order tidal effects (flexion terms [31; 32]) are clearly detectable. Kinematics and astrometry of tracer galaxies are needed to break the degeneracy. In Sec. III we perform numerical mock data analysis, motivated by a realistic example. We show that the posterior likelihood of the lensing reconstruction problem exhibits three disjoint minima. These minima reflect an approximate discrete degeneracy, related to the transformation properties of a dominant flexion term under coordinate rotations around the primary lens. We show that a naive implementation of a commonly used Monte-Carlo Markov Chain (MCMC) algorithm tends to fall into one of the minima, missing the others. Interestingly, and dangerously, the best fit \(H_{0}\) obtained in any one of the wrong minima is systematically biased high, for a reason that we explain in Sec. IV. The size of the \(H_{0}\) bias can reach \(\sim 10\%\). An MCMC implementation that is especially designed to probe a rich likelihood landscape, uncovers the full degeneracy structure. We summarize in Sec. V. Many details are kept to appendices, including relevant calculations that exist in the literature (for completeness of some of our arguments), and also a few results that we did not see elsewhere. In App. A we spell out some details of the lensing potential of a Navarro-Frenk-White (NFW) profile [33]. In App. B we review the relation between group member velocity dispersion and halo model. We note that some lensing studies implemented an incorrect kinematics prior for the NFW group model. Depending on selection cuts and some other factors, the systematic error in the interpretation of group velocity dispersion can reach \(\sim 20\%\). In App. C we give a rough estimate of the probability that an MCMC will actually fall into a wrong minimum; for the example of PG1115+080 [11] this probability is not large, on the order of \(\sim 10\%\). In App. D we collect expanded versions of MCMC triangle plots along with some sanity checks of our analysis. We should comment that throughout the analysis, we do not consider stellar kinematic measurements of the primary lens itself (see [34; 35] for state-of-the-art). Primary lens kinematics can constrain the group-induced MSD, if it can reach a sensitivity for over-all scaling of the primary lens mass model at the level of \(1-\delta\kappa_{g}\). 
## II Host group as a core-MSD

In this section we provide simple analytic estimates that clarify the group effect on the lensing problem. The discussion is useful in understanding features of the numerical analysis of the next section. In systems like those considered in [11], the Einstein angle of the primary lens is of the order of \(\theta_{\rm E}\sim 1^{\prime\prime}\), and most of the imaging information lies at angular separation \(\theta\sim\theta_{\rm E}\) around the lens. This angular scale projects onto a physical separation of the order of \(\lesssim 10\) kpc at the redshifts \(z_{\rm i}\sim 0.1-1\) of typical lenses. In comparison, a typical separation of any galaxy (with the possible exception of the brightest group galaxy (BGG)) from the host group's center of mass is \(\gtrsim 100\) kpc, that is, angular distance \(h\gtrsim 10^{\prime\prime}\). Therefore, for a rough estimate of the impact of a group on imaging analyses, it is sensible to expand the group's lensing potential in powers of \(\theta/h\). We adopt complex notation for 2D angle vectors on the sky [36; 37], defining, e.g., \[\theta\ =\ \theta_{1}+{\rm i}\theta_{2},\quad\beta\ =\ \beta_{1}+{\rm i}\beta_{2}, \tag{1}\] etc. We set the origin of coordinates at the center of the primary lens. With this formalism, the lensing equation can be written as \[\beta\ =\ \theta-\alpha_{\rm l}(\theta)-\Delta\alpha(\theta)-\kappa_{\rm ext}\theta-\gamma_{\rm ext}\theta^{*}. \tag{2}\] Here, \(\alpha_{\rm l}(\theta)\) is the deflection angle due to the primary lens, \(\Delta\alpha(\theta)\) is the deflection due to the host group or cluster, and \(\beta\) is the source position. The external convergence and shear, \(\kappa_{\rm ext}\) and \(\gamma_{\rm ext}\), contain a combination of different line of sight (LOS) contributions1. Footnote 1: For example, in terms of observer-source, observer-lens, and lens-source LOS terms, we have \(\kappa_{\rm ext}=\kappa_{\rm os}+\kappa_{\rm ol}-\kappa_{\rm ls}\). We also absorb some external convergence and shear terms into the definition of the “primary lens”, “group”, and “source position” [22]. In what follows, when we refer to a host group, we consider the group's central dark matter halo, rather than individual member galaxies. For a group located at center of mass position \({\rm e}^{{\rm i}\phi}h\), with \(h\gg|\theta|\), \(\Delta\alpha\) can be expanded as a power series in \(\theta\) [37] (see App. A for more details): \[\Delta\alpha(\theta)\ =\ \Delta\beta+\Delta\kappa\,\theta+\Delta\gamma\,\theta^{*}+\frac{1}{4}F^{*}\theta^{2}+\frac{1}{2}F\theta\theta^{*}+\frac{1}{4}G\theta^{*2}+\ldots\,. \tag{3}\] In the case of an axisymmetric group halo profile, it is possible to decompose the expansion coefficients as \[\Delta\beta=\Delta\beta_{0}{\rm e}^{{\rm i}\phi},\ \Delta\gamma=\Delta\gamma_{0}{\rm e}^{2{\rm i}\phi},\ F=F_{0}{\rm e}^{{\rm i}\phi},\ G=G_{0}{\rm e}^{3{\rm i}\phi}, \tag{4}\] where \(\Delta\beta_{0},\Delta\gamma_{0},F_{0},G_{0}\) are real numbers that depend on \(h\) but not on \(\phi\). Note that \(\Delta\kappa\) is independent of \(\phi\). Axisymmetry is mildly broken in realistic elliptic profiles; we discuss this point in App. A.4. However, as long as the ellipticity is small, the \(\phi\) dependence of \(\Delta\beta_{0},\Delta\gamma_{0},F_{0},G_{0}\) is weak, and does not affect our main point. Of the expansion coefficients in Eq. (3), \(\Delta\beta\) is degenerate with \(\beta\), which is a free parameter in the modeling, and has no effect on time delays.
\(\Delta\kappa\) is degenerate with \(\kappa_{\rm ext}\), a free parameter. Varied along the MSD, the combination \(\Delta\kappa+\kappa_{\rm ext}\) is invisible to imaging, but affects time delays. \(\Delta\gamma\) is degenerate with \(\gamma_{\rm ext}\), a free parameter. Thus, as far as imaging data is concerned, the leading order effect that constrains the group comes from the MSD-invariant combinations \(F/(1-\Delta\kappa-\kappa_{\rm ext})\) and \(G/(1-\Delta\kappa-\kappa_{\rm ext})\), related to what is known in the literature as reduced flexion [31; 32]. An attempt to constrain the group model via imaging data suffers from the following three main difficulties. The first obvious point is that the flexion terms are small; in the \(1/h\) expansion, the flexion terms are parametrically suppressed as \(F,G\propto\Delta\kappa/h\). The second point is that direct modeling of the group halo can still leave room for a residual MSD: a model can correctly capture the imaging distortion produced by the flexion, while mismodeling the convergence. The third point, which may be the most important in practice, is that the posterior likelihood of the group halo model exhibits a discrete approximate degeneracy, related to the \(2\pi/3\) phase degeneracy of the \(G\) term (see Eq. (4)). It is useful to study a simple example. Consider a group halo described by an isotropic power-law (PL) density profile with 3D slope \(\gamma_{\rm g}\) and Einstein angle \(\theta_{\rm Eg}\). In this case, the coefficients of Eqs. (3) and (4) are given by \[\Delta\kappa\ =\ \frac{3-\gamma_{\rm g}}{2}\left(\frac{\theta_{\rm Eg}}{h}\right)^{\gamma_{\rm g}-1},\;\;\;\Delta\gamma_{0}\ =\ \frac{1-\gamma_{\rm g}}{3-\gamma_{\rm g}}\Delta\kappa, \tag{5}\] \[F_{0}\ =\ \left(\gamma_{\rm g}-1\right)\frac{\Delta\kappa}{h},\;\;\;G_{0}\ =\ -\frac{\gamma_{\rm g}^{2}-1}{3-\gamma_{\rm g}}\frac{\Delta\kappa}{h}, \tag{6}\] \[\Delta\beta\ =\ -\frac{2\,h}{3-\gamma_{\rm g}}\Delta\kappa. \tag{7}\] For \(\gamma_{\rm g}\approx 2\) (singular isothermal sphere (SIS)), the line of sight velocity dispersion (LOSVD) in the center of the group is related to the group's Einstein radius via3 Footnote 3: See App. B for a review of the derivation. For \(\gamma_{\rm g}\) close, but not equal to 2, the RHS of Eq. (8) is rescaled by an \(\mathcal{O}(1)\) factor, equal to \(\sim 0.5\) (\(\sim 1.5\)) for \(\gamma_{\rm g}=1.5\) (\(\gamma_{\rm g}=2.2\)), and a weak dependence arises on the scale radius of the profile. \[\theta_{\rm Eg}\;\approx\;4\pi\frac{d_{\rm ls}}{d_{\rm s}}\frac{\sigma_{\rm los}^{2}}{c^{2}}\,\approx\,4.5^{\prime\prime}\,\frac{d_{\rm ls}}{d_{\rm s}}\left(\frac{\sigma_{\rm los}}{400\ {\rm km/s}}\right)^{2}. \tag{8}\] We can therefore estimate the convergence, \[\Delta\kappa_{\rm SIS}\;\approx\;0.5\frac{d_{\rm ls}}{d_{\rm s}}\left(\frac{\sigma_{\rm los}}{400\ {\rm km/s}}\right)^{2}\left(\frac{20^{\prime\prime}}{h}\right), \tag{9}\] and the flexion terms, \[F_{0,{\rm SIS}}\,\approx\,\frac{0.025}{1^{\prime\prime}}\left(\frac{20^{\prime\prime}}{h}\right)\Delta\kappa_{\rm SIS},\;\;\;G_{0,{\rm SIS}}\,\approx\,-3\,F_{0,{\rm SIS}}. \tag{10}\]
For example, \(\Delta\kappa\approx 0.1\) produced by a SIS group with \(d_{\rm ls}/d_{\rm s}=0.5\), \(\sigma_{\rm los}=400\ {\rm km/s}\), and \(h=50^{\prime\prime}\), would cause (if not modeled) a \(\sim 10\%\) bias in the inference of \(H_{0}\), while the imaging distortion produced by the group's flexion field at \(\theta\sim 1^{\prime\prime}\) would only amount to \(\delta\theta\approx 0.0075^{\prime\prime}\), typically dominated by the \(G\) term. Next-order terms in the expansion (beyond the flexion) are further suppressed by \(\sim 1^{\prime\prime}/h\sim\mathcal{O}(1/10)\), and can be neglected. This discussion highlights the obvious hierarchy between flexion and convergence, but as was mentioned earlier, there is also the MSD. In Eqs. (5-6), the leading effect of the MSD (at small \(\Delta\kappa\ll 1\)) is seen by noticing that even if one can fix \(F\) and \(G\) exactly from imaging data, this still allows \(\Delta\kappa\) to vary freely, as long as \(h\) is varied simultaneously with \(\Delta\kappa\propto h\). Thus, unless we have good external prior data on the group's center of mass (parameterised by \(h\)), determining the flexion from imaging data alone does not fix the convergence. (More precisely, the degeneracy is somewhat regulated by the fact that the true MSD-invariant quantities are \(F/(1-\Delta\kappa-\kappa_{\rm ext})\) and \(G/(1-\Delta\kappa-\kappa_{\rm ext})\), rather than \(F\) and \(G\) themselves. The exact MSD is then captured via the freedom to adjust \(\Delta\kappa\) and \(h\) while keeping \((\Delta\kappa/h)/\left(1-\kappa_{\rm ext}-\Delta\kappa\right)\) constant.) Finally, another important point is the phase degeneracy of the \(G\) term. This degeneracy means that models of the halo in which the direction to the group's center is changed by \(\phi\to\phi\pm 2\pi/3\) yield identical \(G\) terms. Although the rotated models do not reproduce the truth value of the \(F\) term, the \(G\) degeneracy can produce isolated local minima in the posterior likelihood, that can trap unwary MCMC chains. This turns out to be an important point in the next section.
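As a small numerical companion to the discussion above, the following Python sketch evaluates the SIS (\(\gamma_{\rm g}=2\)) tidal coefficients following Eqs. (5), (6) and (8), and checks that the \(G\) term is blind to \(\phi\to\phi\pm 2\pi/3\) rotations while the \(F\) term is not. The parameter values are illustrative only, and the normalization follows Eqs. (5)-(6) directly rather than the rounded numerical forms of Eqs. (9)-(10).

```python
import numpy as np

def sis_group_terms(sigma_los, dls_over_ds, h, phi):
    """Tidal expansion coefficients of an SIS (gamma_g = 2) group at separation h [arcsec],
    direction phi, following Eqs. (5), (6) and (8)."""
    theta_Eg = 4.5 * dls_over_ds * (sigma_los / 400.0) ** 2   # Einstein angle, arcsec (Eq. 8)
    dkappa = 0.5 * theta_Eg / h                               # (3 - gamma)/2 * (theta_Eg/h)^(gamma - 1)
    F = (dkappa / h) * np.exp(1j * phi)                       # F0 = (gamma - 1) * dkappa / h
    G = (-3.0 * dkappa / h) * np.exp(3j * phi)                # G0 = -(gamma^2 - 1)/(3 - gamma) * dkappa / h
    return dkappa, F, G

dk, F, G = sis_group_terms(sigma_los=400.0, dls_over_ds=0.5, h=50.0, phi=0.3)
for n in (-1, 1):
    _, Fr, Gr = sis_group_terms(400.0, 0.5, 50.0, 0.3 + n * 2 * np.pi / 3)
    print(f"n = {n:+d}:  |G - G_rot| = {abs(G - Gr):.1e},  |F - F_rot| = {abs(F - Fr):.1e}")
# |G - G_rot| vanishes (G is invariant under 2*pi/3 rotations), while |F - F_rot| does not.
```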
## III Illustration with mock data

The host group is often modeled explicitly if the system is known to reside in a group (see e.g. [11; 29; 30; 38]). We now study such modeling using mock data. Our implementation is based on the package lenstronomy [39, 40, 41].

### Mock setup description

We chose PG1115+080 as a reference object to guide our study. Ref. [11] inferred \(H_{0}=81^{+8}_{-7}\ {\rm km/s/Mpc}\) from this system, quite high compared to the CMB value [23]. At the same time, the system is known to reside in a group [30] with \(\sigma_{\rm los}=390^{+50}_{-60}\ {\rm km/s}\) (an earlier study found \(\sigma_{\rm los}=440^{+90}_{-80}\ {\rm km/s}\) [38]), estimated to induce \(\kappa_{\rm g}\sim 0.2\). An image of the field is shown in Fig. 1. We consider the following mock setup. For the primary lens, we consider an elliptic power-law density profile with 3D slope \(\gamma=2.17\), Einstein angle \(\theta_{\rm E}=1.08^{\prime\prime}\), and ellipticity parameters \(e_{1}=-0.2\), \(e_{2}=0.05\), corresponding to \(q=(1-e)/(1+e)\approx 0.66\), where \(e=\sqrt{e_{1}^{2}+e_{2}^{2}}\). For the group, we consider an elliptic NFW profile [42; 43] with \(e_{1}=-0.07\), \(e_{2}=0.03\), compatible with the findings of [44]. Fig. 2 illustrates the setup, including the truth position of the group halo center of mass. This setup could mimic PG1115+080 (Fig. 1) if the BGG - or the X-ray blob found by [45] - happens to indicate the group's center of mass position. Following [9], we choose our inference pipeline to include only a simple spherical group model, ignoring group halo ellipticity in the modeling.

### External priors: tracer galaxy kinematics and theoretical input from N-body simulations

As in [9], we include external priors on the group halo, dictated by cosmological N-body simulations and by kinematics data. We defer most of the details to App. B, notably Secs. B.2 and B.3. However, we would like to point out a potential inaccuracy in the kinematics analyses of some previous works. Galaxy groups are often assumed to follow the NFW density profile [33], \[\rho(r)=\frac{\rho_{0}R_{\rm s}^{3}}{r(r+R_{\rm s})^{2}}. \tag{11}\] In terms of the parameters \(\rho_{0}\) and \(R_{\rm s}\), the LOSVD of tracer galaxies (with number density assumed to follow the same profile as the dark matter density) can be expressed as \[\sigma_{\rm los}^{2}(\theta)\ =\ G\rho_{0}R_{\rm s}^{2}\,f\left(\frac{\theta}{\theta_{\rm s}}\right), \tag{12}\] where \(f(a)\) is a dimensionless function derived in Sec. B.2. While Eq. (12) (averaged as needed within some aperture cut of the observations) gives the correct translation between LOSVD data and the NFW model parameters, Ref. [9] (after [48, 49]) considered a different expression as a proxy for the LOSVD data: \[\bar{\sigma}^{2}\ =\ \frac{GM_{\rm vir}}{3R_{\rm vir}}, \tag{13}\] where \(R_{\rm vir}\) and \(M_{\rm vir}=M(R_{\rm vir})\) are the virial radius and virial mass, and \(c_{\rm vir}=R_{\rm vir}/R_{s}\) is the NFW concentration parameter. In App. B we show that identifying the quantity \(\bar{\sigma}^{2}\) with the observable \(\sigma_{\rm los}^{2}\) introduces an error of up to \(\sim 20\%\) (the precise error depends on the analysis aperture). One reason for introducing the auxiliary quantities \(R_{\rm vir}\) and \(M_{\rm vir}\) is that N-body simulations provide theoretically-motivated priors that are often presented in terms of these quantities [50, 51]. These cosmological priors are, however, quantitatively and conceptually decoupled from the kinematics data interpretation. Instead, as we review in App. B, the cosmological priors dictate a certain redshift-dependent relation between the NFW parameters \(\rho_{0}\) and \(R_{\rm s}\). There is no obstacle to implementing this theoretical prior while still maintaining the correct kinematical expression, Eq. (12). In our main implementation of the MCMC, we set the standard deviation for \(\sigma_{\rm los}\) to \(120\) km/s. This doubles the nominal uncertainty quoted by [30] for the group of PG1115+080, but we believe that such a cautionary procedure is reasonable. To be clear, we are not suggesting to doubt the observational LOSVD from [30]. Rather, the uncertainties we worry about concern the theoretical interpretation within simplified halo models. The systematic error due to using \(\bar{\sigma}\) instead of \(\sigma_{\rm los}\), as done in [9], is of the order of \(\delta\sigma_{\rm los}^{2}/\sigma_{\rm los}^{2}\sim 20\%\), so \(\delta\sigma_{\rm los}/\sigma_{\rm los}\sim 10\%\), or \(\delta\sigma_{\rm los}\sim 40\) km/s, quite comparable to the "bare" observational uncertainty.
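To make the virial proxy of Eq. (13) concrete, here is a short Python sketch evaluating \(\bar{\sigma}\) from NFW parameters, using the standard NFW enclosed-mass expression; the numerical parameter values are hypothetical, group-scale choices. Note, per the discussion above, that \(\bar{\sigma}\) is a virial proxy and should not be equated with the observable \(\sigma_{\rm los}\) of Eq. (12), whose profile function \(f(a)\) is derived in App. B.2 and is not reproduced here.

```python
import numpy as np

G_ASTRO = 4.30091e-6  # Newton's constant in kpc * (km/s)^2 / Msun

def nfw_sigma_bar(rho0, R_s, c_vir):
    """Eq. (13): sigma_bar^2 = G * M_vir / (3 * R_vir), with R_vir = c_vir * R_s and
    M_vir the NFW mass enclosed within R_vir. rho0 in Msun/kpc^3, R_s in kpc."""
    R_vir = c_vir * R_s
    M_vir = 4.0 * np.pi * rho0 * R_s**3 * (np.log(1.0 + c_vir) - c_vir / (1.0 + c_vir))
    return np.sqrt(G_ASTRO * M_vir / (3.0 * R_vir))

# Hypothetical group-scale parameters, for scale only:
print(f"{nfw_sigma_bar(rho0=1.0e6, R_s=150.0, c_vir=6.0):.0f} km/s")  # a few hundred km/s
```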
There are additional plausible errors: the group is likely to be aspherical [44], the velocity distribution need not be isotropic, and the group may not be fully virialised. Each of these could cause systematic shifts of tens of percent in the kinematics interpretation. For completeness, in App. D we check how a LOSVD uncertainty of \(60\) km/s changes the results. We find the difference to be quantitatively insignificant for our main results.

Figure 1: The field of PG1115+080 (image from SIMBAD [46]). The primary lens, GL, is marked by a cross, surrounded by the quasar images. White ellipse shows the 68% CL prior derived in [29] and used in [9] to constrain the group’s center of mass. Green and blue ellipses give a rough illustration of the X-ray emission associated to the group, found in [45] and [47], respectively, using different subtraction schemes for the quasar emission.

Figure 2: Illustration of the mock setup. Contours show \(\log_{10}\) of the convergence for the primary lens and the NFW halo in black and red, respectively.

### Results

First, to obtain a global view of the posterior "landscape", we run the zeus MCMC algorithm [52, 53], which is designed to cope with multiple likelihood minima. We show the main result in the **top-left panel** of Fig. 3. Notice the 3 disjoint minima in the posterior likelihood in the \(c_{x}^{\rm nfw}-c_{y}^{\rm nfw}\) panel, where \(c_{x,y}^{\rm nfw}\) are the angular coordinates of the halo center of mass. The origin of this threefold degeneracy is the \(\phi\to\phi\pm 2\pi/3\) degeneracy of the flexion \(G\) term. This run has no prior on \(c_{x}^{\rm nfw}\) and \(c_{y}^{\rm nfw}\). More details and comprehensive MCMC corner plots are provided in App. D.

The local minima could trap an MCMC, if the scanning algorithm is not suited to probe multimodal posteriors. To see this, we repeat the analysis, this time using the emcee algorithm [54]. The results are shown in the **top-right** and **bottom panels** of Fig. 3. Indeed, emcee tends to discover only one of the local minima, missing the others. In the **top-right** and **bottom-right panels**, emcee converges on a biased group halo position. The biased local minima yield an \(H_{0}\) posterior that is on the high side, pulling \(H_{0}\) above its truth value by \(\sim 3\sigma\) and \(\sim 1.8\sigma\), respectively.

Figure 3: Mock analysis. Truth set-up: elliptic group located at \(c_{x}^{\rm nfw}=18^{\prime\prime}\), \(c_{y}^{\rm nfw}=-10^{\prime\prime}\). **Top left panel**: zeus run, exposing the global likelihood landscape. **Other panels**: emcee runs, falling into different local minima. The emcee runs are initiated with different priors for the position of the group center, keeping the same Gaussian standard deviation of \(16^{\prime\prime}\). See also Fig. 4.

The convergence pattern associated to the three minima is shown in Fig. 4. Performing more MCMC runs, we find that emcee consistently converges into just one of the minima, missing the others. The choice of the minimum found by emcee mostly depends on the initial position of the MCMC walkers in the \(c_{x}^{\rm nfw}-c_{y}^{\rm nfw}\) parameter space. In our runs, the initial walker allocation is guided by a Gaussian prior with a similar 68% CL radius as that used in [9] (see Fig. 1), and with centers shown by star symbols in Fig. 4.
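The sampler dependence described here can be reproduced in miniature. The following toy sketch (not our actual lenstronomy pipeline) samples a two-dimensional posterior with three Gaussian islands at \(2\pi/3\) spacing using both emcee and zeus, assuming the emcee-style EnsembleSampler API that zeus mirrors; the island positions, widths, and walker initialization are arbitrary illustrative choices.

```python
import numpy as np
import emcee, zeus

CENTERS = [20.0 * np.array([np.cos(a), np.sin(a)])
           for a in (0.5, 0.5 + 2 * np.pi / 3, 0.5 - 2 * np.pi / 3)]

def log_prob(p):
    # three equal Gaussian islands (width 2), mimicking the threefold G-term degeneracy
    d2 = np.array([np.sum((p - c) ** 2) for c in CENTERS])
    return np.logaddexp.reduce(-0.5 * d2 / 4.0)

ndim, nwalkers, nsteps = 2, 20, 1000
rng = np.random.default_rng(0)
start = CENTERS[0] + rng.normal(scale=3.0, size=(nwalkers, ndim))  # walkers start near one island

for name, Sampler in (("emcee", emcee.EnsembleSampler), ("zeus", zeus.EnsembleSampler)):
    sampler = Sampler(nwalkers, ndim, log_prob)
    sampler.run_mcmc(start, nsteps)
    chain = sampler.get_chain(flat=True)
    ang = np.arctan2(chain[:, 1], chain[:, 0])
    # samples per angular sector; a stuck chain piles up in a single sector
    print(name, np.histogram(ang, bins=3, range=(-np.pi, np.pi))[0])
```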
After performing a series of numerical trials, we emphasize that, at least for the given, rather broad (but realistic) width of the prior, the starting point of the MCMC walkers appears to be a more significant factor in determining which of the three minima traps the chain than the prior center itself. These considerations coincide if the MCMC walkers are initiated at the prior center. Given the above discussion, we can make a rough estimate of the probability of the MCMC to fall into a displaced minimum, by matching it with the probability that the group center prior lands nearer a false minimum than the truth one. In App. C we estimate this probability from mock realizations of samples of tracer galaxies. The result depends on the analysis aperture, the number of galaxies in a sample, and the underlying group profile. For 13 members in the NFW profile, with an aperture of \(\sim 10R_{s}\) (i.e., around 3 virial radii), the false-minimum probability we find is \(\sim 10\%\). We comment that X-ray data may help to pinpoint the centroid of a massive group, resolving the threefold degeneracy. For example, Ref. [45] found an X-ray blob that appears to be centered around the BGG of the host group of PG1115+080 (green contour in Fig. 1). If the X-ray emission can be associated with the group's centroid, it may provide a narrower prior than that derived from tracer galaxies. The X-ray analysis may be complicated by blending with the lensed quasar: using a different method to mask the quasar, Ref. [47] found a shifted, more extended, and brighter group emission. Even so, it seems plausible that X-ray data could help narrowing the group's centroid prior.

## IV Origin of \(H_{0}\) bias in a displaced group

As we explained, the three-fold approximate degeneracy manifest in Fig. 3 is due to the behavior under rotations of the \(G\) flexion. The \(F\) flexion, however, behaves like a vector. Hence, when the inference falls into a wrong minimum, it attempts to minimize the difference between the truth \(F\) deflection and the wrong-minimum inference value of \(F\). This can be achieved by keeping \(G_{\rm eff}\) the same, but reducing \(F_{\rm eff}\) as much as possible, where \[F_{\rm eff}=\frac{F_{0}}{1-\Delta\kappa}\,,\ G_{\rm eff}=\frac{G_{0}}{1-\Delta\kappa} \tag{14}\] are the reduced flexion terms (for clarity, here we omit the cosmological term \(\kappa_{\rm ext}\)). To see this point, denote the reduced flexion of the inference model by \(F_{\rm eff}\), and denote the truth flexion by \(F_{\rm eff}^{\rm(truth)}\).4 We expect the MCMC to minimize Footnote 4: The truth and model values of the \(G\) term are assumed to approximately coincide, \(G_{\rm eff}^{\rm(truth)}\approx G_{\rm eff}\). \[\left|F_{\rm eff}^{\rm(truth)}\right|\sqrt{1-2\frac{F_{\rm eff}}{F_{\rm eff}^{\rm(truth)}}\cos\left(\frac{2n\pi}{3}\right)+\left(\frac{F_{\rm eff}}{F_{\rm eff}^{\rm(truth)}}\right)^{2}}, \tag{15}\] with \(n=\pm 1\) selecting the position of the false minimum. Since \(\cos(\pm 2\pi/3)=-1/2\), the above expression is minimized for \(F_{\rm eff}=0\). Consider the PL model of Sec. II. In this model, we have \(F_{\rm eff}/G_{\rm eff}=(\gamma_{\rm g}-3)/(\gamma_{\rm g}+1)\). Therefore, for \(1<\gamma_{\rm g}<3\) (the range of interest), decreasing \(|F_{\rm eff}|\) at fixed \(G_{\rm eff}\) entails increasing \(\gamma_{\rm g}\), while adjusting the other parameters of the model so as to keep \(G_{\rm eff}\) constant. Those other parameters were introduced in Eqs. (5-6) as \(h\) and \(\theta_{\rm Eg}\), but we can equally well replace \(\theta_{\rm Eg}\) by \(\Delta\kappa\). Now, we have \(G_{\rm eff}=-\frac{\gamma_{\rm g}^{2}-1}{3-\gamma_{\rm g}}\frac{\Delta\kappa}{h(1-\Delta\kappa)}\).
The \(\gamma_{\rm g}\)-dependent factor, \(\frac{\gamma_{\rm g}^{2}-1}{3-\gamma_{\rm g}}\), increases with increasing \(\gamma_{\rm g}\); to compensate for this and keep \(G_{\rm eff}\) constant, the factor \(\frac{\Delta\kappa}{h(1-\Delta\kappa)}\) needs to decrease. For small \(\Delta\kappa\ll 1\), this means that near any one of the displaced likelihood minima, the MCMC will attempt to decrease \(\Delta\kappa/h\) in comparison to its truth value. Part of this adjustment entails decreasing the model value of \(\Delta\kappa\), which therefore biases \(H_{0}\) high, as \(\Delta H_{0}/H_{0}\approx-\left(\Delta\kappa^{\rm(truth)}-\Delta\kappa^{\rm(model)}\right)\). In App. A we show that a similar analysis holds also for the NFW model, used in the MCMC implementation: also in that case, falling into a displaced group minimum causes the fit to underestimate \(\Delta\kappa\), leading to an overestimate of \(H_{0}\). This analysis clarifies the trend seen in Fig. 3.

Figure 4: Illustration of the three posterior likelihood minima probed by the MCMC inference. Contours show \(\log_{10}\) convergence isocontours, for the 3 group halo solutions (red, orange, and blue, respectively). Dots and stars highlight the inferred source position and the center of the group prior, respectively. Green squares denote the position of the images. The parameter values used in the plot are read from the emcee run best fit, see Fig. 3.

## V Summary

Lens galaxies in quasar lensing time delay measurements are often members of galaxy groups that must be modeled for an accurate determination of \(H_{0}\). The group modeling exhibits approximate versions of the MSD (Sec. II). Essentially, it is a displaced-center version of the core-MSD considered in [19, 26]. At leading order in the tidal approximation, the group halo enters imaging through the flexion. We showed an approximate threefold discrete modeling degeneracy, associated with rotating the assumed position of the group centroid by an angle of \(2\pi/3\) around the primary lens (Secs. II and III). This produces a posterior likelihood with three disjoint minima. MCMC algorithms that fail to expose this structure may fall into a displaced minimum. The inferred value of \(H_{0}\) found in a displaced minimum is systematically biased high (Sec. IV). Using numerical mock data experiments motivated by a realistic system, we demonstrated that the \(H_{0}\) bias can reach \(\sim 10\%\).

The choice of the minimum detected by the MCMC can strongly depend on the starting position of the walkers in the space of group centroid coordinates. If the starting point is chosen as the centroid prior center, then the probability for the MCMC to land in a displaced minimum may be rather small. For a sample of 13 tracer galaxies (relevant for PG1115+080) with an aperture of about 3 virial radii in an NFW halo, this probability is \(\sim 10\%\).

Our analysis suggests the following recommendations.

1. Bayesian cosmography analyses should explore the full posterior likelihood landscape. Analysts should be aware of the possible structure of three disjoint minima.
2. X-ray data may help to pinpoint the centroid of a massive group, resolving the degeneracy.
3. As an aside, we note that some cosmography analyses (e.g. [9]) used an incorrect kinematics prior to constrain the group model. The error in the interpretation of tracer galaxy velocity dispersion depends on the analysis aperture, and can reach \(\sim 20\%\).
A correct version of the kinematics prior is reviewed in App. B.

The number of strong lensing time delay systems is expected to increase by more than an order of magnitude in the near future [55, 56, 57], an important step towards possibly reaching a few percent lensing determination of \(H_{0}\) [58]. This program could be further assisted by many resources [59, 60, 61, 62, 63]. Our study highlights some pitfalls (and suggests solutions) that need to be taken into account if the precision goal for \(H_{0}\) is to be matched by accuracy.

###### Acknowledgements.

We thank Simon Birrer, Marko Simonovic, and Raphael Flauger for useful discussions. This work made use of the following public software packages: lenstronomy [39, 64, 65, 66], emcee [54], zeus [52, 53], corner [65], astropy [66, 67]. This research has made use of the SIMBAD database, operated at CDS, Strasbourg, France [46]. This work was supported by the Israel Science Foundation grant 1784/20, and by MINERVA grant 714123. LT wishes to acknowledge association with the International Helmholtz-Weizmann Research School for Multimessenger Astronomy.

## Appendix A Deflection angle expansion, NFW profile

We start with some general preliminaries; for the discussion of the NFW profile, the reader can skip to Eq. (11). For our purpose, which is to analyze the lensing equation in the vicinity of a particular galaxy member of the group, it is convenient to use a coordinate system that is centered on the primary lens galaxy, and displaced from the group center of mass by a separation angle \(\vec{h}\). In these coordinates the group lensing potential reads \[\Psi_{\rm halo}(\theta_{1},\theta_{2}):=\Psi_{\rm halo}(|\vec{\theta}-\vec{h}|). \tag{12}\] We can expand with respect to the small parameters \(\theta_{1}/h,\ \theta_{2}/h\), \[\Psi_{\rm halo}(\theta_{1},\theta_{2})\simeq\Psi_{\rm halo}(0,0)+\mathbf{\nabla}_{\vec{\theta}}\Psi_{\rm halo}(0,0)\cdot\vec{\theta}+\ldots. \tag{13}\] Recall that the lensing potential and the deflection angle are related as \(\mathbf{\nabla}\Psi=\vec{\alpha}\); in particular, we can write the lens equation as \[\beta_{i}=\theta_{i}-\partial_{i}\Psi_{\rm lens}(\vec{\theta})-\partial_{i}\Psi_{\rm halo}(0,0)-\partial_{i}\partial_{j}\Psi_{\rm halo}(0,0)\theta_{j}+\ldots. \tag{14}\] As discussed in the main text, in complex notation \(\vec{h}\to h\mathrm{e}^{\mathrm{i}\phi}\). The lensing potential \(\Psi\) in this formalism is real. One can obtain the complex deflection angle by means of the derivative operator \[\mathbf{\nabla}_{\rm c}:=\partial_{1}+\mathrm{i}\partial_{2}\implies\alpha=\mathbf{\nabla}_{\rm c}\Psi. \tag{15}\] In the expansion of the lensing potential, it is easy to see that \[\Delta\beta=\mathbf{\nabla}_{\rm c}\Psi_{\rm halo},\ \Delta\kappa=\frac{1}{2}\mathbf{\nabla}_{\rm c}\mathbf{\nabla}_{\rm c}^{*}\Psi_{\rm halo},\ \Delta\gamma=\frac{1}{2}\mathbf{\nabla}_{\rm c}\mathbf{\nabla}_{\rm c}\Psi_{\rm halo}, \tag{16}\] where the derivatives are computed at the origin. Analogously, we can express third order derivatives of the lensing potential as derivatives of the shear, \[G=\mathbf{\nabla}_{\rm c}\Delta\gamma,\ F=\mathbf{\nabla}_{\rm c}^{*}\Delta\gamma, \tag{17}\] where \(G\), \(F\) are the flexion terms. In the axisymmetric case, a group halo centered at \(\phi=0\) and fixed \(h\) would yield the same expansion coefficients as a halo located at a generic \(\phi\), provided that we rotate our coordinate system accordingly.
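To make the complex-gradient bookkeeping concrete, the following symbolic sketch (our own toy illustration with a point-lens halo potential, not a profile used in the paper) evaluates the expansion terms of Eqs. (16)-(17) at the primary-lens position:

```python
import sympy as sp

t1, t2 = sp.symbols('theta1 theta2', real=True)
h = sp.symbols('h', positive=True)
# Toy halo: point-lens potential centered at (h, 0); purely illustrative.
Psi = sp.log(sp.sqrt((t1 - h)**2 + t2**2))

def Dc(f):       # complex derivative operator of Eq. (15)
    return sp.diff(f, t1) + sp.I * sp.diff(f, t2)

def Dc_star(f):  # conjugate operator
    return sp.diff(f, t1) - sp.I * sp.diff(f, t2)

origin = {t1: 0, t2: 0}
dbeta  = Dc(Psi)                               # Delta beta, Eq. (16)
dkappa = sp.Rational(1, 2) * Dc_star(Dc(Psi))  # Delta kappa, Eq. (16)
dgamma = sp.Rational(1, 2) * Dc(Dc(Psi))       # Delta gamma, Eq. (16)
G = Dc(dgamma)                                 # flexion terms, Eq. (17)
F = Dc_star(dgamma)
for name, expr in [('dbeta', dbeta), ('dkappa', dkappa),
                   ('dgamma', dgamma), ('F', F), ('G', G)]:
    print(name, sp.simplify(expr.subs(origin)))
```

For this toy potential the printout gives \(\Delta\beta=-1/h\), \(\Delta\kappa=0\), \(\Delta\gamma=-1/h^{2}\), \(F=0\), \(G=-4/h^{3}\), consistent with the tidal expansion of a vacuum (point-mass) field.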
We can pass to cylindrical coordinates5 \[\vec{h}-\vec{\theta}=r(\cos\phi,\sin\phi)^{\top}, \tag{100}\] and write \[\frac{\partial}{\partial\theta_{i}}\Psi_{\rm halo}=\frac{\partial r}{\partial\theta_{i}}\frac{\partial}{\partial r}\Psi_{\rm halo}(r)=\frac{\theta_{i}-h_{i}}{r}\Psi^{\prime}_{\rm halo}(r). \tag{101}\]

Footnote 5: With this definition, when \(\theta=0\), \(\phi\) corresponds to the angle between the origin and the position of the halo center.

In complex notation, we thus have \[\mathbf{\nabla}_{\rm c}\Psi_{\rm halo}=-{\rm e}^{{\rm i}\phi}\Psi^{\prime}_{\rm halo}(r). \tag{102}\] Notice the minus sign, which is there due to our choice of coordinates; it is easy to see that \[\frac{\partial\phi}{\partial\theta_{1}}=\frac{\sin\phi}{r}\,\ \frac{\partial\phi}{\partial\theta_{2}}=-\frac{\cos\phi}{r}, \tag{103}\] which has opposite sign with respect to the usual choice of cylindrical coordinates. We can thus write \[\mathbf{\nabla}_{\rm c}=-{\rm e}^{{\rm i}\phi}\bigg{(}\frac{\partial}{\partial r}+\frac{{\rm i}}{r}\frac{\partial}{\partial\phi}\bigg{)}, \tag{104}\] so that \[\Delta\kappa=\frac{1}{2}\bigg{(}\Psi^{\prime\prime}_{\rm halo}+\frac{\Psi^{\prime}_{\rm halo}}{r}\bigg{)}, \tag{105}\] \[\Delta\gamma=\frac{{\rm e}^{2{\rm i}\phi}}{2}\bigg{(}\Psi^{\prime\prime}_{\rm halo}-\frac{\Psi^{\prime}_{\rm halo}}{r}\bigg{)},\] (106) \[G=-\frac{{\rm e}^{3{\rm i}\phi}}{2}\bigg{(}\Psi^{\prime\prime\prime}_{\rm halo}-3\frac{\Psi^{\prime\prime}_{\rm halo}}{r}+3\frac{\Psi^{\prime}_{\rm halo}}{r^{2}}\bigg{)},\] (107) \[F=-\frac{{\rm e}^{{\rm i}\phi}}{2}\bigg{(}\Psi^{\prime\prime\prime}_{\rm halo}+\frac{\Psi^{\prime\prime}_{\rm halo}}{r}-\frac{\Psi^{\prime}_{\rm halo}}{r^{2}}\bigg{)}=-{\rm e}^{{\rm i}\phi}\frac{{\rm d}\Delta\kappa}{{\rm d}r}. \tag{108}\]

We now specialize to the NFW model. Here we discuss the spherical model without ellipticity, commenting on ellipticity in App. A.4. Using a coordinate system centered on the group, the NFW lensing potential is [68] \[\Psi_{\rm NFW}(\theta)=2\tilde{\kappa}\,\theta_{\rm s}^{2}\bigg{(}\log^{2}\bigg{(}\frac{\theta}{2\theta_{\rm s}}\bigg{)}-\arccos^{2}\bigg{(}\frac{\theta_{\rm s}}{\theta}\bigg{)}\bigg{)}, \tag{109}\] \[\theta_{\rm s}:=\frac{R_{\rm s}}{d_{\rm l}},\ \tilde{\kappa}:=\frac{\rho_{0}R_{\rm s}}{\Sigma_{\rm c}},\ \Sigma_{\rm c}=\frac{d_{\rm s}}{4\pi Gd_{\rm l}d_{\rm ls}}. \tag{110}\] Defining \(x=\theta_{\rm s}/h\), we have: \[\Delta\beta_{0}=4h\,\tilde{\kappa}\,x^{2}\Bigg{(}\log 2x-\frac{\arccosh\,x}{\sqrt{1-1/x^{2}}}\Bigg{)}\, \tag{111}\] \[\Delta\kappa=\tilde{\kappa}\,\frac{2x^{2}}{1-x^{2}}\Bigg{(}1-\frac{\arccosh\,x}{\sqrt{1-1/x^{2}}}\Bigg{)}\,\] (112) \[\Delta\gamma_{0}=\tilde{\kappa}\frac{2x^{2}}{1-x^{2}}\Bigg{(}1+\frac{(2x^{2}-3)\arccosh\,x}{\sqrt{1-1/x^{2}}}+2\big{(}1-x^{2}\big{)}\log(2x)\Bigg{)},\] (113) \[F_{0}=\frac{\tilde{\kappa}}{h}\frac{2x^{2}}{(1-x^{2})^{2}}\Bigg{(}2+x^{2}-\frac{3\arccosh\,x}{\sqrt{1-1/x^{2}}}\Bigg{)},\] (114) \[G_{0}=\frac{\tilde{\kappa}}{h}\frac{2x^{2}}{(1-x^{2})^{2}}\Bigg{(}-\frac{15-20x^{2}+8x^{4}}{\sqrt{1-1/x^{2}}}\arccosh\,x+6-3x^{2}+8\left(1-x^{2}\right)^{2}\log(2x)\Bigg{)}. \tag{115}\] Note that applying these expressions for \(x<1\) requires using the logarithmic definition \(\arccosh(z)=\ln\big{(}z+\sqrt{z^{2}-1}\big{)}\) and allowing complex \(z\). It is useful to define \(f_{\kappa}(x)\), \(f_{G}(x)\), \(f_{F}(x)\) such that \(\Delta\kappa=\tilde{\kappa}\,f_{\kappa}(x)\), \(F_{0}=\frac{\tilde{\kappa}}{h}\,f_{F}(x)\), \(G_{0}=\frac{\tilde{\kappa}}{h}\,f_{G}(x)\).
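For numerical work, \(f_{\kappa}\), \(f_{F}\), \(f_{G}\) can be implemented directly from Eqs. (112), (114), and (115); the sketch below is our own (not from the paper's code) and uses complex logarithms to handle the \(x<1\) continuation noted above:

```python
import numpy as np

def _acosh_ratio(x):
    # arccosh(x) / sqrt(1 - 1/x^2), continued to 0 < x < 1 via complex logs;
    # for x < 1 both factors are purely imaginary, so the ratio stays real.
    z = np.asarray(x, dtype=complex)
    return (np.log(z + np.sqrt(z**2 - 1.0)) / np.sqrt(1.0 - 1.0 / z**2)).real

def f_kappa(x):
    x = np.asarray(x, dtype=float)
    return 2.0 * x**2 / (1.0 - x**2) * (1.0 - _acosh_ratio(x))

def f_F(x):
    x = np.asarray(x, dtype=float)
    return 2.0 * x**2 / (1.0 - x**2)**2 * (2.0 + x**2 - 3.0 * _acosh_ratio(x))

def f_G(x):
    x = np.asarray(x, dtype=float)
    return 2.0 * x**2 / (1.0 - x**2)**2 * (
        -(15.0 - 20.0 * x**2 + 8.0 * x**4) * _acosh_ratio(x)
        + 6.0 - 3.0 * x**2 + 8.0 * (1.0 - x**2)**2 * np.log(2.0 * x)
    )

x = np.array([0.05, 0.2, 0.5, 2.0, 5.0])
print(f_F(x) / f_G(x))   # the ratio that fixes x from imaging (see App. A.1)
```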
These functions are shown in Fig. 5. We can use these expressions to analyze the modeling constraints obtained from imaging data. As we already remarked in Sec. II, of the expansion terms, \(\Delta\beta_{0}\) is exactly degenerate with the unknown source position; \(\Delta\kappa\) is degenerate with external convergence, and can be absorbed by the MSD (changing the inference of \(H_{0}\)); \(\Delta\gamma_{0}\) is degenerate with external shear; and so, only \(F_{0}\) and \(G_{0}\) can produce useful modeling constraints. However, since the axisymmetric NFW model contains three free parameters \(\tilde{\kappa}\), \(h\), \(x=\theta_{\rm s}/h\), even a perfect measurement of the \(F_{0}\) and \(G_{0}\) terms still leaves a degeneracy.

### MSD

Consider the usual MSD transformation, induced by a parameter \(\lambda\): \[\Delta\kappa \rightarrow \Delta\kappa_{\lambda}=\lambda\Delta\kappa+1-\lambda, \tag{116}\] \[F_{0} \rightarrow F_{0,\lambda}=\lambda F_{0},\ \ G_{0}\,\rightarrow\,G_{0,\lambda}=\lambda G_{0} \tag{117}\] (with matching transformations on the other terms, which are not relevant here). The combinations \(F_{\rm eff}\) and \(G_{\rm eff}\) of Eq. (14) are MSD-invariant, and are the quantities that constrain the model parameters. Suppose then that we are given a precise determination of \(F_{\rm eff}\) and \(G_{\rm eff}\) from the imaging. In this case, the ratio \(F_{\rm eff}/G_{\rm eff}\) determines \(x\). Having fixed \(x\), there remains a degeneracy in \(\Delta\kappa\), which we can express as \[\Delta\kappa = \frac{h\,G_{\rm eff}\,f_{\kappa}(x)}{f_{G}(x)+h\,G_{\rm eff}\,f_{\kappa}(x)}. \tag{100}\] In Eq. (100), we think of \(G_{\rm eff}\) and \(x\) as fixed by the imaging data, while the model parameter \(h\) is free to vary (up to possible external priors, discussed in the main text). For \(|h\,G_{\rm eff}|\ll 1\), the parameter regime of most interest for us, we have \(\Delta\kappa\approx h\,G_{\rm eff}\,f_{\kappa}(x)/f_{G}(x)\). Namely, an imaging determination of the flexion terms \(G_{\rm eff}\) and \(F_{\rm eff}\) cannot determine \(\Delta\kappa\), which is almost directly degenerate with a change in the model parameter \(h\) while holding \(x=\theta_{\rm s}/h\) and \(\tilde{\kappa}/h\) fixed. In terms of the original NFW model parameters, the \(\Delta\kappa\) degeneracy maps to adjusting \(R_{\rm s}\propto h\) while holding \(\rho_{0}\) fixed. External priors are needed to break this degeneracy. A prior on \(h\), the cluster center of mass position, obviously ameliorates it. A prior on the group velocity dispersion also ameliorates it, since \(\sigma_{\rm los}^{2}\propto\rho_{0}R_{\rm s}^{2}\). The important point is that the external priors are crucial: without them, the MSD associated with an NFW group is not broken by the imaging data, even with explicit modeling of the group.

### \(G\)-term degeneracy

The threefold \(G\)-term degeneracy can also be clarified using the analytic expansion. To this end, it is useful to replace the model parameter \(\tilde{\kappa}\) by \(\Delta\kappa\). As we have seen in Sec. IV, an MCMC trapped near a displaced likelihood minimum will attempt to reduce \(|F_{\rm eff}|\) while keeping \(G_{\rm eff}\) fixed. This amounts to flowing towards \(x\) that minimizes \(|F_{\rm eff}/G_{\rm eff}|=|f_{F}(x)/f_{G}(x)|\), while adjusting \(\Delta\kappa\) and \(h\) so as to keep \(G_{\rm eff}=(f_{G}/f_{\kappa})(\Delta\kappa/h)/(1-\Delta\kappa)={\rm const}\). As can be seen from Fig.
5, minimizing \(|f_{F}(x)/f_{G}(x)|\) pulls the fit towards smaller \(x\). In turn, this pulls \(|f_{G}(x)/f_{\kappa}(x)|\) to a larger value, meaning that in order to compensate and keep \(G_{\rm eff}\) constant, the combination \(\left(\Delta\kappa/h\right)/\left(1-\Delta\kappa\right)\) is pulled to a smaller value. For \(\Delta\kappa\ll 1\), this tends to make the fit pull towards a model in which \(\Delta\kappa\) is smaller than its truth value, causing an upward bias in \(H_{0}\).

### Time delays

Time delays between images A and B can be written as (see e.g. [69, 70]) \[\begin{split}\Delta t_{\rm AB}&=D_{\rm dt}\bigg{(}\frac{\theta_{\rm A}^{2}}{2}-\vec{\beta}\cdot\vec{\theta}_{\rm A}-\Psi(\vec{\theta}_{\rm A})-({\rm A}\leftrightarrow{\rm B})\bigg{)}\\ &=:D_{\rm dt}(\tau_{\rm A}-\tau_{\rm B})\,\ D_{\rm dt}=(1+z_{\rm l})\frac{d_{\rm l}d_{\rm s}}{d_{\rm ls}}.\end{split} \tag{101}\] One can find the time delay from the complex lens equation, by noticing \[\mathbf{\nabla}_{\rm c}=2\frac{\partial}{\partial\theta^{*}}\implies\tau=\frac{1}{2}\int{\rm d}\theta^{*}\,\alpha+f(\theta)\, \tag{102}\] where the integral is a definite integral. Notice that the integral misses a possible function of \(\theta\) alone; this function can be recovered by imposing \(\tau=\tau^{*}\). Focusing on \(\tau_{\rm A}\), in complex notation we can write \[\begin{split}\tau_{\rm A}&=\frac{\theta_{\rm A}\theta_{\rm A}^{*}}{2}-\frac{1}{2}[(\beta+\mathbf{\nabla}_{\rm c}\Psi_{\rm nfw})\theta_{\rm A}^{*}+{\rm c.c.}]-\Psi_{\rm lens}(\vec{\theta}_{\rm A})\\ &-\Psi_{\rm halo}(0,0)-\frac{1}{2}\kappa\theta_{\rm A}\theta_{\rm A}^{*}-\frac{1}{4}\Big{[}(\gamma+\gamma^{\rm ext})\theta_{\rm A}^{*}{}^{2}+{\rm c.c.}\Big{]}\\ &-\frac{1}{24}\Big{(}3F\theta_{\rm A}^{*}{}^{2}\theta_{\rm A}+G\theta_{\rm A}^{*}{}^{3}+{\rm c.c.}\Big{)}.\end{split} \tag{103}\] In real notation, \[\begin{split}\tau_{\rm A}&=\frac{\theta_{\rm A}^{2}}{2}-(\beta_{i}+\partial_{i}\Psi_{\rm nfw})\theta_{i}^{\rm A}-\Psi_{\rm lens}(\vec{\theta}_{\rm A})-\Psi_{\rm halo}(0,0)\\ &-\frac{1}{2}\kappa\theta_{\rm A}^{2}-\frac{1}{2}(\gamma_{1}+\gamma_{1}^{\rm ext})({\theta_{1}^{\rm A}}^{2}-{\theta_{2}^{\rm A}}^{2})-(\gamma_{2}+\gamma_{2}^{\rm ext})\theta_{1}^{\rm A}\theta_{2}^{\rm A}\\ &-\frac{1}{12}\Big{(}(3F_{1}+G_{1}){\theta_{1}^{\rm A}}^{3}+(3F_{2}-G_{2}){\theta_{2}^{\rm A}}^{3}\\ &+3(F_{1}-G_{1})\theta_{1}^{\rm A}{\theta_{2}^{\rm A}}^{2}+3(F_{2}+G_{2})\theta_{2}^{\rm A}{\theta_{1}^{\rm A}}^{2}\Big{)}.\end{split} \tag{104}\]

### Effect of ellipticity

We can implement ellipticity in the NFW profile by using pseudo-elliptical NFW lens models [71, 72], which is a reliable approximation as long as the ellipticity is not too large. This approximation, implemented in lenstronomy, uses the spherical NFW lensing potential, computed in elliptical coordinates, \[\Psi_{\epsilon}(\vec{\theta})=\Psi(\vec{\theta}_{\epsilon})\,\ \vec{\theta}_{\epsilon}=\begin{pmatrix}\cos\varphi\sqrt{1-\epsilon}&\sin\varphi\sqrt{1-\epsilon}\\ -\sin\varphi\sqrt{1+\epsilon}&\cos\varphi\sqrt{1+\epsilon}\end{pmatrix}\vec{\theta}, \tag{105}\] where \[\epsilon=\frac{2e}{1+e^{2}},\ e=\sqrt{e_{1}^{2}+e_{2}^{2}},\ \varphi=\frac{1}{2}\arctan\frac{e_{1}}{e_{2}}. \tag{101}\] Notice that under rotations of the coordinate system, \(\epsilon\) is invariant but \(\varphi\) changes.

Figure 5: \(x\)-dependence of the lensing expansion terms.

The expansion of the potential \(\Psi_{\mathrm{halo},\epsilon}\) follows Eq.
(100); however, since the potential is not axisymmetric, it is not possible to factor out the \(\phi\) dependence of \(\Delta\gamma_{\epsilon}\), \(G_{\epsilon}\), \(F_{\epsilon}\) as we did in Eq. (4). Nevertheless, it is possible to express these quantities as functions of \(\Delta\gamma_{0}\), \(F_{0}\) and \(G_{0}\). Notice that we can write (here, \(\vec{\theta}=(x,y)^{\top}\)) \[\mathbf{\nabla}_{\mathrm{c}}=\mathrm{e}^{\mathrm{i}\varphi}\bigg{(}\sqrt{1-\epsilon}\frac{\partial}{\partial x_{\epsilon}}+\mathrm{i}\sqrt{1+\epsilon}\frac{\partial}{\partial y_{\epsilon}}\bigg{)}. \tag{102}\] As an example, consider \[\gamma_{\epsilon} = \frac{1}{2}\mathbf{\nabla}_{\mathrm{c}}\mathbf{\nabla}_{\mathrm{c}}\Psi_{\epsilon}=\frac{\mathrm{e}^{2\mathrm{i}\varphi}}{2}\Big{(}(1-\epsilon)\frac{\partial^{2}}{\partial x_{\epsilon}^{2}}-(1+\epsilon)\frac{\partial^{2}}{\partial y_{\epsilon}^{2}}+2\mathrm{i}\sqrt{1-\epsilon^{2}}\frac{\partial}{\partial x_{\epsilon}}\frac{\partial}{\partial y_{\epsilon}}\Big{)}\Psi_{\epsilon}=\mathrm{e}^{2\mathrm{i}\varphi}(\gamma_{1}(\vec{\theta}_{\epsilon})+\mathrm{i}\sqrt{1-\epsilon^{2}}\gamma_{2}(\vec{\theta}_{\epsilon})-\epsilon\kappa(\vec{\theta}_{\epsilon})), \tag{103}\] where the quantities without \(\epsilon\) subscript refer to the spherical functions computed at the elliptic coordinate. By defining the displacement vector in elliptical coordinates in the complex notation, \(\vec{h}\leftrightarrow h_{\epsilon}\mathrm{e}^{\mathrm{i}\phi_{\epsilon}}\), we can express \[\gamma=\gamma_{0}(h_{\epsilon}(h,\phi))\mathrm{e}^{\mathrm{i}\phi_{\epsilon}(\phi)}. \tag{104}\] For the NFW example, \(\gamma_{0}\) is the expression in Eq. (100). For \(\kappa\), one has \[\kappa_{\epsilon}=\kappa\big{(}h_{\epsilon}(h,\phi)\big{)}-e\gamma_{1}(h_{\epsilon}(h,\phi)). \tag{105}\] A similar reasoning follows for \(F_{\epsilon}\), \(G_{\epsilon}\). As one would expect, ellipticity modifies the effective flexion and convergence by correction terms of order \(\epsilon\). Our numerical MCMC analysis includes the full effect, and moderate or small ellipticity has only a minor effect on the results.

## Appendix B Kinematics constraints for host group

Here we review constraints on a host group, obtainable by measurements of a sample of galaxy members. We only consider spherical systems with an isotropic velocity distribution. We start with a PL mass and tracer galaxy distribution, and go on to consider the NFW profile. For clarity, our discussion repeats a number of statements from the main text, completing these with details and derivations.

### Power law

We start with the PL profile, \(\rho(r)=\rho_{0}\left(r/R_{0}\right)^{-\gamma}\). The surface density of this profile is \[\Sigma = \frac{\sqrt{\pi}\Gamma\left(\frac{\gamma-1}{2}\right)\rho_{0}R_{0}}{\Gamma\left(\frac{\gamma}{2}\right)}\left(\frac{d_{\rm l}\theta}{R_{0}}\right)^{1-\gamma}, \tag{106}\] resulting in the deflection angle and convergence, \[\alpha = \left(\frac{\theta}{\theta_{\mathrm{E}}}\right)^{1-\gamma}\theta,\ \ \kappa = \frac{3-\gamma}{2}\left(\frac{\theta}{\theta_{\mathrm{E}}}\right)^{1-\gamma}, \tag{107}\] where \[\theta_{\mathrm{E}} = \left[\frac{2\sqrt{\pi}\Gamma\left(\frac{\gamma-1}{2}\right)}{(3-\gamma)\Gamma\left(\frac{\gamma}{2}\right)}\right]^{\frac{1}{\gamma-1}}\frac{R_{0}}{d_{\rm l}}\left(\frac{\rho_{0}R_{0}}{\Sigma_{c}}\right)^{\frac{1}{\gamma-1}}.
\tag{108}\] The circular velocity is \[v_{\mathrm{circ}}^{2} = \frac{GM(r)}{r}\,=\,\frac{4\pi G\rho_{0}R_{0}^{2}}{3-\gamma}\left(\frac{d_{\rm l}\theta}{R_{0}}\right)^{2-\gamma}. \tag{109}\] The circular velocity is not directly measurable. What is measurable is the LOSVD [73], \[\sigma_{\mathrm{los}}^{2} = \frac{2G}{s_{*}(r)}\int_{1}^{\infty}\frac{dy\,y}{\sqrt{y^{2}-1}}\,\int_{y}^{\infty}\frac{dx}{x^{2}}n_{*}(xr)M(xr) = \frac{8\pi G\rho_{0}R_{0}^{3}r^{3-\gamma}}{(3-\gamma)s_{*}(r)}\int_{1}^{\infty}\frac{dy\,y}{\sqrt{y^{2}-1}}\,\int_{y}^{\infty}dx\,n_{*}(xr)\,x^{1-\gamma},\] where \(n_{*}\) and \(s_{*}\) are the galaxy number density and surface density, respectively. The first equality holds for an arbitrary isotropic profile, while the second applies to the PL. If the galaxy number density is distributed similarly to the mass density in the group, \(n_{*}(r)\propto(r/R_{0})^{-\gamma}\), we find (valid for \(\frac{3}{2}<\gamma<\frac{5}{2}\)): \[\sigma_{\mathrm{los}}^{2} = \frac{\Gamma^{2}\left(\frac{\gamma}{2}\right)\Gamma\left(\gamma-\frac{3}{2}\right)}{4\sqrt{\pi}\Gamma^{2}\left(\frac{\gamma-1}{2}\right)\Gamma\left(\gamma\right)}\frac{d_{\mathrm{s}}}{d_{\mathrm{ls}}}\theta_{\mathrm{E}}\left(\frac{\theta}{\theta_{\mathrm{E}}}\right)^{2-\gamma}. \tag{110}\] For the SIS (\(\gamma=2\)), \(\theta_{\mathrm{E}}=8\pi^{2}\frac{d_{\mathrm{ls}}}{d_{\mathrm{s}}}G\rho_{0}R_{0}^{2}\) and \(\sigma_{\mathrm{los}}^{2}=\frac{d_{\mathrm{s}}}{d_{\mathrm{ls}}}\frac{\theta_{\mathrm{E}}}{4\pi}=\frac{v_{\mathrm{circ}}^{2}}{2}\). The relation \(\sigma_{\mathrm{los}}^{2}=\frac{d_{\mathrm{s}}}{d_{\mathrm{ls}}}\frac{\theta_{\mathrm{E}}}{4\pi}\) is sometimes adopted by lensing analyses. However, the expressions for \(\sigma_{\mathrm{los}}^{2}\) depend on a series of simplifying assumptions, including: PL mass distribution; the same PL galaxy number distribution; virial state (following the Jeans equation); spherical symmetry; isotropic velocity distribution; and, to satisfy \(\sigma_{\mathrm{los}}^{2}=\frac{d_{\mathrm{s}}}{d_{\mathrm{ls}}}\frac{\theta_{\mathrm{E}}}{4\pi}\), the specific case of the SIS. We do not expect those assumptions to hold precisely for individual galaxy groups. Lensing analyses would be prudent to assign a larger systematic uncertainty than the "bare" observational uncertainty on \(\sigma_{\mathrm{los}}^{2}\).

To explore one of those effects, consider varying \(\gamma\) in the relation between \(\sigma_{\mathrm{los}}^{2}\) and \(\theta_{\mathrm{E}}\). For non-SIS systems, \(\sigma_{\mathrm{los}}^{2}(\theta)\) depends on \(\theta\), and a relevant observable is the brightness-weighted average of \(\sigma_{\mathrm{los}}^{2}\) in some aperture \(\theta_{A}\). We can compare such an averaged LOSVD to the SIS relation: \[\frac{\langle\sigma_{\mathrm{los}}^{2}\rangle_{A}}{\frac{d_{\mathrm{s}}}{d_{\mathrm{ls}}}\frac{\theta_{\mathrm{E}}}{4\pi}} = \frac{\int_{0}^{\theta_{A}}d\theta\,\theta\,s_{*}(\theta)\,\sigma_{\mathrm{los}}^{2}(\theta)}{\frac{d_{\mathrm{s}}}{d_{\mathrm{ls}}}\frac{\theta_{\mathrm{E}}}{4\pi}\int_{0}^{\theta_{A}}d\theta\,\theta\,s_{*}(\theta)} = \frac{3-\gamma}{5-2\gamma}\frac{\sqrt{\pi}\Gamma^{2}\left(\frac{\gamma}{2}\right)\Gamma\left(\gamma-\frac{3}{2}\right)}{\Gamma^{2}\left(\frac{\gamma-1}{2}\right)\Gamma\left(\gamma\right)}\left(\frac{\theta_{A}}{\theta_{\mathrm{E}}}\right)^{2-\gamma}. \tag{111}\] We illustrate Eq. (110) in Fig. 6.

### NFW

For the NFW profile, assuming again that the tracer galaxy density follows the mass density, Eq.
(100) can be written as: \[\sigma_{\rm los}^{2}(\theta) = G\rho_{0}R_{\rm s}^{2}\,f\left(\frac{\theta}{\theta_{\rm s}}\right), \tag{101}\] \[f(a) = \frac{8\pi a\int_{1}^{\infty}\frac{dy\,y}{\sqrt{y^{2}-1}}\int_{ya}^{\infty}\frac{dx}{x^{3}(1+x)^{2}}\left(\ln\left(1+x\right)-\frac{x}{1+x}\right)}{2\int_{0}^{\infty}\frac{dz}{\sqrt{a^{2}+z^{2}}(1+\sqrt{a^{2}+z^{2}})^{2}}}. \tag{102}\] Some of these integrals can be done in closed form, but that is not particularly illuminating. The function \(f(\theta/\theta_{\rm s})\) is shown in Fig. 7.

Figure 7: The LOSVD \(\theta\)-dependence for the NFW profile.

Assuming the NFW model, a kinematics prior should use Eq. (101) (aperture-averaged as needed) to translate a LOSVD measurement into a constraint on the combination of \(\rho_{0}\) and \(R_{\rm s}\) appearing in the equation. However, analyses in the literature took a somewhat different route. Ref. [9] (after [48, 49]) considered the following expression as a proxy for the LOSVD of the NFW profile: \[\bar{\sigma}^{2} = \frac{GM_{\rm vir}}{3R_{\rm vir}}, \tag{103}\] where \(R_{\rm vir}=c_{\rm vir}\,R_{s}\), \(M_{\rm vir}=M(R_{\rm vir})\), and \(c_{\rm vir}\) is the NFW concentration parameter, for which it is possible to extract a theoretical prediction from N-body simulations [51]. Namely, instead of using the physical relation, Eq. (101), to convert the measured LOSVD into a constraint on the group halo model, Ref. [9] made the identification \(\sigma_{\rm los}^{2}\rightarrow\bar{\sigma}^{2}\), and then used Eq. (103) to constrain \(M_{\rm vir}\) and \(R_{\rm vir}\). The dependence of the RHS of Eq. (103) on \(c_{\rm vir}\) can be clarified by noting that \[\bar{\sigma}^{2} = G\rho_{0}R_{\rm s}^{2}\frac{4\pi}{3c_{\rm vir}}\left(\ln(1+c_{\rm vir})-\frac{c_{\rm vir}}{1+c_{\rm vir}}\right). \tag{104}\] The dependence is not strong: the factor \(\frac{4\pi}{3c_{\rm vir}}\left(\ln(1+c_{\rm vir})-\frac{c_{\rm vir}}{1+c_{\rm vir}}\right)\) is equal to \(\{0.89,0.85,0.80,0.62\}\) when varying \(c_{\rm vir}=\{3,4,5,10\}\), respectively. However, it is noteworthy that this dependence is not physical, but rather introduced artificially. The red solid line in Fig. 8 compares the physical aperture-weighted LOSVD to the quantity \(\bar{\sigma}^{2}\), using \(c_{\rm vir}=3.5\), relevant for low-redshift massive galaxies. The meaning of Fig. 8 is that using Eq. (103) and identifying \(\sigma_{\rm los}^{2}\rightarrow\bar{\sigma}^{2}\) can introduce a systematic error. The magnitude of the error depends on the aperture and on the value adopted for \(c_{\rm vir}\), but can amount to \(\sim 20\%\).

### Kinematical and cosmological priors

Here we describe our attempt to define a kinematics+cosmology prior, following a similar procedure as used in [9] for PG1115+080. The first step taken by [9] is to identify \(\sigma_{\rm los}\rightarrow\bar{\sigma}\). The next step is to invoke theoretical expectations based on N-body simulations. Following App. A of [49], the virial mass and the virial radius of the halo are related to a characteristic redshift-dependent overdensity: \[M_{\rm vir} = \frac{4\pi}{3}R_{\rm vir}^{3}\Delta_{\rm c}(z)\rho_{\rm c}(z), \tag{14}\] where \(\Delta_{\rm c}(z)=178(\rho_{\rm m}(z)/\rho_{\rm c}(z))^{0.45}\) is the expected halo overdensity at the virial radius and \(\rho_{\rm c}(z)\) and \(\rho_{\rm m}(z)\) are the cosmological critical density and matter density at redshift \(z\) [50]. Combining Eqs.
(14) and (15) turns \(\bar{\sigma}\) into separate priors on \(R_{\rm vir}\) and \(M_{\rm vir}\): \[R_{\rm vir} = \frac{3\bar{\sigma}}{2\sqrt{\pi G\Delta_{\rm c}\rho_{\rm c}}}, \tag{15}\] \[M_{\rm vir} = \frac{9\bar{\sigma}^{3}}{2\sqrt{\pi G^{3}\Delta_{\rm c}\rho_{\rm c }}}. \tag{16}\] Next, Ref. [9] adds a prior on \(c_{\rm vir}\), using the \(M_{\rm vir}-c_{\rm vir}\) relation from Ref. [51]: \[\log_{10}c_{\rm vir} \approx 0.97-0.09\log_{10}\left(\frac{M_{\rm vir}\,h}{10^{12}M_{ \odot}}\right). \tag{17}\] Treating \(c_{\rm vir}\) as a function of \(\bar{\sigma}\) via Eqs. (17) and (16), one has now produced separate priors on the parameters \(\rho_{0}\) and \(R_{\rm s}\): \[\rho_{0} = \frac{\Delta_{\rm c}\rho_{\rm c}c_{\rm vir}^{3}}{3}\left(\ln(1+c_ {\rm vir})-\frac{c_{\rm vir}}{1+c_{\rm vir}}\right)^{-1}, \tag{18}\] \[R_{\rm s} = \frac{R_{\rm vir}}{c_{\rm vir}}. \tag{19}\] The uncertainty can be estimated by combining the observational uncertainty on \(\sigma_{\rm los}\) and the theoretical scatter on the \(M_{\rm vir}-c_{\rm vir}\) relation; the latter is taken in Ref. [9] as \(\pm 0.14\) on the \(\log_{10}c_{\rm vir}\) expression in Eq. (17). In practice, our implementation of these priors in the MCMC mock analysis is as follows: 1. The parameters that are directly sampled by the chain, defining the posterior likelihood space, are angular variables that are linearly proportional to \(\rho_{0}\) and \(R_{\rm s}\). In each step of the calculation, we translate the sampled value of \(\rho_{0}\) into an expected value of \(c_{\rm vir}\), using Eq. (18). 2. Having obtained \(c_{\rm vir}\), we convert Eq. (17) into an equation for \(R_{\rm vir}\), replacing \(M_{\rm vir}\) by \(R_{\rm vir}\) using Eq. (14). Now, we implement the estimated theoretical scatter of \(\pm 0.14\) dex on \(c_{\rm vir}\) in the \(M_{\rm vir}-c_{\rm vir}\) relation, to define a range of acceptable values for \(R_{\rm s}\) via \(R_{\rm s}\subset R_{\rm vir}/(10^{+0.14},10^{-0.14})\). In the chain, we discard sampled values of \(R_{\rm s}\) that fall outside of this range. This step therefore enforces a redshift-dependent correlation between \(\rho_{0}\) and \(R_{\rm s}\). Up to this point, we made no connection to kinematics. 3. Finally we come to the kinematics. Using Eq. (13), we convert the model point represented by \(\rho_{0}\) and \(R_{\rm s}\), along with the central value of \(c_{\rm vir}\), into a model prediction for \(\bar{\sigma}\). This prediction is then compared to the measured value of \(\sigma_{\rm los}\) using the nominal observational uncertainty to obtain a likelihood factor. ### Sample variance Sample variance is the dominant nominal source of uncertainty in \(\sigma_{\rm los}^{2}\), given that typically only a handful of galaxies are measured as tracers of the group. Ref. [30] used bootstrap to estimate sample variance directly from the measured sample of galaxies; here, we complement this route by generating random sets of \(N_{\rm g}\) galaxies tracing an NFW halo. The realizations are drawn from an equilibrium phase space distribution function \(f(r,v)\). For a spherical halo with a statistically static distribution function, we have \(f(r,v)=f(\varepsilon(r,v))\), where \(\varepsilon=\psi(r)-\frac{v^{2}}{2}\), \(\psi(r)=-\phi(r)\), and \(\phi(r)\) is the halo Newtonian potential. \(f(\varepsilon)\) is given by6 Footnote 6: See [73], Ch.4.3.1. \[f(\varepsilon) = \frac{1}{\sqrt{8}\pi^{2}}\int_{0}^{\varepsilon}\frac{d\psi}{ \sqrt{\varepsilon-\psi}}\frac{d^{2}\rho}{d\psi^{2}}. 
\tag{20}\] We calculate \(f(\varepsilon)\) numerically, and use it to draw samples of \(N_{\rm g}\) tracer galaxies that fall within projected aperture \(\theta_{\rm A}\). The results of this exercise for \(N_{\rm g}=13\) (as in PG1115+080) and for different apertures are shown as blue dots in Fig. 8. For each value of \(\theta_{\rm A}/\theta_{\rm s}\) we generate 100 mock samples. For each sample we calculate \(\sigma_{\rm los}^{2}\) directly as the variance of LOS velocity across the \(N_{\rm g}\) galaxies. The mean and standard deviation of \(\sigma_{\rm los}^{2}\) are shown by the thick blue line and shaded region. The variance we find for \(\sigma_{\rm los}^{2}\) in Fig. 8 is roughly consistent with sample variance of a normal distribution: \(\frac{\delta\sigma_{\rm los}^{2}}{\sigma_{\rm los}^{2}}\approx\sqrt{\frac{2}{ N_{g}-1}}\approx 0.4\) for \(N_{g}=13\). For comparison with Ref. [29], we also calculate uncertainty estimates for \(\sigma_{\rm los}^{2}\) using the bootstrap method. As a rule, the bootstrap method provides a slightly lower uncertainty estimate than the variance found with mock realizations, but the difference is small: whereas direct sample variance predicts \(\Delta\sigma_{\rm los}^{2}/\sigma_{\rm los}^{2}\approx 0.4\), bootstrap predicts \(\Delta\sigma_{\rm los}^{2}/\sigma_{\rm los}^{2}\approx 0.35\). Thus, we reproduce the sample variance-dominated uncertainty estimate of Ref. [30] for PG1115+080. ## Appendix C Estimating the probability of an MCMC to fall into a displaced minimum As noted in Sec. III.3, at least for a wide group centroid prior (as derived in Ref. [30] for PG1115+080), trial and error with emcee [54] suggests that the initial placement of the walkers is a key factor in deciding which likelihood minimum will attract the fit. We can (roughly) estimate the probability of falling into a wrong minimum by the probability for the walker placement to start off closer to a false minimum than to the truth one. Let us assume that the initial placement of the walkers is chosen to coincide with the prior's center (this seems like a natural choice). In this case, the probability to fall into a wrong minimum is roughly given by the probability of the group centroid prior to be nearer a false minimum than the truth one. We can estimate this probability using mock samples of tracer galaxies as in Sec. B.4. Consider a sample of \(N_{\rm g}\) galaxies, and choose a random member to be the "primary lens". From the same sample, derive a group center prior as the center of mass of the members. In general, of course, the prior center does not coincide with the center of the halo used to generate the mock.7 Now, draw a line connecting the primary lens with the true halo center, and another line connecting the primary lens with the prior center: the prior center is closer to a false minimum if the smaller angle between these lines is larger than \(60^{\rm o}\). Footnote 7: The statistical distribution of the mismatch, which goes to defining the prior width, can be estimated either by bootstrap, as done in [30] for the actual data, or by repeated mocks. We test both, and find them to be compatible. The result of this calculation depends on the number of member galaxies \(N_{\rm g}\), the group halo profile, and the analysis aperture. For \(N_{\rm g}=13\) in the NFW model with an aperture of 10 \(R_{\rm s}\) (around 3 virial radii), we find the wrong minimum probability to be \(\sim 10\)%. ## Appendix D Full corner plots. 
In this Appendix we collect some detailed results from the MCMC analysis. In Fig. 9 we show triangle plots in which the cosmological prior on \(c_{\rm vir}\) is not included. We do this exercise in order to investigate the impact of this prior on the results. The main point to notice is that when the \(c_{\rm vir}\) prior is omitted, the bias on \(H_{0}\) becomes somewhat more pronounced (compare Fig. 4, which includes this prior). At the same time, without this prior, the best fit result for \(R_{\rm s}\) in displaced (false) posterior likelihood minima is driven to small values. This point is shown by a comprehensive triangle plot in Fig. 10. In Fig. 11 we show triangle plots in which the kinematics prior is enforced with a standard deviation of \(60\) km/s on \(\sigma_{\rm los}\). These results can be compared to Fig. 4 from the main text, where, as noted in Sec. III.2, the standard deviation on \(\sigma_{\rm los}\) was taken as \(120\) km/s. We do not find a significant difference. Fig. 12 gives a more complete perspective on the degeneracies and the global structure of the likelihood as exposed by a zeus run.
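A practical takeaway from Apps. C-D is that the walker initialization should deliberately cover all three candidate minima. The emcee sketch below is our own illustration; the trimodal `log_prob` is a toy stand-in for the real imaging likelihood, and mimics the threefold \(G\)-term degeneracy:

```python
import numpy as np
import emcee

# Toy 2-parameter problem: the group-centroid offset (hx, hy) relative
# to the primary lens; cos(3*phi) mimics the three rotated minima.
def log_prob(p):
    r, phi = np.hypot(p[0], p[1]), np.arctan2(p[1], p[0])
    return -0.5 * ((r - 1.0) / 0.1) ** 2 + 3.0 * np.cos(3.0 * phi)

ndim, nwalkers, nsteps = 2, 30, 2000
h0 = np.array([1.0, 0.0])            # prior center for the group centroid

# Seed one third of the walkers near each of the three rotated candidates,
# so the chain cannot silently settle into a single (possibly false) minimum.
p0 = []
for n in (-1, 0, 1):
    rot = 2.0 * np.pi * n / 3.0
    R = np.array([[np.cos(rot), -np.sin(rot)], [np.sin(rot), np.cos(rot)]])
    p0.append(R @ h0 + 0.05 * np.random.randn(nwalkers // 3, ndim))
p0 = np.concatenate(p0)

sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, nsteps, progress=False)
```

Comparing the per-group posterior mass then exposes the disjoint-minima structure directly, rather than relying on a single chain to tunnel between modes.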
2304.06358
Deep Metric Multi-View Hashing for Multimedia Retrieval
Learning the hash representation of multi-view heterogeneous data is an important task in multimedia retrieval. However, existing methods fail to effectively fuse the multi-view features and utilize the metric information provided by the dissimilar samples, leading to limited retrieval precision. Current methods utilize weighted sum or concatenation to fuse the multi-view features. We argue that these fusion methods cannot capture the interaction among different views. Furthermore, these methods ignore the information provided by the dissimilar samples. We propose a novel deep metric multi-view hashing (DMMVH) method to address the mentioned problems. Extensive empirical evidence is presented to show that gate-based fusion is better than typical methods. We introduce deep metric learning to the multi-view hashing problems, which can utilize metric information of dissimilar samples. On the MIR-Flickr25K, MS COCO, and NUS-WIDE, our method outperforms the current state-of-the-art methods by a large margin (up to a 15.28% mean Average Precision (mAP) improvement).
Jian Zhu, Zhangmin Huang, Xiaohu Ruan, Yu Cui, Yongli Cheng, Lingfang Zeng
2023-04-13T09:25:35Z
http://arxiv.org/abs/2304.06358v1
# Deep Metric Multi-View Hashing for Multimedia Retrieval

###### Abstract

Learning the hash representation of multi-view heterogeneous data is an important task in multimedia retrieval. However, existing methods fail to effectively fuse the multi-view features and utilize the metric information provided by the dissimilar samples, leading to limited retrieval precision. Current methods utilize weighted sum or concatenation to fuse the multi-view features. We argue that these fusion methods cannot capture the interaction among different views. Furthermore, these methods ignore the information provided by the dissimilar samples. We propose a novel deep metric multi-view hashing (DMMVH) method to address the mentioned problems. Extensive empirical evidence is presented to show that gate-based fusion is better than typical methods. We introduce deep metric learning to the multi-view hashing problems, which can utilize the metric information of dissimilar samples. On MIR-Flickr25K, MS COCO, and NUS-WIDE, our method outperforms the current state-of-the-art methods by a large margin (up to a \(15.28\%\) mean Average Precision (mAP) improvement).

Multi-view hash, Multi-modal hash, Deep metric learning, Multimedia retrieval

## I Introduction

Multi-view hashing is utilized to solve multimedia retrieval problems. A well-designed multi-view hashing algorithm can dramatically improve the precision of multimedia retrieval tasks. Different from single-view hashing, which only searches in a single-view way, multi-view hashing can utilize data from different sources (e.g., image, text, audio, and video). Multi-view hashing representation learning first extracts heterogeneous features from different views, then fuses the multi-view features to capture a global representation of the different views.

Current multi-view hashing algorithms suffer from low retrieval precision, mainly for the following two reasons. First, the fusion of multi-view features is insufficient in current multi-view hashing algorithms. To get a global representation, typical multi-view hashing methods (e.g., Deep Collaborative Multi-View Hashing (DCMVH) [1], Flexible Multi-modal Hashing (FMH) [2]) utilize weighted sum or concatenation to fuse the multi-view features. The relationship between the texts and images is ignored during the fusing process, which weakens the expressiveness of the obtained global representation. Second, current methods are confined to the information provided by similar samples. The importance of measuring the distance between dissimilar samples is underrated. For instance, Flexible Graph Convolutional Multi-modal Hashing (FGCMH) [3] is a GCN-based [4] multi-view hashing method, which constructs the edges of a graph based on similarity and aggregates features of adjacent nodes. Hence, dissimilar samples do not play a role during this procedure.

We propose a _Deep Metric Multi-View Hashing_ method termed DMMVH. It takes advantage of Context Gating [5] to learn the interaction and dependency between the image and text features. Unlike typical methods, DMMVH fuses multi-view features into a global representation without losing dependency on these features. Moreover, deep metric learning is introduced to DMMVH. As shown in Fig. 1, initially, samples are distributed randomly in the raw data space. Using deep metric learning, semantically similar samples are close to one another, while dissimilar samples are pushed away.
To utilize the distance information of dissimilar samples, we design a deep metric loss function. Furthermore, we introduce a hyper-parameter to reduce the complexity of the designed loss function. The optimal embedding space is obtained through deep metric learning, which follows the semantics-preservation principle of hash representation learning.

Fig. 1: A schematic of deep metric learning. The inputs are randomly distributed in the data space. Deep metric learning projects the inputs to the embedding space, where the embeddings are allocated according to their semantic meaning.

We evaluate our method on the MIR-Flickr25K, MS COCO, and NUS-WIDE datasets in multi-view hash representation learning benchmarks. The proposed method provides up to a \(15.28\%\) mAP improvement in these benchmarks. Our main contributions are as follows:

* We propose a novel multi-view hash method, which achieves state-of-the-art results in multimedia retrieval.
* We take advantage of Context Gating to learn a better global representation of different views to address the insufficient fusion problem.
* Deep metric learning is introduced to multi-view hashing for the first time. A deep metric loss with linear complexity is designed and optimized.

## II The Proposed Methodology

DMMVH aims to utilize a newly designed deep metric loss to train a deep multi-view hashing network. We first present the deep multi-view hashing network, which deeply fuses the multi-view features into a global representation. Then the new deep metric loss is illustrated. Finally, a hyper-parameter \(\lambda\) is introduced to reduce the complexity.

### _Deep Multi-View Hashing Network_

The deep multi-view hashing network is designed to convert multi-view data into hash codes. As shown in Fig. 2, DMMVH consists of a vision backbone, a text backbone, normalization modules, a multi-view fusion module, and a hash layer. These modules are described in detail below.

1. **Vision Backbone:** Deep ResNet [6] is employed to produce visual features.
2. **Text Backbone:** The BERT-base [7] is utilized to extract text features.
3. **Normalization Module:** The normalization module projects the multi-view features (visual and text features) into the same dimension and range.
4. **Multi-View Fusion Module:** We employ Context Gating to fuse the concatenated visual and text features. The multi-view fusion module projects the input multi-view features into a new global representation as: \[X_{\text{fusion}}=\sigma(w_{\text{fusion}}X_{\text{concat}}+b_{\text{fusion}})\circ X_{\text{concat}},\] (1) where \(X_{\text{concat}}\in\mathbb{R}^{n}\) is the multi-view feature vector, \(\sigma\) is the element-wise sigmoid activation, and \(\circ\) is the element-wise multiplication. \(w_{\text{fusion}}\in\mathbb{R}^{n\times n}\) and \(b_{\text{fusion}}\in\mathbb{R}^{n}\) are trainable parameters. The vector of weights \(\sigma(w_{\text{fusion}}X_{\text{concat}}+b_{\text{fusion}})\in[0,1]^{n}\) represents a set of learned gates applied to the individual dimensions of the input feature \(X_{\text{concat}}\).
5. **Hash Layer:** A linear layer with a \(\tanh\) activation serves as the hash layer, which can be represented as \(h_{\text{k-bit}}=\text{sgn}[\tanh(w_{\text{hash}}X_{\text{fusion}}+b_{\text{hash}})]\), where \(\text{sgn}\) represents the signum function, and \(w_{\text{hash}}\in\mathbb{R}^{K\times n}\) and \(b_{\text{hash}}\in\mathbb{R}^{K}\) are trainable parameters. The output has the same number of dimensions as the hash code.
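The fusion and hashing steps above are compact enough to sketch directly. The following PyTorch fragment is our own minimal illustration of Eq. (1) and the hash layer, not the authors' released code; the 768-D feature sizes follow the backbones named above, and applying the sign only at retrieval time (keeping the \(\tanh\) relaxation during training) is our assumption:

```python
import torch
import torch.nn as nn

class ContextGatingFusion(nn.Module):
    """Eq. (1): X_fusion = sigmoid(W x + b) * x, elementwise gates."""
    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Linear(dim, dim)   # w_fusion, b_fusion

    def forward(self, x_concat):
        return torch.sigmoid(self.gate(x_concat)) * x_concat

class HashLayer(nn.Module):
    """Linear + tanh; assumption: sgn is applied only at retrieval time,
    so training keeps a differentiable surrogate of the K-bit code."""
    def __init__(self, dim, k_bits):
        super().__init__()
        self.fc = nn.Linear(dim, k_bits)  # w_hash, b_hash

    def forward(self, x_fusion):
        return torch.tanh(self.fc(x_fusion))

# Toy usage: 768-D image + 768-D text features, 64-bit codes.
fusion, hasher = ContextGatingFusion(1536), HashLayer(1536, 64)
x = torch.cat([torch.randn(8, 768), torch.randn(8, 768)], dim=1)
codes = torch.sign(hasher(fusion(x)))     # binary codes at inference
```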
### _Deep Metric Loss_

Assume a training dataset \(\mathcal{X}=\{(x_{i},y_{i})\}_{i=1}^{N}\), where \(x_{i}\in\mathbb{R}^{D}\) is a multi-view instance and \(y_{i}\) denotes the category information of \(x_{i}\). Furthermore, \(F:x\mapsto h\) denotes the deep multi-view hashing network, which maps the input space \(\mathbb{R}^{D}\) to the K-bit Hamming space \(\{-1,1\}^{K}\). Let \(h_{i}=F(x_{i})\) be the hash code of \(x_{i}\). Then, we have an elegant linear relationship between the Hamming distance \(dist_{H}(\cdot,\cdot)\) and the inner product \(\left\langle\cdot,\cdot\right\rangle\): \[dist_{H}\left(h_{i},h_{j}\right)=\frac{1}{2}\left(K-\phi_{ij}\right), \tag{2}\] where \(\phi_{ij}=\langle h_{i},h_{j}\rangle\). For \(x_{i}\), its label is \(y_{i}\in\{0,1\}^{C}\), where \(C\) is the number of categories. Notice that one sample may belong to multiple categories. Given the semantic label information, the pairwise similarity matrix \(S=\{s_{ij}\}\) can be defined as follows: if \(x_{i}\) and \(x_{j}\) are semantically similar, then \(s_{ij}=1\); otherwise, \(s_{ij}=0\).

Fig. 2: The flow chart of the DMMVH method. The image and text features are extracted by ResNet and BERT, respectively. The features are normalized by the normalization module and concatenated together. The multi-view fusion module performs Context Gating on the concatenated features and fuses the multi-view features while preserving their dependency. Finally, the hash layer produces a hash code based on the fused representation.

Given the matrices \(\Phi=(\phi_{ij})\) and \(S=(s_{ij})\), combining the cross-entropy loss and deep metric learning yields the loss function \[L_{m}^{\prime}=\frac{1}{N^{2}}\sum_{i=1}^{N}\sum_{j=1}^{N}[s_{ij}\log(1+e^{\phi_{ij}})-s_{ij}\phi_{ij}]. \tag{3}\] Since \(s_{ij}\) can only be \(0\) or \(1\), when \(s_{ij}=0\) the corresponding term of \(L_{m}^{\prime}\) vanishes, which means that dissimilar samples do not play any role in the training. Notice that the first part of the metric loss is \(\log(1+e^{\phi_{ij}})\). Considering the elegant linear relationship between the Hamming distance and the inner product, i.e., Eq. (2), as the inner product \(\phi_{ij}\) decreases, the Hamming distance increases. Therefore, this part is a proper metric loss: it penalizes dissimilar samples that lie close in the embedding space while rewarding a larger distance between them. Based on the above analysis, we revise Eq. (3) as \[L_{m}=\frac{1}{N^{2}}\sum_{i=1}^{N}\sum_{j=1}^{N}[w_{d}\log(1+e^{\phi_{ij}})-s_{ij}\phi_{ij}], \tag{4}\] where \(w_{d}\) represents the loss weight of dissimilar sample pairs. With this revision, dissimilar samples also contribute to the training. The derivation of the metric loss can be found in the appendix.

### _Hyper-parameter \(\lambda\)_

Notice that calculating the matrix \(\Phi\) or \(S\) has \(O(N^{2})\) complexity. By introducing a hyper-parameter \(\lambda\), calculating any one of them only has \(O(\lambda^{2}bN)\) complexity, where \(b\) is the batch size. Instead of calculating a global similarity matrix \(S\) for every sample, we randomly choose a portion of the samples to calculate the similarity matrix. Assume the samples are already shuffled. Let \(b\) be the batch size and \(\lambda\) be a hyper-parameter. We take the first \(\lambda b\) and last \(\lambda b\) samples to calculate the loss.
Specifically, let \(H_{\text{prec}}=\{h_{1},h_{2},\dots,h_{\lambda b}\}\), \(H_{\text{rest}}=\{h_{(1-\lambda)b+1},h_{(1-\lambda)b+2},\dots,h_{b}\}\), \(Y_{\text{prec}}=\{y_{1},y_{2},\dots,y_{\lambda b}\}\), and \(Y_{\text{rest}}=\{y_{(1-\lambda)b+1},y_{(1-\lambda)b+2},\dots,y_{b}\}\). Then we have two matrices, \(\Phi_{\text{batch}}\) and \(S_{\text{batch}}\): \[\Phi_{\text{batch}}=H_{\text{prec}}\times H_{\text{rest}}^{T}=\left[\phi_{ij}\right], \tag{5}\] \[S_{\text{batch}}=Y_{\text{prec}}\times Y_{\text{rest}}^{T}=\left[s_{ij}\right], \tag{6}\] where \(\times\) represents the matrix multiplication operation. With this design, Eq. (4) reduces to \[L_{m}=\frac{1}{(\lambda b)^{2}}\sum_{i=1}^{\lambda b}\sum_{j=(1-\lambda)b+1}^{b}[w_{d}\log(1+e^{\phi_{ij}})-s_{ij}\phi_{ij}]. \tag{7}\] Eventually, a quantization loss is introduced to refine the generated hash codes, which can be represented as: \[L_{q}=\frac{1}{b}\sum_{i\in I}\left\||\mathbf{h}_{i}|-\mathbf{1}\right\|_{2}, \tag{8}\] where \(I=\{i\ |\ 1\leq i\leq\lambda b,i\in\mathbb{N}\}\cup\{i\ |\ (1-\lambda)b+1\leq i\leq b,i\in\mathbb{N}\}\). Combining the metric loss and the quantization loss by a weighted sum yields the total loss function of our method: \[L_{\text{Total}}=L_{m}+\mu L_{q}, \tag{9}\] where \(\mu\) is a hyper-parameter obtained through grid search in our work.

## III Experiments

Extensive experiments are conducted to evaluate the proposed DMMVH method against eleven state-of-the-art multi-view hashing methods on three public benchmark datasets.

**Datasets:** Three widely used datasets are adopted: MIR-Flickr25K [8], NUS-WIDE [9], and MS COCO [10]. These datasets have been widely used for evaluating multimedia retrieval performance. The statistics of the three datasets are summarized in Table I.

**Evaluation Metric:** We utilize the mean Average Precision (mAP) as the evaluation metric.

**Baseline:** To evaluate the retrieval performance, the proposed method is compared with eleven multi-view hashing methods, including four unsupervised methods (MFH [11], MAH [12], MVLH [13], and MvDH [14]) and seven supervised methods (MFKH [15], DMVH [16], FDMH [17], FOMH [18], DCMVH [1], SAPMH [19], and FGCMH [3]).

**Implementation Details:** Our implementation is on the PyTorch platform. For the feature extraction backbones, we use pre-trained models, specifically ResNet-50 and BERT-base. The dropout probability is set to \(0.1\) to improve the generalization capability. We employ the AdamW optimizer with an initial learning rate of \(1\times 10^{-5}\) and set \(\beta_{1}=0.9\), \(\beta_{2}=0.999\). The hyper-parameter \(\lambda\) of the loss function for deep metric learning is \(0.5\). The combination coefficient \(\mu\) of the total loss function is set to \(0.5\). The loss weight \(w_{d}\) of dissimilar sample pairs is set to \(1.5\).

### _Analysis of Experimental Results_

**mAP:** The results are presented in Table II, which shows that DMMVH is overall better than all the compared multi-view hashing methods by a large margin. For example, compared with the current state-of-the-art multi-view hashing method FGCMH, the average mAP score of our approach increases by \(3.51\%\), \(9.58\%\), and \(13.85\%\) on MIR-Flickr25K, NUS-WIDE, and MS COCO, respectively. That is, deep metric learning can indeed enhance the discriminative capability of hash codes.

**Hash Code Length:** Intuitively, a longer hash code should preserve more semantic information and achieve better precision. Further, we study the effect of hash code length on multimedia retrieval mAP.
The hash code is learned by setting the same code length for different methods. From Table II, we notice that the mAP of our method increases as the hash code length grows. On the MS COCO dataset, our method obtains a performance improvement of \(5.25\%\) when the hash code length ranges from \(16\) bits to \(128\) bits. The experiments on the other datasets lead to the same conclusion. However, some previous methods show a precision degradation when adding more hash bits, which indicates that these methods cannot scale well to hashing tasks with longer hash codes. On the contrary, our results demonstrate that the proposed method has a noticeable improvement in mAP as the length increases. Finally, the experiments on the hyper-parameters are detailed in the appendix.

### _Ablation Study_

**Experiment Settings:** To evaluate the effectiveness of our method, we perform an ablation study with different settings and report the performance.

* DMMVH-metric: The quantization loss is removed.
* DMMVH-quant: The metric loss is removed.
* DMMVH-image: Only the visual features are used.
* DMMVH-text: Only the text features are used.
* DMMVH-concat: Image and text features are fused with concatenation.
* DMMVH: Our full framework.

**Ablation Analysis:** The comparison results are presented in Table III. Starting with the loss function, the quantization loss alone cannot perform any optimization of the embeddings; the method then retrieves data essentially at random, leading to poor mAP across all tasks. The deep metric loss, on the contrary, can help the method learn the embedding well. We notice that DMMVH-metric is slightly worse than the full method due to the lack of the binarization constraint. From the view aspect, DMMVH-text is barely better than DMMVH-quant. DMMVH-image outperforms DMMVH-text in all tasks by a large margin, indicating that the image features contain more information than the text features. With concatenated multi-view features, our method already outperforms the state-of-the-art methods, and Context Gating further improves the mAP. In addition, a comparison experiment with older backbone networks is detailed in the appendix.

### _Convergence Analysis_

We conduct experiments to validate the convergence and generalization capability of DMMVH. We run hash benchmarks on the MIR-Flickr25K dataset with different code lengths. The results are shown in Fig. 3, which displays the training loss and test mAP. As the training goes on, the loss gradually decreases. After 500 epochs, the loss becomes stable, which implies a local minimum is reached.
For the test performance, the mAP goes up rapidly at the beginning of training. After 100 epochs, the test mAP stays stable. With further training, no degradation of the test mAP is observed, which indicates a good generalization capability. Similar convergence results are observed on the other datasets.

\begin{table}
\begin{tabular}{l c c c c c c}
\hline \hline
Dataset & Training Size & Retrieval Size & Query Size & Categories & Image Feature & Text Feature \\
\hline
MIR-Flickr25K & 5000 & 17772 & 2243 & 24 & ResNet(768-D) & BERT(768-D) \\
MS COCO & 18000 & 82783 & 5981 & 80 & ResNet(768-D) & BERT(768-D) \\
NUS-WIDE & 21000 & 193749 & & & & \\
\hline \hline
\end{tabular}
\end{table} TABLE I: General statistics of the three datasets.

\begin{table}
\begin{tabular}{l l l l l l l l l l l l l l}
\hline \hline
\multirow{2}{*}{Methods} & \multirow{2}{*}{Ref.} & \multicolumn{4}{c}{MIR-Flickr25K*} & \multicolumn{4}{c}{NUS-WIDE*} & \multicolumn{4}{c}{MS COCO*} \\
\cline{3-14}
 & & 16 bits & 32 bits & 64 bits & 128 bits & 16 bits & 32 bits & 64 bits & 128 bits & 16 bits & 32 bits & 64 bits & 128 bits \\
\hline
MFH & TMM13 & 0.5795 & 0.5824 & 0.5831 & 0.5836 & 0.3603 & 0.3611 & 0.3625 & 0.3629 & 0.3948 & 0.3699 & 0.3960 & 0.3980 \\
MAH & TIP15 & 0.6488 & 0.6649 & 0.6990 & 0.7114 & 0.4633 & 0.4945 & 0.5381 & 0.5476 & 0.3967 & 0.3943 & 0.3966 & 0.3988 \\
MVLH & MM15 & 0.6541 & 0.6421 & 0.6044 & 0.5982 & 0.4182 & 0.4092 & 0.3789 & 0.3897 & 0.3993 & 0.4012 & 0.4065 & 0.4099 \\
MVDH & TIST18 & 0.6828 & 0.7210 & 0.7344 & 0.7527 & 0.4947 & 0.5661 & 0.5789 & 0.6122 & 0.3978 & 0.3966 & 0.3977 & 0.3998 \\
\hline
MFKH & MM12 & 0.6369 & 0.6128 & 0.5985 & 0.5807 & 0.4768 & 0.4359 & 0.4342 & 0.3956 & 0.4216 & 0.4211 & 0.4230 & 0.4229 \\
DMVH & ICMR17 & 0.7231 & 0.7326 & 0.7495 & 0.7641 & 0.5676 & 0.5883 & 0.6902 & 0.6279 & 0.4123 & 0.4288 & 0.4355 & 0.4563 \\
FOMH & MM19 & 0.7557 & 0.7632 & 0.7564 & 0.7705 & 0.6329 & 0.6456 & 0.6678 & 0.6791 & 0.5008 & 0.5148 & 0.5172 & 0.5294 \\
FDMH & NPLD20 & 0.7802 & 0.7963 & 0.8094 & 0.8181 & 0.6575 & 0.6665 & 0.6712 & 0.6823 & 0.5404 & 0.5485 & 0.5600 & 0.5674 \\
DCMVH & TIP20 & 0.8097 & 0.8279 & 0.8354 & 0.8467 & 0.6509 & 0.6625 & 0.6905 & 0.7023 & 0.5387 & 0.5427 & 0.5490 & 0.5576 \\
SAPMH & TMM21 & 0.7657 & 0.8098 & 0.8188 & 0.8191 & 0.6503 & 0.6703 & 0.6898 & 0.6901 & 0.5467 & 0.5502 & 0.5563 & 0.5672 \\
FGCMH & MM21 & 0.8173 & 0.8358 & 0.8377 & 0.8606 & 0.6677 & 0.6874 & 0.6936 & 0.7011 & 0.5641 & 0.5273 & 0.5797 & 0.5862 \\
\hline
DMMVH & Proposed & **0.8587** & **0.8707** & **0.8798** & **0.8827** & **0.7714** & **0.7820** & **0.7879** & **0.7916** & **0.6716** & **0.7030** & **0.7122** & **0.7244** \\
\hline \hline
\end{tabular}
\end{table} TABLE II: mAP Comparison Results on MIR-Flickr25K, NUS-WIDE, and MS COCO. The best results are bolded, and the previous state-of-the-art results are underlined. The * indicates that the results on this dataset are of statistical significance.

\begin{table}
\begin{tabular}{l l l l l l l l l l l l l}
\hline \hline
\multirow{2}{*}{Methods} & \multicolumn{4}{c}{MIR-Flickr25K} & \multicolumn{4}{c}{NUS-WIDE} & \multicolumn{4}{c}{MS COCO} \\
\cline{2-13}
 & 16 bits & 32 bits & 64 bits & 128 bits & 16 bits & 32 bits & 64 bits & 128 bits & 16 bits & 32 bits & 64 bits & 128 bits \\
\hline
DMMVH-metric & 0.8531 & 0.8614 & 0.8708 & 0.8738 & 0.7671 & 0.7766 & 0.7809 & 0.7872 & 0.6686 & 0.6970 & 0.7066 & 0.7078 \\
DMMVH-quant & 0.5531 & 0.5531 & 0.5531 & 0.5531 & 0.3085 & 0.3085 & 0.3085 & 0.3085 & 0.3502 & 0.3502 & 0.3502 & 0.3502 \\
DMMVH-text & 0.6047 & 0.6107 & 0.6104 & 0.6119 & 0.3623 & 0.3585 & 0.3649 & 0.3631 & 0.5819 & 0.5886 & 0.5955 & 0.5992 \\
DMMVH-image & 0.8292 & 0.8425 & 0.8547 & 0.8631 & 0.7530 & 0.7593 & 0.7689 & 0.7778 & 0.6598 & 0.6886 & 0.7033 & 0.7160 \\
DMMVH-concat & 0.8498 & 0.8633 & 0.8742 & 0.8777 & 0.7635 & 0.7713 & 0.7827 & 0.7866 & 0.6615 & 0.6932 & 0.7056 & 0.7188 \\
\hline
DMMVH & **0.8587** & **0.8707** & **0.8798** & **0.8827** & **0.7714** & **0.7820** & **0.7879** & **0.7916** & **0.6716** & **0.7030** & **0.7122** & **0.7244** \\
\hline \hline
\end{tabular}
\end{table} TABLE III: Ablation Experiments On Three Datasets. Effects of Deep Multi-View Hash Network Architecture.

### _mAP@K and Recall@K_

Fig. 4 shows the mAP@K and Recall@K curves as the number of retrieval results increases, on the MIR-Flickr25K dataset at different code lengths. The mAP of the four cases decreases slightly as \(K\) increases, while the recall curve shows rapid linear growth. This tendency suggests that our method performs well in retrieval tasks. Typical users only pay attention to the first few retrieval results, and our method has even higher precision in this regime. Experts tend to go through more results than typical users, and our approach provides a linearly growing recall as the number of retrieval results grows, so experts can expect consistent, high-quality results during their searches. To recap, DMMVH can deliver satisfying retrieval results for different user groups.

## IV Conclusion and Future Work

We propose a new multi-view hashing framework (DMMVH). It introduces deep metric learning to solve multi-view hashing problems. We showed that DMMVH provides satisfying retrieval results to different types of users. Compared to typical graph-based methods, DMMVH is less computationally intensive. It utilizes Context Gating for multi-view feature fusion and deep metric learning for representation optimization. The proposed method addresses two main challenges of the multi-view hashing problem. Under multiple experiment settings, it delivers up to \(15.28\%\) performance gain over the state-of-the-art methods. In the experiments, we noticed some remaining issues; for example, the performance gain becomes less significant as the length of the hash code increases. We will work on these issues to improve the proposed method further.

## Acknowledgment

This work is supported in part by the Zhejiang provincial "Ten Thousand Talents Program" (2021R52007), the National Key R&D Program of China (2022YFB4500405), and the Science and Technology Innovation 2030-Major Project (2021ZD0114300).

Fig. 3: The upper curve is the test mAP, and the bottom is the training loss on the MIR-Flickr25K dataset.

Fig. 4: The mAP@K and Recall@K curves on the MIR-Flickr25K dataset.
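The multi-view fusion step mentioned in the conclusion can be sketched as follows. This is a minimal PyTorch-style illustration, not the released implementation: the module names, the 768-D feature sizes (matching the ResNet/BERT embeddings of Table I), and the plain concatenation before gating are our assumptions; the gating itself follows the standard Context Gating formulation \(y=x\odot\sigma(Wx+b)\).

```python
import torch
import torch.nn as nn

class ContextGating(nn.Module):
    """Context Gating: y = x * sigmoid(W x + b), re-weighting feature dimensions."""
    def __init__(self, dim: int):
        super().__init__()
        self.gate = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * torch.sigmoid(self.gate(x))

class GatedFusion(nn.Module):
    """Concatenate the image and text views, then apply Context Gating to the result."""
    def __init__(self, img_dim: int = 768, txt_dim: int = 768):
        super().__init__()
        self.cg = ContextGating(img_dim + txt_dim)

    def forward(self, img: torch.Tensor, txt: torch.Tensor) -> torch.Tensor:
        return self.cg(torch.cat([img, txt], dim=-1))

fused = GatedFusion()(torch.randn(4, 768), torch.randn(4, 768))
print(fused.shape)  # torch.Size([4, 1536])
```

Compared to a plain concatenation (the DMMVH-concat ablation), the learned gate can suppress uninformative dimensions of either view, which is consistent with the mAP gain reported in Table III.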
2304.14984
Quantum Fisher Information and its dynamical nature
The importance of the quantum Fisher information metric is testified by the number of applications that this has in very different fields, ranging from hypothesis testing to metrology, passing through thermodynamics. Still, out of the rich range of possible quantum Fisher information metrics, only a handful are typically used and studied. This review aims at collecting a number of results scattered in the literature that can be useful both to people who are beginning the study of Fisher information and to those already working on it who want a more organic understanding of the topic. Moreover, we complement the review with new results about the relation between Fisher information and physical evolutions. Extending the study done in [1], we prove that all the physically realisable dynamics can be defined solely in terms of their relation with respect to the Fisher information metric. Moreover, other properties such as Markovianity, retrodiction or detailed balance can be expressed in the same formalism. These results highlight a fact that was partially overlooked in the literature, namely the inherently dynamical nature of Fisher information.
Matteo Scandi, Paolo Abiuso, Jacopo Surace, Dario De Santis
2023-04-28T17:08:44Z
http://arxiv.org/abs/2304.14984v2
# Quantum Fisher Information and its dynamical nature

###### Abstract

The importance of the quantum Fisher information metric is testified by the number of applications that this has in very different fields, ranging from hypothesis testing to metrology, passing through thermodynamics. Still, out of the rich range of possible quantum Fisher information metrics, only a handful are typically used and studied. This review aims at collecting a number of results scattered in the literature that can be useful both to people who are beginning the study of Fisher information and to those already working on it who want a more organic understanding of the topic. Moreover, we complement the review with new results about the relation between Fisher information and physical evolutions. Extending the study done in [1], we prove that all the physically realisable dynamics can be defined solely in terms of their relation with respect to the Fisher information metric. Moreover, other properties such as Markovianity, retrodiction or detailed balance can be expressed in the same formalism. These results highlight a fact that was partially overlooked in the literature, namely the inherently dynamical nature of Fisher information.

**I. Introduction**

**II. From quantum contrast functions to quantum Fisher information**

_Contrast functions are introduced, and their general properties are discussed. Thm. 1 gives a characterisation of their local behaviour in terms of a family of functionals. The same objects also appear in the characterisation of monotone metrics on the space of states (Thm. 2). These are the two ways of defining the family of quantum Fisher information metrics. The informative boxes of this section are dedicated to giving more details about the relation between contrast functions and their \(g\) function (Box 1), providing a first characterisation of standard monotones (Box 2) and describing other possible quantifiers of statistical difference (Box 3)._

**III. Dynamical properties of Fisher information**

_The equivalence between the positivity of linear maps and the contraction of Fisher information is proved (Thm. 3). This implies that one can define the physicality of an evolution by looking at whether it contracts the Fisher information everywhere on the space of states tensored with an ancilla of the same dimension (Corollary 3.1). In Box 4 the same results are derived by considering contrast functions._

**A. Markovianity as monotonic contraction of Fisher information**

_A novel expression of the Fisher information currents is presented (Thm. 4). The relation between Fisher information and Markovianity is completely characterised: it is shown that the monotonic contraction of Fisher information on the space of states (adjoined with an ancilla of the same dimension) is equivalent to the Markovianity of the corresponding evolution (Thm. 5). Moreover, it is discussed how, despite this deep connection between the two notions, it is impossible to operationally witness non-Markovianity through the expansion of Fisher information for all evolutions using only extra ancillas and copies of the channel (Thm. 6). Still, if one allows for a post-processing at the end of the dynamics, it is also shown that one can actually provide an operational witness (Thm. 7)._

**B. Retrodiction and Fisher information**

_A generalised version of the Petz recovery map is introduced, and it is shown how this allows one to canonically map evolved states (close to a prior) back into their initial condition.
Markovianity is then equivalent to a monotonic increase of the error committed during this procedure (Thm. 9). In Box 9 we discuss the topic of universal recovery, and we give some preliminary results using \(\chi_{f}^{2}\)-divergences (Thm. 10 and corollaries thereof). Moreover, we find that the traditional Petz recovery map can be characterised in two different ways: either as the unique universal recovery map that is a quantum channel (Corollary 10.1 and 10.2); or as the one whose spectrum dominates all other maps in the family (Thm. 11)._

**C. Fisher information and detailed balance**

_Detailed balance corresponds to a stronger form of equilibration in which the dynamics appears to be at equilibrium with its reverse process close to the steady state. In Thm. 12 we show how this condition can naturally be formulated for classical systems in terms of the self-adjointness of the generator with respect to the Fisher scalar product. A similar result is also proven for quantum dynamics (Thm. 13), where we find a slightly more general form of the Lindbladian compared to the choice most used in the literature._

**IV. Mathematical properties of quantum Fisher information**

**A. The set of standard monotone functions**

_The set of standard monotone functions is characterised, focusing on an important symmetry of the set and its convex structure, generated by a continuous family of vertices._

**B. Properties of quantum Fisher operators**

_The way in which the properties of the defining functions are mirrored in their corresponding Fisher operators is presented, giving a first general characterisation._

**C. Complete positivity of the Fisher information operators**

_A full characterisation of the completely positive Fisher information operators is provided. In particular, Thm. 14 gives the most general expression for such maps._

**V. A garden of quantum Fisher information**

_We present the most notable examples of quantum Fisher operators, summarised in Table 4 and in Fig. 5._

**VI. Conclusions and open questions**

## I Introduction

The goal of physics is to find fixed laws in an evolving world. The discoveries of the last century, especially in statistical mechanics, but even more fundamentally in quantum theory, have shown that relaxing the concept of a law from the determinism that characterised Newtonian mechanics to the more general notion of a probabilistic relation not only allows for a richer expressibility, but ultimately for a more faithful representation of our experience of the world. The same change in perspective is also useful in the description of time evolutions, justifying the replacement of deterministic transitions with noisy transformations acting on probability distributions (or density matrices, in the case of quantum theory). For this reason it is no surprise that statistical considerations play such a central role in modern physics. For example, when assessing the quality of a theory, one is confronted with the problem of quantifying the proximity between some experimental data and the predictions of a theoretical model. More fundamentally, quantifying the discrepancy between (classical and quantum) statistical mixtures is crucial in all areas of physics, in order to characterise the informational content of one's particular description of a physical phenomenon. Without further constraints, there is no unique procedure that can capture the many different ways in which probability distributions can differ.
Still, if one restricts attention to metric structures, one can single out a unique family of statistical distances, namely the Fisher information one, by imposing a single simple requirement: that the distinguishability between two probability distributions should decrease under noisy transformations. This condition reflects the intuitive idea that if noise affects the dynamics of an experiment, the corresponding capability to distinguish different physical states will diminish as well. The fact that statistical distances can be identified solely from a requirement about their dynamics, a result which goes under the name of the Chentsov-Petz theorem (see Thm. 2), is quite remarkable, as _a priori_ it is not obvious why there should be such a strong relation with time evolutions. This peculiarity was long overlooked, as more technical uses of the Fisher information became famous, in particular in the formulation of the Cramér-Rao bound for estimation theory, or of the Chernoff bound for hypothesis testing. Still, a few works have appeared exploring the relation between Fisher metrics and the rate at which general dynamics degrade information (see for example [2; 3]). Here, following the investigations started in [1], we go to the root of the interconnection between Fisher information metrics and physical evolutions: in Corollary 3.1 we prove that a linear map is completely positive if and only if the Fisher information decreases for every point in the space of states (and an ancilla of the same dimension). This means that in principle all the physically realisable quantum channels could be defined as exactly the linear maps contracting the Fisher information metric. This result can be interpreted as the dual of the Chentsov-Petz theorem: whereas there one defines Fisher information metrics in terms of their dynamical properties, here we define dynamics in terms of their behaviour with respect to the Fisher information. This identification hints at a deep relation between statistical inference and time evolutions. The goal of this work is then twofold: on the one hand, we aim to motivate the reader to the study of the Fisher information by proving its intimately dynamical nature, showing how properties of physical evolutions can naturally be cast in this language; on the other, conscious of the mathematical technicalities that could scare away new practitioners, we provide a comprehensive guide to the properties of quantum Fisher information while trying to keep the exposition as accessible as possible. To this end, we first introduce in Sec. II the bare minimum formalism needed to discuss the dynamical properties of Fisher information, postponing the more mathematical discussions to the second part of the work. We point out in this context that Sec. V, which contains a list of the different expressions that the Fisher information can take, was designed for sporadic consultation, and should be considered a field guide to the many different forms of this quantity. Moreover, in order to keep the exposition as pedagogical as possible, we complement the main text with informative boxes in which specific subjects are explored in greater detail, and which in principle can be skipped on a first read.

## II From quantum contrast functions to quantum Fisher information

There are two paths to arrive at the definition of quantum Fisher information metrics: one statistical and one dynamical.
The first defines them as the metrics that arise from the local expansion of quantifiers of statistical difference, as explained in Thm. 1. The other approach singles out the Fisher information from all the possible metrics on the space of states, as the unique family that contracts under all physical evolutions (Thm. 2). It is a truly remarkable fact that these two approaches actually define the same family, leaving us with the choice of which path to undertake. We prefer here to start with the statistical definition, in order to insert local results in a more global context. Moreover, this allows us to introduce the bare minimum formalism needed to understand the dynamical nature of Fisher information (Sec. III). We then have to ask ourselves how to assess the similarity of different classical statistical distributions. This problem does not have a straightforward answer: different methods yield different quantifications, and there is no clear argument that would single out a unique strategy over all the others. As a matter of fact, this difficulty reflects the many different behaviours that a probability distribution shows depending on the regime one is focusing on: two distributions could be very similar in the asymptotic regime, but show substantial differences when one focuses on single-shot experiments. For this reason, rather than trying to reduce the richness of the phenomenology that one can focus on, Csiszár generalised the usual relative entropy with an axiomatic construction based on the ansatz [4]:

\[H_{g}(\rho||\sigma):=\operatorname{Tr}\left[\rho\,g(\sigma/\rho)\right]\,, \tag{1}\]

where \(\rho\) and \(\sigma\) are classical probability vectors\({}^{1}\), \(g\) is a convex function with domain over the positive reals, and finally we abuse the notation slightly by denoting with \(\sigma/\rho\) the vector with components \(\{\sigma_{i}/\rho_{i}\}\). The quantities in Eq. (1) take the name of contrast functions.

Footnote 1: Here and in the rest of the work, we identify probability vectors with diagonal density matrices, so the expression in Eq. (1) should be understood as: \(H_{g}(\rho||\sigma):=\sum_{i}\rho_{i}\,g(\sigma_{i}/\rho_{i})\).

In [5; 6] the same axiomatic construction was presented for non-commuting states. In particular, the axioms chosen are:

1. _positivity:_ \(H(\rho||\sigma)\geq 0\), with equality _iff_ \(\rho\equiv\sigma\);
2. _homogeneity:_ \(H(\lambda\rho||\lambda\sigma)=\lambda\,H(\rho||\sigma)\), for \(\lambda>0\);
3. _joint convexity:_ namely the condition that \[H(\lambda\rho_{1}+(1-\lambda)\rho_{2}||\lambda\sigma_{1}+(1-\lambda)\sigma_{2})\leq\lambda H(\rho_{1}||\sigma_{1})+(1-\lambda)H(\rho_{2}||\sigma_{2})\,,\tag{2}\] for \(0\leq\lambda\leq 1\);
4. _monotonicity:_ for any Completely Positive Trace Preserving (CPTP) map \(\Phi\), it should hold that \(H(\Phi(\rho)||\Phi(\sigma))\leq H(\rho||\sigma)\);
5. _differentiability:_ the function \(h_{\rho,\sigma}(x,y):=H(\rho+xA||\sigma+yB)\) for \(A\) and \(B\) Hermitian operators is \(C^{\infty}\).

Whereas the axioms are a straightforward generalisation of the ones chosen by Csiszár, for quantum states it is not clear how to generalise the original ansatz, especially due to the ratio in the classical expression in Eq. (1).
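Before moving to the quantum case, the classical ansatz of Eq. (1) and the monotonicity axiom 4 can be illustrated with a quick numerical check. The sketch below (helper names are ours; the choice \(g(x)=-\log x\), which recovers the relative entropy, is one example among many) verifies the contraction of a contrast function under a randomly drawn stochastic map:

```python
import numpy as np

def contrast(rho, sigma, g):
    """Classical contrast function, Eq. (1): H_g(rho||sigma) = sum_i rho_i * g(sigma_i / rho_i)."""
    return float(np.sum(rho * g(sigma / rho)))

g = lambda x: -np.log(x)  # this choice recovers the classical relative entropy

rng = np.random.default_rng(0)
rho, sigma = rng.random(4), rng.random(4)
rho, sigma = rho / rho.sum(), sigma / sigma.sum()

# A random column-stochastic matrix: maps probability vectors to probability vectors.
T = rng.random((4, 4))
T /= T.sum(axis=0, keepdims=True)

before = contrast(rho, sigma, g)
after = contrast(T @ rho, T @ sigma, g)
assert after <= before + 1e-12  # axiom 4 (monotonicity) in the classical case
print(before, after)
```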
One standard approach, coming from constructions in C\({}^{*}\)-algebras, is to introduce the two superoperators:

\[\mathbb{L}_{\rho}[A]=\rho\,A\qquad\qquad\mathbb{R}_{\rho}[A]=A\,\rho \tag{3}\]

that generalise the multiplication by a scalar to the non-commuting case. It is straightforward to verify that the inverses of the left/right multiplication operators are simply given by \((\mathbb{L}_{\rho})^{-1}=\mathbb{L}_{\rho^{-1}}\) and \((\mathbb{R}_{\rho})^{-1}=\mathbb{R}_{\rho^{-1}}\). Moreover, it should also be noticed that \([\mathbb{L}_{\rho},\mathbb{R}_{\sigma}]=0\) for any two states \(\rho\) and \(\sigma\). Then, the ansatz proposed in [5] reads:

\[H_{g}(\rho||\sigma):=\operatorname{Tr}\left[g(\mathbb{L}_{\sigma}\mathbb{R}_{\rho}^{-1})\,[\rho]\right]\,, \tag{4}\]

where now the role of the ratio in Eq. (1) is taken by \(\mathbb{L}_{\sigma}\mathbb{R}_{\rho}^{-1}\), called the relative modular operator. In order for the axioms above to hold, one needs to impose that \(g:(0,\infty)\to\mathbb{R}\) is an operator convex function\({}^{2}\) satisfying \(g(1)=0\) [6]. In Box 1 at the end of the section we show how the properties of \(g\) are connected to the axioms above. It should also be noticed that if one chooses \(g(x)=-\log x\), then Eq. (4) gives the usual relative entropy.

Footnote 2: An operator convex function \(g(x)\) is defined by the property \(g(tA+(1-t)B)\leq t\,g(A)+(1-t)\,g(B)\) for any two Hermitian \(A\) and \(B\), and \(t\in[0,1]\).

In order to highlight the similarities and the differences with the classical contrast functions, it is insightful to look at the coordinate expression of Eq. (4). First, define the eigensystems of \(\rho\) and \(\sigma\) as:

\[\rho=\sum_{i}\;\rho_{i}\;|\rho_{i}\rangle\langle\rho_{i}|\,,\qquad\qquad\sigma=\sum_{j}\;\sigma_{j}\;|\sigma_{j}\rangle\langle\sigma_{j}|\,. \tag{5}\]

Since \(\mathbb{L}_{\sigma}\mathbb{R}_{\rho}^{-1}\left[\,|\sigma_{j}\rangle\langle\rho_{i}|\,\right]=\frac{\sigma_{j}}{\rho_{i}}\,|\sigma_{j}\rangle\langle\rho_{i}|\), the relative modular operator is diagonal in the basis given by \(\{\,|\sigma_{j}\rangle\langle\rho_{i}|\,\}\). Hence, Eq. (4) can be expanded as:

\[H_{g}(\rho||\sigma)=\sum_{i,j}\;\operatorname{Tr}\left[g(\mathbb{L}_{\sigma}\mathbb{R}_{\rho}^{-1})\left[\,|\sigma_{j}\rangle\langle\sigma_{j}|\,\rho\,|\rho_{i}\rangle\langle\rho_{i}|\,\right]\right]=\sum_{i,j}\;\rho_{i}\,g\left(\frac{\sigma_{j}}{\rho_{i}}\right)\,|\langle\sigma_{j}|\rho_{i}\rangle|^{2}\,, \tag{6}\]

where we inserted the two resolutions of the identity \(\sum_{i}|\rho_{i}\rangle\langle\rho_{i}|=\sum_{j}|\sigma_{j}\rangle\langle\sigma_{j}|=\mathbb{1}\) to rewrite the state \(\rho\) in the diagonal basis of the relative modular operator. In particular, it is clear that if \(\rho\) and \(\sigma\) are diagonal in the same basis, then \(|\langle\sigma_{j}|\rho_{i}\rangle|^{2}=\delta_{i,j}\), and Eq. (4) reduces to the classical expression in Eq. (1). Moreover, it should be noticed that the requirement \(g(1)=0\) implies that \(H_{g}(\rho||\rho)=0\) for any \(\rho\). Interestingly, if one restricts attention to normalised states, any linear term in \(g(x)\) does not contribute to the contrast function, as follows from the identity \(H_{(x-1)}(\rho||\sigma)=\operatorname{Tr}\left[\sigma-\rho\right]=0\) (which can be verified in coordinates or directly from Eq. (4); see Box 1). For this reason, we consider two functions to be equivalent if \(g_{1}(x)-g_{2}(x)\propto(x-1)\).
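The coordinate expression of Eq. (6) is straightforward to implement numerically. The following sketch (helper names and test states are ours) evaluates \(H_{g}\) from the eigendecompositions of \(\rho\) and \(\sigma\), and checks that the choice \(g(x)=-\log x\) reproduces the Umegaki relative entropy \(\operatorname{Tr}[\rho(\log\rho-\log\sigma)]\):

```python
import numpy as np
from scipy.linalg import logm

def rand_state(d, rng):
    """A random full-rank density matrix."""
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = A @ A.conj().T + 0.1 * np.eye(d)
    return rho / np.trace(rho).real

def H_g(rho, sigma, g):
    """Quantum contrast function via the coordinate expression of Eq. (6)."""
    p, P = np.linalg.eigh(rho)    # rho = sum_i p_i |p_i><p_i|
    q, Q = np.linalg.eigh(sigma)  # sigma = sum_j q_j |q_j><q_j|
    overlap = np.abs(Q.conj().T @ P) ** 2  # overlap[j, i] = |<sigma_j|rho_i>|^2
    ratios = q[:, None] / p[None, :]       # ratios[j, i] = sigma_j / rho_i
    return float(np.sum(p[None, :] * g(ratios) * overlap))

rng = np.random.default_rng(1)
rho, sigma = rand_state(3, rng), rand_state(3, rng)

# With g(x) = -log x, Eq. (6) reproduces the Umegaki relative entropy.
lhs = H_g(rho, sigma, lambda x: -np.log(x))
rhs = np.trace(rho @ (logm(rho) - logm(sigma))).real
print(lhs, rhs)  # the two values agree to numerical precision
```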
Then, ignoring the linear contributions, any operator convex function with \(g(1)=0\) has the following integral expression: \[g(x)=\frac{1}{2}\int_{0}^{\infty}\mathrm{d}\nu_{g}(s)\,\,\frac{(x-1)^{2}}{x+s }+\frac{1}{2}\int_{0}^{\infty}\frac{\mathrm{d}\nu_{g}(s^{-1})}{s}\,\,\frac{( x-1)^{2}}{1+sx}\,, \tag{7}\] where \(\nu_{g}\) is a positive measure with finite mass (see App. A). Eq. (7) is particularly useful to give a unified expression for all contrast functions. Indeed, thanks to the following two identities: \[(\mathbb{L}_{\sigma}\mathbb{R}_{\rho}^{-1}-\mathbb{1})\left[\rho ^{1/2}\right] =(\sigma-\rho)\rho^{-1/2}=\mathbb{R}_{\rho}^{-1/2}(\sigma-\rho)\,, \tag{8}\] \[(\mathbb{R}_{\sigma}\mathbb{L}_{\rho}^{-1}-\mathbb{1})\left[\rho ^{1/2}\right] =\rho^{-1/2}(\sigma-\rho)=\mathbb{L}_{\rho}^{-1/2}(\sigma-\rho)\,, \tag{9}\] one can rewrite the contrast functions in Eq. (4) as: \[H_{g}(\rho||\sigma)=\operatorname{Tr}\left[(\rho-\sigma)\,\mathbb{R}_{\rho}^{ -1}h(\mathbb{L}_{\sigma}\mathbb{R}_{\rho}^{-1})[(\rho-\sigma)]\right], \tag{10}\] where we defined the function \(h(x):=g(x)/(x-1)^{2}\) (here and in the following we refer to App. A for the explicit calculations). Putting together Eq. (7) and Eq. (10), it is a matter of simple algebra to give the general integral form of quantum contrast functions: \[H_{g}(\rho||\sigma)=\frac{1}{2}\int_{0}^{\infty}\mathrm{d}\nu_{g }(s)\,\operatorname{Tr}\left[(\rho-\sigma)(\mathbb{L}_{\sigma}+s\mathbb{R}_{ \rho})^{-1}[(\rho-\sigma)]\right]+\frac{1}{2}\int_{0}^{\infty}\frac{\mathrm{d }\nu_{g}(s^{-1})}{s}\,\operatorname{Tr}\left[(\rho-\sigma)(\mathbb{L}_{\rho}+s \mathbb{R}_{\sigma})^{-1}[(\rho-\sigma)]\right]\,. \tag{11}\] This expression also shows that a contrast function is symmetric if and only if \(\mathrm{d}\nu_{g}(s)=\mathrm{d}\nu_{g}(s^{-1})/s\), since exchanging \(\rho\leftrightarrow\sigma\) exchanges the two integrals above (the other direction follows from the fact that the quantities in the integral of Eq. (11) are extreme points in the set of symmetric contrast functions, see Box 10). At the level of the defining function this corresponds to the requirement that \(g(x)=x\,g(x^{-1})\). In order to put contrast functions in relation with their symmetric versions, it is useful to introduce the measure \(\mathrm{d}N_{g}(s):=(\mathrm{d}\nu_{g}(s)+\mathrm{d}\nu_{g}(s^{-1})/s)/2\). Then, it follows from Eq. (11) that: \[\frac{1}{2}\left(H_{g}(\rho||\sigma)+H_{g}(\sigma||\rho)\right)=\frac{1}{2} \int_{0}^{1}\mathrm{d}N_{g}(s)\,\operatorname{Tr}\left[(\rho-\sigma)\left(( \mathbb{L}_{\sigma}+s\mathbb{R}_{\rho})^{-1}+(\mathbb{L}_{\rho}+s\mathbb{R}_ {\sigma})^{-1}\right)[(\rho-\sigma)]\right]\,. \tag{12}\] We are now ready to study the local behaviour of contrast functions. Indeed, thanks to Eq. (10) and Eq. 
(11), it is particularly simple to show that the following theorem holds:

**Theorem 1** (Lesniewski, Ruskai [6]).: _For each \(g\) satisfying the required properties to define a contrast function, one can locally approximate \(H_{g}(\pi+\varepsilon A||\pi+\varepsilon B)\) up to corrections of order \(\mathcal{O}\left(\varepsilon^{3}\right)\) as:_

\[H_{g}(\pi+\varepsilon A||\pi+\varepsilon B)=\varepsilon^{2}\,\int_{0}^{\infty}\mathrm{d}N_{g}(s)\,\operatorname{Tr}\left[(A-B)(\mathbb{L}_{\pi}+s\mathbb{R}_{\pi})^{-1}[(A-B)]\right]+\mathcal{O}\left(\varepsilon^{3}\right)= \tag{13}\]
\[=\frac{\varepsilon^{2}}{2}\operatorname{Tr}\left[(A-B)\,\mathbb{J}_{f}^{-1}\big{|}_{\pi}[(A-B)]\right]+\mathcal{O}\left(\varepsilon^{3}\right)\,, \tag{14}\]

_where \(A\) and \(B\) are traceless, Hermitian perturbations, and the superoperator \(\mathbb{J}_{f}\big{|}_{\pi}\) is defined as:_

\[\mathbb{J}_{f}\big{|}_{\pi}:=\mathbb{R}_{\pi}\,f(\mathbb{L}_{\pi}\mathbb{R}_{\pi}^{-1})\,. \tag{15}\]

_Moreover, the two functions \(g\) and \(f\) are connected by the equation:_

\[f(x)=\frac{(x-1)^{2}}{g(x)+x\,g(x^{-1})}\,. \tag{16}\]

Eq. (13) can be directly derived from Eq. (11). Indeed, thanks to the quadratic dependence of the contrast function on \((\rho-\sigma)\), up to corrections of order \(\mathcal{O}\left(\varepsilon^{3}\right)\) one can substitute \((\mathbb{L}_{\pi+\varepsilon B}+s\mathbb{R}_{\pi+\varepsilon A})^{-1}=(\mathbb{L}_{\pi+\varepsilon A}+s\mathbb{R}_{\pi+\varepsilon B})^{-1}=(\mathbb{L}_{\pi}+s\mathbb{R}_{\pi})^{-1}\). On the other hand, one can verify Eq. (14) starting from Eq. (10) as:

\[H_{g}(\pi+\varepsilon A||\pi+\varepsilon B)=\varepsilon^{2}\operatorname{Tr}\left[\left(A-B\right)\mathbb{R}_{\pi+\varepsilon A}^{-1}h(\mathbb{L}_{\pi+\varepsilon B}\mathbb{R}_{\pi+\varepsilon A}^{-1})[(A-B)]\right]= \tag{17}\]
\[=\frac{\varepsilon^{2}}{2}\,\left(\operatorname{Tr}\left[\left(A-B\right)\,\mathbb{R}_{\pi+\varepsilon A}^{-1}h(\mathbb{L}_{\pi+\varepsilon B}\mathbb{R}_{\pi+\varepsilon A}^{-1})[(A-B)]\right]+\operatorname{Tr}\left[A\leftrightarrow B\right]\right)+\mathcal{O}\left(\varepsilon^{3}\right)= \tag{18}\]
\[=\frac{\varepsilon^{2}}{2}\operatorname{Tr}\left[\left(A-B\right)\mathbb{R}_{\pi}^{-1}\,\frac{1}{f}(\mathbb{L}_{\pi}\mathbb{R}_{\pi}^{-1})\left[\left(A-B\right)\right]\right]+\mathcal{O}\left(\varepsilon^{3}\right)\,, \tag{19}\]

where Eq. (17) is exact, while in Eq. (18) we denote by \(\operatorname{Tr}\left[A\leftrightarrow B\right]\) the first trace of the same line with \(A\) and \(B\) exchanged, and we used the fact that exchanging the arguments of \(H_{g}(\rho||\sigma)\) does not affect the result at this order of approximation. Then, in the last line one can read off the explicit expression of \(\mathbb{J}_{f}^{-1}\big{|}_{\pi}\). In fact, due to the multiplicative behaviour of \(\mathbb{L}_{\pi}/\mathbb{R}_{\pi}\), the inverse of \(\mathbb{J}_{f}\big{|}_{\pi}\) is given by:

\[\mathbb{J}_{f}^{-1}\big{|}_{\pi}=\mathbb{R}_{\pi}^{-1}\,\frac{1}{f}(\mathbb{L}_{\pi}\mathbb{R}_{\pi}^{-1})\,. \tag{20}\]

It should be noticed that each \(f\) is in one-to-one correspondence with a unique symmetric contrast function, as in this case it holds that \(g(x)=x\,g(x^{-1})\), and \(f(x)=(x-1)^{2}/(2\,g^{\text{symm}}(x))\). Moreover, \(f\) satisfies the following three conditions: 1. since \(g\) is matrix convex, \(f\) is matrix monotone [6; 7]. Indeed, combining Eq. (7) with Eq.
(16), we get that \(1/f\) can be expanded as:

\[\frac{1}{f(x)}=\int_{0}^{1}\mathrm{d}N_{g}(s)\,\left(\frac{1}{x+s}+\frac{1}{1+s\,x}\right)\,. \tag{21}\]

Since the functions in the integrand are matrix monotone decreasing, the same holds for \(1/f\). This directly implies that \(f\) is matrix monotone; 2. \(f(x)\) satisfies the symmetry \(f(x)=xf(x^{-1})\), as can be verified by a straightforward calculation; 3. without loss of generality, we require the normalisation condition \(f(1)=1\), corresponding to fixing the value \(g^{\prime\prime}(1)=1\), which can be imposed by the simple rescaling \(\tilde{g}(x)=g(x)/g^{\prime\prime}(1)\). The functions satisfying these three conditions are called standard monotone. It can be shown (see Box 2 at the end of the section) that they are all pointwise bounded as:

\[\frac{2x}{x+1}\,\leq f(x)\leq\,\frac{x+1}{2}\,. \tag{22}\]

Moreover, thanks to the normalisation condition \(f(1)=1\), in the case of commuting observables (i.e., \([\pi,A]=[\pi,B]=[A,B]=0\)), it follows that:

\[H_{g}(\pi+\varepsilon A||\pi+\varepsilon B)=\frac{\varepsilon^{2}}{2}\operatorname{Tr}\left[\left(A-B\right)\mathbb{R}_{\pi}^{-1}\,\frac{1}{f}(\mathbb{L}_{\pi}\mathbb{R}_{\pi}^{-1})\left[\left(A-B\right)\right]\right]+\mathcal{O}\left(\varepsilon^{3}\right) \tag{23}\]
\[=\frac{\varepsilon^{2}}{2}\operatorname{Tr}\left[\left(A-B\right)\mathbb{R}_{\pi}^{-1}\,\frac{1}{f}(\mathbb{I})\left[\left(A-B\right)\right]\right]+\mathcal{O}\left(\varepsilon^{3}\right)= \tag{24}\]
\[=\frac{\varepsilon^{2}}{2}\operatorname{Tr}\left[\left(A-B\right)^{2}\pi^{-1}\right]+\mathcal{O}\left(\varepsilon^{3}\right)=\frac{\varepsilon^{2}}{2}\sum_{i}\,\frac{(A_{i}-B_{i})^{2}}{\pi_{i}}+\mathcal{O}\left(\varepsilon^{3}\right)\,, \tag{25}\]

where in the first line we used the fact that on commuting observables \(\mathbb{L}_{\pi}\mathbb{R}_{\pi}^{-1}[A]=\mathbb{I}[A]=A\), and we expanded \(A\), \(B\) and \(\pi\) on a common eigenbasis. This shows that, from the point of view of classical probability, up to a normalisation all contrast functions locally behave in the same way. This is a well-known result in statistics, and it is one of the ways of defining the Fisher information metric. Indeed, consider the case in which \([\pi,\delta\rho]=0\), where \(\delta\rho\) is a vector in the tangent space of \(\pi\) (i.e., a traceless, Hermitian perturbation). Then, from Eq. (25) we can see that:

\[H_{g}(\pi||\pi+\varepsilon\,\delta\rho)=\frac{\varepsilon^{2}}{2}\sum_{i}\,\frac{(\delta\rho_{i})^{2}}{\pi_{i}}+\mathcal{O}\left(\varepsilon^{3}\right)=\frac{\varepsilon^{2}}{2}\sum_{i,j}\,\delta\rho_{i}\,\eta_{i,j}\,\delta\rho_{j}+\mathcal{O}\left(\varepsilon^{3}\right)\,, \tag{26}\]

where we introduced the matrix \(\eta_{i,j}=\delta_{i,j}/\pi_{i}\). This matrix is symmetric, positive, and it depends smoothly on the base-point (assuming that \(\pi\) is a full-rank state, i.e., that we are in the interior of the space of states). These are the defining properties of a metric, so that \(\eta\) endows the tangent space of diagonal density matrices with a metric structure. In this context, \(\eta\) is exactly what takes the name of the classical Fisher information metric. It should be noticed, though, that the uniqueness of \(\eta\) is lost when moving to non-commuting observables.
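Eq. (26) can be verified numerically: for small \(\varepsilon\), the rescaled contrast function \(2H_{g}(\pi||\pi+\varepsilon\,\delta\rho)/\varepsilon^{2}\) approaches the Fisher quadratic form \(\sum_{i}\delta\rho_{i}^{2}/\pi_{i}\). A minimal sketch (helper names are ours; \(g(x)=-\log x\), for which \(g^{\prime\prime}(1)=1\)):

```python
import numpy as np

rng = np.random.default_rng(2)
pi = rng.random(5); pi /= pi.sum()              # full-rank classical state (base point)
drho = rng.normal(size=5); drho -= drho.mean()  # traceless tangent vector

def kl(p, q):
    """Relative entropy, i.e. the contrast function with g(x) = -log x."""
    return float(np.sum(p * np.log(p / q)))

eps = 1e-4
fisher = np.sum(drho**2 / pi)                     # quadratic form of eta_ij = delta_ij / pi_i
expansion = 2 * kl(pi, pi + eps * drho) / eps**2  # rescaled contrast function, Eq. (26)
print(fisher, expansion)                          # the two agree up to O(eps)
```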
This loss of uniqueness might leave open the question of whether the family of operators \(\mathbb{J}_{f}^{-1}\big{|}_{\pi}\) is the right generalisation of the Fisher information metric to quantum states, or whether one should introduce some further constraints to single out a unique quantity. In order to resolve the question one can use another independent characterisation of the classical Fisher information: the Chentsov theorem. This says that the unique metric on the space of probability distributions contracting under all stochastic maps is exactly the Fisher information [8]. The generalisation of this result to quantum systems was provided by Petz in the study of monotone metrics. These are scalar products \(K_{\pi}(A,B)\) on the tangent space of the manifold of states that satisfy the two additional properties:

1. _smoothness:_ \(K_{\pi}(A,B)\) depends smoothly on the base-point \(\pi\);
2. _monotonicity:_ for every CPTP \(\Phi\) the metric is contractive: \[K_{\Phi(\pi)}(\Phi(A),\Phi(A))\leq K_{\pi}(A,A)\,.\]

As we mentioned, the Chentsov theorem identifies the Fisher information as the unique monotone metric on the space of classical probability distributions. For quantum states, on the other hand, we have:

**Theorem 2** (Petz [9]).: _The monotone metrics on quantum states are all given in the form:_

\[K_{f,\pi}(A,B):=\operatorname{Tr}\left[A\,\mathbb{J}_{f}^{-1}\big{|}_{\pi}[B]\right], \tag{27}\]

_where \(f:\mathbb{R}^{+}\to\mathbb{R}^{+}\) is an operator monotone function. Moreover, requiring that \(K_{f,\pi}(A,B)\) is real and that it reduces to the classical Fisher information for commuting variables constrains \(f\) to be a standard monotone function._

This last result corroborates the interpretation of \(\mathbb{J}_{f}^{-1}\big{|}_{\pi}\) as the natural extension of the classical Fisher information metric to quantum mechanical systems. Moreover, it also shows that the definition of contrast functions in Eq. (4) is well justified, as their local behaviour correctly reduces to the quantum Fisher information. This concludes the introduction to the quantum Fisher information. A more in-depth treatment will be provided in Sec. IV, where we present the main properties of the superoperator \(\mathbb{J}_{f}\big{|}_{\pi}\) and of the standard monotone functions, and in Sec. V, where we provide a list of the most frequently used quantum Fisher information metrics. Before moving on, though, it is useful to clarify the nomenclature. We will use the expression quantum Fisher information scalar product to refer to \(K_{f,\pi}(A,B)\), whereas the quantity \(\mathcal{F}_{f,\pi}(\delta\rho):=K_{f,\pi}(\delta\rho,\delta\rho)\) is traditionally referred to as the quantum Fisher information. Finally, we call the two maps \(\mathbb{J}_{f}\big{|}_{\pi}\) and \(\mathbb{J}_{f}^{-1}\big{|}_{\pi}\) the quantum Fisher operators. Since our results apply both to the quantum and to the classical case, in the following we sometimes drop the reference to the scenario we are considering, as it should be clear from the context.

**Box 1.** Conditions on the defining function \(g\)

In this box we explore which properties one has to impose on the function \(g\) in Eq. (4) in order for conditions 1-5 to be satisfied. Condition 1 can be decomposed in two parts: first, that the contrast function is positive, and moreover, that it is zero if and only if \(\rho\equiv\sigma\). As one can verify from the coordinate expression in Eq.
(6), this second part of condition 1 corresponds to imposing that \(g(1)=0\) is the only zero of the function. Moreover, as we mentioned in the main text, one can verify that \(g(x)=(x-1)\) gives a contrast function that is identically zero on normalised states, as this corresponds to:

\[H_{(x-1)}(\rho||\sigma)=\operatorname{Tr}\left[\left(\mathbb{L}_{\sigma}\mathbb{R}_{\rho}^{-1}-\mathbb{I}\right)[\rho]\right]=\operatorname{Tr}\left[\sigma-\rho\right]=0\,. \tag{28}\]

Then, in order to ensure the positivity of \(H_{g}(\rho||\sigma)\) it is sufficient to require that \(g(x)+a(x-1)\geq 0\), for some arbitrary constant \(a\). On the other hand, thanks to the way in which the ansatz is formulated, condition 2 is automatically satisfied, while one needs to require that \(g(x)\) is matrix convex at \(x=1\) for the joint convexity to hold (condition 3). This directly implies the monotonicity of the contrast functions, condition 4, a fact that can be proved as follows: first, it should be noticed that the contrast functions \(H_{g}(\rho||\sigma)\) are unitarily invariant. Indeed, one has:

\[H_{g}(U\,\rho\,U^{\dagger}||U\,\sigma\,U^{\dagger})=\operatorname{Tr}\left[U\,\rho^{1/2}\,U^{\dagger}\,g\big{(}\mathbb{L}_{U\,\sigma\,U^{\dagger}}\mathbb{R}_{U\,\rho^{-1}\,U^{\dagger}}\big{)}\left[U\,\rho^{1/2}\,U^{\dagger}\right]\right]= \tag{29}\]
\[=\operatorname{Tr}\left[U\,\rho^{1/2}\,U^{\dagger}\,U\,g\big{(}\mathbb{L}_{\sigma}\mathbb{R}_{\rho}^{-1}\big{)}\left[\rho^{1/2}\right]\,U^{\dagger}\right]=H_{g}(\rho||\sigma)\,, \tag{30}\]

where the step from Eq. (29) to Eq. (30) can be verified either in coordinates (see Eq. (6)), or by expanding \(g(x)\) in a Laurent series, and in the last equality we exploited the unitarity of \(U\) and the cyclicity of the trace. Secondly, also notice that for generic contrast functions it holds that:

\[H_{g}(\rho\otimes\tau||\sigma\otimes\tau)=\operatorname{Tr}\left[g\big{(}\mathbb{L}_{\sigma\otimes\tau}\mathbb{R}_{\rho\otimes\tau}^{-1}\big{)}\left[\rho\otimes\tau\right]\right]=\operatorname{Tr}\left[g\big{(}\mathbb{L}_{\sigma}\mathbb{R}_{\rho}^{-1}\otimes\mathbb{I}\big{)}\left[\rho\otimes\tau\right]\right]= \tag{31}\]
\[=\operatorname{Tr}\left[g\big{(}\mathbb{L}_{\sigma}\mathbb{R}_{\rho}^{-1}\big{)}\left[\rho\right]\right]\operatorname{Tr}\left[\tau\right]=H_{g}(\rho||\sigma)\,, \tag{32}\]

where in Eq. (31) we used the fact that \(\mathbb{L}_{\tau}\mathbb{R}_{\tau}^{-1}\) coincides with the identity operator on the commutant of \(\tau\) (again one can see this either from the Laurent series or directly from the coordinate expression in Eq. (6)). These two facts allow one to deduce the monotonicity of the contrast functions in Eq. (4) from their joint convexity. Indeed, given a CPTP map \(\Phi\), one can express it in terms of its Stinespring dilation:

\[\Phi(\rho)=\operatorname{Tr}_{E}\left[U\left(\rho\otimes|\psi\rangle\langle\psi|\right)U^{\dagger}\right]\,, \tag{33}\]

where \(U\) is a unitary operator and \(|\psi\rangle\langle\psi|\) is an environmental pure state of dimension \(d_{E}\). Take a unitary basis \(\{V_{i}\}\) for the space of bounded operators of dimension \(d_{E}\times d_{E}\). It is a well-known result that \(\sum_{i}V_{i}\,\rho_{E}\,V_{i}^{\dagger}/d_{E}^{2}=\mathbb{1}_{E}/d_{E}\) for any \(\rho_{E}\) [10]. We denote this superoperator by \(\Delta_{1}(\rho)\). This identity, together with Eq.
(33), allows one to rewrite the action of the channel as:

\[\Phi(\rho)\otimes\frac{\mathbb{1}_{d_{E}}}{d_{E}}=\frac{1}{d_{E}^{2}}\,\sum_{i=1}^{d_{E}^{2}}\,\left(\mathbb{1}\otimes V_{i}\right)U\left(\rho\otimes|\psi\rangle\langle\psi|\right)U^{\dagger}(\mathbb{1}\otimes V_{i}^{\dagger})\,. \tag{34}\]

Putting together this expression with Eq. (32) we can finally prove monotonicity. Indeed, one has:

\[H_{g}(\Phi(\rho)||\Phi(\sigma))=H_{g}\left(\Phi(\rho)\otimes\frac{\mathbb{1}_{d_{E}}}{d_{E}}\bigg{|}\bigg{|}\Phi(\sigma)\otimes\frac{\mathbb{1}_{d_{E}}}{d_{E}}\right)= \tag{35}\]
\[=H_{g}\left(\left(\mathbb{I}\otimes\Delta_{1}\right)\left(U\left(\rho\otimes|\psi\rangle\langle\psi|\right)U^{\dagger}\right)||(\mathbb{I}\otimes\Delta_{1})\left(U\left(\sigma\otimes|\psi\rangle\langle\psi|\right)U^{\dagger}\right)\right)\leq \tag{36}\]
\[\leq\frac{1}{d_{E}^{2}}\,\sum_{i=1}^{d_{E}^{2}}\,H_{g}(\rho\otimes|\psi\rangle\langle\psi|\,||\,\sigma\otimes|\psi\rangle\langle\psi|)=H_{g}(\rho||\sigma)\,, \tag{37}\]

where in the first step we used Eq. (32), then we applied the decomposition in Eq. (34) (notice that we denote by \(\mathbb{I}\) the identity superoperator), and the inequality comes from the joint convexity of \(H_{g}(\rho||\sigma)\), together with the unitary invariance of contrast functions. Finally, we applied Eq. (32) once more. This proves condition 4. Finally, condition 5 follows from requiring \(g(x)\) to be smooth. Hence, in order for \(H_{g}(\rho||\sigma)\) to be a proper contrast function, \(g(x)\) has to be a matrix convex function in \(C^{\infty}(\mathbb{R}^{+})\), such that \(g(1)=0\) is its unique zero and \(g(x)+a(x-1)\geq 0\) for some arbitrary constant \(a\).

**Box 2.** Bounding the set of standard monotone functions

The characterisation in Eq. (22) of the set of standard monotone functions follows from the Lemma:

**Lemma 1** (Theorem 4.43 from [7]).: _Consider a function \(f:(0,\infty)\to(0,\infty)\). The following conditions are equivalent:_

1. \(f(x)\) _is matrix monotone;_
2. \([Tf](x):=x/f(x)\) _is matrix monotone;_
3. \(f(x)\) _is matrix concave._

Then, we can show that standard monotones are all contained in the interval:

\[\frac{2x}{x+1}\,\leq f(x)\,\leq\frac{x+1}{2}\,. \tag{38}\]

This can be proved as follows: thanks to the condition \(f(x)=xf(x^{-1})\) one only needs to characterise the properties of \(f(x)\) in the interval \([0,1]\). Moreover, the same condition also implies that \(f^{\prime}(1)=\frac{1}{2}\). In fact, this can be easily verified from the equation:

\[f^{\prime}(x)=f(x^{-1})-\frac{1}{x}f^{\prime}(x^{-1})\,, \tag{39}\]

by setting \(x\) to \(1\) and using the normalisation \(f(1)=1\). Then, from concavity it follows that \(f(x)\leq f(1)+f^{\prime}(1)(x-1)=(x+1)/2\). The upper bound satisfies all the necessary constraints, so it can be identified as the biggest standard monotone function, which we denote by \(f_{B}\). Finally, notice that the transformation \(f\to Tf\) (defined in point 2 of Lemma 1) inverts the inequality and maps standard monotones into standard monotones. For this reason, any standard monotone function can be pointwise bounded as in Eq. (38).
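In the eigenbasis of \(\pi\), the definition \(\mathbb{J}_{f}\big{|}_{\pi}=\mathbb{R}_{\pi}\,f(\mathbb{L}_{\pi}\mathbb{R}_{\pi}^{-1})\) gives \(\mathbb{J}_{f}\big{|}_{\pi}[A]_{ij}=\pi_{j}f(\pi_{i}/\pi_{j})A_{ij}\), so that \(K_{f,\pi}(A,A)=\sum_{ij}|A_{ij}|^{2}/\big(\pi_{j}f(\pi_{i}/\pi_{j})\big)\), and the bounds of Eq. (38) order the corresponding metrics: a bigger \(f\) gives a smaller metric. A minimal numerical sketch (function choices and helper names are ours):

```python
import numpy as np

rng = np.random.default_rng(3)
d = 3
pi = rng.random(d); pi /= pi.sum()  # we work directly in the eigenbasis of pi
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
A = A + A.conj().T
A -= (np.trace(A).real / d) * np.eye(d)  # Hermitian, traceless perturbation

def K(f, pi, A):
    """K_{f,pi}(A,A) = sum_ij |A_ij|^2 / (pi_j f(pi_i/pi_j)), in the eigenbasis of pi."""
    pi_i, pi_j = np.meshgrid(pi, pi, indexing="ij")
    return float(np.sum(np.abs(A) ** 2 / (pi_j * f(pi_i / pi_j))))

f_B = lambda x: (x + 1) / 2      # biggest standard monotone (upper bound of Eq. (38))
f_G = lambda x: np.sqrt(x)       # geometric mean, lying in between
f_H = lambda x: 2 * x / (x + 1)  # smallest standard monotone (lower bound of Eq. (38))

vals = [K(f, pi, A) for f in (f_B, f_G, f_H)]
assert vals[0] <= vals[1] <= vals[2]  # a bigger f gives a smaller metric
print(vals)
```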
**Box 3.** Other possible quantifiers of statistical difference

Contrast functions are not the only possible quantifiers of statistical difference: other common choices include, for example, the \(\chi_{f}^{2}\)-divergences. A natural question is then which of these quantities can be represented as contrast functions. Consider the function \(g(x)=(x-1)^{2}\): the corresponding contrast function is studied in Sec. V.8, where its explicit expression is provided. Interestingly, it is proven there that for \(f_{H}(x):=\frac{2x}{x+1}\), one can rewrite:

\[H_{(x-1)^{2}}(\rho||\sigma)=\frac{1}{2}\,\chi_{f_{H}}^{2}(\rho||\sigma)\,, \tag{44}\]

proving the representability of the \(\chi_{f_{H}}^{2}\)-divergence. On the other hand, for all other \(\chi_{f}^{2}\)-divergences the same argument shows that they cannot be expressed in terms of contrast functions. The method just presented is taken from [11], where it was used to prove that the fidelity, the Chernoff distance and the Hoeffding distances are all not representable as contrast functions.

## III Dynamical properties of Fisher information

In the standard treatment of Fisher information, most of the focus goes to its significance as a distinguishability quantifier (see Thm. 1), and to the many different results linking it to estimation theory and information theory, as in the case of the Cramér-Rao bound [12; 13] and of the Chernoff bound [14] (cf. Eq. (206) and Eq. (260)). At the same time, Thm. 2 hints at a more dynamical character of the Fisher information metrics: these are the only metrics that monotonically decrease under the action of any CPTP (i.e., physical) map. Hence, there is a dual manner of defining Fisher information, one statistical and one dynamical, and it is not clear how the two should be connected. In fact, it is not even obvious that these two properties should define the same class of metrics. Here, we will focus on the latter point of view. Indeed, not only can the Fisher information be defined in terms of its contractivity under physical evolutions, but also many properties of the dynamical maps can be given solely in terms of their behaviour with respect to the Fisher information. We prove a first result in this direction, given by the following Theorem and its Corollary:

**Theorem 3**.: _Consider a trace non-increasing, Hermitian-preserving, linear map \(\Phi:\mathcal{M}_{d}(\mathbb{C})\to\mathcal{M}_{d}(\mathbb{C})\), where \(\mathcal{M}_{d}(\mathbb{C})\) is the space of \(d\times d\) complex matrices. Define \(\mathcal{S}_{d}\subset\mathcal{M}_{d}(\mathbb{C})\) to be the set of positive semidefinite, trace-one matrices. If \(\Phi\) satisfies the following three properties:_ 1. \(\Phi\) _is invertible;_ 2. \(\Phi\) _maps at least one point from the interior of_ \(\mathcal{S}_{d}\) _into_ \(\mathcal{S}_{d}\)_;_ 3.
_for any state_ \(\rho\) _in_ \(\Phi^{-1}(\mathcal{S}_{d}\cap\Phi(\mathcal{S}_{d}))\) _and tangent vector_ \(\delta\rho\)_, one has:_

\[K_{f,\rho}(\delta\rho,\delta\rho)\geq K_{f,\Phi(\rho)}(\Phi(\delta\rho),\Phi(\delta\rho))\,; \tag{45}\]

_then the image of \(\Phi\) is completely contained in \(\mathcal{S}_{d}\) (\(\Phi\) is a Positive (P) map)._

From this result it trivially follows that:

**Corollary 3.1**.: _Consider the extended channel \(\Phi_{\rm anc}:=\Phi\otimes\mathbb{I}_{d}\). Under the same assumptions of Thm. 3, with the contractivity requirement for \(\Phi_{\rm anc}\) taking the form:_

_3': for any state \(\rho\) in \(\Phi_{\rm anc}^{-1}(\mathcal{S}_{d}\otimes\mathcal{S}_{d}\cap\Phi_{\rm anc}(\mathcal{S}_{d}\otimes\mathcal{S}_{d}))\) and tangent vector \(\delta\rho\), it holds that:_

\[K_{f,\rho}(\delta\rho,\delta\rho)\geq K_{f,\Phi_{\rm anc}(\rho)}(\Phi_{\rm anc}(\delta\rho),\Phi_{\rm anc}(\delta\rho))\,; \tag{46}\]

_then, \(\Phi\) is Completely-Positive (CP)._

The theorem and its corollary show that the property of being completely positive can be defined exclusively in statistical terms. Since CP maps are exactly the physically realisable dynamics, the results above give a characterisation of which physical evolutions are possible without any reference to an actual theory of the world. Indeed, this characterisation could be regarded as the dual of the Chentsov-Petz theorem: not only is the Fisher information the unique family of metrics that contracts under arbitrary CP maps, but CP maps are also the only maps that contract the Fisher information.

Proof.: We prove Theorem 3 by contradiction: suppose there exists a map \(\Phi\) that is not P, but that satisfies Eq. (45). Thanks to condition 2 there exists at least one point \(\pi\) in the interior of \(\mathcal{S}_{d}\) such that \(\Phi(\pi)\in\mathcal{S}_{d}\) is also a state. Moreover, from the assumption that \(\Phi\) is not P there is also at least one state \(\sigma\) such that \(\Phi(\sigma)\notin\mathcal{S}_{d}\). Without loss of generality one can choose \(\sigma\) to be in the interior of \(\mathcal{S}_{d}\): if this is not the case, one can take a ball around \(\sigma\) whose image still lies outside of the state space, and by inspecting the intersection between its preimage and \(\mathcal{S}_{d}\) one can find a point satisfying the assumption. Hence, the line \(\rho_{\lambda}:=\left(1-\lambda\right)\pi+\lambda\,\sigma\) also lies in the interior of the state space for \(\lambda\in[0,1]\). From this, it follows that the following supremum is finite:

\[\sup_{\lambda,\operatorname{Tr}[\delta\rho^{2}]=1}\;K_{f,\rho_{\lambda}}(\delta\rho,\delta\rho)=\sup_{\lambda,\operatorname{Tr}[\delta\rho^{2}]=1}\operatorname{Tr}\left[\delta\rho\,\mathbb{J}_{f}^{-1}\big{|}_{\rho_{\lambda}}[\delta\rho]\right]<\infty\,, \tag{47}\]

where we used the fact that the Fisher information is a bounded operator when one is restricted to a closed set completely inside the interior of the state space (see Eq. (155)). Since, by varying \(\lambda\), \(\Phi(\rho_{\lambda})\) interpolates linearly between the positive definite matrix \(\Phi(\pi)\) and one with at least one negative eigenvalue, namely \(\Phi(\sigma)\), there exists a \(\lambda^{*}\) such that \(\Phi(\rho_{\lambda^{*}})\) is a state with at least one zero eigenvalue. We are now ready to prove the claim. Set the state \(\rho_{\eta}\) so that the smallest eigenvalue of \(\Phi(\rho_{\eta})\) is of order \(\eta\), where \(\eta\ll 1\).
Moreover, consider a perturbation \(\delta\rho_{\eta}\) such that \([\Phi(\rho_{\eta}),\Phi(\delta\rho_{\eta})]=0\), and having a positive finite contribution along the eigenvectors corresponding to the \(\eta\)-eigenvalues (the positivity condition ensures that in the limit \(\eta\to 0\) the perturbed state is still in the interior of \(\mathcal{S}_{d}\) for any finite \(\eta\)). Since \(\Phi\) is invertible, one can always find such a \(\delta\rho_{\eta}\). Then, choosing a common eigenbasis for \(\Phi(\rho_{\eta})\) and \(\Phi(\delta\rho_{\eta})\), the evolved Fisher information reads:

\[K_{f,\Phi(\rho_{\eta})}(\Phi(\delta\rho_{\eta}),\Phi(\delta\rho_{\eta}))=\sum_{i}\;\frac{\Phi(\delta\rho_{\eta})_{i}^{2}}{\Phi(\rho_{\eta})_{i}}\,, \tag{48}\]

where we have used the expression of the Fisher information for commuting observables (see Eq. (26)). The quantity in Eq. (48) scales as \(\eta^{-1}\) as \(\eta\to 0\). Hence, we can always find an \(\eta\) small enough such that:

\[K_{f,\Phi(\rho_{\eta})}(\Phi(\delta\rho_{\eta}),\Phi(\delta\rho_{\eta}))>\sup_{\lambda,\operatorname{Tr}[\delta\rho^{2}]=1}\;K_{f,\rho_{\lambda}}(\delta\rho,\delta\rho)\geq\;K_{f,\rho_{\eta}}(\delta\rho_{\eta},\delta\rho_{\eta})\;, \tag{49}\]

contradicting the assumption that \(\Phi\) contracts the Fisher metric at every point in the interior of the space of states (condition 3). This proves the claim. It should be noticed that, since the counterexample we find only involves commuting observables, one does not need to specify which Fisher information is actually used in Thm. 3: if the condition is valid for one \(f\), then it is automatically valid for all others.

The connection between the Fisher information and the dynamics of quantum systems is actually even deeper. In the following, we are going to focus on three dynamical aspects of the quantum Fisher information:

1. Markovianity of an evolution can be related to the monotonic contraction of Fisher information, using the same principle that allows one to express the CP-ness of a map in terms of its contractivity property with respect to the Fisher information;
2. The contractivity of Fisher information is also strictly related to the ability to recover the original states of the dynamics, via a generalisation of Bayesian retrodiction;
3. Detailed balanced dynamics can be characterised in terms of the adjointness of the dynamical generators of the evolution with respect to the scalar product induced by the Fisher information.

These topics are the subject of the next subsections and corroborate the interpretation of the Fisher information as an intimately dynamical quantity.

**Box 4.** Contractivity of finite distinguishability measures

Thm. 3 gives a characterisation of P-maps in terms of their contractivity with respect to the Fisher information. Still, it should be noticed that, since the proof proceeds by providing a local counterexample, the same result can in principle be formulated in terms of finite distinguishability measures. In particular, it is easy to prove the following:

**Corollary 3.2**.: _Consider a channel satisfying the premises of Thm. 3, and conditions 1 and 2 therein. If one also requires that:_ 3*.
For any two states \(\rho\) and \(\sigma\) in \(\Phi^{-1}\big{(}\mathcal{S}_{d}\cap\Phi(\mathcal{S}_{d})\big{)}\) it holds that:_ \[H_{g}(\rho||\sigma)\geq H_{g}(\Phi(\rho)||\Phi(\sigma))\,; \tag{50}\] _then, the image of \(\Phi\) is completely contained in \(\mathcal{S}_{d}\), i.e., \(\Phi\) is a P-map._

Proof.: The proof of the corollary directly follows from the one of Thm. 3. Indeed, by contradiction, assume there exists a state \(\sigma\) that gets mapped outside of \(\mathcal{S}_{d}\), and consider a state \(\pi\in\Phi^{-1}\big{(}\mathcal{S}_{d}\cap\Phi(\mathcal{S}_{d})\big{)}\). Analogously to the proof above, define \(\rho_{\lambda}:=(1-\lambda)\,\pi+\lambda\,\sigma\). For \(\varepsilon\) small enough we can also apply Thm. 1 to obtain: \[\sup_{\lambda,\mathrm{Tr}[\delta\rho^{2}]=1}\,H_{g}(\rho_{\lambda}||\rho_{\lambda}+\varepsilon\,\delta\rho)=\sup_{\lambda,\mathrm{Tr}[\delta\rho^{2}]=1}\,\frac{\varepsilon^{2}}{2}\,K_{f,\rho_{\lambda}}(\delta\rho,\delta\rho)+\mathcal{O}\left(\varepsilon^{3}\right)<\infty\,. \tag{51}\] Then, following the steps presented for Thm. 3, we define a \(\rho_{\eta}\) such that the smallest eigenvalue of \(\Phi(\rho_{\eta})\) is of order \(\eta\), where \(\eta\ll 1\), and a \(\delta\rho_{\eta}\) such that \([\Phi(\rho_{\eta}),\Phi(\delta\rho_{\eta})]=0\), and having a positive finite contribution along the eigenvectors corresponding to the \(\eta\)-eigenvalues. The positivity condition ensures that the perturbed state is still in the interior of \(\mathcal{S}_{d}\) for any finite \(\eta\) and \(\varepsilon\) small enough. Then, \(H_{g}(\Phi(\rho_{\eta})||\Phi(\rho_{\eta}+\varepsilon\,\delta\rho_{\eta}))\) scales as \(\varepsilon^{2}/\eta\). Hence, one can find \(\eta\) small enough such that: \[H_{g}(\Phi(\rho_{\eta})||\Phi(\rho_{\eta}+\varepsilon\,\delta\rho_{\eta}))>\sup_{\lambda,\mathrm{Tr}[\delta\rho^{2}]=1}\,H_{g}(\rho_{\lambda}||\rho_{\lambda}+\varepsilon\,\delta\rho)\geq\ H_{g}(\rho_{\eta}||\rho_{\eta}+\varepsilon\,\delta\rho_{\eta})\,. \tag{52}\] This gives the desired contradiction, concluding the proof. 

The same kind of argument could be made for \(\chi_{f}^{2}\)-divergences and geodesic distances. Moreover, even in this case one can restrict one's attention to a single contrast function (or any other distinguishability measure that locally expands to the Fisher information), as the counterexample we use is commutative.

### Markovianity as monotonic contraction of Fisher information

In order to investigate how the Fisher information relates to Markovianity we need to introduce the concept of CP-divisible evolutions. To this end, consider a family of CP-maps \(\Phi_{t}\) depending smoothly on \(t\), representing the time-parameter of the evolution. We assume that for any two times \(t\) and \(s\) (\(t\geq s\)) one can define an intermediate map \(\Phi_{t,s}\) satisfying the property \(\Phi_{t}=\Phi_{t,s}\circ\Phi_{s}\). Such one-parameter families, or evolutions, are called divisible (note that if \(\Phi_{t}\) is invertible for all \(t\), then it is trivially divisible, as one can set \(\Phi_{t,s}\equiv\Phi_{t}\circ\Phi_{s}^{-1}\)). In particular, if all the intermediate maps are positive preserving, the global dynamics is called P-divisible; in the same way, if the intermediate maps are all CP, then the corresponding dynamics is called CP-divisible. In the following we will identify CP-divisible evolutions with Markovian ones, according to the most canonical notion of quantum Markovianity [15]. 
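As a concrete illustration, CP-divisibility can be tested numerically by checking positivity of the Choi matrix of the intermediate maps \(\Phi_{t+\varepsilon,t}\). The following minimal sketch (not taken from the paper; the contraction coefficients are those that will appear in Fig. 1 below, and for the depolarising family the intermediate map is again depolarising, with parameter \(1-(1-\lambda_{t+\varepsilon})/(1-\lambda_{t})\)) flags CP-indivisibility through negative Choi eigenvalues:

```python
import numpy as np

def depolarize(rho, lam):
    # Depolarising qubit map: Delta_lambda(rho) = (1 - lambda) rho + lambda Tr[rho] I/2
    return (1 - lam) * rho + lam * np.trace(rho) * np.eye(2) / 2

def choi_min_eig(lam):
    # Smallest eigenvalue of the Choi matrix J = sum_{ij} Phi(|i><j|) (x) |i><j|;
    # J >= 0 if and only if the map is CP.
    J = np.zeros((4, 4), dtype=complex)
    for i in range(2):
        for j in range(2):
            E = np.zeros((2, 2)); E[i, j] = 1.0
            J += np.kron(depolarize(E, lam), E)
    return np.linalg.eigvalsh(J).min()

lam_M  = lambda t: 1 - np.exp(-t)                  # Markovian rate
lam_NM = lambda t: 1 - np.exp(-t) * np.cos(2 * t)  # non-Markovian rate

dt = 1e-3
for name, lam in (("Markovian", lam_M), ("non-Markovian", lam_NM)):
    worst = np.inf
    for t in np.linspace(0.05, 3.0, 400):
        if abs(1 - lam(t)) < 1e-2:   # skip points where Phi_t is (nearly) non-invertible
            continue
        # Intermediate map Phi_{t+dt,t} = Phi_{t+dt} o Phi_t^{-1} is depolarising with:
        p = 1 - (1 - lam(t + dt)) / (1 - lam(t))
        worst = min(worst, choi_min_eig(p))
    print(f"{name}: min intermediate Choi eigenvalue = {worst:.2e}")
# The Markovian family yields a non-negative minimum; the non-Markovian one goes negative.
```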
It should be noticed that for CP-divisible evolutions all the contrast functions monotonically decrease, as can be verified from their derivative: \[\frac{\mathrm{d}}{\mathrm{d}t}H_{g}(\Phi_{t}(\rho)||\Phi_{t}(\sigma)) =\lim_{\varepsilon\to 0}\frac{H_{g}(\Phi_{t+\varepsilon}(\rho)||\Phi_{t+\varepsilon}(\sigma))-H_{g}(\Phi_{t}(\rho)||\Phi_{t}(\sigma))}{\varepsilon}=\] \[=\lim_{\varepsilon\to 0}\frac{H_{g}(\Phi_{t+\varepsilon,t}\Phi_{t}(\rho)||\Phi_{t+\varepsilon,t}\Phi_{t}(\sigma))-H_{g}(\Phi_{t}(\rho)||\Phi_{t}(\sigma))}{\varepsilon}\leq 0\;, \tag{53}\] where we used the contractivity of the contrast functions under the action of quantum channels. Since \(\rho\) and \(\sigma\) are arbitrary, the same result also holds for quantum Fisher metrics, and this behaviour is referred to as monotonic degradation of information, which also justifies the identification of CP-divisibility with Markovian dynamics. 

Since \(\Phi_{t,s}\) is defined for any \(t\geq s\), one can translate the study of the semigroup to that of its generators, defined as: \[\mathcal{L}_{t}:=\lim_{\varepsilon\to 0}\,\frac{\Phi_{t+\varepsilon,t}-\mathbb{I}}{\varepsilon}\,. \tag{54}\] It can be shown that an invertible evolution is divisible if and only if \(\mathcal{L}_{t}\) can be written in the time-dependent GKLS form [16], namely: \[\mathcal{L}_{t}[\rho]=-i[H(t),\rho]+\sum_{\alpha=1}^{d^{2}-1}\,\lambda_{\alpha}(t)\,\left(A_{\alpha}(t)\,\rho\,A_{\alpha}^{\dagger}(t)-\frac{1}{2}\{A_{\alpha}^{\dagger}(t)\,A_{\alpha}(t),\rho\}\right)\,, \tag{55}\] where \(H(t)\) is Hermitian, the \(\lambda_{\alpha}(t)\) are real scalars called rates, and the operators \(A_{\alpha}(t)\), called jump operators, are orthonormal with respect to the Hilbert-Schmidt scalar product (i.e. \(\mathrm{Tr}\left[A_{\alpha}^{\dagger}A_{\beta}\right]=\delta_{\alpha\beta}\)) and traceless. In this context, CP-divisibility is imposed by requiring that \(\lambda_{\alpha}(t)\geq 0\) at all times. 

The evolution of the Fisher information metric under divisible dynamics can be decomposed into a sum of independent fluxes, each associated with a single rate \(\lambda_{\alpha}(t)\). In particular, the object we want to consider is given by: \[\frac{\mathrm{d}}{\mathrm{d}t}H_{g}(\Phi_{t}(\pi)||\Phi_{t}(\pi+\varepsilon\,\delta\rho)) =\frac{\varepsilon^{2}}{2}\,\frac{\mathrm{d}}{\mathrm{d}t}\mathrm{Tr}\left[\Phi_{t}(\delta\rho)\,\mathbb{J}_{f}^{-1}\big{|}_{\Phi_{t}(\pi)}[\Phi_{t}(\delta\rho)]\right]+\mathcal{O}\left(\varepsilon^{3}\right)= \tag{56}\] \[=\frac{\varepsilon^{2}}{2}\,\mathcal{F}_{f,t}^{\prime}+\mathcal{O}\left(\varepsilon^{3}\right)\,, \tag{57}\] where we used the shorthand notation for the Fisher information \(\mathcal{F}_{f,t}:=\mathcal{F}_{f,\Phi_{t}(\pi)}(\Phi_{t}(\delta\rho))\). In the following we use the notation \(\pi_{t}:=\Phi_{t}(\pi)\) and \(\delta\rho_{t}:=\Phi_{t}(\delta\rho)\). Then, we have the following result:

**Theorem 4**.: _For any divisible dynamics, let \(\{A_{\alpha}(t)\}\) and \(\{\lambda_{\alpha}(t)\}\) be respectively the time-dependent jump operators and time-dependent rates, defined according to Eq. (55). 
Then, the derivative of the Fisher information takes the form:_ \[\mathcal{F}_{f,t}^{\prime}=\sum_{\alpha}\ \lambda_{\alpha}(t)\ \mathcal{I}_{\alpha}^{f}(t)\,, \tag{58}\] _where the current \(\mathcal{I}_{\alpha}^{f}(t)\) is given by:_ \[\mathcal{I}_{\alpha}^{f}(t)=-2\,\int_{0}^{1}\mathrm{d}N_{g}(s)\,\left(\mathrm{Tr}\left[\pi_{t}\left[A_{\alpha}(t),B_{s}(t)^{\dagger}\right]^{\dagger}\left[A_{\alpha}(t),B_{s}(t)^{\dagger}\right]\right]+s\,\mathrm{Tr}\left[\pi_{t}\left[A_{\alpha}(t),B_{s}(t)\right]^{\dagger}\left[A_{\alpha}(t),B_{s}(t)\right]\right]\,\right), \tag{59}\] _and the measure \(\mathrm{d}N_{g}(s)\) is the one used in Eq. (13), while the operators \(B_{s}(t)\) are defined as:_ \[B_{s}(t):=(\mathbb{L}_{\pi_{t}}+s\,\mathbb{R}_{\pi_{t}})^{-1}[\delta\rho_{t}]\,. \tag{60}\]

Figure 1: Evolution of the quantum Fisher informations under the action of the depolarising channel \(\Delta_{\lambda_{t}}(\rho)=(1-\lambda_{t})\rho+\lambda_{t}\,\frac{\mathds{1}}{2}\) on a qubit. The time-dependent contraction coefficients are \(\lambda_{t}^{M}=1-e^{-t}\) and \(\lambda_{t}^{NM}=1-e^{-t}\cos(2t)\), in the Markovian and non-Markovian case respectively. Notice that non-Markovianity is associated with a local increase in the value of \(1-\lambda_{t}\). In the first panel we show the evolution of \(\lambda_{t}\), in the second that of \(\mathcal{F}_{f,t}\) (the inset is in logarithmic scale) and in the third the behaviour of \(\mathcal{F}_{f,t}^{\prime}\). Non-monotonicity in \(\lambda_{t}\) is mirrored in the change of sign of \(\mathcal{F}_{f,t}^{\prime}\). The colour scheme is from Fig. 5.

The proof of this theorem is deferred to App. B. It should be noticed that the two traces in Eq. (59) are non-negative, as they take the form \(\operatorname{Tr}\left[\pi_{t}\,X^{\dagger}X\right]\) for some operator \(X\). This implies that the currents \(\mathcal{I}^{f}_{\alpha}(t)\) are always non-positive, showing that the summands in Eq. (58) can become positive only if the corresponding rate \(\lambda_{\alpha}(t)\) becomes negative, i.e., in the presence of non-Markovianity. In the same way, we see that \(\mathcal{F}^{\prime}_{f,t}\) will always be non-positive for Markovian dynamics, signalling the expected monotonic contraction of the Fisher information. The effects of this decomposition are exemplified in Fig. 1, where we plotted the Fisher information and its derivative for a depolarising channel, both in the Markovian and non-Markovian regime. As can be seen, in this case the oscillations in \(\mathcal{F}^{\prime}_{f,t}\) mirror the onset of non-Markovianity.

**Box 5.** Examples of Fisher information currents

In order to give a more concrete feeling for the expression in Eq. (59), we present here some specific examples for which the currents \(\mathcal{I}^{f}_{\alpha}(t)\) take a particularly simple form. The first interesting case is that of classical evolutions, for which the jump operators are all of the form \(A_{i\gets j}=\left|i\right\rangle\!\langle j\right|\) and all the observables commute with \(\pi_{t}\). Thanks to this fact \(B_{s}(t)\) is simply given by \(B_{s}(t)=B_{s}(t)^{\dagger}=\delta\rho_{t}/((1+s)\pi_{t})\), where we use a slight abuse of notation to indicate the componentwise division. This allows us to rewrite Eq. 
(59) as: \[\mathcal{I}^{c}_{i\gets j}(t) =-2\,\int_{0}^{1}\mathrm{d}N_{g}(s)\,(1+s)\operatorname{Tr}\left[\pi_{t}\left[\,\left|i\right\rangle\!\langle j\,\right|,\frac{\delta\rho_{t}}{(1+s)\pi_{t}}\right]^{\dagger}\left[\,\left|i\right\rangle\!\langle j\,\right|,\frac{\delta\rho_{t}}{(1+s)\pi_{t}}\right]\right]= \tag{61}\] \[=-\int_{0}^{1}\mathrm{d}N_{g}(s)\ \frac{2}{(1+s)}\left(\frac{(\delta\rho_{t})_{j}}{(\pi_{t})_{j}}-\frac{(\delta\rho_{t})_{i}}{(\pi_{t})_{i}}\right)^{2}\,(\pi_{t})_{j}=\] (62) \[=-\left(\frac{(\delta\rho_{t})_{j}}{(\pi_{t})_{j}}-\frac{(\delta\rho_{t})_{i}}{(\pi_{t})_{i}}\right)^{2}\,(\pi_{t})_{j}\,, \tag{63}\] where in the last line we used the normalisation condition on the measure \(\mathrm{d}N_{g}(s)\) (see Eq. (165)): \[\int_{0}^{1}\mathrm{d}N_{g}(s)\ \frac{2}{1+s}=1\,. \tag{64}\] This result indeed coincides with the one obtained for classical stochastic dynamics [1]. 

Another case of particular interest is given by the flux of the Bures metric (see Sec. V.1). This corresponds to a measure of the form \(\mathrm{d}N(s)=\frac{\delta(s-1)}{2}\,\mathrm{d}s\). Then, thanks to the self-adjointness relation \(B_{1}(t)=B_{1}(t)^{\dagger}\), by carrying out the integration one obtains: \[\mathcal{I}^{B}_{\alpha}(t)=-2\operatorname{Tr}\left[\pi_{t}\left[A_{\alpha}(t),B_{1}(t)\right]^{\dagger}\left[A_{\alpha}(t),B_{1}(t)\right]\right]\,. \tag{65}\] Interestingly, in this case the current is directly connected to the symmetric logarithmic derivative of the state. In fact, by inverting Eq. (60), one obtains: \[B_{1}(t)\,\pi_{t}+\pi_{t}\,B_{1}(t)=\delta\rho_{t}\,. \tag{66}\] Comparing this expression with the one in Eq. (205), it is apparent that \(2B_{1}\) corresponds to the symmetric logarithmic derivative \(L\) at \(\pi_{t}\) in the direction of \(\delta\rho_{t}\). It should be pointed out that the expression of the Bures flux in terms of \(L\) was already found in [17]. At the other extreme, the smallest among the contrast functions corresponds to the measure \(\mathrm{d}N(s)=\frac{\delta(s)}{2}\,\mathrm{d}s\) (see Sec. V.8). Then, the current can be expressed as: \[\mathcal{I}^{H}_{\alpha}(t)=-\operatorname{Tr}\left[\pi_{t}\left[A_{\alpha}(t),\delta\rho_{t}\,\pi_{t}^{-1}\right]^{\dagger}\left[A_{\alpha}(t),\delta\rho_{t}\,\pi_{t}^{-1}\right]\right]\,. \tag{67}\] This shows how the formula in Thm. 4 generalises the computations presented in [17] to the whole family of quantum Fisher metrics.

We are now ready to give a complete characterisation of the relation between Markovianity and Fisher information metrics (see Fig. 2). As was mentioned above, all Fisher information metrics monotonically contract under Markovian evolutions. With the hindsight of Thm. 3 it is then not surprising that the reverse also holds:

**Theorem 5**.: _A divisible evolution \(\Phi_{t}\) acting on a \(d\)-dimensional state space is P-divisible if and only if it induces a monotonic decrease in the Fisher information at all times and on all states in \(\mathcal{S}^{\mathrm{o}}_{d}\), the interior of the space of states. 
In formulae, P-divisibility of the evolution \(\Phi_{t}\) is equivalent to the condition:_ \[\frac{\mathrm{d}}{\mathrm{d}t^{\prime}}\operatorname{Tr}\left[\Phi_{t^{\prime},t}(\delta\rho)\,\mathbb{J}_{f}^{-1}\big{|}_{\Phi_{t^{\prime},t}(\rho)}[\Phi_{t^{\prime},t}(\delta\rho)]\right]\bigg{|}_{t^{\prime}=t}\leq 0\quad\forall\,t\,,\rho\,,\delta\rho\,, \tag{68}\] _where \(\rho\) is an arbitrary point in \(\mathcal{S}^{\mathrm{o}}_{d}\), and \(\delta\rho\) an arbitrary perturbation in the tangent space. Moreover, if the same also holds for the dynamics \(\Phi_{t}\otimes\mathbb{I}_{d}\) then \(\Phi_{t}\) is CP-divisible, i.e., Markovian._

This theorem, together with Thms. 6 and 7, was already presented in [1], to which we refer for the proofs. Still, it should be noticed that Thm. 5 is a direct corollary of Thm. 3. Indeed, we can see that for invertible divisible dynamics, given \(\varepsilon\) small enough the intermediate map \(\Phi_{t+\varepsilon,t}\) satisfies the first two conditions of Thm. 3, and in the limit \(\varepsilon\to 0\), \(\Phi_{t+\varepsilon,t}^{-1}(\mathcal{S}_{d}\cap\Phi_{t+\varepsilon,t}(\mathcal{S}_{d}))=\mathcal{S}^{\mathrm{o}}_{d}\) (as \(\Phi_{t+\varepsilon,t}^{-1}\) is \(\varepsilon\)-close to the identity superoperator). Moreover, it is easy to see that in the limit \(\varepsilon\to 0\), Eq. (45) and Eq. (68) are equivalent. Then, it directly follows from Thm. 3 that the assumptions of Thm. 5 force \(\Phi_{t}\) to be P-divisible. Interestingly, for classical evolutions this directly implies Markovianity, as in this context there is no difference between P-divisibility and CP-divisibility. 

It should be noticed that this kind of equivalence for the contractivity of the Fisher information is quite peculiar to this quantity. Indeed, one can contrast the result just obtained with the one for the trace distance, the most canonical quantity studied in the context of non-Markovianity (we provide a small review on this topic in Box 7 at the end of the section): in this case one can explicitly construct classical non-Markovian dynamics for which the trace distance between any two points contracts. For quantum evolutions, on the other hand, the use of a \(d\)-dimensional ancilla is necessary to separate positive preserving maps from completely positive maps, as complete positivity of \(\Phi_{t}\) is equivalent to the fact that \(\Phi_{t}\otimes\mathbb{I}_{d}\) is P. Despite the addition of a \(d\)-dimensional ancilla, examining the trace distance is not enough to obtain a result along the lines of Thm. 5. Indeed, it can be shown that one needs the ancillary space to be at least of dimension \(d+1\) for the trace distance to expand in the presence of any non-Markovianity [18]. The difference with the trace distance is actually even sharper. Indeed, in order for the trace distance to operationally witness the non-Markovianity of the evolution it is sufficient to use ancillas of dimension high enough (thanks to the construction in [18], \(d+1\) is enough). This means that an increase of trace distance can always be obtained on the image of \(\Phi_{t}\) when a sufficient number of extra degrees of freedom are provided. Thm. 5, on the other hand, ensures that an expansion in the Fisher information metrics always happens in the presence of non-Markovianity, but it does not say anything about whether the states needed to actually verify it can be physically prepared. 
If the violation of the monotonicity happens close to the boundary of the state space, for example, and the initial part of the evolution is particularly contracting, there is no way to actually detect the non-Markovian behaviour by looking at the Fisher information alone. Still, one could hope that by using a sufficient number of ancillas the Fisher information could provide witnesses for non-Markovianity. Somewhat surprisingly, one can prove that this cannot be the case:

Figure 2: Illustrative summary of the results presented in Sec. III.1. On the left we give a pictorial representation of Thm. 5: in red we depict the whole state space \(\mathcal{S}_{d}\otimes\mathcal{S}_{d}\), while in green the image of the evolution, i.e., \(\Phi_{t}\otimes\mathbb{I}_{d}(\mathcal{S}_{d}\otimes\mathcal{S}_{d})\). Thm. 5 tells us that a map is non-Markovian if and only if there exist at least two points in \(\mathcal{S}_{d}\otimes\mathcal{S}_{d}\) (not necessarily in the image of the map) for which the Fisher distance increases. On the right, we compare the Fisher information with the most canonical quantity to witness non-Markovianity, namely the trace distance. While the monotone contractivity of the Fisher information implies the Markovianity of the dynamics, the same does not hold for the trace distance (see Thm. 5 and Box 7 on the trace distance). On the other hand, supplying ancillas to the system allows for the detection of non-Markovianity through the latter quantity, while for the Fisher information one additionally needs some post-processing of the state (Thms. 6 and 7).

**Theorem 6**.: _Given a divisible evolution \(\Phi_{t}\), no ancillary degree of freedom of finite dimension or finite number of copies of the dynamics is sufficient to witness all possible non-Markovian evolutions via revivals of the Fisher distance between two initially prepared states._

The difference in behaviour of the trace distance and the Fisher information metrics arises from the linearity of the map \(\Phi_{t}\) and the translational invariance of the trace distance [1, 18]: thanks to this property, if a witness exists anywhere on the state space, then it can always be translated into the image of \(\Phi_{t}\). The Fisher information, on the other hand, has a strong dependence on the base-point, so that the same kind of argument cannot be applied. We refer to [1] for the proof of Thm. 6. Despite the negative result of Thm. 6, one can still define a witness based on Fisher information by using post-processing:

**Theorem 7**.: _Given an evolution \(\Phi_{t}\), for any state \(\rho\) and perturbation \(\delta\rho\) defined on the system space and sufficiently many ancillary degrees of freedom, it is possible to implement a class of CP-maps \(F^{(t)}_{\delta\rho}\) depending on \(\Phi_{t}\) and \(\delta\rho\) that can be used to witness non-Markovianity at time \(t\) through the expansion of Fisher information. 
Formally, if the intermediate evolution \(\Phi_{t+\mathrm{d}t,t}\) is Positive (CP in the quantum case), then_ \[\frac{\mathrm{d}}{\mathrm{d}s}\operatorname{Tr}\left[F^{(t)}_{\delta\rho}\circ\Phi_{s}(\delta\rho)\,\mathbb{J}_{f}^{-1}\big{|}_{F^{(t)}_{\delta\rho}\circ\Phi_{s}(\rho)}[F^{(t)}_{\delta\rho}\circ\Phi_{s}(\delta\rho)]\right]\bigg{|}_{s=t}\leq 0\,, \tag{69}\] _whereas in the presence of non-Markovianity there always exists at least one \(\delta\rho\) for which the left-hand side is strictly positive._

_The minimal dimension of the ancilla for classical systems is \(d_{A}=2\), while for quantum maps one needs \(d_{A}=d+1\)._

There is a shortcoming to this construction, though: in the definition of the post-processing \(F^{(t)}_{\delta\rho}\) one needs to assume complete knowledge about the dynamics \(\Phi_{t}\) until the onset of non-Markovianity. In this way, one either needs to try all the possible \(\delta\rho\), or has to know in advance the structure of the dynamics in order to provide an explicit construction. Still, this example serves more as a proof of principle showing the possibility of designing post-processing filters to exploit the Fisher information for the detection of non-Markovianity. Again, the proof of this fact is contained in [1], where one can also find the explicit expression of \(F^{(t)}_{\delta\rho}\). 

Theorems 6 and 7 complete the characterisation of the relation between Markovianity and contractivity of the Fisher information, both on the image of \(\Phi_{t}\) and on the rest of the state space, as summarised in Fig. 2. We point out once again the importance of Thm. 5: both Chentsov's theorem and its quantum generalisation by Petz identify the defining property of the Fisher information metric to be its contractivity under stochastic maps or quantum channels. Thm. 5, on the other hand, could be read as saying that the defining property of Markovianity is that it contracts the Fisher information monotonically at all times. This second implication shows how natural the concept of Fisher metric is in the context of open system dynamics.

**Box 6.** Markovianity for classical stochastic dynamics

Classical dissipative evolutions are described by stochastic maps, i.e., matrices \(\Phi\) satisfying the two conditions: \[\sum_{i}\,(\Phi)_{i,j}=1\,; \tag{70}\] \[(\Phi)_{i,j}\geq 0\qquad\forall\ \left\{i,j\right\}, \tag{71}\] where the first condition ensures the conservation of total probability, while the second is needed to make sure that states are mapped into states. In complete analogy to the quantum case, a family of stochastic maps \(\Phi_{t}\) depending smoothly on \(t\) is called divisible if for any two times \(t\) and \(s\) (\(t\geq s\)) one can define an intermediate map \(\Phi_{t,s}\) satisfying the relation \(\Phi_{t}=\Phi_{t,s}\circ\Phi_{s}\). A divisible stochastic dynamics is Markovian if all the intermediate maps \(\Phi_{t,s}\) are stochastic. The smoothness in \(t\) allows one to define the rate matrix \(R_{t}\) through the equation \[R_{t}:=\lim_{\varepsilon\to 0}\,\frac{\Phi_{t+\varepsilon,t}-\mathbb{I}}{\varepsilon}\,. \tag{72}\] Then, since the composition of two stochastic maps is again stochastic, Markovianity holds if and only if \(R_{t}\) generates a stochastic evolution at every time \(t\). For this reason, it is useful to characterise such rate matrices. To this end, consider the matrix \(\Phi_{t+\varepsilon,t}\simeq\mathbb{I}+\varepsilon\,R_{t}\). For the global evolution to be Markovian, this matrix should satisfy the two conditions in Eq. (70) and Eq. 
(71), namely: \[\sum_{i}\ \left(\delta_{ij}+\varepsilon\left(R_{t}\right)_{i,j}\right) =1+\varepsilon\,\sum_{i}\,(R_{t})_{i,j}=1\,, \tag{73}\] \[\delta_{ij}+\varepsilon\left(R_{t}\right)_{i,j}\geq 0\qquad\forall\ \left\{i,j\right\}, \tag{74}\] where \(\delta_{ij}\) denotes the Kronecker delta. From the first condition we can deduce that \(\sum_{i}(R_{t})_{i,j}=0\). In particular, highlighting the diagonal terms, one obtains \((R_{t})_{j,j}=-\sum_{i\neq j}(R_{t})_{i,j}\). Matrices satisfying this constraint can be decomposed as: \[R_{t}=\sum_{i\neq j}\ a_{i\gets j}^{(t)}\left(\left|i\right\rangle\!\left\langle j\right|-\left|j\right\rangle\!\left\langle j\right|\right) \tag{75}\] for some real coefficients \(a_{i\gets j}^{(t)}\). We assume that this condition is always satisfied, also for non-Markovian evolutions, as it corresponds to the requirement that the dynamics preserves the normalisation. In fact, since non-Markovian evolutions are trace preserving on their domain, one can argue by linearity that this condition can be extended to the whole space of states. Moreover, the condition in Eq. (74) implies that \((R_{t})_{i,j}\geq 0\) whenever \(i\neq j\). In the parametrisation above this means that the rates satisfy \(a_{i\gets j}^{(t)}\geq 0\). Hence, Markovianity corresponds to the requirement of having positive rates \(a_{i\gets j}^{(t)}\) at all times.

**Box 7.** Use of the trace distance in non-Markovianity

The study of non-Markovianity is mainly carried out in terms of distinguishability distances. Indeed, since Markovian dynamics leads to a monotonic decrease in these distances, any increase thereof signals the appearance of non-Markovianity. In this context, the most used quantity is given by the trace distance: \[D_{\rm Tr}(\rho,\sigma)={\rm Tr}\left[\left|\rho-\sigma\right|\right]\,, \tag{76}\] which can be connected to the maximal probability \(p_{d}\) of distinguishing \(\rho\) from \(\sigma\) in a single-shot measurement, thanks to the relation \(p_{d}(\rho,\sigma)=(1+D_{\rm Tr}(\rho,\sigma))/2\) [10]. Moreover, since it is translationally invariant, it is particularly appealing for analytical calculations. In particular, suppose \(\rho\) and \(\sigma\) are two probability vectors, and define \(\delta\rho:=\sigma-\rho\). Then, the trace distance is given by: \[D_{\rm Tr}(\rho,\sigma)=D_{\rm Tr}(\rho,\rho+\delta\rho)={\rm Tr}\left[\left|\delta\rho\right|\right]\,. \tag{77}\] For classical systems, if the evolution is described by the rate matrix \(R_{t}\), it is straightforward to see that: \[\frac{\rm d}{{\rm d}t}\,D_{\rm Tr}(\rho,\sigma) =\frac{\rm d}{{\rm d}t}\,{\rm Tr}\left[\left|\delta\rho\right|\right]=\sum_{i}\ \frac{\rm d}{{\rm d}t}|\delta\rho_{i}|=\sum_{i}\ {\rm sign}(\delta\rho_{i})\delta\dot{\rho}_{i}= \tag{78}\] \[=\sum_{i}{\rm sign}(\delta\rho_{i})\sum_{j}\left(R_{t}\right)_{i,j}\,\delta\rho_{j}=\sum_{i\neq j}\ {\rm sign}(\delta\rho_{i})\left(a_{i\gets j}^{(t)}\delta\rho_{j}-a_{j\gets i}^{(t)}\delta\rho_{i}\right)=\] (79) \[=\sum_{i\neq j}\ a_{j\gets i}^{(t)}\,\delta\rho_{i}\left({\rm sign}(\delta\rho_{j})-{\rm sign}(\delta\rho_{i})\right)\,, \tag{80}\] where we used the parametrisation of the rate matrix in Eq. (75), and in the last line we swapped the indices of the first term to make the coefficient \(a_{j\gets i}^{(t)}\) explicit. 
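Eq. (80) is easy to check numerically. The following minimal sketch (not from the paper; it assumes a randomly generated Markovian rate matrix in dimension \(d=4\)) compares the right-hand side of Eq. (80) with a finite-difference derivative of the trace distance:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
d = 4

# Random Markovian rate matrix: positive rates a_{i<-j} = R[i, j] for i != j,
# with the diagonal fixed so that every column sums to zero (Eq. (75)).
R = rng.random((d, d))
np.fill_diagonal(R, 0.0)
R -= np.diag(R.sum(axis=0))

rho = rng.random(d);   rho /= rho.sum()
sigma = rng.random(d); sigma /= sigma.sum()
delta = rho - sigma

# Right-hand side of Eq. (80)
rhs = sum(R[j, i] * delta[i] * (np.sign(delta[j]) - np.sign(delta[i]))
          for i in range(d) for j in range(d) if i != j)

# Finite-difference estimate of d/dt Tr[|delta_t|] at t = 0
eps = 1e-7
lhs = (np.abs(expm(eps * R) @ delta).sum() - np.abs(delta).sum()) / eps

print(lhs, rhs)   # the two values agree, and both are negative for positive rates
```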
It should be noticed that if all the rates are positive, then the sum will be negative: in fact, either \({\rm sign}(\delta\rho_{j})={\rm sign}(\delta\rho_{i})\), in which case the term inside the parentheses is zero, or \({\rm sign}(\delta\rho_{j})=-{\rm sign}(\delta\rho_{i})\), so that \(\delta\rho_{i}({\rm sign}(\delta\rho_{j})-{\rm sign}(\delta\rho_{i}))=-2\,\delta\rho_{i}\,{\rm sign}(\delta\rho_{i})=-2|\delta\rho_{i}|\). This calculation shows explicitly how the trace distance decreases under Markovian maps. 

Interestingly, it is a well-known fact that an evolution is Markovian if and only if the trace norm of any vector \(\mathbf{v}\) decreases monotonically [15]. For this reason, it would be natural to expect the same to hold for the trace distance as well. It is easy to see, though, that this is false: one simple counterexample can be given in dimension \(d=2\), with \(a_{1\gets 2}<0\) and \(a_{2\gets 1}>0\), and the additional condition that \(|a_{2\gets 1}|>|a_{1\gets 2}|\). With this choice of rates in Eq. (80), since in dimension 2 tracelessness of \(\delta\rho\) implies that \(\delta\rho_{1}=-\delta\rho_{2}\), it is easy to check that the derivative of the trace distance stays negative. Still, there is no contradiction between the two results: indeed, the trace distance can only access vectors of the form \(\delta\rho=\sigma-\rho\), i.e., that are traceless. This reduces the dimension of the vectors tested by one. In fact, by choosing the traceful vector \(v_{i}=\delta_{i2}\), one is able to witness non-Markovianity in the counterexample just presented. 

Suppose now that one has access to extra ancillary degrees of freedom on which the dynamics acts trivially, i.e., the global evolution is given by \(\Phi_{t}\otimes\mathds{I}_{d_{A}}\), where \(d_{A}\) is the dimension of the ancilla. Then, one can always find product states \(\rho\) and \(\sigma\) on the composite space such that tracing out the system gives \(\operatorname{Tr}_{S}\left[\rho-\sigma\right]\neq 0\), while \(\operatorname{Tr}_{A}\left[\rho-\sigma\right]=0\), so that the total trace is zero. In this way, ancillary degrees of freedom give access to traceful vectors, and so to the possibility of witnessing non-Markovianity. A similar argument was also presented in [18] for the case of quantum dynamics. These results lead to the following:

**Theorem 8**.: _Given a divisible dynamics \(\Phi_{t}\), there always exists an ancilla of finite dimension \(d_{A}\) on which the dynamics acts trivially (i.e., the total evolution is given by \(\Phi_{t}\otimes\mathds{I}_{d_{A}}\)) such that one can witness any non-Markovianity in \(\Phi_{t}\) via revivals in the trace distance between initially prepared states._

For quantum states the minimal dimension of the ancilla is \(d_{A}=d+1\) [18]. It should be noticed, though, that in order to actually speak about complete positivity one always needs an ancilla of dimension at least \(d\) (as complete positivity coincides with the positivity of the map \(\Phi\otimes\mathds{I}_{d}\)). In this way, the trace distance needs one dimension more than this minimum in order to witness non-Markovianity. A similar result actually holds also for classical systems. In this case, one only needs to enlarge the state space by one extra dimension. 
Then, for any vector \(\mathbf{v}\) on the original space, one can always construct a traceless state on this extended space as: \[\delta\rho=\begin{cases}\delta\rho_{i}=v_{i}&\text{if }i\in\{1,\ldots,d\}\ ;\\ \delta\rho_{i}=-\sum_{j=1}^{d}v_{j}&\text{if }i=d+1\,.\end{cases} \tag{81}\] This state satisfies \(\operatorname{Tr}\left[\left|\delta\rho\right|\right]=\operatorname{Tr}\left[\left|\mathbf{v}\right|\right]\), which proves the claim. Still, in this case it should be noticed that this construction cannot be carried out by adjoining an ancilla, since adjoining an ancilla multiplies the dimension of the state space (the smallest ancilla has \(d_{A}=2\)), rather than adding one. Nonetheless, this example is useful as it shows that two-dimensional ancillas are enough, and that in principle it would be sufficient to embed the dynamics \(\Phi_{t}\) in a space only one dimension larger than the original space in order to witness classical non-Markovianity.

### Retrodiction and Fisher information

In the literature about non-Markovianity, one of the most used expressions is _backflow of information_. Still, by looking at the usual quantifiers defined to assess it, one might be surprised to discover that for the most part they are actually distinguishability measures accounting for the statistical difference between states _at time \(t\)_, such as the trace distance and Fisher information, on which we focused in the previous section. Whereas one could in principle justify the interpretation of non-monotonicity for these quantities as actual backflow of information, here we take a different approach: we prove, in fact, that the contractivity of the Fisher information is in one-to-one correspondence with the monotonic degradation in the ability of an agent to retrodict the _initial state_ of the system (Thm. 9). 

Before doing so, we need some formalism. First of all, we present a way to simulate the Fisher scalar product \(\mathbb{J}_{f}^{-1}\big{|}_{\Phi_{t}(\pi)}\) at time \(t\) by the scalar product \(\mathbb{J}_{f^{\prime}}^{-1}\big{|}_{\pi}\) at time \(t=0\). This is obtained by the following rearrangement: \[K_{f,\Phi_{t}(\pi)}(\Phi_{t}(A),\Phi_{t}(B)) =\operatorname{Tr}\left[\Phi_{t}(A)\,\mathbb{J}_{f}^{-1}\big{|}_{\Phi_{t}(\pi)}[\Phi_{t}(B)]\right] \tag{82}\] \[=\operatorname{Tr}\left[A\,\mathbb{J}_{f^{\prime}}^{-1}\big{|}_{\pi}\left[\left(\mathbb{J}_{f^{\prime}}\big{|}_{\pi}\,\Phi_{t}^{\dagger}\,\mathbb{J}_{f}^{-1}\big{|}_{\Phi_{t}(\pi)}\right)\Phi_{t}(B)\right]\right]=\] (83) \[=\operatorname{Tr}\left[A\,\mathbb{J}_{f^{\prime}}^{-1}\big{|}_{\pi}[\widetilde{\Phi}_{t}\big{|}_{\pi}^{(f^{\prime},f)}\,\Phi_{t}(B)]\right]=K_{f^{\prime},\pi}(A,\widetilde{\Phi}_{t}\big{|}_{\pi}^{(f^{\prime},f)}\Phi_{t}(B))\,, \tag{84}\] where in the last line we have implicitly defined the _generalised Petz recovery map_ \(\widetilde{\Phi}_{t}\big{|}_{\pi}^{(f^{\prime},f)}\), generalising the famous recovery introduced by Petz [19], which is obtained for \(f(x)=f^{\prime}(x)=\sqrt{x}\). In this way, the evolution of any Fisher scalar product can be modelled without the need to actually evolve the state, at the cost of introducing a time-dependent vector \(\widetilde{\Phi}_{t}\big{|}_{\pi}^{(f^{\prime},f)}\Phi_{t}(B)\). We provide here a list of properties that \(\widetilde{\Phi}_{t}\big{|}_{\pi}^{(f^{\prime},f)}\) satisfies and we defer the proofs of these facts to Box 8 at the end of the section: 1. 
\(\widetilde{\Phi}_{t}\big{|}_{\pi}^{(f^{\prime},f)}\) reduces to the Bayes rule for classical dynamics, i.e., if \(\Phi_{t}\) is a stochastic map, and \(\pi\) a diagonal state, for any \(f\) and \(f^{\prime}\) one has that: \[\widetilde{\Phi}_{t}\big{|}_{\pi}^{(f^{\prime},f)}(\boldsymbol{\cdot})=\sum_{i,j}\;\frac{(\Phi_{t})_{j,i}\,\pi_{i}}{(\Phi_{t}(\pi))_{j}}\;\left|i\right\rangle\!\left\langle j\right|\,\boldsymbol{\cdot}\,\left|j\right\rangle\!\left\langle i\right|\;;\] (85)

2. if \(\pi\) is full-rank, \(\widetilde{\Phi}_{t}\big{|}_{\pi}^{(f^{\prime},f)}\) is trace preserving;

3. the operator \(\widetilde{\Phi}_{t}\big{|}_{\pi}^{(f^{\prime},f)}\Phi_{t}\) is positive, and self-adjoint with respect to \(\mathbb{J}_{f^{\prime}}^{-1}\big{|}_{\pi}\), meaning that: \[K_{f^{\prime},\pi}(A,\widetilde{\Phi}_{t}\big{|}_{\pi}^{(f^{\prime},f)}\Phi_{t}(B))=K_{f^{\prime},\pi}(\widetilde{\Phi}_{t}\big{|}_{\pi}^{(f^{\prime},f)}\Phi_{t}(A),B)\,;\] (86)

4. the state \(\pi\), called the _prior state_, is perfectly retrieved by \(\widetilde{\Phi}_{t}\big{|}_{\pi}^{(f^{\prime},f)}\Phi_{t}\), i.e.: \[\widetilde{\Phi}_{t}\big{|}_{\pi}^{(f^{\prime},f)}\Phi_{t}(\pi)=\pi\,;\] (87)

5. if \(f^{\prime}(x)\leq f(x)\) for every \(x\in\mathbb{R}^{+}\), then the spectrum of \(\widetilde{\Phi}_{t}\big{|}_{\pi}^{(f^{\prime},f)}\Phi_{t}\) is completely contained in \([0,1]\). Moreover, as can be seen from Eq. (87), the number one is always part of the spectrum;

6. the transformation associating to a map its generalised Petz recovery, which we denote by \(P_{(f^{\prime},f),\pi}(\Phi_{t}):=\widetilde{\Phi}_{t}\big{|}_{\pi}^{(f^{\prime},f)}\), can be reversed as: \[P_{(f,f^{\prime}),\Phi_{t}(\pi)}(P_{(f^{\prime},f),\pi}(\Phi_{t}))=\Phi_{t}\,;\] (88)

7. in the case of divisible evolutions \(\Phi_{t}=\Phi_{t,s}\circ\Phi_{s}\), one can express \(P_{(f^{\prime},f),\pi}(\Phi_{t})\) in terms of \(P_{(f^{\prime},f^{\prime\prime}),\pi}(\Phi_{s})\) and \(P_{(f^{\prime\prime},f),\Phi_{s}(\pi)}(\Phi_{t,s})\), for any \(f^{\prime\prime}\), as: \[P_{(f^{\prime},f),\pi}(\Phi_{t})=P_{(f^{\prime},f^{\prime\prime}),\pi}(\Phi_{s})\circ P_{(f^{\prime\prime},f),\Phi_{s}(\pi)}(\Phi_{t,s})\,.\] (89) Similarly to what happens for the adjoint, the generalised Petz recovery map composes from left to right.

Thanks to Eq. (85), one can consider \(\widetilde{\Phi}_{t}\big{|}_{\pi}^{(f^{\prime},f)}\) as some kind of generalisation of the Bayes rule to quantum systems. This interpretation can be corroborated in the following context: consider a state \(\pi+\delta\rho\), with \(\delta\rho\) a small perturbation, and evolve it according to \(\Phi_{t}\); at this point, \(\widetilde{\Phi}_{t}\big{|}_{\pi}^{(f^{\prime},f)}\) is applied to recover as much information about the initial state as possible. 

Figure 3: Depiction of the content of Thm. 9: consider a dynamics \(\Phi_{t}\) which is Markovian until some time \(t\). This implies that the Fisher distance between any two points on the state space contracts. If between time \(t\) and \(t+\mathrm{d}t\) we observe non-Markovianity through the Fisher distance, i.e., two points in the image of \(\Phi_{t}\) get farther apart, then retrodicting at time \(t+\mathrm{d}t\) gives a better result than at time \(t\) (we use the notation \(\delta\hat{\rho}_{t}:=\widetilde{\Phi}_{t}\big{|}_{\pi}^{(f^{\prime},f)}\Phi_{t}(\delta\rho)\)).
The quality of the retrieval can be quantified by the contrast function: \[H_{g(f^{\prime})}(\pi+\delta\rho||\widetilde{\Phi}_{t}|_{\pi}^{(f^{\prime},f)}\,\Phi_{t}(\pi+\delta\rho))=H_{g}(\pi+\delta\rho||\pi+\widetilde{\Phi}_{t}\big{|}_{\pi}^{(f^{\prime},f)}\,\Phi_{t}(\delta\rho))= \tag{90}\] \[\qquad=\frac{1}{2}\,\mathrm{Tr}\left[\left(\delta\rho-\widetilde{\Phi}_{t}\big{|}_{\pi}^{(f^{\prime},f)}\,\Phi_{t}(\delta\rho)\right)\mathbb{J}_{f^{\prime}}^{-1}\big{|}_{\pi}[\left(\delta\rho-\widetilde{\Phi}_{t}\big{|}_{\pi}^{(f^{\prime},f)}\,\Phi_{t}(\delta\rho)\right)]\right]+\mathcal{O}\left(|\delta\rho|^{3}\right), \tag{91}\] where we introduced the notation \(g(f^{\prime})\) to indicate the contrast function that locally expands to \(\mathbb{J}_{f^{\prime}}^{-1}\big{|}_{\pi}\) (i.e., \(g(f^{\prime})\) and \(f^{\prime}\) are related by Eq. (16)), and in the first line we used Eq. (87) to apply the operators to \(\delta\rho\) only. Then, the interpretation of \(\widetilde{\Phi}_{t}\big{|}_{\pi}^{(f^{\prime},f)}\) as a recovery map suggests that if \(\Phi_{t}\) is Markovian, the divergence above will increase with time: indeed, the ability of an agent to retrodict the initial state deteriorates more and more as time passes. Still, from a mathematical point of view, there is no a priori reason for this to be the case: in fact, \(H_{g}(\rho||\sigma)\) is guaranteed to be contractive only if the same channel is applied on both states, whereas in the equation above \(\widetilde{\Phi}_{t}\big{|}_{\pi}^{(f^{\prime},f)}\,\Phi_{t}\) is applied only on the right. Moreover, as we discuss at the end of the section, \(\widetilde{\Phi}_{t}\big{|}_{\pi}^{(f^{\prime},f)}\,\Phi_{t}\) might not even be a channel in general, further eroding any mathematical justification for the intuition discussed. Still, the following theorem bridges the gap between the intuitive interpretation of Eq. (90) and its actual mathematical form, showing that indeed \(H_{g(f^{\prime})}(\pi+\delta\rho||\widetilde{\Phi}_{t}|_{\pi}^{(f^{\prime},f)}\,\Phi_{t}(\pi+\delta\rho))\) monotonically increases for Markovian dynamics:

**Theorem 9**.: _Given a divisible dynamics \(\Phi_{t}\) and a prior state \(\pi\), define the generalised Petz map \(\widetilde{\Phi}_{t}\big{|}_{\pi}^{(f^{\prime},f)}\) according to Eq. (84). If \(f^{\prime}(x)\leq f(x)\) for every \(x\in\mathbb{R}^{+}\), then we have the equivalence:_ \[\forall\delta\rho\,\,\,\frac{\mathrm{d}}{\mathrm{d}t}\,H_{g(f^{\prime})}(\pi+\delta\rho||\widetilde{\Phi}_{t}\big{|}_{\pi}^{(f^{\prime},f)}\,\Phi_{t}(\pi+\delta\rho))>0\,\,\,\,\,\Longleftrightarrow\,\,\,\,\forall\delta\rho\,\,\,\,\frac{\mathrm{d}}{\mathrm{d}t}\mathrm{Tr}\left[\Phi_{t}(\delta\rho)\,\mathbb{J}_{f}^{-1}\big{|}_{\Phi_{t}(\pi)}[\Phi_{t}(\delta\rho)]\right]<0\,. \tag{92}\] _Since the Fisher information is monotonically contractive under Markovian dynamics, this implies that the contrast function on the left is monotonically increasing in the same regime. Moreover, any backflow in the Fisher information is mirrored in an increased ability to retrodict the initial state \(\pi+\delta\rho\) for some \(\delta\rho\)._

To the best of our knowledge, this result is the first that explicitly considers backflow of information in a single state at time \(t=0\): indeed, whereas here we compare the initial condition with our best guess about it, the usual approach to non-Markovianity is to compare the behaviour of two different states at time \(t\). There are two messages one should get from Thm. 
9: first, it legitimates the intuitive interpretation of \(\widetilde{\Phi}_{t}\big{|}_{\pi}^{(f^{\prime},f)}\) as a state retrieval map; second, it directly connects the contractivity properties of the Fisher information at time \(t\), which is a distinguishability measure, with the decrease in the ability of an agent to retrodict the initial state of the system.

Proof.: Starting from the expansion in Eq. (90), and ignoring corrections of the order \(\mathcal{O}\left(|\delta\rho|^{3}\right)\), we can reassemble the terms as: \[\frac{\mathrm{d}}{\mathrm{d}t}\,H_{g(f^{\prime})}(\pi+\delta\rho||\widetilde{\Phi}_{t}|_{\pi}^{(f^{\prime},f)}\,\Phi_{t}(\pi+\delta\rho))=\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\,\mathrm{Tr}\left[\left((\mathbb{I}-\widetilde{\Phi}_{t}\big{|}_{\pi}^{(f^{\prime},f)}\,\Phi_{t})(\delta\rho)\right)\mathbb{J}_{f^{\prime}}^{-1}\big{|}_{\pi}[\left((\mathbb{I}-\widetilde{\Phi}_{t}\big{|}_{\pi}^{(f^{\prime},f)}\,\Phi_{t})(\delta\rho)\right)]\right]= \tag{93}\] \[=\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\,\mathrm{Tr}\left[\delta\rho\,\,\mathbb{J}_{f^{\prime}}^{-1}\big{|}_{\pi}[((\mathbb{I}-\widetilde{\Phi}_{t}\big{|}_{\pi}^{(f^{\prime},f)}\,\Phi_{t})^{2}(\delta\rho))]\right]=-\mathrm{Tr}\left[\delta\rho\,\,\mathbb{J}_{f^{\prime}}^{-1}\big{|}_{\pi}[((\mathbb{I}-\widetilde{\Phi}_{t}\big{|}_{\pi}^{(f^{\prime},f)}\,\Phi_{t})\circ\left(\frac{\mathrm{d}}{\mathrm{d}t}\,\widetilde{\Phi}_{t}\big{|}_{\pi}^{(f^{\prime},f)}\,\Phi_{t}\right)(\delta\rho))]\right]\,, \tag{94}\] where the first equality is just a rewriting of Eq. (90); then we used the self-adjointness property 3 in Eq. (86) to group the superoperator \((\mathbb{I}-\widetilde{\Phi}_{t}\big{|}_{\pi}^{(f^{\prime},f)}\,\Phi_{t})\) on the right, and finally we carried out the time derivative. Now, it should be noticed that thanks to property 5 the spectrum of \((\mathbb{I}-\widetilde{\Phi}_{t}\big{|}_{\pi}^{(f^{\prime},f)}\,\Phi_{t})\) is also contained in \([0,1]\). On the other hand, \(-\frac{\mathrm{d}}{\mathrm{d}t}\,\widetilde{\Phi}_{t}\big{|}_{\pi}^{(f^{\prime},f)}\,\Phi_{t}\) has positive eigenvalues if and only if the Fisher information corresponding to \(f\) is contractive, as one can see from the standard rewriting: \[-\mathrm{Tr}\left[A\,\mathbb{J}_{f^{\prime}}^{-1}\big{|}_{\pi}\Big{[}\Big{(}\frac{\mathrm{d}}{\mathrm{d}t}\,\widetilde{\Phi}_{t}\big{|}_{\pi}^{(f^{\prime},f)}\,\Phi_{t}\Big{)}(A)\Big{]}\right]=-\frac{\mathrm{d}}{\mathrm{d}t}\mathrm{Tr}\left[\Phi_{t}(A)\,\,\mathbb{J}_{f}^{-1}\big{|}_{\Phi_{t}(\pi)}[\Phi_{t}(A)]\right]\,. \tag{95}\] Since the product of two positive operators has positive spectrum, this connection proves the right-to-left implication in Eq. (92). On the other hand, suppose that the Fisher information expands at some point. Then, there exists at least one negative eigenvalue of \(-\frac{\mathrm{d}}{\mathrm{d}t}\,\widetilde{\Phi}_{t}\big{|}_{\pi}^{(f^{\prime},f)}\,\Phi_{t}\), which we denote by \(\lambda\). Also, denote by \(\delta\tilde{\rho}\) the corresponding eigenoperator. Then, the left hand side of Eq. 
(92) reads: \[K_{f^{\prime},\pi}\left(\delta\tilde{\rho},(\mathbb{I}-\widetilde{\Phi}_{t}\big{|}_{\pi}^{(f^{\prime},f)}\,\Phi_{t})\circ\left(-\frac{\mathrm{d}}{\mathrm{d}t}\,\widetilde{\Phi}_{t}\big{|}_{\pi}^{(f^{\prime},f)}\,\Phi_{t}\right)(\delta\tilde{\rho})\right)=\lambda\,K_{f^{\prime},\pi}\left(\delta\tilde{\rho},(\mathbb{I}-\widetilde{\Phi}_{t}\big{|}_{\pi}^{(f^{\prime},f)}\,\Phi_{t})(\delta\tilde{\rho})\right)<0\,, \tag{96}\] where the last step follows from the fact that \(\lambda<0\) and \((\mathds{I}-\widetilde{\Phi}_{t}\big{|}_{\pi}^{(f^{\prime},f)}\,\Phi_{t})\) is a positive operator. This concludes the proof. 

As we mentioned, \(\widetilde{\Phi}_{t}\big{|}_{\pi}^{(f^{\prime},f)}\) might not even be a CP map for general evolutions: for example, by choosing \(\Phi=\mathds{I}\), the requirement that \(\widetilde{\mathds{I}}\big{|}_{\pi}^{(f^{\prime},f)}=\mathbb{J}_{f^{\prime}}\big{|}_{\pi}\,\mathbb{J}_{f}^{-1}\big{|}_{\pi}\) is CP is equivalent to constraining both \(\mathbb{J}_{f^{\prime}}\big{|}_{\pi}\) and \(\mathbb{J}_{f}^{-1}\big{|}_{\pi}\) to be CP. The conditions that are needed to ensure this are deferred to Sec. IV.3, but we can already use Thm. 14 to give the most general expression of a CP Petz recovery \(\widetilde{\Phi}_{t}\big{|}_{\pi}^{(f^{\prime},f)}\): \[\widetilde{\Phi}_{t}\big{|}_{\pi}^{(f^{\prime},f)}=\int_{-\infty}^{\infty}\mathrm{d}\nu_{f^{\prime}}^{+}(u)\int_{-\infty}^{\infty}\mathrm{d}\nu_{f}^{-}(s)\;\mathcal{V}_{\pi}\left(\tfrac{1}{2}-iu\right)\circ\Phi_{t}^{\dagger}\circ\mathcal{V}_{\Phi_{t}(\pi)}\left(is-\tfrac{1}{2}\right)\,, \tag{97}\] where \(\mathrm{d}\nu_{f^{\prime}}^{+}\) and \(\mathrm{d}\nu_{f}^{-}\) are two symmetric probability distributions on the real line, and we introduced the CP map \(\mathcal{V}_{\pi}(z)[A]:=\pi^{z}\,A\,(\pi^{z})^{\dagger}\). This expression is somewhat reminiscent of the one found in [20] for the universal recovery map. Then, it directly follows from Thm. 9 that:

**Corollary 9.1**.: _Suppose \(f\) and \(f^{\prime}\) are chosen such that \(\mathbb{J}_{f}^{-1}\big{|}_{\pi}\) and \(\mathbb{J}_{f^{\prime}}\big{|}_{\pi}\) are both CP. Then, there exists a channel \(\widetilde{\Phi}_{t}\big{|}_{\pi}^{(f^{\prime},f)}\) of the form in Eq. (97) that perfectly retrieves the prior state (i.e., \(\widetilde{\Phi}_{t}\big{|}_{\pi}^{(f^{\prime},f)}\Phi_{t}(\pi)=\pi\)), reduces to the Bayes rule for classical dynamics, and such that Eq. (92) is satisfied._

The fact that \(\widetilde{\Phi}_{t}\big{|}_{\pi}^{(f^{\prime},f)}\) can be made CP means that there exists a physical procedure that an agent can carry out to retrodict the initial state of the system after the noisy operation \(\Phi\) is applied. Moreover, Corollary 9.1 gives a recipe to exploit non-Markovianity for error correction, showing that the backflow of information is indeed beneficial.

**Box 8.** Proof of the properties of the generalised Petz recovery map

We present here the derivations of the properties that generalised Petz recovery maps satisfy. First, regarding condition 1, i.e., the fact that \(\widetilde{\Phi}_{t}\big{|}_{\pi}^{(f^{\prime},f)}\) reduces to the Bayes rule for classical dynamics, this can be seen as follows: a channel implementing a classical evolution takes the form \(\Phi_{t}(\cdot)=\sum_{i,j}\left(\Phi_{t}\right)_{i,j}\,\,|i\rangle\langle j|\,\cdot\,|j\rangle\langle i|\), where \(\{\,|i\rangle\,\}\) form an orthonormal basis. 
Moreover, if we restrict to diagonal states, \(\mathbb{J}_{f}^{-1}\big{|}_{\Phi_{t}(\pi)}(\cdot)=\sum_{i}\left(\Phi_{t}(\pi)_{i}\right)^{-1}\,|i\rangle\langle i|\cdot\,|i\rangle\langle i|\) and \(\mathbb{J}_{f^{\prime}}\big{|}_{\pi}(\cdot)=\sum_{i}\,\pi_{i}\,|i\rangle\langle i|\cdot\,|i\rangle\langle i|\) irrespective of \(f\) and \(f^{\prime}\). Then, putting these elements together one obtains Eq. (85). This can be interpreted as the Bayes rule by thinking of \(\pi_{i}\) as the probability of obtaining the microstate \(i\), \((\Phi_{t}(\pi))_{j}\) as the probability to be in the microstate \(j\) after the evolution \(\Phi_{t}\) and, finally, \((\Phi_{t})_{j,i}\) as the conditional probability of the transition \(i\to j\). 

Condition 2 tells us that \(\widetilde{\Phi}_{t}\big{|}_{\pi}^{(f^{\prime},f)}\) is trace preserving if \(\pi\) is full-rank (which we always assume to be the case in this work). This can be readily verified by applying its adjoint on the identity matrix: \[(\widetilde{\Phi}_{t}\big{|}_{\pi}^{(f^{\prime},f)})^{\dagger}[\mathbb{I}]=\mathbb{J}_{f}^{-1}\big{|}_{\Phi_{t}(\pi)}\,\Phi_{t}\,\mathbb{J}_{f^{\prime}}\big{|}_{\pi}[\mathbb{I}]=\mathbb{J}_{f}^{-1}\big{|}_{\Phi_{t}(\pi)}[\Phi_{t}(\pi)]=\mathbb{I}\;, \tag{98}\] where we used the fact that for every map \(\Phi\) the condition of being trace preserving can be written as \(\Phi^{\dagger}(\mathbb{I})=\mathbb{I}\). 

Let us now pass to condition 3. This gives a first characterisation of the spectral properties of \(\widetilde{\Phi}_{t}\big{|}_{\pi}^{(f^{\prime},f)}\Phi_{t}\). First, it should be noticed that this map is self-adjoint with respect to \(\mathbb{J}_{f^{\prime}}^{-1}\big{|}_{\pi}\), as can be readily verified: \[\mathrm{Tr}\left[A\,\mathbb{J}_{f^{\prime}}^{-1}\big{|}_{\pi}[\widetilde{\Phi}_{t}\big{|}_{\pi}^{(f^{\prime},f)}\,\Phi_{t}(B)]\right]=\mathrm{Tr}\left[\Phi_{t}(A)\,\mathbb{J}_{f}^{-1}\big{|}_{\Phi_{t}(\pi)}[\Phi_{t}(B)]\right]= \tag{99}\] \[=\mathrm{Tr}\left[\left(\mathbb{J}_{f^{\prime}}\big{|}_{\pi}\,\Phi_{t}^{\dagger}\,\mathbb{J}_{f}^{-1}\big{|}_{\Phi_{t}(\pi)}\right)\Phi_{t}(A)\,\mathbb{J}_{f^{\prime}}^{-1}\big{|}_{\pi}[B]\right]=\mathrm{Tr}\left[\widetilde{\Phi}_{t}\big{|}_{\pi}^{(f^{\prime},f)}\,\Phi_{t}(A)\,\mathbb{J}_{f^{\prime}}^{-1}\big{|}_{\pi}[B]\right]\,, \tag{100}\] where we repeatedly used the self-adjointness of the Fisher superoperators to move them from right to left. This shows that \(\widetilde{\Phi}_{t}\big{|}_{\pi}^{(f^{\prime},f)}\,\Phi_{t}\) is self-adjoint, so its spectrum is real. Moreover, it is also positive since it can be rewritten as: \[\widetilde{\Phi}_{t}\big{|}_{\pi}^{(f^{\prime},f)}\,\Phi_{t}=\mathbb{J}_{f^{\prime}}^{\frac{1}{2}}\big{|}_{\pi}\left(\mathbb{J}_{f}^{-\frac{1}{2}}\big{|}_{\Phi_{t}(\pi)}\Phi_{t}\,\mathbb{J}_{f^{\prime}}^{\frac{1}{2}}\big{|}_{\pi}\right)^{\dagger}\left(\mathbb{J}_{f}^{-\frac{1}{2}}\big{|}_{\Phi_{t}(\pi)}\Phi_{t}\,\mathbb{J}_{f^{\prime}}^{\frac{1}{2}}\big{|}_{\pi}\right)\mathbb{J}_{f^{\prime}}^{-\frac{1}{2}}\big{|}_{\pi}\,. \tag{101}\] This proves the claim, as similarity transformations preserve the spectrum. 
It is also easy to see that \(\widetilde{\Phi}_{t}\big{|}_{\pi}^{(f^{\prime},f)}\,\Phi_{t}\) retrieves the prior (condition 4): \[\widetilde{\Phi}_{t}\big{|}_{\pi}^{(f^{\prime},f)}\,\Phi_{t}(\pi)=\mathbb{J}_{f^{\prime}}\big{|}_{\pi}\,\Phi_{t}^{\dagger}\,\mathbb{J}_{f}^{-1}\big{|}_{\Phi_{t}(\pi)}[\Phi_{t}(\pi)]=\mathbb{J}_{f^{\prime}}\big{|}_{\pi}[\Phi_{t}^{\dagger}(\mathbb{I})]=\mathbb{J}_{f^{\prime}}\big{|}_{\pi}[\mathbb{I}]=\pi\,. \tag{102}\] This equation shows that the evolution of the state \(\pi\) can be completely undone by applying \(\widetilde{\Phi}_{t}\big{|}_{\pi}^{(f^{\prime},f)}\). Moreover, the spectrum of \(\widetilde{\Phi}_{t}\big{|}_{\pi}^{(f^{\prime},f)}\,\Phi_{t}\) contains \(1\), and the associated eigenoperator is \(\pi\). 

We can now show that, with some restrictions on \(f\) and \(f^{\prime}\), condition 5 is satisfied, namely the fact that the spectrum is actually contained in the interval \([0,1]\). Indeed, this follows from the chain of inequalities: \[\frac{\operatorname{Tr}\left[A\,\mathbb{J}_{f^{\prime}}^{-1}\big{|}_{\pi}[\widetilde{\Phi}_{t}\big{|}_{\pi}^{(f^{\prime},f)}\,\Phi_{t}(A)]\right]}{\operatorname{Tr}\left[A\,\mathbb{J}_{f^{\prime}}^{-1}\big{|}_{\pi}[A]\right]}\,=\,\frac{\operatorname{Tr}\left[\Phi_{t}(A)\,\mathbb{J}_{f}^{-1}\big{|}_{\Phi_{t}(\pi)}[\Phi_{t}(A)]\right]}{\operatorname{Tr}\left[A\,\mathbb{J}_{f^{\prime}}^{-1}\big{|}_{\pi}[A]\right]}\,\leq\,\frac{\operatorname{Tr}\left[A\,\mathbb{J}_{f}^{-1}\big{|}_{\pi}[A]\right]}{\operatorname{Tr}\left[A\,\mathbb{J}_{f^{\prime}}^{-1}\big{|}_{\pi}[A]\right]}\,\leq\,\frac{\operatorname{Tr}\left[A\,\mathbb{J}_{f^{\prime}}^{-1}\big{|}_{\pi}[A]\right]}{\operatorname{Tr}\left[A\,\mathbb{J}_{f^{\prime}}^{-1}\big{|}_{\pi}[A]\right]}=1\,, \tag{103}\] where the first inequality follows from the contractivity of the Fisher information, while the second only holds if \(\mathbb{J}_{f}^{-1}\big{|}_{\pi}\leq\mathbb{J}_{f^{\prime}}^{-1}\big{|}_{\pi}\), corresponding to the case in which \(f^{\prime}(x)\leq f(x)\) for every \(x\in\mathbb{R}^{+}\) (see Eq. (157)). 

Define now the map \(P_{(f^{\prime},f),\pi}(\Phi_{t}):=\widetilde{\Phi}_{t}\big{|}_{\pi}^{(f^{\prime},f)}\). Condition 6 tells us that \(P_{(f,f^{\prime}),\Phi_{t}(\pi)}(P_{(f^{\prime},f),\pi}(\Phi_{t}))=\Phi_{t}\). This can be verified by a direct computation: \[P_{(f,f^{\prime}),\Phi_{t}(\pi)}(P_{(f^{\prime},f),\pi}(\Phi_{t})) =\mathbb{J}_{f}\big{|}_{\Phi_{t}(\pi)}(\widetilde{\Phi}_{t}\big{|}_{\pi}^{(f^{\prime},f)})^{\dagger}\,\mathbb{J}_{f^{\prime}}^{-1}\big{|}_{\widetilde{\Phi}_{t}|_{\pi}^{(f^{\prime},f)}(\Phi_{t}(\pi))}= \tag{104}\] \[=\mathbb{J}_{f}\big{|}_{\Phi_{t}(\pi)}\left(\mathbb{J}_{f}^{-1}\big{|}_{\Phi_{t}(\pi)}\,\Phi_{t}\,\mathbb{J}_{f^{\prime}}\big{|}_{\pi}\right)\mathbb{J}_{f^{\prime}}^{-1}\big{|}_{\pi}=\Phi_{t}\,, \tag{105}\] where in the second line we used Eq. (87) to evaluate the base-point \(\widetilde{\Phi}_{t}\big{|}_{\pi}^{(f^{\prime},f)}(\Phi_{t}(\pi))=\pi\). Interestingly, this makes the transformation \(P_{(f,f),\pi}\) involutive, since it can be reversed by just changing the base-point, i.e., through \(P_{(f,f),\Phi_{t}(\pi)}\). This is one of the key properties of the classical Bayes retrieval (as discussed in [21]). Moreover, if one also requires \(P_{(f,f),\pi}(\Phi_{t})\) to be CP for general maps, this constrains the defining function to \(f_{SQ}(x)=\sqrt{x}\), as it is the only case in which both \(\mathbb{J}_{f_{SQ}}\big{|}_{\pi}\) and \(\mathbb{J}_{f_{SQ}}^{-1}\big{|}_{\pi}\) are CP for any state. This gives a way to single out the usual Petz recovery map [22]. 
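Conditions 2, 4 and 5 are also easy to verify numerically. The following minimal sketch (not from the paper) assumes the standard Petz case \(f=f^{\prime}=\sqrt{x}\), for which \(\mathbb{J}_{f}\big{|}_{\pi}[A]=\sqrt{\pi}\,A\,\sqrt{\pi}\), and uses a randomly generated qubit channel and prior; all names are illustrative:

```python
import numpy as np
from scipy.linalg import sqrtm, inv

rng = np.random.default_rng(1)
d = 2
def dag(A): return A.conj().T

# Random CPTP map via normalised Kraus operators: sum_k K^dag K = I
A = [rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)) for _ in range(3)]
Minv_sqrt = inv(sqrtm(sum(dag(a) @ a for a in A)))
K = [a @ Minv_sqrt for a in A]
Phi     = lambda X: sum(k @ X @ dag(k) for k in K)
Phi_adj = lambda X: sum(dag(k) @ X @ k for k in K)

# Full-rank prior pi and the Petz recovery map (f = f' = sqrt)
G = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
pi = G @ dag(G); pi /= np.trace(pi)
s_pi, s_out = sqrtm(pi), inv(sqrtm(Phi(pi)))
Petz = lambda X: s_pi @ Phi_adj(s_out @ X @ s_out) @ s_pi

# Condition 4: the prior is perfectly retrieved
print(np.allclose(Petz(Phi(pi)), pi))                         # True

# Condition 2: trace preservation on a random input
X = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
print(np.isclose(np.trace(Petz(X)), np.trace(X)))             # True

# Condition 5: spectrum of Petz o Phi is real, lies in [0, 1], and contains 1
S = np.zeros((d * d, d * d), dtype=complex)
for n in range(d * d):
    E = np.zeros(d * d); E[n] = 1.0
    S[:, n] = Petz(Phi(E.reshape(d, d))).reshape(-1)
ev = np.linalg.eigvals(S)
print(np.allclose(ev.imag, 0, atol=1e-6),
      ev.real.min() >= -1e-8,
      np.isclose(ev.real.max(), 1))                           # True True True
```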
Finally, condition 7 explains the relation between the generalised Petz recovery and Markovian evolutions. In particular, for divisible dynamics one has: \[P_{(f^{\prime},f),\pi}(\Phi_{t}) =\mathbb{J}_{f^{\prime}}\big{|}_{\pi}\left(\Phi_{t,s}\circ\Phi_{s}\right)^{\dagger}\,\mathbb{J}_{f}^{-1}\big{|}_{\Phi_{t}(\pi)}=\left(\mathbb{J}_{f^{\prime}}\big{|}_{\pi}\,\Phi_{s}^{\dagger}\,\mathbb{J}_{f^{\prime\prime}}^{-1}\big{|}_{\Phi_{s}(\pi)}\right)\left(\mathbb{J}_{f^{\prime\prime}}\big{|}_{\Phi_{s}(\pi)}\,\Phi_{t,s}^{\dagger}\,\mathbb{J}_{f}^{-1}\big{|}_{\Phi_{t,s}(\Phi_{s}(\pi))}\right)= \tag{106}\] \[=P_{(f^{\prime},f^{\prime\prime}),\pi}(\Phi_{s})\circ P_{(f^{\prime\prime},f),\Phi_{s}(\pi)}(\Phi_{t,s})\,, \tag{107}\] proving the claim. It should be noticed that in the case of \(f\equiv f^{\prime}\) the composition becomes \(P_{(f,f^{\prime\prime}),\pi}(\Phi_{s})\circ P_{(f^{\prime\prime},f),\Phi_{s}(\pi)}(\Phi_{t,s})\) for any \(f^{\prime\prime}\), and in particular if one chooses \(f^{\prime\prime}=f\) it shows that \(P_{(f,f),\pi}\) is compatible with the structure of divisible semigroups in the sense that \(P_{(f,f),\pi}(\Phi_{t})=P_{(f,f),\pi}(\Phi_{s})\circ P_{(f,f),\Phi_{s}(\pi)}(\Phi_{t,s})\).

**Box 9.** Contrast functions and universal recovery maps

We discuss here a possible generalisation of the result in [20], namely that the contraction of the relative entropy (defined in Eq. (263)) can be bounded as: \[S(\rho||\sigma)-S(\Phi(\rho)||\Phi(\sigma))\geq-\,\int_{-\infty}^{\infty}\mathrm{d}t\,\,\,\beta(t)\log F^{2}(\rho,\widetilde{\Phi}_{P,\sigma,t}\Phi(\rho))\,, \tag{108}\] where \(\beta(t)\) is a symmetric probability distribution (we refer to [20] for its particular expression), \(\widetilde{\Phi}_{P,\sigma,t}:=\mathcal{V}_{\sigma}\left(\frac{1}{2}-it\right)\circ\Phi^{\dagger}\circ\mathcal{V}_{\Phi(\sigma)}\left(it-\frac{1}{2}\right)\) is called the rotated Petz recovery map, and we introduced the squared fidelity \(F^{2}(\rho,\sigma):=\left(\operatorname{Tr}\left[\sqrt{\sqrt{\rho}\,\sigma\sqrt{\rho}}\,\right]\right)^{2}\). The interest in this bound is twofold: on the one hand it gives a lower bound on the amount of information lost when one applies the channel \(\Phi\); on the other, it implies that if \(S(\rho||\sigma)=S(\Phi(\rho)||\Phi(\sigma))\), then \(\widetilde{\Phi}_{P,\sigma,t}\Phi(\rho)=\rho\), i.e., \(\widetilde{\Phi}_{P,\sigma,t}\) perfectly recovers every state for which the dynamics does not decrease the relative entropy (with respect to \(\sigma\)). This condition is what one refers to as universality in this context. This can be proven as follows: first, it should be noticed that for any two states the squared fidelity \(F^{2}(\rho,\sigma)\) is in \([0,1]\), so its logarithm is non-positive. Hence, in this case we have the chain of inequalities: \[0=S(\rho||\sigma)-S(\Phi(\rho)||\Phi(\sigma))\geq-\,\int_{-\infty}^{\infty}\mathrm{d}t\,\,\,\beta(t)\log F^{2}(\rho,\widetilde{\Phi}_{P,\sigma,t}\Phi(\rho))\geq 0\,, \tag{109}\] which implies that \(F^{2}(\rho,\widetilde{\Phi}_{P,\sigma,t}\Phi(\rho))=1\) for all \(t\). This is only possible if \(\widetilde{\Phi}_{P,\sigma,t}\Phi(\rho)=\rho\), proving the claim. 

One possibility of generalising Eq. (108) is to try to find a similar result holding for the general family of contrast functions defined in Eq. (4). Unfortunately, to the best of our knowledge this problem is still open. Nevertheless, if one focuses on \(\chi_{f}^{2}\)-divergences alone (see the definition in Eq. 
(40)), we can prove that:

**Theorem 10**.: _Given a CPTP map \(\Phi\) and any two standard monotone functions \(f\) and \(f^{\prime}\) such that \(f^{\prime}(x)\leq f(x)\) for every \(x\in\mathbb{R}^{+}\), the following bound holds:_ \[\chi_{f^{\prime}}^{2}(\rho||\sigma)-\chi_{f}^{2}(\Phi(\rho)||\Phi(\sigma)) =\operatorname{Tr}\left[\Delta\,\mathbb{J}_{f^{\prime}}^{-1}\big{|}_{\rho}\left[(\mathds{I}-\widetilde{\Phi}_{(f^{\prime},f),\rho}\,\Phi)(\Delta)\right]\right]\geq \tag{110}\] \[\geq\mathcal{F}_{f^{\prime},\rho}\left((\mathds{I}-\widetilde{\Phi}_{(f^{\prime},f),\rho}\,\Phi)(\Delta)\right)\geq\Big{|}\Big{|}(\mathds{I}-\widetilde{\Phi}_{(f^{\prime},f),\rho}\,\Phi)(\Delta)\Big{|}\Big{|}_{1}^{2}\,, \tag{111}\] _where we used the abbreviation \(\Delta=(\rho-\sigma)\) and introduced the trace norm \(||A||_{1}=\operatorname{Tr}\left[|A|\right]\)._

There are a number of remarks to be made: first, it should be noticed that the relative entropy cannot be expressed in terms of \(\chi_{f}^{2}\)-divergences, so the result in Thm. 10 is not a direct generalisation of Eq. (108). Nonetheless, it is a first step in this direction. Moreover, one should also notice the crucial role of the map \((\mathds{I}-\widetilde{\Phi}_{(f^{\prime},f),\rho}\,\Phi)\): this was the key ingredient in proving Thm. 9 and it is now again at the centre of the above result. This further corroborates the interpretation of \(\widetilde{\Phi}_{(f^{\prime},f),\rho}\) as a retrieval map: indeed, the quality of the retrodiction can be assessed by comparing \(\widetilde{\Phi}_{(f^{\prime},f),\rho}\,\Phi\) (the map forward and backwards) with \(\mathds{I}\) (not doing anything). The fact that the operator \((\mathds{I}-\widetilde{\Phi}_{(f^{\prime},f),\rho}\,\Phi)\) naturally appears in many different constructions shows that this interpretation is well justified. Finally, we also point out that a result similar to Thm. 10 was found in [23] (Lemma 4.9 therein).

Proof.: The first equality in Eq. (110) directly follows from the definition of the generalised Petz recovery map in Eq. (84): \[\chi_{f^{\prime}}^{2}(\rho||\sigma)-\chi_{f}^{2}(\Phi(\rho)||\Phi(\sigma)) =\operatorname{Tr}\left[\Delta\,\mathbb{J}_{f^{\prime}}^{-1}\big{|}_{\rho}[\Delta]\right]-\operatorname{Tr}\left[\Phi(\Delta)\,\mathbb{J}_{f}^{-1}\big{|}_{\Phi(\rho)}[\Phi(\Delta)]\right]= \tag{112}\] \[=\operatorname{Tr}\left[\Delta\,\mathbb{J}_{f^{\prime}}^{-1}\big{|}_{\rho}[\Delta]\right]-\operatorname{Tr}\left[\Delta\,\mathbb{J}_{f^{\prime}}^{-1}\big{|}_{\rho}[\widetilde{\Phi}_{(f^{\prime},f),\rho}\Phi(\Delta)]\right]=\] (113) \[=\operatorname{Tr}\left[\Delta\,\mathbb{J}_{f^{\prime}}^{-1}\big{|}_{\rho}\left[(\mathds{I}-\widetilde{\Phi}_{(f^{\prime},f),\rho}\,\Phi)(\Delta)\right]\right]\,, \tag{114}\] where we used the same definition of \(\Delta\) as in the theorem. At this point we can use property 5 once more, which implies that the spectrum of \((\mathds{I}-\widetilde{\Phi}_{(f^{\prime},f),\rho}\,\Phi)\) is in \([0,1]\) for \(f\) and \(f^{\prime}\) as in the statement of the theorem. But from this it follows that \((\mathds{I}-\widetilde{\Phi}_{(f^{\prime},f),\rho}\,\Phi)\geq(\mathds{I}-\widetilde{\Phi}_{(f^{\prime},f),\rho}\,\Phi)^{2}\), which directly implies the first inequality in Eq. (111). Finally, since for any \(f\) it holds that \(\mathcal{F}_{f,\rho}(\delta\rho)\geq\mathcal{F}_{B,\rho}(\delta\rho)\geq||\delta\rho||_{1}^{2}\) (see Sec. V.1, where \(\mathcal{F}_{B,\rho}(\delta\rho)\) is defined), the last inequality also follows. 
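For concreteness, the chain of inequalities in Thm. 10 can be checked numerically in the standard Petz case \(f=f^{\prime}=\sqrt{x}\), where \(\mathbb{J}_{f}^{-1}\big{|}_{\rho}[A]=\rho^{-1/2}A\,\rho^{-1/2}\). The sketch below (not from the paper; states and channel are randomly generated, and all names are illustrative) verifies both inequalities of Eq. (111):

```python
import numpy as np
from scipy.linalg import sqrtm, inv

rng = np.random.default_rng(2)
d = 2
def dag(A): return A.conj().T

def rand_state():
    G = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    s = G @ dag(G); return s / np.trace(s)

# Random CPTP map via normalised Kraus operators
A = [rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)) for _ in range(3)]
Mi = inv(sqrtm(sum(dag(a) @ a for a in A)))
K = [a @ Mi for a in A]
Phi     = lambda X: sum(k @ X @ dag(k) for k in K)
Phi_adj = lambda X: sum(dag(k) @ X @ k for k in K)

rho, sigma = rand_state(), rand_state()
Delta = rho - sigma

def chi2(base, D):
    # chi^2 divergence for f(x) = sqrt(x):  Tr[ D base^{-1/2} D base^{-1/2} ]
    b = inv(sqrtm(base))
    return np.trace(D @ b @ D @ b).real

# Petz recovery map with prior rho
s_r, s_o = sqrtm(rho), inv(sqrtm(Phi(rho)))
Petz = lambda X: s_r @ Phi_adj(s_o @ X @ s_o) @ s_r

gap    = chi2(rho, Delta) - chi2(Phi(rho), Phi(Delta))
X      = Delta - Petz(Phi(Delta))                   # (1 - Petz o Phi)(Delta)
fisher = chi2(rho, X)                               # Fisher quadratic form at rho
tnorm2 = np.abs(np.linalg.eigvalsh(X)).sum() ** 2   # ||X||_1^2 (X is Hermitian)

print(gap >= fisher - 1e-10, fisher >= tnorm2 - 1e-10)   # True True
```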
We are now ready to generalise the result in Eq. (109):

**Corollary 10.1**.: _Suppose that \(\chi_{f}^{2}(\rho||\sigma)=\chi_{f}^{2}(\Phi(\rho)||\Phi(\sigma))\) for some standard monotone function \(f\). Then, the map \(\widetilde{\Phi}_{(f,f),\rho}\) satisfies:_
\[\widetilde{\Phi}_{(f,f),\rho}\,\Phi(\rho)=\rho\qquad\wedge\qquad\widetilde{\Phi}_{(f,f),\rho}\,\Phi(\sigma)=\sigma\,. \tag{115}\]
_Moreover, if \(\chi_{f}^{2}(\rho||\sigma)=\chi_{f}^{2}(\Phi(\rho)||\Phi(\sigma))\) for \(f_{SQ}(x)=\sqrt{x}\), then the equality holds also for all other \(\chi_{f}^{2}\)-divergences._

Proof.: From Thm. 10 it directly follows that:
\[0=\chi_{f}^{2}(\rho||\sigma)-\chi_{f}^{2}(\Phi(\rho)||\Phi(\sigma))\geq\mathcal{F}_{f,\rho}\left((\mathds{I}-\widetilde{\Phi}_{(f,f),\rho}\,\Phi)(\Delta)\right)\geq 0\,, \tag{116}\]
which implies \(\mathcal{F}_{f,\rho}\left((\mathds{I}-\widetilde{\Phi}_{(f,f),\rho}\,\Phi)(\Delta)\right)=0\). Since the Fisher information arises from a non-degenerate scalar product, this means that \((\mathds{I}-\widetilde{\Phi}_{(f,f),\rho}\,\Phi)(\Delta)=0\implies\widetilde{\Phi}_{(f,f),\rho}\,\Phi(\Delta)=\Delta\). It follows from property 4 that \(\widetilde{\Phi}_{(f,f),\rho}\,\Phi(\rho)=\rho\). Hence, the only way for \(\widetilde{\Phi}_{(f,f),\rho}\,\Phi\) to retrieve \(\Delta=(\rho-\sigma)\) is that \(\widetilde{\Phi}_{(f,f),\rho}\,\Phi(\sigma)=\sigma\), proving the first part of the claim. As it was mentioned in Box 8, the case \(f_{SQ}(x)=\sqrt{x}\) is the only one in which \(\widetilde{\Phi}_{(f_{SQ},f_{SQ}),\rho}\) is CP in general (in the following we will denote this quantity by \(\widetilde{\Phi}_{P,\rho}\)). Then, if \(\chi^{2}_{f_{SQ}}(\rho||\sigma)=\chi^{2}_{f_{SQ}}(\Phi(\rho)||\Phi(\sigma))\), this implies the existence of a channel recovering both \(\rho\) and \(\sigma\). Then, by using the contractivity of \(\chi^{2}_{f}\)-divergences it follows that:
\[\chi^{2}_{f}(\rho||\sigma)\geq\chi^{2}_{f}(\Phi(\rho)||\Phi(\sigma))\geq\chi^{2}_{f}(\widetilde{\Phi}_{P,\rho}\Phi(\rho)||\widetilde{\Phi}_{P,\rho}\Phi(\sigma))=\chi^{2}_{f}(\rho||\sigma)\,. \tag{117}\]
This directly implies that \(\chi^{2}_{f}(\rho||\sigma)=\chi^{2}_{f}(\Phi(\rho)||\Phi(\sigma))\).

The proof above shows that the exceptionality of the square root divergence derives from the fact that this is the only case in which \(\widetilde{\Phi}_{(f,f),\rho}\) is a channel. Moreover, allowing the two defining functions \(f\) and \(f^{\prime}\) to be different does not help in general, as the following corollary shows:

**Corollary 10.2**.: _Suppose that \(\chi^{2}_{f}(\rho||\sigma)=\chi^{2}_{f}(\Phi(\rho)||\Phi(\sigma))\) for some standard monotone function \(f\). Then, choosing \(f^{\prime}\neq f\), the map \(\widetilde{\Phi}_{(f^{\prime},f),\rho}\) satisfies:_
\[\widetilde{\Phi}_{(f^{\prime},f),\rho}\,\Phi(\rho)=\rho\qquad\wedge\qquad\widetilde{\Phi}_{(f^{\prime},f),\rho}\,\Phi(\sigma)=\sigma\,, \tag{118}\]
_if and only if \([\rho,\sigma]=0\). In this case, the equality for any \(\chi^{2}_{f}\)-divergence such that \(\mathbb{J}^{-1}_{f}\) is CP implies the equality for all other \(\chi^{2}_{f}\)-divergences._

Proof.: It should be noticed that the requirement \(\widetilde{\Phi}_{(f^{\prime},f),\rho}\,\Phi(\rho)=\rho\) directly follows from property 4, so it can be imposed without problems. Let us now focus on the second part of Eq.
(118), namely:
\[\widetilde{\Phi}_{(f^{\prime},f),\rho}\,\Phi(\sigma)=\mathbb{J}_{f^{\prime}}\big{|}_{\rho}\,\Phi^{\dagger}\,\mathbb{J}^{-1}_{f}\big{|}_{\Phi(\rho)}\,\Phi(\sigma)=\left(\mathbb{J}_{f^{\prime}}\big{|}_{\rho}\mathbb{J}^{-1}_{f}\big{|}_{\rho}\right)\left(\mathbb{J}_{f}\big{|}_{\rho}\,\Phi^{\dagger}\,\mathbb{J}^{-1}_{f}\big{|}_{\Phi(\rho)}\right)\,\Phi(\sigma)= \tag{119}\]
\[=\left(\mathbb{J}_{f^{\prime}}\big{|}_{\rho}\mathbb{J}^{-1}_{f}\big{|}_{\rho}\right)\widetilde{\Phi}_{(f,f),\rho}\,\Phi(\sigma)=\mathbb{J}_{f^{\prime}}\big{|}_{\rho}\mathbb{J}^{-1}_{f}\big{|}_{\rho}[\sigma]\,, \tag{120}\]
where in the second line we used Corollary 10.1 to substitute \(\widetilde{\Phi}_{(f,f),\rho}\,\Phi(\sigma)=\sigma\). Now, if \(f\neq f^{\prime}\), \(\mathbb{J}_{f^{\prime}}\big{|}_{\rho}\mathbb{J}^{-1}_{f}\big{|}_{\rho}[\sigma]=\sigma\) if and only if \([\rho,\sigma]=0\), as can be verified in coordinates (see Sec. IV). Regarding the second part of the Corollary, suppose that there exists a standard monotone \(f\) such that \(\chi^{2}_{f}(\rho||\sigma)=\chi^{2}_{f}(\Phi(\rho)||\Phi(\sigma))\) and \(\mathbb{J}^{-1}_{f}\) is CP. Then, by choosing any \(f^{\prime}\) such that \(\mathbb{J}_{f^{\prime}}\) is CP, \(\widetilde{\Phi}_{(f^{\prime},f),\rho}\) is a quantum channel. Moreover, if \([\rho,\sigma]=0\), it follows from Eq. (120) that \(\widetilde{\Phi}_{(f^{\prime},f),\rho}\,\Phi(\sigma)=\sigma\). Thus, for any \(\chi^{2}_{\tilde{f}}\)-divergence it holds that:
\[\chi^{2}_{\tilde{f}}(\rho||\sigma)\geq\chi^{2}_{\tilde{f}}(\Phi(\rho)||\Phi(\sigma))\geq\chi^{2}_{\tilde{f}}(\widetilde{\Phi}_{(f^{\prime},f),\rho}\Phi(\rho)||\widetilde{\Phi}_{(f^{\prime},f),\rho}\Phi(\sigma))=\chi^{2}_{\tilde{f}}(\rho||\sigma)\,, \tag{121}\]
proving the claim.

In the result above it should be noticed that even if one needs the commutativity of \(\rho\) and \(\sigma\), their evolved versions \(\Phi(\rho)\) and \(\Phi(\sigma)\) can be general. Still, it should be noticed that if one requires \(\widetilde{\Phi}_{(f^{\prime},f),\rho}\) to be a channel, the only unconstrained recovery result of the form in Eq. (118) is given by the Petz recovery map, i.e., for \(f(x)=f^{\prime}(x)=\sqrt{x}\). Finally, we provide yet another way of singling out the Petz recovery map from the generalised family \(\widetilde{\Phi}_{(f^{\prime},f),\rho}\). Indeed, this one corresponds to the best recovery in a specific sense:

**Theorem 11**.: _Consider the family of positive superoperators \(\widetilde{\Phi}_{(f^{\prime},f),\rho}\Phi\) such that \(\widetilde{\Phi}_{(f^{\prime},f),\rho}\) is CP. One can introduce the partial order \(\widetilde{\Phi}_{(f^{\prime},f),\rho}\Phi\geq\widetilde{\Phi}_{(\tilde{f^{\prime}},\tilde{f}),\rho}\Phi\iff(\widetilde{\Phi}_{(f^{\prime},f),\rho}\Phi-\widetilde{\Phi}_{(\tilde{f^{\prime}},\tilde{f}),\rho}\Phi)\geq 0\). Then, there is a unique supremum given by:_
\[\sup_{f,f^{\prime}}\widetilde{\Phi}_{(f^{\prime},f),\rho}\Phi=\widetilde{\Phi}_{P,\rho}\,\Phi\,. \tag{122}\]

This result means that among all possible recovery channels \(\widetilde{\Phi}_{(f^{\prime},f),\rho}\), the Petz is the one that maximises the spectrum of \(\widetilde{\Phi}_{(f^{\prime},f),\rho}\Phi\).

Proof.: As it was mentioned in Box 8, if \(f(x)\leq f^{\prime}(x)\) for every \(x\in\mathbb{R}^{+}\), this implies that \(\mathbb{J}_{f}\big{|}_{\rho}\leq\mathbb{J}_{f^{\prime}}\big{|}_{\rho}\) (see Eq. (157)).
Moreover, a necessary condition for \(\mathbb{J}_{f^{\prime}}\big{|}_{\rho}\) to be CP is that \(f^{\prime}(x)\leq\sqrt{x}\), so one can maximise \(\widetilde{\Phi}_{(f^{\prime},f),\rho}\) by choosing the maximum such \(f^{\prime}\):
\[\sup_{f^{\prime}}\,\widetilde{\Phi}_{(f^{\prime},f),\rho}\Phi=\widetilde{\Phi}_{(f_{SQ},f),\rho}\Phi\,, \tag{123}\]
where we denote \(f_{SQ}(x)=\sqrt{x}\). On the other hand, \(\mathbb{J}_{f}^{-1}\big{|}_{\rho}\geq\mathbb{J}_{f^{\prime}}^{-1}\big{|}_{\rho}\) if and only if \(f(x)\leq f^{\prime}(x)\) for every \(x\in\mathbb{R}^{+}\). Moreover, a necessary condition for \(\mathbb{J}_{f}^{-1}\big{|}_{\rho}\) to be CP is that \(f(x)\geq\sqrt{x}\). Hence, the maximum \(\widetilde{\Phi}_{(f^{\prime},f),\rho}\) in this case is reached for the minimum function \(f\), that is once again for \(f_{SQ}\). Finally, the fact that the two suprema are independent of each other proves the claim.

### Fisher information and detailed balance

In this section we explain how the notion of detailed balanced evolutions can be given in terms of self-adjointness with respect to the Fisher information scalar product. To this end, it is useful to start with the study of the classical case to avoid the complications arising from the variety of different quantum Fisher informations.

Classical case. As discussed in Box 6, the dynamics of a classical divisible evolution \(\Phi_{t}\) is described by a stochastic matrix, which can be equivalently characterised in terms of its rate matrices \(R_{s}\) (\(\forall s\leq t\), see Eq. (72)), which induce the dynamics:
\[\frac{\mathrm{d}}{\mathrm{d}t}\,\Phi_{t}(\rho)=R_{t}(\Phi_{t}(\rho))\,, \tag{124}\]
where \(\rho\) is in this case a diagonal matrix. As it was mentioned in Box 6, rate matrices can be generically decomposed as:
\[R_{t}=\sum_{i\neq j}\,a_{i\gets j}^{(t)}\left(|i\rangle\langle j|-|j\rangle\langle j|\right)\,, \tag{125}\]
where \(a_{i\gets j}^{(t)}\) are real coefficients that are non-negative for Markovian evolutions. In this context, one can formulate the condition for an evolution to be detailed balanced (with respect to some state \(\pi\)) in terms of the rates alone:
\[a_{i\gets j}^{(t)}\,\pi_{j}=a_{j\gets i}^{(t)}\,\pi_{i}\,, \tag{126}\]
where the equation above holds for every \(i\) and \(j\). This condition directly implies that \(\pi\) is a steady state, as can be readily verified by a straightforward computation. Indeed, detailed balance corresponds to a stronger notion of equilibration: not only does the dynamics have \(\pi\) as a fixed point, but it is also time symmetric at equilibrium. In fact, if one interprets the rates \(a_{j\gets i}^{(t)}\) as the probability per unit of time of the transition \(j\gets i\), Eq. (126) can be read as the condition that the probability of observing the transition \(j\gets i\) is equal to the one for the reverse transition \(j\to i\), when the system is at equilibrium. For this reason, detailed balance encodes the requirement of microscopic reversibility of general dynamics, i.e., the fact that at a molecular level the equations of motion are time symmetric. We show now that Eq. (126) can be naturally formulated in terms of the Fisher information scalar product. In analogy with the quantum case (Eq.
(27)), we use the notation for the scalar product:
\[K_{\pi}(\delta\rho,\delta\sigma):=\mathrm{Tr}\left[\delta\rho\,\mathcal{J}_{\pi}^{-1}[\delta\sigma]\right]=\mathrm{Tr}\left[\delta\rho\,\delta\sigma\,\pi^{-1}\right]\,, \tag{127}\]
where we implicitly defined \(\mathcal{J}_{\pi}\) to be the component-wise multiplication by \(\pi\), and all the operators involved commute. This scalar product naturally emerges when one is considering variations of states (as proven in Thm. 1); thus, from a differential geometric point of view, the two vectors \(\delta\rho\) and \(\delta\sigma\) should be elements of the tangent space of the state space (i.e., Hermitian, traceless operators and, in this case, diagonal). Moreover, one can also interpret \(\mathcal{J}_{\pi}\) as the scalar product on the cotangent space, i.e., the space of observables: indeed, a metric on the tangent space naturally induces one on its dual by taking the pointwise matrix inverse [24]. Hence, one can also define the Fisher scalar product on the space of observables as:
\[K_{\pi}^{o}(A,B):=\mathrm{Tr}\left[A\,\mathcal{J}_{\pi}[B]\right]=\mathrm{Tr}\left[A\,B\,\pi\right]\,, \tag{128}\]
where in this case \(A\) and \(B\) are not required to be traceless. It should be noticed that quantities analogous to \(K_{\pi}^{o}(A,B)\) naturally emerge in statistical mechanics and in linear response theory when studying two-point correlation functions. Since the definition of the adjoint of a superoperator \(\Phi\) depends on the underlying scalar product, it is useful to present its expression when considering \(K_{\pi}\). In this context it satisfies the property:
\[K_{\pi}(\delta\rho,\Phi(\delta\sigma))=\mathrm{Tr}\left[\delta\rho\,\mathcal{J}_{\pi}^{-1}[\Phi(\delta\sigma)]\right]=\mathrm{Tr}\left[\mathcal{J}_{\pi}^{-1}[\delta\rho]\Phi(\delta\sigma)\right]=\mathrm{Tr}\left[\Phi^{\dagger}\circ\mathcal{J}_{\pi}^{-1}[\delta\rho]\,\delta\sigma\right]=\]
\[=\mathrm{Tr}\left[\left(\mathcal{J}_{\pi}\,\Phi^{\dagger}\,\mathcal{J}_{\pi}^{-1}\right)(\delta\rho)\,\mathcal{J}_{\pi}^{-1}[\delta\sigma]\right]=K_{\pi}(\widetilde{\Phi}(\delta\rho),\delta\sigma)\,, \tag{129}\]
where we implicitly defined \(\widetilde{\Phi}:=\mathcal{J}_{\pi}\,\Phi^{\dagger}\,\mathcal{J}_{\pi}^{-1}\). Since the equation above holds for any \(\delta\rho\) and \(\delta\sigma\) in the tangent space, we can identify \(\widetilde{\Phi}\) with the adjoint of \(\Phi\) with respect to the Fisher information metric. Then, self-adjointness in this context takes the form \(\widetilde{\Phi}=\Phi\), which can be rewritten as \(\Phi\circ\mathcal{J}_{\pi}=\mathcal{J}_{\pi}\circ\Phi^{\dagger}\). In coordinates this condition reads:
\[\Phi_{i,j}\,\pi_{j}=\Phi_{j,i}\,\pi_{i}\,, \tag{130}\]
which has a striking similarity with Eq. (126). Indeed, it follows from Eq. (130) that the requirement that \(R_{t}\) is detailed balanced exactly means that \(\widetilde{R}_{t}=R_{t}\), i.e., the generator of the dynamics is self-adjoint with respect to the Fisher information scalar product. This result was derived in the Schrödinger picture, meaning that the states are the only evolving objects, while observables are static quantities. The dual situation, dubbed the Heisenberg picture, is the one in which states are fixed in time, while the whole dynamics is relegated to observables. In this case it is well known that the generator of the dynamics is given by \(R_{t}^{\dagger}\). Moreover, as it was argued above, the natural Fisher scalar product is the one given by \(K_{\pi}^{o}\).
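The equivalence between Eq. (126) and \(K_{\pi}\)-self-adjointness is easy to test numerically. The following minimal sketch (not from the references; constructing the rates from a symmetric matrix \(S\) is just one convenient way of enforcing detailed balance) builds a detailed balanced rate matrix and checks that \(R\,\mathcal{J}_{\pi}=\mathcal{J}_{\pi}R^{T}\):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
pi = rng.random(n); pi /= pi.sum()           # stationary distribution

# Detailed balanced rates: a_{i<-j} = S_ij / pi_j with S symmetric, so that
# a_{i<-j} pi_j = S_ij = S_ji = a_{j<-i} pi_i   (Eq. (126))
S = rng.random((n, n)); S = S + S.T
A = S / pi[None, :]
np.fill_diagonal(A, 0.0)
R = A - np.diag(A.sum(axis=0))               # columns sum to zero (probability conserved)

print(np.allclose(R @ pi, 0.0))              # pi is a steady state
J = np.diag(pi)                              # the multiplication operator J_pi
print(np.allclose(R @ J, J @ R.T))           # R is self-adjoint w.r.t. K_pi
```

The same construction carries over to the Heisenberg picture, where the relevant product is \(K_{\pi}^{o}\).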
We denote the adjoint with respect to this scalar product by \(\widetilde{\Phi}^{o}:=\mathcal{J}_{\pi}^{-1}\,\Phi^{\dagger}\,\mathcal{J}_{\pi}\), where this expression can be verified by carrying out calculations completely analogous to the ones that led to Eq. (129). It is easy to verify that:
\[\widetilde{\Phi}=\Phi\qquad\Longleftrightarrow\qquad(\widetilde{\Phi^{\dagger}})^{o}=\Phi^{\dagger}\,, \tag{131}\]
which implies that \(R_{t}\) is \(K_{\pi}\)-self-adjoint if and only if \(R_{t}^{\dagger}\) is \(K_{\pi}^{o}\)-self-adjoint. This shows that one can formulate the condition of being detailed balanced both in the Schrödinger and in the Heisenberg picture, resorting only to the use of Fisher scalar products. Putting these results together we obtain the theorem:

**Theorem 12**.: _The following conditions are equivalent in the classical case:_

1. _the dynamics is detailed balanced, i.e., the rate matrix is characterised by coefficients satisfying:_ \[a_{i\gets j}^{(t)}\,\pi_{j}=a_{j\gets i}^{(t)}\,\pi_{i}\,; \tag{132}\]
2. _the rate matrix \(R_{t}\) is self-adjoint with respect to the Fisher scalar product:_ \[\widetilde{R}_{t}=R_{t}\,; \tag{133}\]
3. _the rate matrix in the Heisenberg picture (i.e., \(R_{t}^{\dagger}\)) is self-adjoint with respect to the dual Fisher metric:_ \[(\widetilde{R_{t}^{\dagger}})^{o}=R_{t}^{\dagger}\,. \tag{134}\]

This characterisation shows how the condition of being detailed balanced directly corresponds to the self-adjointness of the generator of the dynamics with respect to a properly defined scalar product. With this insight in mind, we can now pass to the quantum regime.

Quantum case. Consider now the dynamics induced by the Lindbladian operator in the form in Eq. (55), i.e.:
\[\mathcal{L}[\rho]=-i[H,\rho]+\sum_{\alpha}^{d^{2}}\,\,\lambda_{\alpha}\,\left(A_{\alpha}\,\rho\,A_{\alpha}^{\dagger}-\frac{1}{2}\{A_{\alpha}^{\dagger}\,A_{\alpha},\rho\}\right)\,, \tag{135}\]
where we dropped the time dependence to simplify the notation. Since the Fisher information is invariant under the action of a purely Hamiltonian dynamics, whereas it is contracting otherwise, we split the two terms in the Lindbladian by introducing the notation \(\mathcal{U}(\rho):=-i[H,\rho]\), and we call _dissipator_ the difference \(\mathcal{L}_{\mathcal{D}}:=\mathcal{L}-\mathcal{U}\). It should be noticed that \(\mathcal{U}\) is skew-Hermitian with respect to the Hilbert-Schmidt scalar product, meaning that:
\[\mathrm{Tr}\left[A\,\mathcal{U}(B)\right]=-\mathrm{Tr}\left[\mathcal{U}(A)\,B\right]\,. \tag{136}\]
Given the structure of the Lindbladian above, we can introduce the notion of quantum detailed balance. Historically, one of the first formalisations of this notion was provided by Alicki in [25], and it is based on the following scalar product on the space of observables:
\[K^{o}_{\pi}(A,B):=\mathrm{Tr}\left[AB\,\pi\right]\,, \tag{137}\]
where in this case \(A\), \(B\) and \(\pi\) are not required to commute in general (unlike in Eq. (128)). Similarly to the case of classical systems, this scalar product is quite natural as it is related to two-point correlation functions, but it should also be kept in mind that it is not part of the Fisher family. Using the same notation as in the classical case, we denote by \(\widetilde{\Phi}^{o}\) the adjoint of the map \(\Phi\) with respect to the scalar product in Eq. (137).
Then, the definition proposed by Alicki reads:

**Definition 1** (Heisenberg picture [25]).: The dynamics generated in the Heisenberg picture by the operator \(\mathcal{L}^{\dagger}\) is detailed balanced if the three conditions are satisfied:

1. \(\mathcal{L}^{\dagger}\) is normal with respect to the scalar product \(K^{o}_{\pi}\): \[[\mathcal{L}^{\dagger},(\widetilde{\mathcal{L}^{\dagger}})^{o}]=0\,; \tag{138}\]
2. the commutator \(\mathcal{U}\) is skew-Hermitian with respect to \(K^{o}_{\pi}\): \[\widetilde{\mathcal{U}}^{o}=-\mathcal{U}\,; \tag{139}\]
3. the dissipator \(\mathcal{L}^{\dagger}_{\mathcal{D}}\) is self-adjoint with respect to \(K^{o}_{\pi}\): \[(\widetilde{\mathcal{L}^{\dagger}_{\mathcal{D}}})^{o}=\mathcal{L}^{\dagger}_{\mathcal{D}}\,. \tag{140}\]

Interestingly, from this definition one can deduce a structural characterisation of detailed balanced Lindbladians (see [25]), which explicitly reads:

**Definition 2** (Structural definition).: The dynamics generated by the Lindbladian operator \(\mathcal{L}\) satisfies detailed balance if its diagonal form can be written as:
\[\mathcal{L}(\rho)=-i[H,\rho]+\sum_{\omega,i}\;\lambda^{\omega}_{i}\,\left(A^{\omega}_{i}\,\rho\,(A^{\omega}_{i})^{\dagger}-\frac{1}{2}\{(A^{\omega}_{i})^{\dagger}A^{\omega}_{i},\rho\}\right)\,, \tag{141}\]
and the following conditions hold:

1. \([H,\pi]=0\);
2. \((A^{\omega}_{i})^{\dagger}=A^{-\omega}_{i}\);
3. \(\pi\,A^{\omega}_{i}\,\pi^{-1}=e^{\omega}\,A^{\omega}_{i}\);
4. \(\lambda^{\omega}_{i}=e^{\omega}\,\lambda^{-\omega}_{i}\).

In the literature, Def. 2 is usually taken as the definition of detailed balance, as it mirrors the same structural properties we saw for classical systems [26]. Indeed, from condition 3 we know that \(A^{\omega}_{i}\) are eigenoperators of the modular operator \(\mathbb{L}_{\pi}\mathbb{R}_{\pi}^{-1}\), with eigenvalue \(e^{\omega}\). On the other hand, all the eigenvalues of \(\mathbb{L}_{\pi}\mathbb{R}_{\pi}^{-1}\) are of the form \(\pi_{i}/\pi_{j}\), so the only values of \(\omega\) that are allowed are the ones satisfying the constraint \(e^{\omega}=\pi_{i}/\pi_{j}\) for some \(i\) and \(j\). Then, by substituting this expression into condition 4, one obtains the analogous characterisation at the level of the rates we found for classical detailed balanced dynamics (see Eq. (132)). Despite this positive result, it should be noticed that the choice of the scalar product in Eq. (137) is somewhat arbitrary, as in the passage from commuting observables to the non-commuting case there are many different possible orderings that can be used to extend the multiplication operator \(\mathcal{J}_{\pi}\). Moreover, it should be noticed that the scalar product \(K^{o}_{\pi}\) in Eq. (137) is not monotone under CP-maps, a property that its classical counterpart had. For these reasons, in the following we show how one can define detailed balance through the help of quantum Fisher information metrics. A first possible definition is given by imposing \(\widetilde{\Phi}_{f}=\Phi\) for some Fisher scalar product \(K_{f,\pi}\) (see Eq. (27)), where we simplified the notation from the one used in the generalised Petz map4 defined in Eq. (84), i.e., \(\widetilde{\Phi}_{f}:=\mathbb{J}_{f}\big{|}_{\pi}\,\Phi^{\dagger}\,\mathbb{J}_{f}^{-1}\big{|}_{\pi}\). As it could be expected, the structure induced is much richer in this case.
In fact, one can show that different \(K_{f,\pi}\) induce inequivalent notions of detailed balance, by constructing dynamics that are detailed balanced with respect to one \(f\) but not with respect to others [2]. Moreover, these conditions are all weaker than the one provided by Def. 1. This means that dynamics that are detailed balanced with respect to \(K_{\pi}^{o}\) are also detailed balanced with respect to \(K_{f,\pi}\), but not the other way round. Since in principle there is no preferred definition of quantum Fisher information, we choose to impose the condition of detailed balance for all the \(K_{f,\pi}\) at the same time:

Footnote 4: The reason why we do not consider the more general definition \(\widetilde{\Phi}_{(f^{\prime},f)}\) is that the adjoint operation should be involutive in finite dimensions, that is, the adjoint of the adjoint should give back the original map. As explained in Box 8 this is only true when \(f^{\prime}\equiv f\).

**Definition 3** (Schrödinger picture).: The dynamics generated by the Lindbladian operator \(\mathcal{L}\) is detailed balanced if for every standard monotone function \(f\) the following holds:

1. \(\mathcal{L}\) is normal with respect to all the scalar products \(K_{f,\pi}\): \[[\mathcal{L},\widetilde{\mathcal{L}}_{f}]=0\,; \tag{142}\]
2. the commutator \(\mathcal{U}\) is skew-Hermitian with respect to \(K_{f,\pi}\): \[\widetilde{\mathcal{U}}_{f}=-\mathcal{U}\,; \tag{143}\]
3. the dissipator \(\mathcal{L}_{\mathcal{D}}\) is self-adjoint with respect to \(K_{f,\pi}\): \[(\widetilde{\mathcal{L}_{\mathcal{D}}})_{f}=\mathcal{L}_{\mathcal{D}}\,. \tag{144}\]

We now have three different definitions of what it means for a Markovian dynamics to be detailed balanced. At this point, we can wrap everything together, providing a characterisation of their interdependency:

**Theorem 13**.: _The following conditions are equivalent:_

1. _the generator of the dynamics in the Heisenberg picture \(\mathcal{L}^{\dagger}\) satisfies the adjointness relations in Def. 1;_
2. _the Lindbladian \(\mathcal{L}\) satisfies the structural characterisation in Def. 2._

_These conditions imply the condition:_

3. _the generator of the dynamics in the Schrödinger picture \(\mathcal{L}\) satisfies the adjointness relations in Def. 3._

_Moreover, if the Hamiltonian \(H\) is non-degenerate the three conditions are equivalent._

The proof of this result is provided in App. C. This shows that the definition based on Fisher scalar products is weaker (i.e., it includes a larger set of dynamics) even when taking into consideration all the possible scalar products at the same time. This should be contrasted with the definition by Alicki, in which a single scalar product is used. The difference between the two arises in the way in which coherences in the eigenbasis of \(\pi\) are handled. Still, it should be noticed that in both cases the evolution induced by the unitary part decouples from the dissipative dynamics. In fact, thanks to the normality of the generator, one has:
\[[\mathcal{U}+\mathcal{L}_{\mathcal{D}},-\mathcal{U}+\mathcal{L}_{\mathcal{D}}]=0\qquad\implies\qquad[\mathcal{U},\mathcal{L}_{\mathcal{D}}]=0\,. \tag{145}\]
This generic property can be used to further constrain \(\mathcal{L}_{\mathcal{D}}\) in the case in which \(H\) is non-degenerate, allowing for the identification of Def. 1 and Def. 3 in this case.
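As an illustration of Thm. 13, the following sketch (not part of the original text; the two-level construction and all variable names are chosen purely for illustration) builds a qubit dissipator with the structure of Def. 2 — decay operator \(A=|0\rangle\langle 1|\), \(e^{\omega}=\pi_{0}/\pi_{1}\), and rates obeying condition 4 — and verifies numerically that \(\pi\) is a steady state and that \(\mathcal{L}_{\mathcal{D}}\) is self-adjoint with respect to \(K_{f,\pi}\) for several standard monotones, as Def. 3 requires:

```python
import numpy as np

p0, p1, gamma = 0.8, 0.2, 1.0
pi = np.diag([p0, p1])                       # candidate steady state; e^w = p0/p1
A  = np.array([[0., 1.], [0., 0.]])          # A = |0><1| satisfies pi A pi^{-1} = (p0/p1) A
d  = 2

# Superoperators as d^2 x d^2 matrices acting on row-major vectorised operators
def left(M):  return np.kron(M, np.eye(d))           # X -> M X
def right(M): return np.kron(np.eye(d), M.T)         # X -> X M
def sandwich(M): return np.kron(M, M.conj())         # X -> M X M^dag

def dissip(L, rate):
    n = L.conj().T @ L
    return rate * (sandwich(L) - 0.5 * (left(n) + right(n)))

# Rates satisfying condition 4 of Def. 2: lambda^w = e^w lambda^{-w}
LD = dissip(A, gamma) + dissip(A.conj().T, gamma * p1 / p0)

print(np.allclose(LD @ pi.flatten(), 0.0))   # pi is a steady state

# J_f|_pi is diagonal in this basis, with entry f(pi_i/pi_j) pi_j at (i,j)
def Jf(f):
    p = np.array([p0, p1])
    return np.diag((f(p[:, None] / p[None, :]) * p[None, :]).flatten())

for f in (lambda x: np.sqrt(x),              # f_SQ, geometric mean
          lambda x: (x + 1) / 2,             # f_B, Bures
          lambda x: 2 * x / (x + 1)):        # f_H, harmonic
    J = Jf(f)
    tilde = J @ LD.conj().T @ np.linalg.inv(J)   # adjoint w.r.t. K_{f,pi}
    print(np.allclose(tilde, LD))            # self-adjoint for every f
```

Here the Hamiltonian part is absent, so the check simply illustrates the implication Def. 2 \(\Rightarrow\) Def. 3 stated in Thm. 13.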
If \(H\) is degenerate, on the other hand, we can provide the following structural characterisation of dissipators \(\mathcal{L}_{\mathcal{D}}\) that are detailed balanced according to Def. 3:
\[\mathcal{L}_{\mathcal{D}}[\rho]=\sum_{\omega,i}\,\left(\lambda_{i}^{\omega}\left(A_{i}^{\omega}\,\rho\,(A_{i}^{\omega})^{\dagger}-\frac{1}{2}\{(A_{i}^{\omega})^{\dagger}A_{i}^{\omega},\rho\}\right)+\mu_{i}^{\omega}\,B_{i}^{\omega}\,\rho^{T}(B_{i}^{\omega})^{\dagger}\right)\,, \tag{146}\]
where the following conditions are satisfied:

1. \((A_{i}^{\omega})^{\dagger}=A_{i}^{-\omega}\) and \((B_{i}^{\omega})^{\dagger}=B_{i}^{-\omega}\);
2. \(\pi\,A_{i}^{\omega}\,\pi^{-1}=e^{\omega}\,A_{i}^{\omega}\) and \(\pi\,B_{i}^{\omega}\,\pi^{-1}=e^{\omega}\,B_{i}^{\omega}\);
3. \(\lambda_{i}^{\omega}=e^{\omega}\,\lambda_{i}^{-\omega}\) and \(\mu_{i}^{\omega}=e^{\omega}\,\mu_{i}^{-\omega}\);
4. \(\lambda_{i}^{\omega}\geq 0\) and \(\sum_{i}\mu_{i}^{\omega}=0\ \forall\omega\).

We defer the proof to App. C.3. It should be noticed that the characterisation in Eq. (146) clearly generalises Def. 2, as the latter can be recovered from the former by restricting to the case in which \(\mu_{i}^{\omega}=0\) for all \(i\) and \(\omega\). Moreover, it should also be pointed out that, thanks to the second part of condition 4, the extra term only acts on off-diagonal elements, i.e., it only affects the dynamics of the coherences.

## IV Mathematical properties of quantum Fisher information

The aim of this section is to explore the main properties of the superoperator \(\mathbb{J}_{f}\big{|}_{\pi}\), whose definition we repeat here for convenience:
\[\mathbb{J}_{f}\big{|}_{\pi}:=\mathbb{R}_{\pi}\,f\big{(}\mathbb{L}_{\pi}\mathbb{R}_{\pi}^{-1}\big{)}\,. \tag{147}\]
Before doing so, however, it is useful to study the properties of the set of standard monotone functions, which we will denote by \(\mathcal{M}\).

### The set of standard monotone functions

As it was discussed in Sec. II, every \(f\in\mathcal{M}\) satisfies the following three properties:

1. it is matrix monotone;
2. it is Fisher adjusted, meaning that \(f(1)=1\);
3. it satisfies the symmetry \(f(x)=xf(x^{-1})\).

In Box 2 it was shown that one can pointwise bound all the standard monotones as:
\[\frac{2x}{x+1}\,\leq f(x)\leq\,\frac{x+1}{2}\,, \tag{148}\]
which automatically proves that all \(f\in\mathcal{M}\) are bounded for \(x\in[0,1]\), and diverge as \(x\to\infty\) at most in a linear manner. We denote by \(f_{H}(x)\) and \(f_{B}(x)\) the smallest and the largest standard monotone functions. Thanks to the pointwise characterisation of Eq. (148), it is then natural to introduce the partial order on \(\mathcal{M}\) defined by \(f_{1}\leq f_{2}\) if and only if \(f_{1}(x)\leq f_{2}(x)\) for every \(x\in\mathbb{R}^{+}\). Then, every \(f\in\mathcal{M}\) satisfies \(f_{H}\leq\,f\,\leq f_{B}\) (simply rewriting Eq. (148)), but it should be noticed that this is not an if and only if condition: there are functions for which Eq. (148) holds but that are not standard monotones5.

Footnote 5: One can easily construct examples of this: consider for instance the function \(f(x):=\cos(x+x^{-1})^{2}\,f_{B}(x)+\sin(x+x^{-1})^{2}f_{H}(x)\). By construction it is Fisher adjusted, satisfies the symmetry condition \(f(x)=xf(x^{-1})\), and satisfies \(f_{H}\leq\,f\,\leq f_{B}\). Still, it is clearly not monotone, as it oscillates infinitely many times between \(f_{H}\) and \(f_{B}\).

There is a fundamental symmetry of \(\mathcal{M}\) that was exploited to prove Eq.
(148), given by the transformation \(f(x)\to[Tf](x):=x/f(x)\). Thanks to condition 2 in Lemma 1, this transformation maps standard monotones to standard monotones (i.e., \(T:\mathcal{M}\to\mathcal{M}\)). Moreover, \(T\) is also order-reversing (\(f\leq h\iff Th\leq Tf\)), from which it directly follows that \(Tf_{B}=f_{H}\): indeed, since for every \(f\) in \(\mathcal{M}\) one has \(f\,\leq f_{B}\), then it also holds that \(\forall f\in\mathcal{M}\), \(Tf_{B}\,\leq Tf\), which is the defining property of \(f_{H}\) (this is exactly part of the derivation used in Box 2 to prove Eq. (148)). Moreover, thanks to the symmetry \(f(x)=xf(x^{-1})\) we can rewrite the action of \(T\) as:
\[[Tf](x)=\frac{x}{f(x)}=\frac{1}{f(x^{-1})}\,. \tag{149}\]
From this expression one can directly verify that \(T\) is involutive, i.e., \(TTf=f\). A first consequence is that \(Tf_{H}=TTf_{B}=f_{B}\). Moreover, this also means that we can partition \(\mathcal{M}\) into two sets \(\mathcal{M}_{0}\) and \(\mathcal{M}_{1}\) such that \(\mathcal{M}_{0}\cup\mathcal{M}_{1}=\mathcal{M}\) and \(\mathcal{M}_{1}=T\mathcal{M}_{0}\). Moreover, we can choose the intersection between these two sets to contain only one element, i.e., the only function for which \(f=Tf\), namely \(f_{SQ}(x)=\sqrt{x}\). Then, by defining \(\mathcal{M}_{00}:=\mathcal{M}_{0}\setminus f_{SQ}\), we can express the set of standard monotones as:
\[\mathcal{M}=\left(\bigsqcup_{f\in\mathcal{M}_{00}}\{f,Tf\}\right)\bigsqcup\,f_{SQ}\,, \tag{150}\]
that is, as the disjoint union of sets of only two elements and, additionally, the square root function. Thanks to the possibility of rewriting \(T\) as in Eq. (149), this transformation is also particularly useful because it allows one to lift some properties of \(f\in\mathcal{M}\) to their multiplicative inverse, and vice versa. For example, it was proven in [27] that the inverse of a standard monotone can be rewritten as:
\[\frac{1}{f(x)}=\int_{0}^{1}\mathrm{d}\mu_{f}(\lambda)\;\left(\frac{\lambda+1}{2}\right)\left(\frac{1}{x+\lambda}+\frac{1}{1+\lambda\,x}\right)\,, \tag{151}\]
where \(\mathrm{d}\mu_{f}(\lambda)\) is a probability measure on \([0,1]\). Moreover, by plugging into the integral above an arbitrary probability measure, the resulting function is always the inverse of a standard monotone. It then follows from Eq. (149) that we can express standard monotone functions as:
\[f(x)=\frac{x}{Tf(x)}=\int_{0}^{1}\mathrm{d}\mu_{Tf}(\lambda)\;\left(\frac{1+\lambda}{2}\right)\left(\frac{x}{x+\lambda}+\frac{x}{1+\lambda\,x}\right)=\int_{0}^{1}\mathrm{d}\mu_{Tf}(\lambda)\;f_{\lambda}(x)\,. \tag{152}\]
This shows that \(\mathcal{M}\) is a convex set, which coincides with the convex hull of the extreme points denoted by \(f_{\lambda}(x)\) (in particular, since the set of \(f_{\lambda}(x)\) is closed, this space has the structure of a Bauer simplex [28]).

### Properties of quantum Fisher operators

Now that the main properties of the set \(\mathcal{M}\) have been discussed, we can start studying how these reflect on the operators \(\mathbb{J}_{f}\big{|}_{\pi}\) and \(\mathbb{J}_{f}^{-1}\big{|}_{\pi}\). First, let us explore the main implications of the three defining properties:

1. the monotonicity of \(f\) implies that for every CPTP map \(\Phi\) the two inequalities hold: \[\Phi^{\dagger}\left(\mathbb{J}_{f}^{-1}\big{|}_{\Phi(\pi)}\right)\Phi\;\leq\;\mathbb{J}_{f}^{-1}\big{|}_{\pi}\,,\qquad\qquad\qquad\Phi\left(\mathbb{J}_{f}\big{|}_{\pi}\right)\Phi^{\dagger}\;\leq\;\mathbb{J}_{f}\big{|}_{\Phi(\pi)}\,; \tag{153}\]
2.
the property of \(f\) to be Fisher adjusted implies that \(\mathbb{J}_{f}\big{|}_{\pi}[A]=A\pi\) and \(\mathbb{J}_{f}^{-1}\big{|}_{\pi}[A]=A\pi^{-1}\) for every \(A\) such that \([A,\pi]=0\);
3. the symmetry \(f(x)=xf(x^{-1})\) implies that \(\mathbb{J}_{f}\big{|}_{\pi}\) is adjoint-preserving, i.e., it maps self-adjoint operators into self-adjoint operators.

The proofs of these facts, and the other manipulations of the section, are deferred to App. D. In order to make the discussion more concrete, it is useful to give the coordinate expression of \(\mathbb{J}_{f}\big{|}_{\pi}\) in the eigenbasis of \(\pi\). To this end, we express the state in coordinates as:
\[\pi=\sum_{i}\pi_{i}\;|i\rangle\langle i|\,. \tag{154}\]
It should be noticed that the modular operator is diagonal in the basis given by \(\{|i\rangle\langle j|\}\), as \(\mathbb{L}_{\pi}\mathbb{R}_{\pi}^{-1}[|i\rangle\langle j|]=\frac{\pi_{i}}{\pi_{j}}\,|i\rangle\langle j|\). Thanks to this, it is straightforward to compute the action of \(\mathbb{J}_{f}\big{|}_{\pi}\) on \(|i\rangle\langle j|\) by directly using the definition in Eq. (147):
\[\mathbb{J}_{f}\big{|}_{\pi}[|i\rangle\langle j|]=f\left(\frac{\pi_{i}}{\pi_{j}}\right)\pi_{j}\;|i\rangle\langle j|\,;\qquad\qquad\qquad\mathbb{J}_{f}^{-1}\big{|}_{\pi}[|i\rangle\langle j|]=\left(f\left(\frac{\pi_{i}}{\pi_{j}}\right)\pi_{j}\right)^{-1}\;|i\rangle\langle j|\,. \tag{155}\]
Consider now an operator \(A\) which can be written in coordinates as \(A:=\sum A_{i,j}\;|i\rangle\langle j|\). Then, \(\mathbb{J}_{f}\big{|}_{\pi}\) acts as:
\[\mathbb{J}_{f}\big{|}_{\pi}[A]=\sum_{i,j}f\left(\frac{\pi_{i}}{\pi_{j}}\right)\pi_{j}\;A_{i,j}\;|i\rangle\langle j|=\sum_{i,j}(J_{f,\pi}\circ A)_{i,j}\;|i\rangle\langle j|\,, \tag{156}\]
where we introduced the matrix \(J_{f,\pi}\) with entries \((J_{f,\pi})_{i,j}:=f(\pi_{i}/\pi_{j})\,\pi_{j}\), and we use the circle to denote the Hadamard product. Analogously, the same computation can be carried out for \(\mathbb{J}_{f}^{-1}\big{|}_{\pi}\). Since all the \(\mathbb{J}_{f}\big{|}_{\pi}\) are diagonal in the same basis, we can lift the partial order from \(\mathcal{M}\) to the operators. Indeed, as can be verified in coordinates, one has that:
\[f_{1}\leq f_{2}\quad\implies\quad\mathbb{J}_{f_{1}}\big{|}_{\pi}\leq\,\mathbb{J}_{f_{2}}\big{|}_{\pi}\,. \tag{157}\]
Hence, it also follows that for every \(f\in\mathcal{M}\) the corresponding operator satisfies \(\mathbb{J}_{f_{H}}\big{|}_{\pi}\leq\mathbb{J}_{f}\big{|}_{\pi}\leq\mathbb{J}_{f_{B}}\big{|}_{\pi}\). This makes it possible to prove a key property of the Fisher operators: if one restricts the spectrum of \(\pi\) to be bounded away from zero, then \(\mathbb{J}_{f}^{-1}\big{|}_{\pi}\) is a bounded operator (as a function of \(\pi\)). Since for every \(\mathbb{J}_{f}^{-1}\big{|}_{\pi}\) it holds that \(\mathbb{J}_{f_{B}}^{-1}\big{|}_{\pi}\leq\mathbb{J}_{f}^{-1}\big{|}_{\pi}\leq\mathbb{J}_{f_{H}}^{-1}\big{|}_{\pi}\) (as the inversion reverts the inequalities), we can verify this fact just by studying the behaviour of the upper bound \(\mathbb{J}_{f_{H}}^{-1}\big{|}_{\pi}\). Then, by explicitly computing the spectrum using Eq. (155) for \(f_{H}\), which takes the form \(\left\{\frac{\pi_{i}+\pi_{j}}{2\pi_{i}\pi_{j}}\right\}_{i,j}\), we see that this is a bounded set whenever \(\pi\) is bounded away from zero. This property was the key ingredient in the proof of Thm. 3.
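The coordinate expressions in Eqs. (155)–(157) translate directly into a few lines of code. The sketch below (not from the original text; all names are illustrative) represents \(\mathbb{J}_{f}\big{|}_{\pi}\) through the Hadamard matrix \(J_{f,\pi}\) and checks the ordering \(\mathbb{J}_{f_{H}}\big{|}_{\pi}\leq\mathbb{J}_{f}\big{|}_{\pi}\leq\mathbb{J}_{f_{B}}\big{|}_{\pi}\), which for these simultaneously diagonal operators reduces to an entrywise comparison:

```python
import numpy as np

rng = np.random.default_rng(3)
d = 3
p = rng.random(d); p /= p.sum()              # spectrum of pi (pi taken diagonal)

def J_matrix(f, p):
    # Hadamard matrix of Eq. (156): entries f(pi_i/pi_j) pi_j
    return f(p[:, None] / p[None, :]) * p[None, :]

f_H  = lambda x: 2*x/(x + 1)
f_SQ = lambda x: np.sqrt(x)
f_B  = lambda x: (x + 1)/2

def f_KM(x):
    # Kubo-Mori monotone (x-1)/log(x), with f(1) = 1 handling the diagonal
    x = np.asarray(x, dtype=float)
    out = np.ones_like(x)
    m = ~np.isclose(x, 1.0)
    out[m] = (x[m] - 1) / np.log(x[m])
    return out

# Eq. (157): the operator order reduces to entrywise comparison of J_{f,pi}
for f in (f_SQ, f_KM):
    print((J_matrix(f_H, p) <= J_matrix(f, p) + 1e-12).all(),
          (J_matrix(f, p) <= J_matrix(f_B, p) + 1e-12).all())

# Action on an operator: J_f|_pi[A] = J_{f,pi} o A; for f_SQ this is pi^{1/2} A pi^{1/2}
A = rng.normal(size=(d, d))
print(np.allclose(J_matrix(f_SQ, p) * A,
                  np.sqrt(p)[:, None] * A * np.sqrt(p)[None, :]))
```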
As it was mentioned in the previous section, the transformation \(T\) lifts properties of \(f\) to its inverse \(1/f\). Indeed, the same also holds when considering Fisher information operators, thanks to the following chain of equalities:
\[\mathbb{J}_{Tf}\big{|}_{\pi}=\mathbb{R}_{\pi}\,Tf(\mathbb{L}_{\pi}\mathbb{R}_{\pi}^{-1})=\mathbb{R}_{\pi}\frac{1}{f}(\mathbb{L}_{\pi}^{-1}\mathbb{R}_{\pi})=\mathbb{R}_{\pi^{-1}}^{-1}\frac{1}{f}(\mathbb{L}_{\pi^{-1}}\mathbb{R}_{\pi^{-1}}^{-1})=\mathbb{J}_{f}^{-1}\big{|}_{\pi^{-1}}\,. \tag{158}\]
This means that the properties of \(\mathbb{J}_{f}\big{|}_{\pi}\) are in one-to-one correspondence with the properties of \(\mathbb{J}_{Tf}^{-1}\big{|}_{\pi^{-1}}\), a fact that will be particularly relevant in the next section. Finally, it should be noticed that the convex structure of \(\mathcal{M}\) also reflects on the operators \(\mathbb{J}_{f}\big{|}_{\pi}\). For this reason, it is interesting to study the operators associated to \(f_{\lambda}\), namely:
\[\mathbb{J}_{f_{\lambda}}\big{|}_{\pi}[A]=\left(\frac{1+\lambda}{2}\right)\left((\mathbb{L}_{\pi}+\lambda\mathbb{R}_{\pi})^{-1}+(\mathbb{R}_{\pi}+\lambda\mathbb{L}_{\pi})^{-1}\right)\left[\pi A\pi\right]\,. \tag{159}\]
Then, thanks to Eq. (152), we have that any arbitrary quantum Fisher operator \(\mathbb{J}_{f}\big{|}_{\pi}\) can be written as:
\[\mathbb{J}_{f}\big{|}_{\pi}[A]=\int_{0}^{1}\mathrm{d}\mu_{Tf}(\lambda)\;\left(\frac{1+\lambda}{2}\right)\left((\mathbb{L}_{\pi}+\lambda\mathbb{R}_{\pi})^{-1}+(\mathbb{R}_{\pi}+\lambda\mathbb{L}_{\pi})^{-1}\right)\left[\pi A\pi\right]\,. \tag{160}\]
Moreover, using the transformation in Eq. (158), we also have the expression for generic \(\mathbb{J}_{f}^{-1}\big{|}_{\pi}\), namely:
\[\mathbb{J}_{f}^{-1}\big{|}_{\pi}[A]=\mathbb{J}_{Tf}\big{|}_{\pi^{-1}}[A]=\int_{0}^{1}\mathrm{d}\mu_{f}(\lambda)\;\left(\frac{1+\lambda}{2}\right)\left((\mathbb{L}_{\pi}+\lambda\mathbb{R}_{\pi})^{-1}+(\mathbb{R}_{\pi}+\lambda\mathbb{L}_{\pi})^{-1}\right)[A]\,. \tag{161}\]
The two formulas above are derived in App. D. It should be noticed that the expression for \(\mathbb{J}_{f}^{-1}\big{|}_{\pi}\) could also be directly inferred from the integral expansion of \(1/f\) in Eq. (151).

**Box 10**: **Some additional properties of contrast functions.** Now that the quantum Fisher operators have been more thoroughly characterised, we can discuss some other properties of the contrast functions. In particular, by using their integral expression in the symmetrised case (and taking the adjoint of the second term in Eq. (12)) we have, up to corrections of order \(\mathcal{O}\left(\varepsilon^{3}\right)\), that:
\[H_{g}^{\mathrm{symm}}(\pi+\varepsilon A||\pi+\varepsilon B)=\frac{\varepsilon^{2}}{2}\int_{0}^{1}\mathrm{d}N_{g}(s)\;\mathrm{Tr}\left[(A-B)\left((\mathbb{L}_{\pi}+s\mathbb{R}_{\pi})^{-1}+(\mathbb{R}_{\pi}+s\mathbb{L}_{\pi})^{-1}\right)[(A-B)]\right]= \tag{162}\]
\[=\frac{\varepsilon^{2}}{2}\,\mathrm{Tr}\left[(A-B)\,\mathbb{J}_{f}^{-1}\big{|}_{\pi}[(A-B)]\right]\,, \tag{163}\]
where in the second line we simply used Thm. 1. The expression in Eq. (162) should be compared with the one obtained in Eq. (161), which shows that the two defining measures are connected by the relation:
\[\mathrm{d}N_{g}(\lambda):=\mathrm{d}\mu_{f}(\lambda)\;\left(\frac{1+\lambda}{2}\right)\,. \tag{164}\]
Since \(\mathrm{d}\mu_{f}\) is a probability distribution, this also means that \(\mathrm{d}N_{g}\) satisfies:
\[\int_{0}^{1}\mathrm{d}N_{g}(s)\;\frac{2}{1+s}=1\,.
\tag{165}\]
Indeed, this could even have been verified directly from the requirement that \(H_{g}^{\rm symm}(\pi+\varepsilon A||\pi+\varepsilon B)\) needs to reduce to the classical Fisher information for commuting variables, together with the fact that for \([A,\pi]=0\) one has \((\mathbb{L}_{\pi}+s\mathbb{R}_{\pi})^{-1}[A]=\frac{A\pi^{-1}}{(1+s)}\). This also directly follows from the expression we have obtained in Eq. (21). Let us define the set of standard convex functions to be the set of functions \(g\in\mathcal{G}\) satisfying the following three properties:

1. \(g\) is matrix convex;
2. \(g\) is Fisher adjusted, that is \(g^{\prime\prime}(1)=1\);
3. \(g\) satisfies the symmetry \(g(x)=xg(x^{-1})\).

As it was mentioned in condition 3 (Sec. II), the normalisation in the second requirement is chosen so that the corresponding contrast function will correctly reduce to the quantum Fisher information. Finally, since \(H_{g}(\rho||\sigma)\) coincides with \(H_{\bar{g}}(\sigma||\rho)\), with \(\bar{g}(x)=xg(x^{-1})\) (as discussed in Eq. (11)), the last requirement restricts our attention to symmetric contrast functions. Then, there exists a bijective map \(L:\mathcal{G}\to\mathcal{M}\) between the set of standard convex functions and the set of standard monotones which, thanks to Eq. (16), can be explicitly expressed as \(Lg(x)=\frac{(x-1)^{2}}{2g(x)}\). Moreover, it is easy to verify that this transformation is involutive, meaning \(L^{2}g=g\). Thanks to Eq. (7), together with the discussion at the beginning of the section, we can also see that \(\mathcal{G}\) has the structure of a Bauer simplex, with extreme points given by:
\[g_{\lambda}(x)=\left(\frac{\lambda+1}{2}\right)\left(\frac{(x-1)^{2}}{x+\lambda}+\frac{(x-1)^{2}}{1+\lambda\,x}\right)\,. \tag{166}\]
It should be noticed that \(L\) reverses the partial order \(\leq\) defined on \(\mathcal{M}\), meaning that \(f_{1}\leq f_{2}\iff Lf_{2}\leq Lf_{1}\). This implies that there are a largest and a smallest element in \(\mathcal{G}\), respectively corresponding to \(Lf_{H}\) and \(Lf_{B}\). In formulae, this reads:
\[\frac{(x-1)^{2}}{x+1}\,\leq\,g(x)\,\leq\,\frac{(x+1)(x-1)^{2}}{4x}\,. \tag{167}\]
Interestingly, the same ordering is also present for symmetrised contrast functions. Indeed, we have that for every \(g\in\mathcal{G}\) it holds that (see App. E):
\[H_{Lf_{B}}^{\rm symm}(\rho||\sigma)\leq H_{g}^{\rm symm}(\rho||\sigma)\leq H_{Lf_{H}}^{\rm symm}(\rho||\sigma)\,, \tag{168}\]
where we highlighted the fact that the contrast functions considered here are symmetric. Finally, it should be noticed that we can lift the transformation \(T:\mathcal{M}\to\mathcal{M}\) to \(\tilde{T}:\mathcal{G}\to\mathcal{G}\) by requiring that the square formed by \(T\), \(\tilde{T}\) and \(L\) commutes, i.e., \(L\tilde{T}=TL\). Thanks to the involutivity of \(L\), this is given by \(\tilde{T}:=LTL\), which explicitly reads:
\[[\tilde{T}g](x):=\frac{(x-1)^{4}}{4\,x\,g(x)}\,. \tag{169}\]
The procedure just presented exemplifies a general method to lift transformations from \(\mathcal{M}\) to \(\mathcal{G}\) and vice versa.

### Complete positivity of the Fisher information functionals

A linear map is physically realisable only if it is completely positive. For this reason, it is particularly relevant to give a characterisation of the cases in which \(\mathbb{J}_{f}\big{|}_{\pi}\) or its inverse are CP.
To this end, we introduce the two sets:
\[\mathcal{M}^{+}:=\big{\{}f\in\mathcal{M}\,|\,\mathbb{J}_{f}\big{|}_{\pi}\,\text{is CP }\forall\,\pi\big{\}}\;;\qquad\qquad\mathcal{M}^{-}:=\Big{\{}f\in\mathcal{M}\,|\,\mathbb{J}_{f}^{-1}\big{|}_{\pi}\,\text{is CP }\forall\,\pi\Big{\}}\;. \tag{170}\]
Before starting with the exploration of these two sets, it should be noticed that in the context of Fisher operators positivity implies complete positivity. In fact, the statement that \(\mathbb{J}_{f}\big{|}_{\pi}\) is CP means that \(\mathbb{J}_{f}\big{|}_{\pi}\otimes\mathbb{I}_{n}\) is P for any \(n\). Then, thanks to the identity \(\mathbb{J}_{f}\big{|}_{\pi}\otimes\mathbb{I}_{n}=\mathbb{J}_{f}\big{|}_{\pi\otimes 1_{n}}\), if \(\mathbb{J}_{f}\big{|}_{\pi}\) is P for all states (and all dimensions), it automatically follows that it will also be CP, proving the claim. The expression in Eq. (156) is particularly useful in this context because it says that one can interpret the application of \(\mathbb{J}_{f}|_{\pi}\) to a state \(\rho\) as the Hadamard product of \(J_{f,\pi}\) with \(\rho\). Then, thanks to the Schur product theorem [7], the resulting matrix is positive if both \(J_{f,\pi}\) and \(\rho\) are positive (the latter is true as \(\rho\) is a state). Since this should hold for any \(\rho\), it directly follows that \(\mathbb{J}_{f}|_{\pi}\) is CP if and only if \(J_{f,\pi}\geq 0\).

Following the discussion above, it is easy to find that a necessary condition for \(f\in\mathcal{M}^{+}\) is that \(f\leq f_{SQ}\). Indeed, since every principal sub-matrix of a positive matrix is also positive semidefinite, imposing the positivity of the determinant for the \(2\times 2\) matrix containing only the \((i,j)\)-components of \(J_{f,\pi}\), namely \(\pi_{i}\pi_{j}-f(\pi_{i}/\pi_{j})^{2}\pi_{j}^{2}\geq 0\), we obtain the condition \(f(\pi_{i}/\pi_{j})\leq\sqrt{\pi_{i}/\pi_{j}}\). Moreover, from the fact that this should hold for any probability vector \(\{\pi_{i}\}_{i\in\{1,\ldots,n\}}\) with strictly positive entries and any \(n\in\mathbb{N}\), one can deduce that a necessary condition for \(\mathbb{J}_{f}|_{\pi}\) to be positive preserving is that \(f(x)\leq\sqrt{x}\) for every \(x\). A similar argument can also be given for \(\mathcal{M}^{-}\), showing that if \(f\in\mathcal{M}^{-}\) then \(f_{SQ}\leq f\). It should be noticed though that the one just presented is not a sufficient condition. Indeed, if one considers the family of extremal functions \(f_{\lambda}\) defined in Eq. (152), it holds that \(f_{\lambda}\leq f_{SQ}\) for all \(\lambda\in[3-2\sqrt{2},1]\). Still, it has been proven in [28] that for all \(\lambda\neq 1\), \(f_{\lambda}\notin\mathcal{M}^{+}\). Moreover, we can deduce another interesting property of the set of monotones from the inspection of its extreme points: for any \(\lambda\in(0,3-2\sqrt{2})\) the corresponding function has two additional crossings with the graph of the square root (other than at \(x=0\) and \(x=1\)), implying that \(f_{\lambda}\) lies neither entirely below nor entirely above \(f_{SQ}\). Since these are necessary conditions for \(f_{\lambda}\) to be in \(\mathcal{M}^{+}\) or \(\mathcal{M}^{-}\), this remark implies that \(\mathcal{M}\neq\mathcal{M}^{+}\cup\mathcal{M}^{-}\), that is, there are standard monotone functions for which neither \(\mathbb{J}_{f}|_{\pi}\) nor \(\mathbb{J}_{f}^{-1}|_{\pi}\) is CP. It was shown in Eq.
(150) that we could partition \(\mathcal{M}\) using the transformation \(T\), and that this could be used to express \(\mathbb{J}_{f}|_{\pi}\) in terms of \(\mathbb{J}_{Tf}^{-1}|_{\pi^{-1}}\) (see Eq. (158)). Thanks to this last property it follows that \(T\mathcal{M}^{+}=\mathcal{M}^{-}\). Indeed, suppose \(f\in\mathcal{M}^{+}\). Then, \(\mathbb{J}_{f}|_{\pi}\) admits the Kraus form:
\[\mathbb{J}_{f}\big{|}_{\pi}\,[\rho]=\sum_{i}K_{i}(\pi)\,\rho\,K_{i}(\pi)^{\dagger}\,, \tag{171}\]
where \(i\) ranges over a possibly uncountable set, and \(K_{i}(\pi)\) are \(\pi\)-dependent Kraus operators. Then, thanks to Eq. (158), we can express \(\mathbb{J}_{Tf}^{-1}|_{\pi}\) as:
\[\mathbb{J}_{Tf}^{-1}\big{|}_{\pi}\,[\rho]=\mathbb{J}_{f}\big{|}_{\pi^{-1}}\,[\rho]=\sum_{i}K_{i}(\pi^{-1})\,\rho\,K_{i}(\pi^{-1})^{\dagger}\,, \tag{172}\]
proving that \(\mathbb{J}_{Tf}^{-1}|_{\pi}\) is also CP, and thus \(Tf\in\mathcal{M}^{-}\). This shows that the subset \(\mathcal{M}^{+}\cup\mathcal{M}^{-}\) of \(\mathcal{M}\) is stable under the transformation \(T\). It should be noticed that since this is a strict subset, it was not obvious from the beginning that this would be the case.

We have seen that one can give a necessary condition for \(f\) to be in \(\mathcal{M}^{+}\) in terms of the partial order induced by the pointwise order of real functions. It is then a remarkable fact that one can introduce a partial order that implies the pointwise one, and that can be used to completely characterise \(\mathcal{M}^{+}\) and \(\mathcal{M}^{-}\). To this end, we need to define the concept of positive definite continuous functions. These are functions \(h:\mathbb{R}\to\mathbb{C}\) such that for any vector of reals \(\{t_{i}\}_{i\in\{1,\ldots,n\}}\) of arbitrary size \(n\), the matrix \(A\) defined in coordinates as \(A_{i,j}=h(t_{i}-t_{j})\) is positive semidefinite. This class of functions is closed under multiplication, and the elements of the set are uniformly bounded by their value at zero. Finally, a key result called Bochner's theorem says that a function is positive definite if and only if it is the Fourier transform of a finite positive measure on \(\mathbb{R}\) [29]. This last property is the key ingredient of Thm. 14. Then, we define the partial order \(\preceq\) on \(\mathcal{M}\) by saying that \(f_{1}\preceq f_{2}\) if \(f_{1}(e^{t})/f_{2}(e^{t})\) is positive definite or, equivalently, if the matrix with entries:
\[A_{i,j}:=\frac{f_{1}(\pi_{i}/\pi_{j})}{f_{2}(\pi_{i}/\pi_{j})} \tag{173}\]
is positive semidefinite for any strictly positive probability vector \(\{\pi_{i}\}_{i\in\{1,\ldots,n\}}\) of any fixed size \(n\). Before proving that \(\preceq\) is an actual order relation, it is useful to point out that \(f_{1}\preceq f_{2}\) implies \(f_{1}\leq f_{2}\) [30]. Then, we can verify that \(\preceq\) satisfies all the necessary conditions to induce a partial order:

1. _reflexivity:_ i.e., \(f\preceq f\), which follows from the fact that a matrix with all entries equal to \(1\) is positive semidefinite;
2. _antisymmetry:_ the fact that \(f_{1}\preceq f_{2}\) and \(f_{2}\preceq f_{1}\) implies \(f_{1}=f_{2}\). Since \(f_{1}\preceq f_{2}\implies f_{1}\leq f_{2}\), antisymmetry is a consequence of the same relation for the pointwise order;
3. _transitivity:_ this condition says that if \(f_{1}\preceq f_{2}\) and \(f_{2}\preceq f_{3}\), this implies \(f_{1}\preceq f_{3}\).
The proof of this fact is a consequence of the closure of the class of positive definite functions under multiplication, as one can rewrite:
\[\frac{f_{1}(e^{t})}{f_{3}(e^{t})}=\left(\frac{f_{1}(e^{t})}{f_{2}(e^{t})}\right)\left(\frac{f_{2}(e^{t})}{f_{3}(e^{t})}\right)\,. \tag{174}\]
Since the two functions on the right hand side of the equation are positive definite (as follows from the assumption that \(f_{1}\preceq f_{2}\) and \(f_{2}\preceq f_{3}\)), their product is positive definite as well. This proves the claim.

The definition of \(\preceq\) in this context could look rather arbitrary. Still, it can be argued that it is actually a very natural relation when it comes to the characterisation of the sets of completely positive quantum Fisher operators. Indeed, one can specify all the functions \(f\in\mathcal{M}^{+}\) as exactly the ones satisfying \(f\preceq f_{SQ}\). The reason for this is simple: \(f\preceq f_{SQ}\) if and only if the matrix with entries:
\[A_{i,j}:=f\left(\frac{\pi_{i}}{\pi_{j}}\right)\sqrt{\frac{\pi_{j}}{\pi_{i}}}=\left(f\left(\frac{\pi_{i}}{\pi_{j}}\right)\pi_{j}\right)\frac{1}{\sqrt{\pi_{i}\pi_{j}}}=\frac{1}{\sqrt{\pi_{i}}}\left(J_{f,\pi}\right)_{i,j}\frac{1}{\sqrt{\pi_{j}}}\,, \tag{175}\]
is positive semidefinite. Interestingly, the last equality shows that \(A\) is connected to \(J_{f,\pi}\) by a congruence transformation (conjugation by the positive diagonal matrix \(\mathrm{diag}(\pi_{i}^{-1/2})\)), which preserves positivity. In other words, the condition that \(f\preceq f_{SQ}\) is equivalent to the requirement that \(J_{f,\pi}\) is positive semidefinite for any \(\pi\), which exactly corresponds to \(\mathbb{J}_{f}|_{\pi}\) being CP. Thus, for any \(f\in\mathcal{M}\) there is the equivalence \(f\preceq f_{SQ}\iff f\in\mathcal{M}^{+}\). It is also easy to verify that \(T\) reverses the order \(\preceq\). Indeed, if \(f_{1}\preceq f_{2}\), then the claim follows from:
\[\frac{f_{1}(e^{t})}{f_{2}(e^{t})}=\frac{Tf_{2}(e^{t})}{Tf_{1}(e^{t})}\,, \tag{176}\]
which directly implies \(Tf_{2}\preceq Tf_{1}\). Indeed, thanks to the bijection between \(\mathcal{M}^{+}\) and \(\mathcal{M}^{-}\), it also follows that we can completely characterise the latter as all the functions satisfying \(f_{SQ}\preceq f\). This also shows that we can partition \(\mathcal{M}\) as:
\[\mathcal{M}=\left(\mathcal{M}^{+}\cup\mathcal{M}^{-}\right)\bigsqcup\left(\bigsqcup_{f\not\preceq f_{SQ}\wedge f_{SQ}\not\preceq f}\{f\}\right)\,, \tag{177}\]
i.e., into functions for which either \(\mathbb{J}_{f}|_{\pi}\) or \(\mathbb{J}_{f}^{-1}|_{\pi}\) is CP, and ones that are incomparable with \(f_{SQ}\). Moreover, from the antisymmetry of \(\preceq\) we also have that \(\mathcal{M}^{+}\cap\mathcal{M}^{-}=\{f_{SQ}\}\) (this could also be deduced from the fact that \(T\) has a unique fixed point, together with the fact that \(T\mathcal{M}^{+}=\mathcal{M}^{-}\)). The use of positive definite functions does not just help in characterising which elements of \(\mathcal{M}\) are in \(\mathcal{M}^{+}\) or \(\mathcal{M}^{-}\), but it also allows one to give an analytical expression for general CP quantum Fisher operators [28]:

**Theorem 14**.: _For any \(f\in\mathcal{M}^{+}\) there exists a symmetric probability distribution \(\mathrm{d}\nu_{f}^{+}(s)\) on \(\mathbb{R}\) such that:_
\[\mathbb{J}_{f}|_{\pi}[A]=\int_{-\infty}^{\infty}\mathrm{d}\nu_{f}^{+}(s)\;\pi^{is+\frac{1}{2}}\,A\,\pi^{-is+\frac{1}{2}}\,.
\tag{178}\] _Similarly, for any \(f\in\mathcal{M}^{-}\) there exists a symmetric probability distribution \(\mathrm{d}\nu_{f}^{-}(s)\) on \(\mathbb{R}\) such that:_ \[\mathbb{J}_{f}^{-1}|_{\pi}[A]=\int_{-\infty}^{\infty}\mathrm{d}\nu_{f}^{-}(s) \;\pi^{is-\frac{1}{2}}\,A\,\pi^{-is-\frac{1}{2}}\,. \tag{179}\] _Moreover, the defining probability distributions coincide when mapping \(\mathcal{M}^{+}\) into \(\mathcal{M}^{-}\) through \(T\), that is, for every \(f\in\mathcal{M}^{+}\) we have that \(\mathrm{d}\nu_{f}^{+}=\mathrm{d}\nu_{Tf}^{-}\)._ Proof.: This result is a straightforward application of Bochner's theorem. Let us first analyse the case in which \(f\in\mathcal{M}^{+}\). Since \(f\preceq f_{SQ}\), this means that \(e^{-t/2}f(e^{t})\) is positive definite. Hence, from Bochner's theorem there exists a unique probability measure \(\mathrm{d}\nu_{f}^{+}(s)\) on \(\mathbb{R}\) such that: \[e^{-t/2}f(e^{t})=\int_{-\infty}^{\infty}\mathrm{d}\nu_{f}^{+}(s)\;e^{its}\,. \tag{180}\] Since \(f\) is standard it follows from the symmetry \(e^{-t/2}f(e^{t})=e^{-t/2}e^{t}f(e^{-t})=e^{t/2}f(e^{-t})\) that \(\mathrm{d}\nu_{f}^{+}(s)\) has to be symmetric with respect to zero. Then, using the definition of \(\left.\mathbb{J}_{f}\right|_{\pi}\) in Eq. (15) we have: \[\left.\mathbb{J}_{f}\right|_{\pi}[A]=\mathbb{R}_{\pi}\,f\!\left(\mathbb{L}_{ \pi}\mathbb{R}_{\pi}^{-1}\right)\![A]=\mathbb{R}_{\pi}\int_{-\infty}^{\infty} \mathrm{d}\nu_{f}^{+}\;(\mathbb{L}_{\pi}\mathbb{R}_{\pi}^{-1})^{is+1/2}[A]= \int_{-\infty}^{\infty}\mathrm{d}\nu_{f}^{+}(s)\;\pi^{is+1/2}\,A\,\pi^{-is+1/2 }\,, \tag{181}\] proving Eq. (178). At this point we can use the bijection between \(\mathcal{M}^{+}\) and \(\mathcal{M}^{-}\) to show that for every \(f\in\mathcal{M}^{-}\) we can express \(\left.\mathbb{J}_{f}^{-1}\right|_{\pi}\) as: \[\left.\mathbb{J}_{f}^{-1}\right|_{\pi}[A]=\left.\mathbb{J}_{Tf}\right|_{\pi^{ -1}}[A]=\int_{-\infty}^{\infty}\mathrm{d}\nu_{Tf}^{+}(s)\;\pi^{-is-1/2}\,A\, \pi^{is-1/2}=\int_{-\infty}^{\infty}\mathrm{d}\nu_{f}^{-}(s)\;\pi^{is-1/2}\,A \,\pi^{-is-1/2}\,, \tag{182}\] where in the last step we used the symmetry of \(\mathrm{d}\nu_{Tf}^{+}\) to perform the change of variables \(s\to(-s)\) and we implicitly defined \(\mathrm{d}\nu_{f}^{-}\). This equality proves Eq. (179) and the last claim. The theorem just presented gives the most general expression of Fisher operators \(\mathbb{J}_{f}|_{\pi}\) and \(\mathbb{J}_{f}^{-1}|_{\pi}\) that are CP. Still, it should be noticed that not all symmetric probability distributions generate a quantum Fisher operator, as it can be explicitly verified by plugging an arbitrary probability distribution in Eq. (180). For this reason, the next result gives a characterisation of the possible measures one can use: **Theorem 15**.: _For every \(f\in\mathcal{M}^{+}\) the defining measure \(\mathrm{d}\nu_{f}^{+}\) in Eq. (178) satisfies:_ \[\mathrm{d}\nu_{f}^{+}(s)=\frac{1}{2\pi}\int_{-\infty}^{\infty}\mathrm{d}t\;e^ {-(\frac{1}{2}+is)t}f(e^{t})=\int_{0}^{1}\mathrm{d}\mu_{Tf}(\lambda)\;\cosh \left(\frac{\log\lambda}{2}\right)\frac{\cos(s\log\lambda)}{\cosh\pi s}\,; \tag{183}\] _where \(\mathrm{d}\mu_{Tf}\) is the probability distribution on \([0,1]\) defined in Eq. (152)._ One can obtain this result by simply plugging the decomposition of general standard monotone functions from Eq. (152) into Eq. (180) and performing an inverse Fourier transform. 
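As a concrete illustration of Thm. 14 and of the characterisation \(f\preceq f_{SQ}\iff f\in\mathcal{M}^{+}\), the following sketch (not part of the original derivations) verifies the integral representation for \(f_{H}\) numerically. The closed-form measure \(\mathrm{d}\nu^{+}_{f_{H}}(s)=\operatorname{sech}(\pi s)\,\mathrm{d}s\) used below is obtained by Fourier-transforming \(e^{-t/2}f_{H}(e^{t})=\operatorname{sech}(t/2)\), and should be treated as an assumption of this sketch; the last lines check that the matrix of Eq. (175) is positive semidefinite for \(f_{H}\) but not for \(f_{B}\):

```python
import numpy as np
from scipy.integrate import quad

rng = np.random.default_rng(4)
d = 3
p = rng.random(d); p /= p.sum()              # spectrum of pi (pi taken diagonal)
A = rng.normal(size=(d, d)); A = A + A.T     # real symmetric test operator

f_H = lambda x: 2*x/(x + 1)
f_B = lambda x: (x + 1)/2

# Direct action of J_{f_H}|_pi in the eigenbasis of pi, Eq. (155)
direct = f_H(p[:, None]/p[None, :]) * p[None, :] * A

# Bochner representation, Eq. (178), with d nu^+(s) = sech(pi s) ds for f_H;
# the imaginary part of the integrand is odd in s and integrates to zero
def entry(i, j):
    g = lambda s: ((1/np.cosh(np.pi*s)) *
                   (p[i]**(1j*s + .5) * p[j]**(-1j*s + .5)).real * A[i, j])
    return quad(g, -20, 20)[0]

boch = np.array([[entry(i, j) for j in range(d)] for i in range(d)])
print(np.allclose(direct, boch, atol=1e-7))  # True

# Eq. (175): positivity of A_{ij} = f(pi_i/pi_j) sqrt(pi_j/pi_i) discriminates M^+
for f in (f_H, f_B):
    M = f(p[:, None]/p[None, :]) * np.sqrt(p[None, :]/p[:, None])
    print(np.linalg.eigvalsh(M).min())       # ~0 for f_H (PSD), negative for f_B
```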
Theorem 15 shows the difficulty of giving a full characterisation of the allowed measures \(\mathrm{d}\nu_{f}^{+}\): indeed, it should be noticed that the functions in the last integral of Eq. (183) are not probability distributions (they are quasi-probabilities, i.e., they can be negative in general), so plugging a generic \(\mathrm{d}\mu_{Tf}\) will give negativities in the corresponding \(\mathrm{d}\nu_{f}^{+}\). Indeed, whereas the decomposition in Eq. (152) holds for any \(f\in\mathcal{M}\), it makes sense to define \(\mathrm{d}\nu_{f}^{+}\) only for \(f\in\mathcal{M}^{+}\). Finally, it should also be pointed out that whereas \(f\in\mathcal{M}\implies f_{H}\leq f\leq f_{B}\), the same does not hold for the partial order \(\preceq\). Indeed, for \(\lambda\) small enough (different from zero), the extreme points defined in Eq. (152) satisfy \(f_{\lambda}\not\preceq f_{B}\): this can be proven by noticing that \(f_{\lambda}(e^{t})/f_{B}(e^{t})\) is not a positive definite function, as it does not arise as the Fourier transform of a positive measure on \(\mathbb{R}\). This also justifies the need to introduce the two partial orders \(\leq\) and \(\preceq\): one gives necessary conditions to be in \(\mathcal{M}\), the other completely specifies \(\mathcal{M}^{+}\) and \(\mathcal{M}^{-}\).

**Box 11**: **The set of inverse standard monotone functions.** As it was discussed in Sec. IV.1, the set \(\mathcal{M}\) is not only convex but it even has the structure of a simplex with a continuum of vertices. It is easy to verify that the convex structure of \(\mathcal{M}\) is also present for \(\mathcal{M}^{+}\), since \(\mathbb{J}_{\alpha f_{1}+(1-\alpha)f_{2}}\big{|}_{\pi}=\alpha\,\mathbb{J}_{f_{1}}\big{|}_{\pi}+(1-\alpha)\,\mathbb{J}_{f_{2}}\big{|}_{\pi}\) and convex sums preserve complete positivity. Still, the simplicial structure appears to be lost, since the only extreme point for which \(f_{\lambda}\in\mathcal{M}^{+}\) is the one with \(\lambda=1\). When considering \(\mathcal{M}^{-}\) the situation becomes even worse: it has been shown in [28] that there are pairs of functions in \(\mathcal{M}^{-}\) for which any non-trivial convex sum is not an element of \(\mathcal{M}^{-}\). Since convexity is a desirable property for analytical manipulations, we introduce here a new set that will facilitate the treatment of \(\mathcal{M}^{-}\). In particular, define \(\mathcal{K}\) to be the set of inverse standard monotones, i.e., the set of functions given by:
\[\mathcal{K}:=\left\{k\;\middle|\;k(x)=\frac{1}{f(x)},\,f\in\mathcal{M}\right\}\,. \tag{184}\]
The members of \(\mathcal{K}\) are matrix convex functions that satisfy \(k(x)=k(x^{-1})x^{-1}\) and \(k(1)=1\). Actually, since the inverse of a matrix convex function is matrix monotone [7], one could equivalently define \(\mathcal{M}\) as the set of inverses of the members of \(\mathcal{K}\). Indeed, in the literature both choices are used interchangeably, and which set to use boils down to a question of taste. We can pass from one set to the other using the two bijections \(I_{1,2}:\mathcal{M}\to\mathcal{K}\) given by:
\[[I_{1}f](x)=\frac{1}{f(x)}\,,\qquad\qquad\qquad[I_{2}f](x)=f(x^{-1})\,. \tag{185}\]
It should be noticed that \(I_{1}\) and \(I_{2}\) are both involutive, they commute (in the sense that \(I_{1}I_{2}f=I_{2}I_{1}f\)), and they satisfy the relation \(I_{1}I_{2}=T\), as can be directly verified from Eq. (149). This also means that the two transformations are related to one another by \(I_{1/2}=I_{2/1}T\).
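These algebraic relations are easy to confirm numerically; the following minimal sketch (not from the original text) does so for the Kubo-Mori monotone \(f(x)=(x-1)/\log x\), taken as a representative example:

```python
import numpy as np

xs = np.geomspace(0.05, 20, 400)             # sample grid avoiding x = 1 exactly
f  = lambda x: (x - 1)/np.log(x)             # Kubo-Mori standard monotone

T  = lambda f: (lambda x: x/f(x))            # Eq. (149)
I1 = lambda f: (lambda x: 1/f(x))            # Eq. (185)
I2 = lambda f: (lambda x: f(1/x))

print(np.allclose(I1(I2(f))(xs), I2(I1(f))(xs)))   # I1 and I2 commute
print(np.allclose(I1(I2(f))(xs), T(f)(xs)))        # and compose to T

k = I1(f)                                    # a member of K
print(np.allclose(k(xs), k(1/xs)/xs))        # symmetry k(x) = k(1/x) x^{-1}
```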
From the expression in Eq. (151) one can deduce that \(\mathcal{K}\) also has the structure of a Bauer simplex, and that the extreme points are given by: \[k_{\lambda}(x):=\left(\frac{\lambda+1}{2}\right)\left(\frac{1}{x+\lambda}+\frac{1}{1+\lambda\,x}\right)\,. \tag{186}\] Moreover, both partial orders \(\leq\) and \(\preceq\) can be defined on \(\mathcal{K}\), and in this context it should be noticed that \(I_{1}\) is order reversing, while \(I_{2}\) preserves the order, since it is the composition of two order reversing maps. Hence, we can bound the elements of \(\mathcal{K}\) by using the relation \(f_{H}\leq f\leq f_{B}\) and mapping it through either of \(I_{1/2}\), so that any \(k\in\mathcal{K}\) satisfies: \[[I_{1}f_{B}](x)=[I_{2}f_{H}](x)=\frac{2}{x+1}\leq\,k(x)\,\leq\frac{x+1}{2x}=[I_{1}f_{H}](x)=[I_{2}f_{B}](x)\,. \tag{187}\] Finally, one can also define the quantum Fisher information operators directly in terms of \(k\in\mathcal{K}\), where we use the new notation: \[\Omega_{k}\big{|}_{\pi}:=k(\mathbb{L}_{\pi}\mathbb{R}_{\pi}^{-1})\,\mathbb{R}_{\pi}^{-1}=\mathbb{J}_{I_{1}k}^{-1}\big{|}_{\pi}=\mathbb{J}_{I_{2}k}\big{|}_{\pi^{-1}}\,. \tag{188}\] Thanks to this definition, one can lift the convex structure of \(\mathcal{K}\) to the quantum Fisher operators, since \(\Omega_{\alpha k_{1}+(1-\alpha)k_{2}}|_{\pi}=\alpha\,\Omega_{k_{1}}|_{\pi}+(1-\alpha)\Omega_{k_{2}}|_{\pi}\). If we then define the two sets: \[\mathcal{K}^{+}:=\left\{k\in\mathcal{K}\,|\,\Omega_{k}\big{|}_{\pi}\text{ is CP }\forall\,\pi\right\}\,;\qquad\qquad\mathcal{K}^{-}:=\left\{k\in\mathcal{K}\,|\,\Omega_{k}^{-1}\big{|}_{\pi}\text{ is CP }\forall\,\pi\right\}\,, \tag{189}\] it follows from the remark above that \(\mathcal{K}^{+}\) is a convex set, while it was proven in [28] that \(\mathcal{K}^{-}\) is not. Interestingly, it holds that \(\mathcal{K}^{\pm}=I_{1}\mathcal{M}^{\mp}\), so depending on whether one is more interested in the complete positivity of \(\mathbb{J}_{f}\big{|}_{\pi}\) or of \(\mathbb{J}_{f}^{-1}\big{|}_{\pi}\), it will be more suitable to work with \(\mathcal{M}\) or \(\mathcal{K}\). The following diagram summarises the relations between the sets introduced in this section:

[Diagram: the sets \(\mathcal{M}\), \(\mathcal{M}^{\pm}\), \(\mathcal{K}\) and \(\mathcal{K}^{\pm}\), connected by the involutions \(T\), \(I_{1}\) and \(I_{2}\), with \(\mathcal{K}^{\pm}=I_{1}\mathcal{M}^{\mp}\).]

It should be noticed that all the transformations are involutive, so all the arrows can be reversed without changing the corresponding label. Moreover, the two sets on the right (\(\mathcal{M}^{+}\) and \(\mathcal{K}^{+}\)) are convex.

## V A Garden of Quantum Fisher Information

Now that the general theory of quantum Fisher information has been laid down, it is time to enter the tangle of disparate expressions that this quantity can present. One of the founders of the subject, Denes Petz, having to describe the richness of different examples one is confronted with, chose to call it a garden of monotonic metrics [31]. In his honour, we use here the same term to designate this section, which is aimed at providing a field guide for the reader to orient themselves in this florid jungle. To facilitate the identification of different metrics, we have summarised the ones treated here in Table 4, where we list the standard monotones in decreasing order (according to \(\leq\)), the corresponding contrast functions and quantum Fisher operators.
The letter on each row refers to the subsection in which each case is treated, which we list here for convenience together with their main properties:

* **The Bures metric**: _the smallest among all the Fisher informations, and for this reason also the most studied in the literature. This is one of the two cases for which a closed form of the geodesic distance is known (see Eq. (195)). Moreover, it also appears in a central result in estimation theory, the quantum Cramer-Rao bound (Eq. (206)), connecting the Fisher information to the minimal variance of an unbiased estimator. We give a generalised version of this bound in Eq. (208)._
* **The Heinz family**: _a one-parameter family of functions in \(\mathcal{M}^{-}\)._
* **The family of \(\alpha\)-divergences**: _a fundamental class of standard monotone functions. The corresponding contrast functions are related to Renyi divergences through Eq. (231). Most of the examples in this list are particular cases of this family._
* **The Wigner-Yanase skew information (\(\alpha=1/2\))**: _the largest function in the family of \(\alpha\)-divergences. It is the only metric having constant positive curvature, making the space of states in this case isometric to an \(n\)-sphere. This allows one to find a closed expression for the geodesic distance and for the geodesics (see Eq. (256) and Eq. (257)). Moreover, it naturally appears in the context of hypothesis testing, in particular in the quantum Chernoff bound, expressed in Eq. (261)._
* **The relative entropy (\(\alpha=0\))**: _the most famous among the contrast functions; the corresponding metric, called Kubo-Mori-Bogoliubov, also appears often in statistical mechanics, as it is related to the linear response of thermal states. In particular, in Eq. (276) we discuss the interpretation of the generalised Cramer-Rao bound in this context._
* **The quantum information variance**: _this metric is related to the second derivative of the \(\alpha\)-divergences at zero (see Eq. (278)), and also to estimation theory for thermal states._
* **The geometric mean**: _the only standard monotone in the intersection of \(\mathcal{M}^{+}\) and \(\mathcal{M}^{-}\). This was used in Sec. III.2 to define the Petz recovery map._
* **The harmonic mean (\(\alpha=2\))**: _the largest Fisher information (and the smallest standard monotone). It is also the only case in which the corresponding contrast function can be expressed as a \(\chi_{f}^{2}\)-divergence (see Box 3)._

### The Bures metric

Among the standard monotone functions, the maximum is given by: \[f_{B}(x)=\frac{x+1}{2}\,. \tag{190}\] The associated standard convex function \(g_{B}(x)\) can be obtained using the map \(L\) defined in Box 10, giving: \[g_{B}(x)=[Lf_{B}](x)=\frac{(x-1)^{2}}{x+1}=\int_{0}^{1}\mathrm{d}N_{B}(s)\ \left(\frac{(x-1)^{2}}{x+s}+\frac{(x-1)^{2}}{1+sx}\right)\,, \tag{191}\] where we implicitly defined the measure \(\mathrm{d}N_{B}(s):=\delta(1-s)/2\). The corresponding contrast function is given by: \[H_{B}(\rho||\sigma)=\mathrm{Tr}\left[(\rho-\sigma)(\mathbb{L}_{\sigma}+\mathbb{R}_{\rho})^{-1}(\rho-\sigma)\right]=\int_{0}^{\infty}\mathrm{d}t\ \mathrm{Tr}\left[(\rho-\sigma)\,e^{-t\sigma}\,(\rho-\sigma)e^{-t\rho}\right]\,, \tag{192}\] where we used the integral representation of the operator \((\mathbb{L}_{\sigma}+\mathbb{R}_{\rho})^{-1}\) proved in App. F.
By doing so we also highlight the symmetry in the contrast function, i.e., \(H_{B}(\rho||\sigma)=H_{B}(\sigma||\rho)\), which directly follows from the fact that \(L\) maps standard monotones in standard convex functions (see the discussion in Box 10 for more details). Applying Thm. 1, it directly follows from Eq. (192) that the quantum Fisher operators are given by: \[\mathbb{J}_{B}\big{|}_{\pi}[A]=\frac{1}{2}\left(\mathbb{L}_{\pi}+\mathbb{R}_{\pi}\right)[A]=\frac{1}{2}\left\{\pi,A\right\};\qquad\qquad\mathbb{J}_{B}^{-1}\big{|}_{\pi}[A]=2(\mathbb{L}_{\pi}+\mathbb{R}_{\pi})^{-1}[A]=\int_{0}^{\infty}\mathrm{d}t\,\;e^{-t\pi/2}\,A\,e^{-t\pi/2}\,. \tag{193}\] It should be noticed that since \(\mathbb{J}_{B}^{-1}\big{|}_{\pi}\) is in Kraus form, \(f_{B}\in\mathcal{M}^{-}\). The metric generated by \(\mathbb{J}_{B}^{-1}\big{|}_{\pi}\) is called the Bures metric. Interestingly, this is one of the two examples for which a closed form for the geodesic distance exists [32], which is given by: \[d_{B}(\rho,\,\sigma)=2\arccos\left(\max\left\{\mathrm{Tr}\left[WX^{\dagger}\right]\big{|}WW^{\dagger}=\rho,\,\,XX^{\dagger}=\sigma\right\}\right)=2\arccos\left(\mathrm{Tr}\left[\sqrt{\sqrt{\rho}\sigma\sqrt{\rho}}\right]\right)= \tag{194}\] \[=2\arccos\left(\sqrt{F(\rho,\,\sigma)}\right)\,, \tag{195}\] where in the last step we have implicitly defined the fidelity \(F(\rho,\sigma)\). This quantity can be related to another quantifier of statistical distance, the Bures length, which takes the form [33]: \[D_{B}(\rho,\,\sigma)^{2}=\min\left\{\mathrm{Tr}\left[(W-X)(W-X)^{\dagger}\right]\big{|}WW^{\dagger}=\rho,\,\,XX^{\dagger}=\sigma\right\}=2\left(1-\mathrm{Tr}\left[\sqrt{\sqrt{\rho}\sigma\sqrt{\rho}}\right]\right)= \tag{196}\] \[=2\left(1-\cos\left(\frac{d_{B}(\rho,\,\sigma)}{2}\right)\right)\,. \tag{197}\]

Figure 4: In the table above we summarise the expressions of standard monotone functions \(f(x)\) analysed in the text. The table is divided in two parts: in the first half we present the properties of the associated contrast function, in the second the corresponding standard monotone. In particular, notice that the measure \(\mathrm{d}N(s)\) is the one defined in Eq. (12) for asymmetric contrast functions. The question marks correspond to the entries for which no explicit form has been found. For reasons of space, we have also introduced the constants \(c_{\alpha}:=\frac{\sin(\pi\alpha)}{\pi\alpha(1-\alpha)}\) and \(C_{\gamma}=\frac{\sin(\pi\gamma)}{\pi}\).

It is worthwhile to briefly sketch the main ideas that lead to these expressions, but we refer the interested reader to [34] for a more extensive discussion. The first step is to define the principal bundle induced by the purification of the states: this is given as the set of full rank matrices \(A\), and the projection on the space of states is defined as \(A\to\rho:=AA^{\dagger}\). It should be noticed that \(A\) and \(AU\), with \(U\) an arbitrary unitary, both project into the same state. Moreover, if one endows the purification bundle with the Hilbert-Schmidt scalar product (defined as \(\left\langle A|B\right\rangle:=\operatorname{Tr}\left[A^{\dagger}B\right]\)), the set of matrices that project onto normalised states coincides with the sphere given by the equation \(\operatorname{Tr}\left[AA^{\dagger}\right]=1\).
Then, there are two natural distances defined in this space: the one on the surface of the sphere (i.e., the angle spanned by the great circle passing between any two points), and the chordal one (given by the shortest line in the Euclidean ambient space). One can then guess that this is exactly what was computed in Eq. (195) and Eq. (197) respectively, exploiting the gauge freedom in the purification to obtain the minimal distances in each case. Moreover, one can easily show that the Fisher information associated to \(\mathbb{J}_{B}^{-1}\big{|}_{\pi}\) upper bounds the square of the trace norm [2]. Indeed, thanks to the identity \(||X||_{1}=\sup_{U}|\operatorname{Tr}\left[XU\right]|\), it follows from a straightforward application of the Cauchy-Schwarz inequality that: \[||X||_{1}^{2}=\sup_{U|UU^{\dagger}=1}|\operatorname{Tr}\left[XU\right]|^{2}=\sup_{U|UU^{\dagger}=1}\left|\operatorname{Tr}\left[\mathbb{J}_{B}^{-1/2}\big{|}_{\pi}\!\left[X\right]\mathbb{J}_{B}^{1/2}\big{|}_{\pi}\!\left[U\right]\right]\right|^{2}\leq \tag{198}\] \[\leq\operatorname{Tr}\left[\mathbb{J}_{B}^{-1/2}\!\left[X\right]\mathbb{J}_{B}^{-1/2}\!\left[X\right]\right]\sup_{U|UU^{\dagger}=1}\operatorname{Tr}\left[\mathbb{J}_{B}^{1/2}\big{|}_{\pi}\!\left[U^{\dagger}\right]\mathbb{J}_{B}^{1/2}\big{|}_{\pi}\!\left[U\right]\right]\,. \tag{199}\] The first factor in the second line simply coincides with \(\mathcal{F}_{B,\pi}(X)\). The second term instead can be explicitly computed as: \[\sup_{U|UU^{\dagger}=1}\operatorname{Tr}\left[U^{\dagger}\,\mathbb{J}_{B}\big{|}_{\pi}\!\left[U\right]\right]=\frac{1}{2}\sup_{U|UU^{\dagger}=1}\,\left(\operatorname{Tr}\left[U^{\dagger}\pi U\right]+\operatorname{Tr}\left[U^{\dagger}U\pi\right]\right)=1\,, \tag{200}\] thanks to the cyclicity of the trace. Then, wrapping everything together, we finally obtain: \[||X||_{1}^{2}\,\leq\,\mathcal{F}_{B,\pi}(X)\,\leq\,\mathcal{F}_{f,\pi}(X)\,, \tag{201}\] where the last inequality follows from the discussion in Sec. IV.2. This shows that the Fisher information associated to the Bures metric is an upper bound to the square of the trace norm, and lower bounds all other Fisher informations. It should be pointed out that when one refers to the quantum Fisher information in the physics literature, what one usually has in mind is the Bures metric.

Figure 5: In the figure some of the most notable standard monotones are presented in a log-log scale (in the inset we show their behaviour in linear coordinates). In particular, we show the two extrema (\(f_{B}\) from Sec. V.1 and \(f_{H}\) from Sec. V.8), the square root (Sec. V.7), the family \(f_{\alpha}\) of \(\alpha\)-divergences in the range \(\alpha\in[0,1]\) and its transform \(Tf_{\alpha}\) (Sec. V.3), together with the standard monotone associated with the quantum information variance \(f_{V}(x)\) defined in Eq. (277) (Sec. V.6). The shading in the two curves associated to \(f_{\alpha}\) indicates that this family interpolates between \(f_{0}\), the standard monotone for the relative entropy (Sec. V.5), and the maximum value \(f_{1/2}=\frac{1}{4}(1+\sqrt{x})^{2}\), corresponding to the Wigner-Yanase skew information (Sec. V.4). It is interesting to notice that the monotone associated to the quantum information variance satisfies neither \(f_{V}(x)\geq\sqrt{x}\) nor \(f_{V}(x)\leq\sqrt{x}\). This shows that there are monotones for which both \(\left.\mathbb{J}_{f}\right|_{\pi}\) and \(\left.\mathbb{J}_{f}^{-1}\right|_{\pi}\) are not CP.
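A minimal numerical sketch (ours) of Eq. (201): for a random state \(\pi\) and Hermitian perturbation \(X\), the Bures Fisher information \(\mathcal{F}_{B,\pi}(X)=\operatorname{Tr}[X\,\mathbb{J}_{B}^{-1}|_{\pi}[X]]\) can be computed entrywise in the eigenbasis of \(\pi\), where \((\mathbb{J}_{B}^{-1}|_{\pi}[X])_{ij}=2X_{ij}/(p_{i}+p_{j})\).

```python
import numpy as np

# Check of Eq. (201): ||X||_1^2 <= F_{B,pi}(X) for a random qutrit state.
rng = np.random.default_rng(0)
d = 3
G = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
pi = G @ G.conj().T
pi /= np.trace(pi).real                       # random full-rank state

X = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
X = (X + X.conj().T) / 2                      # Hermitian perturbation
X -= np.trace(X).real / d * np.eye(d)         # made traceless

p, U = np.linalg.eigh(pi)
Xe = U.conj().T @ X @ U                       # X in the eigenbasis of pi
JBinvX = 2 * Xe / (p[:, None] + p[None, :])   # (J_B^{-1}|pi[X])_ij = 2 X_ij/(p_i+p_j)
F_B = np.trace(Xe @ JBinvX).real              # Bures Fisher information

tn = np.abs(np.linalg.eigvalsh(X)).sum()      # trace norm ||X||_1
print(tn**2 <= F_B, tn**2, F_B)
```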
Its importance is justified by the prominent role it plays in quantum metrology: indeed, one of the main results in this context is the famous Cramer-Rao bound, which gives a bound on the quality with which one can estimate parameters encoded in a state. For the sake of the discussion, consider in fact a family of density matrices \(\rho(\theta)\) depending on some parameters \(\theta\). Without loss of generality, suppose the true value of the parameter to be \(\theta=0\) (this just corresponds to a change of variables). We define a locally unbiased estimator to be an observable satisfying [35]: \[\frac{\partial}{\partial\theta}\operatorname{Tr}\left[\rho(\theta)A\right]\bigg{|}_{\theta=0}=1\,. \tag{202}\] This equation tells us that by measuring \(A\) in a neighbourhood of \(\theta=0\), one obtains the correct value of \(\theta\) on average: \[\exists\,\varepsilon\;\big{|}\quad\operatorname{Tr}\left[\rho(\theta)A\right]=\theta\qquad\forall\theta\in(-\varepsilon,\varepsilon)\,. \tag{203}\] This is a desirable feature, as it means that no transformation on the measured average is needed to give a good estimate of \(\theta\). Define now the symmetric logarithmic derivative (SLD) of \(\rho(\theta)\) to be the operator \(L_{B}\) such that, for any \(X\), we have: \[\frac{\partial}{\partial\theta}\operatorname{Tr}\left[X\,\rho(\theta)\right]\bigg{|}_{\theta=0}=\operatorname{Tr}\left[X\,L_{B}\,\rho(0)\right]=\operatorname{Tr}\left[X\,\mathbb{J}_{B}\big{|}_{\rho(0)}[L_{B}]\right]\,, \tag{204}\] where in the last equality we used the cyclicity of the trace to introduce \(\mathbb{J}_{B}\big{|}_{\rho(0)}\). Since this relation holds for arbitrary \(X\), one can impose the equality at the operator level, giving: \[\frac{\partial\rho(\theta)}{\partial\theta}\bigg{|}_{\theta=0}=\mathbb{J}_{B}\big{|}_{\rho(0)}[L_{B}]=\frac{1}{2}\left(L_{B}\rho(0)+\rho(0)L_{B}\right)\,. \tag{205}\] This expression makes it clear where the name for the SLD comes from, as \(L_{B}\) can be thought of as a symmetric generalisation of the differential operator to non-commutative variables. We are now ready to prove the Cramer-Rao bound: this is a fundamental limit on how small the variance of locally unbiased operators can be. Writing it down explicitly, we have: \[\operatorname{Tr}\left[\rho(0)A^{2}\right]=\,\operatorname{Tr}\left[A\,\mathbb{J}_{B}\big{|}_{\rho(0)}[A]\right]\geq\frac{\left|\operatorname{Tr}\left[A\,\mathbb{J}_{B}\big{|}_{\rho(0)}[L_{B}]\right]\right|^{2}}{\operatorname{Tr}\left[L_{B}\,\mathbb{J}_{B}\big{|}_{\rho(0)}[L_{B}]\right]}=\frac{1}{\operatorname{Tr}\left[\partial_{\theta}\rho(\theta)\,\mathbb{J}_{B}^{-1}\big{|}_{\rho(\theta)}[\partial_{\theta}\rho(\theta)]\right]\big{|}_{\theta=0}}\,, \tag{206}\] where the inequality is a simple application of Cauchy-Schwarz for \(\mathbb{J}_{B}\big{|}_{\rho(0)}\), and in the last step we used the definition of locally unbiased operators (Eq. (202)) and inverted Eq. (205). This shows that the ability to estimate the parameter \(\theta\) is intrinsically connected to the statistical difference between \(\rho(0)\) and \(\rho(0+\partial\theta)\). It should be pointed out that the steps presented above can in principle be replicated for any other quantum Fisher information.
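Before the generalisation that follows, a minimal qubit sketch (ours; the rotation generated by \(\sigma_{y}/2\) is an illustrative choice of encoding, not taken from the text) of the chain (202)-(206): the estimator \(A=L_{B}/F\) is locally unbiased and saturates the bound.

```python
import numpy as np
from scipy.linalg import expm

# Qubit illustration of Eqs. (202)-(206) with theta imprinted by exp(-i theta sy/2).
sy = np.array([[0, -1j], [1j, 0]])
p = 0.8
rho0 = np.diag([p, 1 - p])

eps = 1e-6
rho = lambda th: expm(-1j * th * sy / 2) @ rho0 @ expm(1j * th * sy / 2)
drho = (rho(eps) - rho(-eps)) / (2 * eps)      # d rho/d theta at theta = 0

# SLD from Eq. (205): rho0 is diagonal, so (L_B)_ij = 2 (drho)_ij/(p_i + p_j)
pv = np.diag(rho0).real
L_B = 2 * drho / (pv[:, None] + pv[None, :])
F = np.trace(drho @ L_B).real                  # quantum Fisher information

A = L_B / F                                    # locally unbiased estimator
print(np.trace(drho @ A).real)                 # ~1, i.e. Eq. (202)
print(np.trace(rho0 @ A @ A).real, 1 / F)      # variance saturates Eq. (206)
```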
In fact, it is sufficient to define the generalised derivative \(L_{f}\) as: \[\frac{\partial\rho(\theta)}{\partial\theta}\bigg{|}_{\theta=0}=\mathbb{J}_{f} \big{|}_{\rho(0)}[L_{f}]\,, \tag{207}\] to obtain: \[\operatorname{Tr}\left[A\,\mathbb{J}_{f}\big{|}_{\rho(0)}[A]\right]\geq\frac{ 1}{\operatorname{Tr}\left[L_{f}\,\mathbb{J}_{f}\big{|}_{\rho(0)}[L_{f}]\right] }=\frac{1}{\operatorname{Tr}\left[\partial_{\theta}\rho(\theta)\,\mathbb{J}_{ f}^{-1}\big{|}_{\rho(0)}[\partial_{\theta}\rho(\theta)]\right] \big{|}_{\theta=0}}\,. \tag{208}\] This procedure gives a whole family of bounds. Whereas for the Bures case the variance of an observable is a quite standard object, the problem here is to find an operational interpretation to the generalised variance on the left in the above equation. Still, this can be done in some contexts, as for example for the relative entropy, Sec. V.5. ### The Heinz family This family of standard monotone functions is given by: \[f_{\gamma_{>}}(x):=\frac{x^{\gamma}+x^{1-\gamma}}{2}\,. \tag{209}\] Thanks to the Lowner-Heinz inequality we know that \(x^{\gamma}\) is matrix monotone for all \(\gamma\in[0,1]\)[7], which means that \(f_{\gamma_{>}}(x)\) is matrix monotone as well. Hence, we only consider this range for the parameter \(\gamma\). Then, using the transformation \(L\) from Box 10 we can also introduce the corresponding standard convex functions \(g_{\gamma_{>}}(x)\) as: \[g_{\gamma_{>}}(x)=[Lf_{\gamma_{>}}](x)=\frac{(x-1)^{2}}{x^{\gamma}+x^{1-\gamma }}\,, \tag{210}\] for \(\gamma\in[0,1]\), which give rise to the contrast functions: \[H_{\gamma_{>}}(\rho||\sigma)=\operatorname{Tr}\big{[}(\rho-\sigma)(\mathbb{L }_{\sigma}^{\gamma}\mathbb{R}_{\rho}^{1-\gamma}+\mathbb{L}_{\sigma}^{1-\gamma }\mathbb{R}_{\rho}^{\gamma})^{-1}[(\rho-\sigma)]\big{]}\, \tag{211}\] It is useful for what follows to give an integral expression to this quantity. To this end, we point out the following rewriting: \[(\mathbb{L}_{\sigma}^{\gamma}\mathbb{R}_{\rho}^{1-\gamma}+ \mathbb{L}_{\sigma}^{1-\gamma}\mathbb{R}_{\rho}^{\gamma})[A] =\sigma^{\gamma}\,A\,\rho^{1-\gamma}+\sigma^{1-\gamma}\,A\,\rho ^{\gamma}=(\sigma^{\gamma}\,A\,\rho^{\gamma})\rho^{1-2\gamma}+\sigma^{1-2 \gamma}\,(\sigma^{\gamma}\,A\,\rho^{\gamma})= \tag{212}\] \[=(\mathbb{L}_{\sigma}^{1-2\gamma}+\mathbb{R}_{\rho}^{1-2\gamma} )\mathbb{L}_{\sigma}^{\gamma}\mathbb{R}_{\rho}^{\gamma}[A]\,. \tag{213}\] Then, the inverse in Eq. (211) can be taken by inverting first \((\mathbb{L}_{\sigma}\mathbb{R}_{\rho})^{\gamma}\) (which simply gives \((\mathbb{L}_{\sigma}\mathbb{R}_{\rho})^{-\gamma}\)) and the superoperator \((\mathbb{L}_{\sigma}^{1-2\gamma}+\mathbb{R}_{\rho}^{1-2\gamma})\) independently. As the latter can be rewritten as \((\mathbb{L}_{\sigma^{1-2\gamma}}+\mathbb{R}_{\rho^{1-2\gamma}})\), we can use the result from App. F to finally express Eq. (211) as: \[H_{\gamma_{>}}(\rho||\sigma)=\int_{0}^{\infty}\mathrm{d}t\ \operatorname{Tr}\Big{[}(\rho-\sigma)e^{-t\sigma^{1-2\gamma}}\sigma^{- \gamma}(\rho-\sigma)\rho^{-\gamma}e^{-t\rho^{1-2\gamma}}\Big{]}. \tag{214}\] Once again the symmetry in the arguments is evident, and it follows from the fact that \(g_{\gamma_{>}}\) is standard convex. Moreover, one can also apply Thm. 
1 to the expression just obtained to derive the two Fisher operators: \[\mathbb{J}_{f_{\gamma_{>}}}\big{|}_{\pi}[A]=\frac{1}{2}\,(\mathbb{L}_{\pi}^{1-2\gamma}+\mathbb{R}_{\pi}^{1-2\gamma})(\mathbb{L}_{\pi}\mathbb{R}_{\pi})^{\gamma}[A]=\frac{1}{2}\,\{\pi^{1-2\gamma},\pi^{\gamma}\,A\,\pi^{\gamma}\}\,; \tag{215}\] \[\mathbb{J}_{f_{\gamma_{>}}}^{-1}\big{|}_{\pi}[A]=2\,(\mathbb{L}_{\pi}^{1-2\gamma}+\mathbb{R}_{\pi}^{1-2\gamma})^{-1}(\mathbb{L}_{\pi}\mathbb{R}_{\pi})^{-\gamma}[A]=\int_{0}^{\infty}\mathrm{d}t\ e^{-(t\pi^{1-2\gamma})/2}\,\pi^{-\gamma}A\,\pi^{-\gamma}\,e^{-(t\pi^{1-2\gamma})/2}\,. \tag{216}\] Since \(\mathbb{J}_{f_{\gamma_{>}}}^{-1}\big{|}_{\pi}\) is given in Kraus form, it is immediately clear that it is completely positive. This shows that \(f_{\gamma_{>}}\in\mathcal{M}^{-}\) for all \(\gamma\in[0,1]\). This not only provides a one-parameter family of functions in \(\mathcal{M}^{-}\), but also allows one to define a whole class of functions in the set. Indeed, thanks to the convex structure of \(\mathcal{K}^{+}\) (see Box 11) one also has that: \[f_{\mathrm{d}\mu(\gamma)_{>}}(x):=\left(\int_{0}^{1}\mathrm{d}\mu(\gamma)\ \frac{2}{x^{\gamma}+x^{1-\gamma}}\right)^{-1}\,, \tag{217}\] for any probability distribution \(\mathrm{d}\mu(\gamma)\) on \([0,1]\), is also in \(\mathcal{M}^{-}\). This exemplifies how one can exploit convexity to pass from a one-parameter family to a much bigger class of standard monotones satisfying the same property. We can now give some more properties of the defining functions \(f_{\gamma_{>}}\). First, it should be noticed that all of these functions lie above \(f_{SQ}\), as \(x^{\gamma}\) has a unique minimum as a function of \(\gamma\) exactly for \(\gamma=1/2\), in which case \(f_{1/2_{>}}=f_{SQ}\). Moreover, it is straightforward to verify that \(f_{0_{>}}=f_{1_{>}}=f_{B}\), so the Heinz family interpolates between the largest and the smallest elements of \(\mathcal{M}^{-}\) (according to \(\leq\)). It is then an open question whether one could obtain all the elements of \(\mathcal{M}^{-}\) through the procedure in Eq. (217) (it should be noticed that this would imply that \(\mathcal{K}^{+}\) is a Bauer simplex). Finally, we can use the integral expression for powers \(\gamma\in(0,1)\): \[x^{\gamma}=\frac{\sin\pi\gamma}{\pi}\int_{0}^{\infty}\mathrm{d}\lambda\ \lambda^{\gamma-1}\frac{x}{x+\lambda}=\frac{\sin\pi\gamma}{\pi}\int_{0}^{1}\mathrm{d}\lambda\ \left(\lambda^{\gamma-1}\frac{x}{x+\lambda}+\lambda^{-\gamma}\frac{x}{1+\lambda\,x}\right)\,, \tag{218}\] to rewrite \(f_{\gamma_{>}}\) in terms of the extreme points (defined in Eq. (152)): \[f_{\gamma_{>}}(x)=\int_{0}^{1}\mathrm{d}\lambda\left(\frac{\sin\pi\gamma}{\pi}\left(\frac{\lambda^{\gamma-1}+\lambda^{-\gamma}}{1+\lambda}\right)\right)\,f_{\lambda}(x)\,, \tag{219}\] where we isolated the term which defines the measure \(\mathrm{d}\mu_{Tf_{\gamma_{>}}}(\lambda)\). This equation can be used to give alternative expressions to the Fisher operator in Eq. (215). Interestingly, we can use the map \(T\) to define a similar family on \(\mathcal{M}^{+}\), namely: \[f_{\gamma_{<}}(x):=[Tf_{\gamma_{>}}](x)=\frac{2x}{x^{\gamma}+x^{1-\gamma}}\,.
\tag{220}\] The standard convex functions in this case are given by: \[g_{\gamma_{<}}(x)=[Lf_{\gamma_{<}}](x)=\frac{(x^{\gamma}+x^{1-\gamma})(x-1)^{2}}{4x}\,, \tag{221}\] which result in contrast functions of the form: \[H_{\gamma_{<}}(\rho||\sigma)=\mathrm{Tr}\left[(\rho-\sigma)(\mathbb{L}_{\sigma}^{\gamma-1}\mathbb{R}_{\rho}^{-\gamma}+\mathbb{L}_{\sigma}^{\gamma}\mathbb{R}_{\rho}^{\gamma-1})[(\rho-\sigma)]\right]\,. \tag{222}\] Interestingly, thanks to the integral expression in Eq. (218), we can also express the standard convex functions as: \[g_{\gamma_{<}}(x)=\int_{0}^{1}\mathrm{d}\lambda\left(\frac{\sin\pi\gamma}{\pi}\left(\frac{\lambda^{\gamma-1}+\lambda^{-\gamma}}{4}\right)\right)\left(\frac{(x-1)^{2}}{x+\lambda}+\frac{(x-1)^{2}}{1+\lambda x}\right)\,, \tag{223}\] showing that the defining measure (see Eq. (12)) in this case is given by \(\mathrm{d}N_{g}(\lambda):=\frac{\sin\pi\gamma}{\pi}\left(\frac{\lambda^{\gamma-1}+\lambda^{-\gamma}}{4}\right)\mathrm{d}\lambda\) (the normalisation can be checked against the \(\gamma=1/2\) case, which reproduces Eq. (287) below). Thanks to the relation between \(\mathbb{J}_{f}\big{|}_{\pi}\) and \(\mathbb{J}_{Tf}^{-1}\big{|}_{\pi^{-1}}\) given in Eq. (158), we can directly deduce from Eq. (215) and Eq. (216) that: \[\mathbb{J}_{f_{\gamma_{<}}}\big{|}_{\pi}[A]=\mathbb{J}_{Tf_{\gamma_{>}}}^{-1}\big{|}_{\pi^{-1}}[A]=\int_{0}^{\infty}\mathrm{d}t\ e^{-(t\pi^{2\gamma-1})/2}\,\pi^{\gamma}A\pi^{\gamma}\,e^{-(t\pi^{2\gamma-1})/2}\,; \tag{224}\] \[\mathbb{J}_{f_{\gamma_{<}}}^{-1}\big{|}_{\pi}[A]=\mathbb{J}_{Tf_{\gamma_{>}}}\big{|}_{\pi^{-1}}[A]=\frac{1}{2}\left\{\pi^{2\gamma-1},\pi^{-\gamma}A\pi^{-\gamma}\right\}\,. \tag{225}\] As expected from the action of \(T\), all \(\mathbb{J}_{f_{\gamma_{<}}}\) are CP, as they are presented in Kraus form. Hence, it should be noticed that the convex hull of \(f_{\gamma_{<}}\) is contained in \(\mathcal{M}^{+}\), and that this family spans from the smallest element in the set, \(f_{0_{<}}=f_{1_{<}}=f_{H}\), to the largest, namely \(f_{1/2_{<}}=f_{SQ}\). Interestingly, we can also define a family of functions in \(\mathcal{M}^{+}\) in complete analogy with the procedure presented in Eq. (217). Indeed, since \(\mathcal{M}^{+}\) is convex, we have that the functions defined as: \[f_{\mathrm{d}\mu(\gamma)_{<}}(x):=\int_{0}^{1}\mathrm{d}\mu(\gamma)\ \frac{2x}{x^{\gamma}+x^{1-\gamma}}\,, \tag{226}\] are also in \(\mathcal{M}^{+}\) for any probability distribution \(\mathrm{d}\mu(\gamma)\) on \([0,1]\). Moreover, it is straightforward to verify that \(Tf_{\mathrm{d}\mu(\gamma)_{>}}=f_{\mathrm{d}\mu(\gamma)_{<}}\) for the same probability distribution.

### The family of \(\alpha\)-divergences

This family of standard monotone functions takes the form: \[f_{\alpha}(x)=\frac{\alpha(1-\alpha)(x-1)^{2}}{(1-x^{\alpha})(1-x^{1-\alpha})}\,. \tag{227}\] Despite the complicated expression of Eq. (227), this arises from the family of functions: \[g_{\alpha}(x)=\frac{x^{\alpha}-1}{\alpha(\alpha-1)}\,, \tag{228}\] which are matrix convex for \(\alpha\in[-1,2]\) and give rise to the following contrast functions: \[H_{\alpha}(\rho||\sigma)=\frac{1}{\alpha(\alpha-1)}\left(\mathrm{Tr}\left[\sigma^{\alpha}\rho^{1-\alpha}\right]-1\right)=\frac{1}{\alpha(\alpha-1)}\int_{0}^{\alpha}\mathrm{d}\beta\;\mathrm{Tr}\left[\rho^{1-\beta}(\log\sigma-\log\rho)\sigma^{\beta}\right]\,. \tag{229}\] We call these quantities \(\alpha\)-divergences. The integral expression in the equation above is obtained simply by differentiating with respect to \(\alpha\) and integrating again.
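A quick numerical sanity check (ours, not the paper's) of Eq. (229), and of its link to the Renyi divergence introduced just below in Eq. (231), can be run on random full-rank states:

```python
import numpy as np
from scipy.linalg import fractional_matrix_power as mpow

# Verify Eq. (229) against the Renyi relation of Eq. (231) for random states.
rng = np.random.default_rng(1)
def rand_state(d=3):
    G = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    r = G @ G.conj().T
    return r / np.trace(r).real

rho, sigma = rand_state(), rand_state()
alpha = 0.3
Q = np.trace(mpow(sigma, alpha) @ mpow(rho, 1 - alpha)).real

H = (Q - 1) / (alpha * (alpha - 1))            # H_alpha(rho||sigma), Eq. (229)
S = np.log(Q) / ((1 - alpha) - 1)              # S_{1-alpha}(rho||sigma), Eq. (230)
print(H, (np.exp(-alpha * S) - 1) / (alpha * (alpha - 1)))   # equal: Eq. (231)
```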
A similar family is the one of Renyi divergences, given by: \[S_{\alpha}(\rho||\sigma):=\frac{1}{\alpha-1}\log\;\mathrm{Tr}\left[\rho^{\alpha}\sigma^{1-\alpha}\right]=\frac{1}{\alpha-1}\log\left(1+\int_{0}^{\alpha}\mathrm{d}\beta\;\mathrm{Tr}\left[\rho^{\beta}(\log\rho-\log\sigma)\sigma^{1-\beta}\right]\right)\,. \tag{230}\] It is easy to verify that the two are related by the equation: \[H_{\alpha}(\rho||\sigma)=\frac{e^{-\alpha S_{1-\alpha}(\rho||\sigma)}-1}{\alpha(\alpha-1)}\,. \tag{231}\] Interestingly, the \(\alpha\)-contrast functions and the corresponding Renyi divergences locally give rise to the same metric structure. Before entering into this, it should be noticed that we can find an integral expression for \(g_{\alpha}\) of the form in Eq. (7), namely: \[g_{\alpha}(x)=\frac{\sin\pi\alpha}{\pi\alpha(1-\alpha)}\int_{0}^{\infty}\mathrm{d}s\;\frac{s^{\alpha}}{(1+s)^{2}}\,\left(\frac{(x-1)^{2}}{x+s}\right)-\frac{1}{\alpha-1}(1-x)\,, \tag{232}\] where we can ignore the extra linear term as it does not contribute to the contrast function (see Sec. II). Indeed, once one symmetrises this function, any linear term cancels, so that we have: \[g_{\alpha}^{\mathrm{symm}}(x)=[Lf_{\alpha}](x)=\frac{(1-x^{\alpha})(1-x^{1-\alpha})}{2\alpha(1-\alpha)}=\frac{\sin\pi\alpha}{\pi\alpha(1-\alpha)}\int_{0}^{1}\mathrm{d}s\;\frac{(s^{\alpha}+s^{1-\alpha})}{2(1+s)^{2}}\left(\frac{(x-1)^{2}}{x+s}+\frac{(x-1)^{2}}{1+sx}\right)\,, \tag{233}\] where we can identify the defining measure from Eq. (12) to be \(\mathrm{d}N_{g}(s):=\frac{\sin\pi\alpha}{\pi\alpha(1-\alpha)}\frac{(s^{\alpha}+s^{1-\alpha})}{2(1+s)^{2}}\,\mathrm{d}s\). It should be noticed that this expression of \(\mathrm{d}N_{g}(s)\) corrects the one present in most of the literature, most probably arising from a typo in [6]. In order to give a closed expression for \(\mathbb{J}_{\alpha}^{-1}\big{|}_{\pi}\) it is useful to first study how one can rewrite \(H_{\alpha}^{\mathrm{symm}}(\rho||\sigma)\) using the integral expansion in Eq. (229). In particular, carrying out the same procedure one more time, we obtain: \[2\alpha(\alpha-1)\,H_{\alpha}^{\mathrm{symm}}(\rho||\sigma)=\left(\mathrm{Tr}\left[\sigma^{\alpha}\rho^{1-\alpha}\right]+\mathrm{Tr}\left[\rho^{\alpha}\sigma^{1-\alpha}\right]-2\right)= \tag{234}\] \[=\int_{0}^{\alpha}\mathrm{d}\beta\;\left(\mathrm{Tr}\left[\rho^{1-\beta}(\log\sigma-\log\rho)\sigma^{\beta}\right]+\mathrm{Tr}\left[\rho^{\beta}(\log\rho-\log\sigma)\sigma^{1-\beta}\right]\right)= \tag{235}\] \[=\int_{0}^{\alpha}\mathrm{d}\beta\left(\int_{0}^{\beta}\mathrm{d}\gamma\;\mathrm{Tr}\left[\rho^{1-\gamma}(\log\sigma-\log\rho)\sigma^{\gamma}(\log\sigma-\log\rho)\right]-\int_{\beta}^{1}\mathrm{d}\gamma\;\mathrm{Tr}\left[\rho^{\gamma}(\log\rho-\log\sigma)\sigma^{1-\gamma}(\log\rho-\log\sigma)\right]\right)\,, \tag{236}\] where the easiest way to verify the last equality is to explicitly carry out the integrals in Eq. (236) and see that it correctly retrieves Eq. (235). Substituting \(\gamma\to 1-\gamma\) in the second integral of Eq.
(236), one finally obtains: \[2\alpha(\alpha-1)\,H_{\alpha}^{\mathrm{symm}}(\rho||\sigma)=\int_{0}^{\alpha}\mathrm{d}\beta\left(\int_{0}^{\beta}\mathrm{d}\gamma-\int_{0}^{1-\beta}\mathrm{d}\gamma\right)\left(\mathrm{Tr}\left[\rho^{1-\gamma}(\log\sigma-\log\rho)\sigma^{\gamma}(\log\sigma-\log\rho)\right]\right)= \tag{237}\] \[=-\int_{0}^{\alpha}\mathrm{d}\beta\int_{\beta}^{1-\beta}\mathrm{d}\gamma\;\mathrm{Tr}\left[\rho^{1-\gamma}(\log\sigma-\log\rho)\sigma^{\gamma}(\log\sigma-\log\rho)\right]\,. \tag{238}\] It should be noticed that the symmetry in the arguments of \(H_{\alpha}^{\mathrm{symm}}(\rho||\sigma)\) is reflected in the symmetry under the transformation \(\alpha\to 1-\alpha\). Moreover, thanks to the appearance of the two differences of logarithms in Eq. (238), it is straightforward to give the local expansion of the \(\alpha\)-divergences. In fact, denote by \(\mathbb{J}_{L}^{-1}\big{|}_{\pi}\) the Frechet derivative of the logarithm, which reads in formulae [7]: \[\mathbb{J}_{L}^{-1}\big{|}_{\pi}[A]:=\lim_{\varepsilon\to 0}\frac{\log(\pi+\varepsilon A)-\log\pi}{\varepsilon}=\int_{0}^{\infty}\mathrm{d}t\;\left(\pi+t\right)^{-1}A\left(\pi+t\right)^{-1}. \tag{239}\] One can use this expression to approximate the operator \(\log(\pi+\varepsilon A)\) by \(\log(\pi)+\varepsilon\,\mathbb{J}_{L}^{-1}\big{|}_{\pi}[A]\). Then, a simple substitution shows that the expansion of the \(\alpha\)-divergences is given by: \[H_{\alpha}(\pi||\pi+\varepsilon\delta\rho)=\frac{S_{1-\alpha}(\pi||\pi+\varepsilon\delta\rho)}{1-\alpha}=\frac{\varepsilon^{2}}{2\,\alpha(1-\alpha)}\int_{0}^{\alpha}\mathrm{d}\beta\int_{\beta}^{1-\beta}\mathrm{d}\gamma\,\operatorname{Tr}\big{[}\pi^{1-\gamma}\,\mathbb{J}_{L}^{-1}\big{|}_{\pi}[\delta\rho]\,\pi^{\gamma}\mathbb{J}_{L}^{-1}\big{|}_{\pi}[\delta\rho]\big{]}+\mathcal{O}\left(\varepsilon^{3}\right)= \tag{240}\] \[=\frac{\varepsilon^{2}}{2\,\alpha(1-\alpha)}\int_{0}^{\alpha}\mathrm{d}\beta\int_{\beta}^{1-\beta}\mathrm{d}\gamma\,\operatorname{cov}_{\pi}^{\gamma}(\mathbb{J}_{L}^{-1}\big{|}_{\pi}[\delta\rho],\,\mathbb{J}_{L}^{-1}\big{|}_{\pi}[\delta\rho])+\mathcal{O}\left(\varepsilon^{3}\right)\,, \tag{241}\] where in the last line we implicitly defined the \(\gamma\)-covariance, which generically reads: \[\operatorname{cov}_{\pi}^{\gamma}(A,B):=\operatorname{Tr}\big{[}\pi^{1-\gamma}A\pi^{\gamma}B\big{]}-\operatorname{Tr}\big{[}\pi\,A\big{]}\operatorname{Tr}\big{[}\pi\,B\big{]}\,. \tag{242}\] It should be noticed that, since perturbations of states are traceless, \(\operatorname{Tr}\big{[}\pi\,\mathbb{J}_{L}^{-1}\big{|}_{\pi}[\delta\rho]\big{]}=\operatorname{Tr}\big{[}\delta\rho\big{]}=0\). For this reason, the second term in Eq. (242) gives zero contribution. We are now ready to apply Thm. 1 to Eq. (241), which allows us to deduce that the corresponding family of quantum Fisher information operators takes the form: \[\mathbb{J}_{\alpha}^{-1}\big{|}_{\pi}[A]=\frac{1}{\alpha(1-\alpha)}\int_{0}^{\alpha}\mathrm{d}\beta\int_{\beta}^{1-\beta}\mathrm{d}\gamma\,\mathbb{J}_{L}^{-1}\big{|}_{\pi}\,\big{[}\,\pi^{1-\gamma}\,\mathbb{J}_{L}^{-1}\big{|}_{\pi}[A]\,\pi^{\gamma}\big{]}\,. \tag{243}\] This expression is valid in general, i.e., for \(\alpha\in[-1,2]\).
Still, if one restricts to the smaller interval given by \(\alpha\in(0,1)\), it was shown in [36] that one can actually express \(\mathbb{J}_{\alpha}^{-1}\big{|}_{\pi}\) as: \[\mathbb{J}_{\alpha}^{-1}\big{|}_{\pi}[A]=\frac{1}{\alpha(1-\alpha)}\int_{0}^{\infty}\mathrm{d}s\int_{0}^{\infty}\mathrm{d}t\,s^{\alpha}\,t^{1-\alpha}\,(\pi+s)^{-1}(\pi+t)^{-1}A(\pi+s)^{-1}(\pi+t)^{-1}\,. \tag{244}\] Since \(\mathbb{J}_{\alpha}^{-1}\big{|}_{\pi}\) is in Kraus form, this shows that \(f_{\alpha}\in\mathcal{M}^{-}\) for \(\alpha\in(0,1)\). Moreover, it was also proven in [28] that \(f_{\alpha}\in\mathcal{M}^{+}\) for \(\alpha\in[-1,-\frac{1}{2}]\cup[\frac{3}{2},2]\) (i.e., in this parameter range \(\mathbb{J}_{\alpha}\big{|}_{\pi}\) is CP), and that for \(\alpha\in[-\frac{1}{2},0)\cup(1,\frac{3}{2})\) neither \(\mathbb{J}_{\alpha}\big{|}_{\pi}\) nor \(\mathbb{J}_{\alpha}^{-1}\big{|}_{\pi}\) are CP. This gives an exhaustive list of the possible behaviours of the Fisher information operators associated to \(f_{\alpha}\). Unfortunately, we were not able to find a general expression for \(\mathbb{J}_{\alpha}\big{|}_{\pi}\), not even in the parameter range for which it is completely positive. Before moving on to the next sections, it is interesting to connect the local behaviour of the \(\alpha\)-divergences to the Wigner-Yanase-Dyson skew information, which is defined by the formula: \[I^{\gamma}(\pi,X)=-\frac{1}{2}\mathrm{Tr}\left[[\pi^{\gamma},X][\pi^{1-\gamma},X]\right]=\mathrm{Tr}\left[X^{2}\,\pi\right]-\mathrm{Tr}\left[\pi^{1-\gamma}\,X\,\pi^{\gamma}\,X\right]\,, \tag{245}\] which can be interpreted as a quantifier of the quantum uncertainty of the observable \(X\) as measured in the state \(\pi\) [37]. Then, by adding and subtracting the variance of \(\mathbb{J}_{L}^{-1}\big{|}_{\pi}[\delta\rho]\) to Eq. (241), one obtains: \[H_{\alpha}(\pi||\pi+\varepsilon\delta\rho)=\frac{\varepsilon^{2}}{2}\mathrm{Tr}\left[\left(\mathbb{J}_{L}^{-1}\big{|}_{\pi}[\delta\rho]\right)^{2}\,\pi\right]+\frac{\varepsilon^{2}}{2\,\alpha(\alpha-1)}\int_{0}^{\alpha}\mathrm{d}\beta\int_{\beta}^{1-\beta}\mathrm{d}\gamma\,\,I^{\gamma}(\pi,\mathbb{J}_{L}^{-1}\big{|}_{\pi}[\delta\rho])\,. \tag{246}\] This expression is particularly useful when one wants to isolate the effects of the coherences in the basis of \(\pi\). Notice in fact that for a full rank state \(\pi\), the Wigner-Yanase-Dyson skew information \(I^{\gamma}(\pi,\mathbb{J}_{L}^{-1}\big{|}_{\pi}[\delta\rho])=0\) if and only if \([\pi,\delta\rho]=0\). In the next sections, we present different examples of quantum Fisher information as \(\alpha\) varies. First, it should be noticed that thanks to the symmetry \(f_{\alpha}=f_{1-\alpha}\), it is sufficient to characterise the interval \([0,2]\) alone, as \([0,1]\) is mapped into itself, and \([1,2]\) into \([-1,0]\), completing the range of allowed parameters. Then, for \(\alpha\in[0,1/2]\) the value \(f_{\alpha}(x)\) is monotonically increasing in \(\alpha\), whereas for \(\alpha\in[1/2,2]\) this behaviour inverts, and \(f_{\alpha}(x)\) becomes monotonically decreasing.
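The vanishing condition for the skew information just stated is easy to check numerically; a minimal sketch (ours) of Eq. (245) on a qubit:

```python
import numpy as np
from scipy.linalg import fractional_matrix_power as mpow

# Wigner-Yanase-Dyson skew information of Eq. (245): zero iff [pi, X] = 0.
def skew(pi, X, g):
    return (np.trace(X @ X @ pi) - np.trace(mpow(pi, 1 - g) @ X @ mpow(pi, g) @ X)).real

pi = np.diag([0.7, 0.3])
Z = np.diag([1.0, -1.0])                   # commutes with pi
Xc = np.array([[0.0, 1.0], [1.0, 0.0]])    # does not commute with pi

print(skew(pi, Z, 0.5))     # 0.0
print(skew(pi, Xc, 0.5))    # 1 - 2*sqrt(0.21) ~ 0.083 > 0
```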
There are three limits that are notable enough to deserve a name: the Fisher information associated with the relative entropy, obtained in the limit \(\alpha\to 0\), called the Kubo-Mori-Bogoliubov (KMB) inner product; the one given by the limit \(\alpha\to 1/2\), i.e., the largest element of the family according to \(\leq\), called the Wigner-Yanase metric; and finally, as \(\alpha\to 2\) one gets the minimal function \(f_{H}(x)=2x/(x+1)\), called the harmonic mean.

### The Wigner-Yanase skew information (\(\alpha=1/2\))

The first case we consider in the family of \(\alpha\)-divergences is the Wigner-Yanase metric, corresponding to the case \(\alpha=\frac{1}{2}\). In this context, the convex function in Eq. (228) and the corresponding divergence are given by: \[g_{WY}(x)=4(1-\sqrt{x})\,,\qquad H_{WY}(\rho||\sigma)=4(1-\operatorname{Tr}\big{[}\sqrt{\rho}\,\sqrt{\sigma}\big{]})\,, \tag{247}\] while the standard monotone function takes the particularly simple form: \[f_{WY}(x)=\left(\frac{1+\sqrt{x}}{2}\right)^{2}\,. \tag{248}\] By comparing this expression with \(f_{B}\), it can be easily verified that \(f_{WY}(x)\equiv\left(f_{B}(\sqrt{x})\right)^{2}\). This relation allows one to compute the quantum Fisher operators for the Wigner-Yanase metric directly from the ones for the Bures metric. Indeed, it follows from the straightforward manipulations: \[\mathbb{J}_{WY}\big{|}_{\pi}[A]=\mathbb{R}_{\pi}\,f_{WY}(\mathbb{L}_{\pi}\mathbb{R}_{\pi}^{-1})[A]=\mathbb{R}_{\sqrt{\pi}}^{2}\,f_{B}(\mathbb{L}_{\sqrt{\pi}}\mathbb{R}_{\sqrt{\pi}}^{-1})^{2}[A]=\mathbb{J}_{B}\big{|}_{\sqrt{\pi}}[\mathbb{J}_{B}\big{|}_{\sqrt{\pi}}[A]]\,, \tag{249}\] that one can rewrite the Wigner-Yanase Fisher operators as: \[\mathbb{J}_{WY}\big{|}_{\pi}[A]=\frac{1}{4}\left\{\sqrt{\pi},\{\sqrt{\pi},A\}\right\};\qquad\qquad\mathbb{J}_{WY}^{-1}\big{|}_{\pi}[A]=\int_{0}^{\infty}\mathrm{d}t\,\int_{0}^{\infty}\mathrm{d}s\;e^{-(t+s)\sqrt{\pi}/2}\,A\,e^{-(t+s)\sqrt{\pi}/2}\,. \tag{250}\] The identification in Eq. (249) also allows one to explicitly compute the geodesics associated to the Wigner-Yanase metric. The construction needed is completely analogous to the one for classical states [34], which we briefly sketch for completeness. Denote the set of Hermitian matrices of dimension \(d\) by \(\mathcal{H}_{d}\). We then define the map \(S:\mathcal{S}_{d}\to\mathcal{H}_{d}\) associating to each state \(\pi\) its unique positive square root \(\sqrt{\pi}\). The target space is naturally endowed with the Hilbert-Schmidt scalar product, so that the image of \(\mathcal{S}_{d}\) can be characterised by the equation: \[S(\mathcal{S}_{d}):=\left\{X\in\mathcal{H}_{d}\mid X\geq 0\ \wedge\ \operatorname{Tr}\big{[}X^{2}\big{]}=1\right\}\,, \tag{251}\] which means that \(S(\mathcal{S}_{d})\) is just given by the positive octant of a \((d^{2}-1)\)-sphere in \(\mathcal{H}_{d}\). The geodesic distance in this context is well known: since geodesics are given by great circles, the geodesic distance simply coincides with the angle \(\theta\) they subtend. Moreover, since for \(X,Y\in S(\mathcal{S}_{d})\) it holds that \(\cos\theta=\operatorname{Tr}\left[XY\right]\), we finally obtain: \[d_{S(\mathcal{S}_{d})}=\arccos\operatorname{Tr}\left[XY\right]\,. \tag{252}\]

Footnote 6: This corresponds to a global section in the purification bundle discussed in the context of the Bures metric (see Sec. V.1).

Given the simplicity of this construction, it is interesting to consider what is the pullback of the Hilbert-Schmidt metric on \(\mathcal{S}_{d}\).
To this end, we need to compute the differential of the map \(S\), which is defined by: \[\mathrm{d}S\big{|}_{\pi}[\delta\rho]:=\frac{\mathrm{d}}{\mathrm{d}\varepsilon}\,\sqrt{\pi+\varepsilon\,\delta\rho}\,\Big{|}_{\varepsilon=0}\,. \tag{253}\] This can be computed by noticing that: \[\left(\mathrm{d}S\big{|}_{\pi}[\delta\rho]\right)\,\sqrt{\pi}+\sqrt{\pi}\,\left(\mathrm{d}S\big{|}_{\pi}[\delta\rho]\right)=\frac{\mathrm{d}}{\mathrm{d}\varepsilon}\left(\sqrt{\pi+\varepsilon\,\delta\rho}\,\sqrt{\pi+\varepsilon\,\delta\rho}\right)\big{|}_{\varepsilon=0}=\frac{\mathrm{d}}{\mathrm{d}\varepsilon}\left(\pi+\varepsilon\,\delta\rho\right)\big{|}_{\varepsilon=0}=\delta\rho\,, \tag{254}\] where the first equality can be explicitly verified from the definition in Eq. (253). Then, since the equation above can be rewritten as \(\mathrm{d}S\big{|}_{\pi}[\delta\rho]=\frac{1}{2}\,\mathbb{J}_{B}^{-1}\big{|}_{\sqrt{\pi}}[\delta\rho]\), we also obtain that the pullback of the Hilbert-Schmidt metric takes the form: \[\operatorname{Tr}\left[\mathrm{d}S\big{|}_{\pi}[A]\,\mathrm{d}S\big{|}_{\pi}[B]\right]=\frac{1}{4}\operatorname{Tr}\left[\mathbb{J}_{B}^{-1}\big{|}_{\sqrt{\pi}}[A]\,\mathbb{J}_{B}^{-1}\big{|}_{\sqrt{\pi}}[B]\right]=\frac{1}{4}\operatorname{Tr}\left[A\,\mathbb{J}_{WY}^{-1}\big{|}_{\pi}[B]\right]\,. \tag{255}\] This proves a remarkable fact: up to a factor \(\frac{1}{4}\), the Wigner-Yanase skew information is the pullback of the Hilbert-Schmidt metric by the root map \(S\). This means that, thanks to Eq. (251), the space \(\mathcal{S}_{d}\) with the metric \(\mathbb{J}_{WY}^{-1}\big{|}_{\pi}\) is isometric to an \(n\)-sphere of radius \(2\). Thus, one can give a closed form for the geodesic distance, given by: \[d_{WY}(\rho,\sigma)=2\,\arccos\operatorname{Tr}\left[\sqrt{\rho}\sqrt{\sigma}\right]\,, \tag{256}\] and a simple expression for the geodesic path connecting any two density matrices \(\rho\) and \(\sigma\), namely: \[\gamma_{WY}^{\rho\to\sigma}(t)=\frac{\left(\left(1-t\right)\sqrt{\rho}+t\,\sqrt{\sigma}\right)^{2}}{\mathrm{Tr}\left[\left(\left(1-t\right)\sqrt{\rho}+t\,\sqrt{\sigma}\right)^{2}\right]}\,. \tag{257}\]

Footnote 7: This was already noticed in [38], where it was pointed out that the Wigner-Yanase metric is the only known Fisher information that has a constant positive curvature, a property that uniquely identifies subsets of hyperspheres.

It is interesting to compare Eq. (256) with the Bures geodesic distance obtained in Eq. (195): these two quantities coincide for commuting states, whereas in general one has \(d_{B}(\rho,\sigma)\leq d_{WY}(\rho,\sigma)\), due to the inequality \(\mathrm{Tr}\left[\sqrt{\rho}\sqrt{\sigma}\right]\leq\sqrt{F(\rho,\sigma)}\) [39] and the fact that the arccosine is monotonically decreasing. To the best of the authors' knowledge, these are the only two cases for which one has an analytical expression for the geodesic distance. Another important property of this metric is that it can be used to express the quantum Chernoff bound [40]. This arises in the following setting: consider the task of distinguishing two different states \(\rho_{0}\) and \(\rho_{1}\), knowing that they are prepared with probabilities \(p_{0}\) and \(p_{1}\), respectively. In this context, the symmetric distinguishability problem consists in finding a POVM (positive operator-valued measure) \(\{E_{0},E_{1}\}\) such that the probability of error \(P_{e}:=p_{0}\mathrm{Tr}\left[E_{1}\rho_{0}\right]+p_{1}\mathrm{Tr}\left[E_{0}\rho_{1}\right]\) is minimal.
By defining the positive and negative part of a Hermitian operator as \(A_{\pm}:=(|A|\pm A)/2\), one can prove that the optimal measurement is obtained by setting \(E_{1}\) to be the projector on the range of \((p_{1}\rho_{1}-p_{0}\rho_{0})_{+}\), yielding the following expression for the minimum error probability [40]: \[P_{e,min}=\frac{1}{2}(1-\mathrm{Tr}\left[|p_{1}\rho_{1}-p_{0}\rho_{0}|\right])\,. \tag{258}\] This discussion was done in the single copy scenario. If one allows more copies of \(\rho_{0/1}\) to be prepared at the same time with probability \(p_{0/1}\), one can again infer that the optimal error probability is given by: \[P_{e,min,n}=\frac{1}{2}(1-\mathrm{Tr}\left[|p_{1}\rho_{1}^{\otimes n}-p_{0}\rho_{0}^{\otimes n}|\right])\,. \tag{259}\] Differently from what happened in the single copy scenario, though, this probability scales with \(n\), and in particular it asymptotically decreases as \(P_{e,min,n}\simeq e^{-\xi_{QCB}n}\) for \(n\gg 1\). In [40] it was proven that the exponent takes the form: \[\xi_{QCB}:=-\lim_{n\to\infty}\,\frac{\log P_{e,min,n}}{n}=\max_{0\leq s\leq 1}(-\log\mathrm{Tr}\left[\rho_{0}^{s}\rho_{1}^{1-s}\right])\,. \tag{260}\] This result goes under the name of the quantum Chernoff bound. The position at which the maximum is found usually depends on the particular form of \(\rho_{0}\) and \(\rho_{1}\). Still, if one restricts to the case in which \(\rho_{1}=\rho_{0}+\delta\rho\), with \(\delta\rho\ll 1\), then one can apply the methods from Sec. V.3 to express \(\xi_{QCB}\) in terms of \(\mathbb{J}_{\alpha}^{-1}\big{|}_{\rho_{0}}\). In this context, the unique maximum is attained for \(s=\frac{1}{2}\), meaning that: \[\xi_{QCB}=\frac{1}{8}\mathrm{Tr}\left[\delta\rho\,\mathbb{J}_{WY}^{-1}\big{|}_{\rho_{0}}[\delta\rho]\right]\,. \tag{261}\] This further motivates the interest in the Wigner-Yanase metric.

### The relative entropy (\(\alpha=0\))

The most renowned among the \(\alpha\)-divergences, and among the contrast functions in general, is the one obtained in the limit \(\alpha\to 0\), namely the relative entropy. In fact, carrying out the limit of the integral form of Eq. (228) one gets: \[g_{L}(x):=\lim_{\alpha\to 0}\,\frac{1}{\alpha(\alpha-1)}\int_{0}^{\alpha}\!\mathrm{d}\beta\,\left(x^{\beta}\,\log x\right)=-\log x\,. \tag{262}\] The corresponding contrast function takes the familiar form: \[S(\rho||\sigma):=H_{L}(\rho||\sigma)=\mathrm{Tr}\left[\rho\left(\log\rho-\log\sigma\right)\right]\,. \tag{263}\] Moreover, its symmetrised version has the following integral expression: \[\frac{g_{L}(x)+x\,g_{L}(x^{-1})}{2}=\int_{0}^{1}\!\mathrm{d}s\,\frac{1}{2(1+s)}\,\left(\frac{(x-1)^{2}}{x+s}+\frac{(x-1)^{2}}{1+sx}\right)\,, \tag{264}\] allowing us to identify \(\mathrm{d}N_{L}(s):=\frac{1}{2(1+s)}\mathrm{d}s\). This divergence has the special property that it is additive on tensor products: \[S(\rho_{A}\otimes\rho_{B}||\sigma_{A}\otimes\sigma_{B})=S(\rho_{A}||\sigma_{A})+S(\rho_{B}||\sigma_{B})\,, \tag{265}\] whereas in general one can just prove that \(H_{g}(\rho\otimes\tau||\sigma\otimes\tau)=H_{g}(\rho||\sigma)\) (see Eq. (32)). The standard monotone function in this case is given by: \[f_{L}(x)=\frac{x-1}{\log x}=\int_{0}^{1}\mathrm{d}\gamma\;x^{\gamma}=\int_{0}^{1}\mathrm{d}\gamma\;f_{\gamma_{>}}(x)\,, \tag{266}\] where the last equality shows how \(f_{L}\) can be defined as a uniform mixture of \(f_{\gamma_{>}}\).
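Before proceeding, a small numerical check (ours) that the kernel induced by \(f_{L}\) in Eq. (266) inverts the Frechet derivative of the logarithm from Eq. (239), anticipating Eq. (269) below: in the eigenbasis of \(\pi\) one has \((\mathbb{J}_{L}^{-1}|_{\pi}[A])_{ij}=A_{ij}(\log p_{i}-\log p_{j})/(p_{i}-p_{j})\), with the limit \(1/p_{i}\) on the diagonal.

```python
import numpy as np
from scipy.linalg import logm

# Finite-difference check that the f_L kernel reproduces d/d(eps) logm(pi + eps A).
rng = np.random.default_rng(2)
A = rng.normal(size=(3, 3)); A = (A + A.T) / 2   # Hermitian perturbation
p = np.array([0.5, 0.3, 0.2]); pi = np.diag(p)   # state, already diagonal

num = np.subtract.outer(np.log(p), np.log(p))
den = np.subtract.outer(p, p)
K = np.where(den == 0, 1 / p[:, None], num / np.where(den == 0, 1.0, den))

eps = 1e-6
fd = (logm(pi + eps * A) - logm(pi)).real / eps  # finite difference of the matrix log
print(np.max(np.abs(fd - K * A)))                # ~1e-6: the two derivatives agree
```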
Moreover, the integral expression allows the immediate calculation of \(\mathbb{J}_{L}\big{|}_{\pi}\) as: \[\mathbb{J}_{L}\big{|}_{\pi}[A]=\mathbb{R}_{\pi}\int_{0}^{1}\mathrm{d}\gamma\;(\mathbb{L}_{\pi}\mathbb{R}_{\pi}^{-1})^{\gamma}[A]=\int_{0}^{1}\mathrm{d}\gamma\;\pi^{\gamma}A\,\pi^{1-\gamma}\,. \tag{267}\] Interestingly, this superoperator is the same one gets from the Dyson series of the exponential, i.e., \(e^{\log(\pi)+\varepsilon A}\simeq\pi+\varepsilon\,\mathbb{J}_{L}\big{|}_{\pi}[A]\) [7]. Then, thanks to this identification, we can deduce that \(\mathbb{J}_{L}^{-1}\big{|}_{\pi}[A]\) will be given by the first term in the expansion of \(\log(\pi+\varepsilon A)\). Indeed, this follows from the equalities: \[\pi+\varepsilon A=e^{\log(\pi+\varepsilon A)}=e^{\log(\pi)+\varepsilon\,\mathbb{J}_{L}^{-1}\big{|}_{\pi}[A]}=\pi+\varepsilon\,\mathbb{J}_{L}\big{|}_{\pi}\mathbb{J}_{L}^{-1}\big{|}_{\pi}[A]+\mathcal{O}\left(\varepsilon^{2}\right)\,, \tag{268}\] where we expanded at first order in \(\varepsilon\) and used the derivative of the logarithm defined in Eq. (239). Thus, it follows that \(\mathbb{J}_{L}^{-1}\big{|}_{\pi}\) coincides with the Frechet derivative of the logarithm, i.e.: \[\mathbb{J}_{L}^{-1}\big{|}_{\pi}[A]=\int_{0}^{\infty}\mathrm{d}t\;\left(\pi+t\right)^{-1}A\left(\pi+t\right)^{-1}. \tag{269}\] Since \(\mathbb{J}_{L}^{-1}\big{|}_{\pi}\) is in Kraus form, it follows that \(f_{L}\in\mathcal{M}^{-}\). It should be noticed that this was not clear a priori, as \(f_{L}\) is obtained from the convex combination of elements in \(\mathcal{M}^{-}\), which is not a convex set [28]: indeed, one should compare the expression for the subfamily of \(\mathcal{M}^{-}\) generated by \(f_{\gamma_{>}}\) in Eq. (217), with the one given in Eq. (266). Interestingly, one can express the relative entropy in terms of the corresponding Fisher information operator. Indeed, define the path \(\gamma(t)\) connecting \(\rho\) to \(\sigma\) as \(\gamma(t):=(1-t)\rho+t\,\sigma\). Then, the following equalities hold: \[S(\rho||\sigma)=S(\rho||\gamma(1))-S(\rho||\gamma(0))=\int_{0}^{1}\mathrm{d}t\;\left(\frac{\mathrm{d}}{\mathrm{d}t}\;S(\rho||\gamma(t))\right)=-\int_{0}^{1}\mathrm{d}t\;\left(\frac{\mathrm{d}}{\mathrm{d}t}\mathrm{Tr}\left[\rho\log\gamma(t)\right]\right)= \tag{270}\] \[=\int_{0}^{1}\mathrm{d}t\;\operatorname{Tr}\left[\rho\,\mathbb{J}_{L}^{-1}\big{|}_{\gamma(t)}[(\rho-\sigma)]\right]\,. \tag{271}\] This identity is particularly useful when one needs to compare global and local behaviours, as it allows one to express the contrast function with the same operator appearing in the definition of the Fisher information [3]. The scalar product defined by \(\mathbb{J}_{L}\big{|}_{\pi}\) on the space of observables is called the Kubo-Mori-Bogoliubov (KMB) inner product, which is of key importance in the context of linear response theory of thermal states. Indeed, using the definition of Gibbs states \(\pi(H):=\frac{e^{-\beta H}}{\mathcal{Z}(H)}\), with \(\mathcal{Z}(H):=\mathrm{Tr}\left[e^{-\beta H}\right]\) denoting the partition function, one can derive the following identity from the Dyson series in Eq. (267): \[\frac{\partial^{2}}{\partial x\partial y}\,\log\mathcal{Z}(H+x\,A+y\,B)\bigg{|}_{x=y=0}=\beta^{2}\,\mathrm{Tr}\left[\Delta_{\pi(H)}A\;\mathbb{J}_{L}\big{|}_{\pi(H)}[\Delta_{\pi(H)}B]\right]\,. \tag{272}\] It should be noticed that in the right hand side of the equation we implicitly defined \(\Delta_{\pi(H)}X:=X-\mathbb{1}\mathrm{Tr}\left[X\,\pi(H)\right]\). When \(A=B\), Eq.
(272) describes the (quantum) thermal fluctuations of the operator \(A\). Moreover, this expression is connected to transport coefficients in the linear response by the fluctuation-dissipation relation (see [41; 42] for more details), showing the physical significance of \(\mathbb{J}_{L}\big{|}_{\pi}\). On top of this, we can also give an interpretation of the Cramer-Rao bound of the type in Eq. (208) in this context. Indeed, suppose we have a Hamiltonian \(H(\theta)\) depending on some unknown parameter \(\theta\) that we want to estimate. To this end, we use a locally unbiased estimator \(A\), defined by the condition: \[\frac{\partial}{\partial\theta}\,\mathrm{Tr}\left[\pi_{\beta}(H(\theta))\,A\right]\bigg{|}_{\theta=0}=1\,. \tag{273}\] Moreover, it is useful to give an explicit form to the generalised derivative \(L_{f}\) (defined in Eq. (207)) in this context, which reads: \[\left.\frac{\partial}{\partial\theta}\pi(H(\theta))\right|_{\theta=0}=\mathbb{J}_{L}\big{|}_{\pi(H(0))}[L_{L}]\,. \tag{274}\] Explicitly carrying out the differentiation on the left, one can verify that \(L_{L}=-\beta\Delta_{\pi(H(0))}\dot{H}\), where we slightly abused the notation to define \(\dot{H}:=\partial_{\theta}H(\theta)\big{|}_{\theta=0}\). Then, using the generalised Cramer-Rao bound in Eq. (208), together with the expression in Eq. (272), we have the following inequality: \[\frac{\partial^{2}}{\partial x^{2}}\,\log\mathcal{Z}(H(0)+x\,A)\bigg{|}_{x=0}\geq\frac{1}{\operatorname{Tr}\left[\Delta_{\pi(H(0))}\dot{H}\,\mathbb{J}_{L}\big{|}_{\pi(H(0))}[\Delta_{\pi(H(0))}\dot{H}]\right]}= \tag{275}\] \[=\beta^{2}\,\left(\frac{\partial^{2}}{\partial\theta^{2}}\log\mathcal{Z}(H(0)+\theta\,\dot{H})\right)^{-1}\bigg{|}_{\theta=0}\,. \tag{276}\] This equation gives us an interesting result: the minimum fluctuations of an unbiased estimator cannot be smaller than the inverse of the fluctuations of the operator inducing the change in the Hamiltonian (namely, \(\dot{H}\)). Moreover, in order to saturate the bound one should choose \(A\) to be parallel to \(\dot{H}\). A practical application of this result can be seen in the context of thermometry [43]: there, the optimal measurement turns out to be the one of the energy (since the operator associated to the variation of \(\beta\) is exactly the Hamiltonian), and the fluctuations in the estimator are bounded by the inverse of the heat capacity. It directly follows, then, that in order to have a good estimation of the temperature one needs to choose a thermal state with a heat capacity as large as possible.

### The quantum information variance

The function that we consider in this case is not in the family of \(\alpha\)-divergences, but it is closely related to them: \[f_{V}(x)=\frac{2\,(x-1)^{2}}{(x+1)(\log x)^{2}}\,. \tag{277}\] Indeed, this standard monotone is associated to the following matrix convex function: \[g_{V}(x)=\frac{1}{2}\frac{\partial^{2}}{\partial\alpha^{2}}\,\left(\alpha(\alpha-1)g_{\alpha}(x)\right)\bigg{|}_{\alpha=0}=\frac{1}{2}\frac{\partial^{2}}{\partial\alpha^{2}}\,x^{\alpha}\bigg{|}_{\alpha=0}=\frac{1}{2}(\log x)^{2}\,, \tag{278}\] which gives rise to what is called the quantum information variance, given by: \[H_{V}(\rho||\sigma)=\frac{1}{2}\operatorname{Tr}\left[\rho\left(\log\rho-\log\sigma\right)^{2}\right]\,. \tag{279}\] It was shown in [44] that this quantity could be interpreted as a quantifier of the fluctuations in the distinguishability of \(\rho\) with respect to \(\sigma\). Differentiating Eq.
(232) we can also obtain the integral expression of \(g_{V}\) as: \[g_{V}(x)=\int_{0}^{\infty}\mathrm{d}s\,\frac{(-\log s)}{(1+s)^{2}}\,\left(\frac{(x-1)^{2}}{x+s}\right)\,. \tag{280}\] Interestingly, we can express the function in Eq. (277) as \(f_{V}(x)=f_{L}(x)^{2}/f_{B}(x)\). Similarly to what happened for the Wigner-Yanase skew information (see Sec. V.4), this identification allows us to directly compute the Fisher operators in terms of the ones associated to the Bures metric and to the relative entropy. Indeed, these take the form: \[\mathbb{J}_{V}\big{|}_{\pi}[A]=\mathbb{J}_{B}^{-1}\big{|}_{\pi}\,\mathbb{J}_{L}^{2}\big{|}_{\pi}[A]=\int_{0}^{\infty}\mathrm{d}t\,\,e^{-t\pi/2}\,\mathbb{J}_{L}^{2}\big{|}_{\pi}[A]\,e^{-t\pi/2}\,;\qquad\qquad\mathbb{J}_{V}^{-1}\big{|}_{\pi}[A]=\mathbb{J}_{B}\big{|}_{\pi}\,\mathbb{J}_{L}^{-2}\big{|}_{\pi}[A]=\frac{1}{2}\,\mathbb{J}_{L}^{-2}\big{|}_{\pi}\left[\{\pi,A\}\right]\,. \tag{281}\] Even if \(\mathbb{J}_{B}^{-1}\big{|}_{\pi}\) is completely positive, this is not true for \(\mathbb{J}_{L}\big{|}_{\pi}\). Thus \(\mathbb{J}_{V}\big{|}_{\pi}\) is not CP in general, and \(f_{V}\notin\mathcal{M}^{+}\). A similar reasoning also holds for \(\mathbb{J}_{V}^{-1}\big{|}_{\pi}\), proving that \(f_{V}\notin\mathcal{M}^{-}\) as well. Indeed, one could reach this conclusion by noticing that neither \(f_{V}\leq f_{SQ}\) nor \(f_{SQ}\leq f_{V}\) hold in this case (see Fig. 5), which are necessary conditions for membership in \(\mathcal{M}^{+}\) or \(\mathcal{M}^{-}\). The formal similarity with the Bures metric allows for the following application. Suppose we want to estimate a parameter \(\theta\) encoded in the Hamiltonian of a thermal state. Then, it follows from Eq. (206) that the variance of a locally unbiased operator \(A\) can be bounded as: \[\mathrm{Tr}\left[\pi(H(0))A^{2}\right]\geq\frac{1}{\mathrm{Tr}\left[\partial_{\theta}\pi(H(\theta))\,\mathbb{J}_{B}^{-1}\big{|}_{\pi(H(\theta))}[\partial_{\theta}\pi(H(\theta))]\right]\Big{|}_{\theta=0}}\,. \tag{282}\] At this point, applying Eq. (274), together with the fact that \(L_{L}=-\beta\Delta_{\pi(H(0))}\dot{H}\), we obtain [45]: \[\mathrm{Tr}\left[\pi(H(0))A^{2}\right]\geq\frac{1}{\beta^{2}\,\mathrm{Tr}\left[\Delta_{\pi(H(0))}\dot{H}\,\mathbb{J}_{V}\big{|}_{\pi(H(0))}[\Delta_{\pi(H(0))}\dot{H}]\right]}\,. \tag{283}\] Thus, the Cramer-Rao bound associated to the Bures metric when considering the variation of states (the right hand side of Eq. (282)) translates to a bound involving the variation of the Hamiltonian, expressed in terms of \(\mathbb{J}_{V}\big{|}_{\pi}\).

### The geometric mean

In the set of standard monotone functions a special role is taken by the square root, i.e.: \[f_{SQ}(x)=\sqrt{x}\,. \tag{284}\] Indeed, as was discussed in Sec. IV.3, the two sets \(\mathcal{M}^{+}\) and \(\mathcal{M}^{-}\) can be defined in terms of \(f_{SQ}\), and it is the only element in the intersection \(\mathcal{M}^{+}\cap\mathcal{M}^{-}\), i.e., the only case in which both \(\mathbb{J}_{SQ}\big{|}_{\pi}\) and \(\mathbb{J}_{SQ}^{-1}\big{|}_{\pi}\) are CP. Some consequences of this property were explored in Box 9 in the context of recovery maps. The corresponding convex function is given by: \[g_{SQ}(x)=\sqrt{x^{-1}}-\sqrt{x}\,, \tag{285}\] which gives rise to the contrast function: \[H_{SQ}(\rho||\sigma)=\mathrm{Tr}\left[\sqrt{\rho}\,(\rho-\sigma)\sqrt{\sigma^{-1}}\right]\,.
\tag{286}\] Its symmetrised version has the following integral expression: \[g_{SQ}^{\mathrm{symm}}(x)=\frac{(x-1)^{2}}{2\sqrt{x}}=\int_{0}^{1}\mathrm{d} s\,\,\frac{1}{2\pi\sqrt{s}}\,\,\left(\frac{(x-1)^{2}}{x+s}+\frac{(x-1)^{2}}{1+ sx}\right)\,, \tag{287}\] so that we can identify \(\mathrm{d}N_{g}(s):=\frac{1}{2\pi\sqrt{s}}\). Finally, the Fisher operators in this case are given by: \[\mathbb{J}_{SQ}\big{|}_{\pi}[A]=\sqrt{\pi}\,A\,\sqrt{\pi}\,; \mathbb{J}_{SQ}^{-1}\big{|}_{\pi}[A]=\sqrt{\pi^{-1}}\,A\,\sqrt{\pi^{-1}}\,, \tag{288}\] which explicitly shows that both \(\mathbb{J}_{SQ}\big{|}_{\pi}\) and \(\mathbb{J}_{SQ}^{-1}\big{|}_{\pi}\) are CP, as they are given in Kraus form. ### The harmonic mean (\(\alpha=2\)) We close the review with the smallest among the standard monotone functions, namely the harmonic mean: \[f_{H}(x)=\frac{2x}{x+1}\,. \tag{289}\] This is part of the family of \(\alpha\)-divergences, and in particular it is obtained in the limit \(\alpha\to 2\). Then, the corresponding convex function is given by: \[g_{H}(x)=\frac{x^{2}-1}{2}=\frac{(x-1)^{2}}{2}+(x-1)\,. \tag{290}\] As usual, we can discard the linear contribution. Then, its symmetrised version has the following integral expression: \[g_{H}^{\rm symmn}(x)=\frac{(x-1)^{2}}{2}+\frac{(x-1)^{2}}{2x}=\int_{0}^{1}{\rm d }N_{H}(s)\,\,\left(\frac{(x-1)^{2}}{x+s}+\frac{(x-1)^{2}}{1+sx}\right)\,, \tag{291}\] with \({\rm d}N_{H}(s)=\delta(s)/2\). The corresponding contrast function reads: \[H_{H}(\rho||\sigma)=\frac{1}{2}\,{\rm Tr}\left[(\rho-\sigma)\rho^{-1}(\rho- \sigma)\right]=\frac{1}{2}\,\left({\rm Tr}\left[\sigma^{2}\rho^{-1}\right]-1 \right)\,. \tag{292}\] It was shown in Box 3 that this is the only contrast function that can be expressed in terms of a \(\chi_{f}^{2}\)-divergence, namely for \(f=f_{H}\). Then, from Eq. (168) it follows that: \[H_{g}^{\rm symm}(\rho||\sigma)\leq H_{H}^{\rm symm}(\rho||\sigma)=\frac{1}{2} \,\left(\chi_{f_{H}}^{2}(\rho||\sigma)+\chi_{f_{H}}^{2}(\sigma||\rho)\right)\,. \tag{293}\] The Fisher information operators can be expressed in terms of the Bures ones, since \(f_{H}=Tf_{B}\). Then, by applying the relation in Eq. (158) one directly obtains: \[\mathbb{J}_{H}\big{|}_{\pi}[A]=\int_{0}^{\infty}{\rm d}t\,\,\,e^{-t\pi^{-1}/2 }\,A\,e^{-t\pi^{-1}/2}\,;\qquad\qquad\qquad\mathbb{J}_{H}^{-1}\big{|}_{\pi}[A ]=\frac{1}{2}\{\pi^{-1},A\}\,. \tag{294}\] Since \(\mathbb{J}_{H}\big{|}_{\pi}\) is in Kraus form, it follows that \(f_{H}\in\mathcal{M}^{+}\). This concludes the survey of the quantum Fisher information metrics. ## VI Conclusions and open questions Fisher information is ubiquitous in classical and quantum physics, as it quantifies key figures of merit in different fields - parameter estimation precision in metrology, states' distinguishability in information theory, work dissipation and fluctuations in thermodynamics, to name a few. This fact alone, that the same quantity fundamentally characterises such variety of distinct physical scenarios, looks almost miraculous. From a completely different angle, the classical Chentsov theorem guarantees that the Fisher metric is, for the set of physical states, the unique Riemannian metric that contracts under physical evolutions. These remarkable properties make the case for a detailed study of Fisher information, both in the classical and quantum domain, where there exists a family of such quantum Fisher informations (27), characterised by the Petz theorem (Thm. 2). 
All the elements of this family collapse to the classical Fisher information in the case of commutative diagonal states, while in the presence of off-diagonal contributions they present a rich phenomenology. This wide range of possible behaviours makes the Fisher operators mathematically interesting objects with different operational properties (cf. Sec. V). In this work we reviewed, systematized and introduced new results regarding the dynamical and mathematical properties of the quantum Fisher informations. From a statistical point of view, these can be thought as the local expansion of contrast functions, which are introduced in Sec. II together with a self-contained historical introduction to the topic. From the geometrical point of view, a natural perspective is that of studying the Fisher information metrics (27), their properties as scalar products and their interplay with the action of completely positive linear maps (i.e., physical evolutions). In fact, an attentive look at the contraction properties of the Fisher information on the set of physical states unveils the deep connection between physical dynamics and such geometric structures. So much so that one can even define physical evolutions as exactly the ones that contract the Fisher metric, as showed in our Theorem 3 and its corollary, which can be considered as a dual of the Chentsov-Petz theorem (also see the related work [1]). Moreover, the discussions of Sec. III, corroborate the claim that the Fisher information is an inherently dynamical quantity, a fact that is not completely acknowledged in the literature. In fact, the quantum Fisher information metrics can be used: III.1) to fully characterize Markovianity, when identified with CP-divisibility, as well as operationally detect non-Markovian evolutions; III.2) to generalize the notion of physical retrodiction based on Bayes/Petz maps; III.3) to characterize quantum microreversibility, as detailed balance becomes a geometric property of the dynamical generators. On a more technical side, in Sec. IV.2 we provided an organic discussion of many mathematical properties of the Fisher information functionals that are both relevant to the proofs, as well as retaining their own mathematical interest. In particular we focused on the characterisation of matrix monotone functions and their connection to the complete positivity of the induced functionals. Finally in Sec. V we detailed an extensive list of several quantum Fisher information metrics and their applications to different areas of quantum information science. Our work can therefore serve as a self-contained introduction to the topic, as well as a manual for researchers working in the areas related. In particular, we gave a comprehensive review of many results scattered in the literature, that are not always easy to combine. In addition, we complemented them with different new results, most of which in Section III, as well as technical developments scattered throughout the work. The applications and mathematical properties of the Fisher information metrics are an ongoing theme of study, and a number of problems remain open. A particularly natural question is the following: once the class of Fisher \(f\)-metrics (27) is introduced, is there a closed-form for each corresponding \(f\)-geodesic distance? And what is the relation between such finite distance on the set of states and the contrast functions introduced in Sec. II? To answer this question, one needs to solve the geodesics relative to each metrics. 
This is nontrivial, and an analytic answer, to the best of our knowledge, only exists for the case of the Bures metric (Sec. V.1) and the Wigner-Yanase metric (Sec. V.4). The closed-form expression for any \(f\) remains unknown. For the interested reader, in Box VI we show a universal mapping that might simplify the quest of such geodesics lengths. [backgroundcolor=light,backgroundcolor=light,linewidth=0.5cm] **Box VI.** Fisher geodesics for normalized states we general positive operators In the quantum information literature, and throughout this work, we considered the quantum Fisher scalar product \(K_{f,\rho}(A,B):=\mathrm{Tr}\left[A\,\mathbb{J}_{f}^{-1}\big{|}_{\rho}[B]\right]\) to induce a metric on the set of normalised quantum states \(\rho\), whose tangent space is given by operators \(\delta\rho\) that are Hermitian and traceless, i.e. \[\mathrm{Tr}\left[\rho\right]=1\;,\;\rho\geq 0\;.\qquad\mathrm{Tr}\left[ \delta\rho\right]=0\;,\;\delta\rho^{\dagger}=\delta\rho\;. \tag{295}\] It is however straightforward to extend the scalar product to the set of all Hermitian operators. This corresponds to removing the trace constraints above and consider unnormalised states \(\rho^{\prime}\) with tangent vectors \(\delta\rho^{\prime}\) that only satisfy \[\rho^{\prime}\geq 0\;,\;\delta\rho^{\prime\dagger}=\delta\rho^{\prime}\;. \tag{296}\] Clearly, each unnormalised state can be decomposed in a scalar times a normalised state \[\rho^{\prime}=r\rho\;,\quad r=\mathrm{Tr}\left[\rho^{\prime}\right]\;. \tag{297}\] When considering the Fisher metrics applied to the set of unnormalised states \(\rho^{\prime}\), one gets explicitly in these coordinates \[\mathrm{Tr}\left[\delta\rho^{\prime}\,\mathbb{J}_{f}^{-1}\big{|}_{\rho^{ \prime}}[\delta\rho^{\prime}]\right]=\frac{1}{r}\mathrm{Tr}\left[\delta\rho^{ \prime}\,\mathbb{J}_{f}^{-1}\big{|}_{\rho}[\delta\rho^{\prime}]\right]=\frac {1}{r}\mathrm{Tr}\left[\left(\delta r\rho+r\delta\rho\right)\mathbb{J}_{f}^{-1 }\big{|}_{\rho}[\delta r\rho+r\delta\rho]\right]\;. \tag{298}\] Thanks to the fact that \(\mathbb{J}_{f}^{-1}\big{|}_{\rho}[\rho]=\mathbb{1}\), and the self-adjointness of \(\mathbb{J}_{f}\big{|}_{\rho}\), it is easy to realize that the cross-terms are zero in such expression, due to \(\mathrm{Tr}\left[\delta\rho\,\mathbb{J}_{f}^{-1}\big{|}_{\rho}[\rho]\right]= \mathrm{Tr}\left[\delta\rho\right]=0\). Therefore we find \[\mathrm{Tr}\left[\delta\rho^{\prime}\,\mathbb{J}_{f}^{-1}\big{|}_{\rho^{ \prime}}[\delta\rho^{\prime}]\right]=\frac{(\delta r)^{2}}{r}+r\,\mathrm{Tr} \left[\delta\rho\,\mathbb{J}_{f}^{-1}\big{|}_{\rho}[\delta\rho]\right]. \tag{299}\] The infinitesimal squared distance \(\delta l^{2}=\mathrm{Tr}\left[\delta\rho^{\prime}\,\mathbb{J}_{f}^{-1}\big{|}_ {\rho^{\prime}}[\delta\rho^{\prime}]\right]\) can thus be written as \[\delta l^{2}=4\left(\delta q^{2}+q^{2}\delta\theta^{2}\right)\;,\quad\text{ with}\quad q\equiv\sqrt{r}\;,\quad\delta\theta^{2}\equiv\frac{1}{4}\mathrm{Tr} \left[\delta\rho\mathbb{J}_{f}^{-1}\big{|}_{\rho}[\delta\rho]\right]\;. \tag{300}\] We see then that it is possible to separate the contribution due to the normalization coordinate \(r\equiv q^{2}\), and the resulting underlying geometry is spherical with radius \(2q\). This has a nontrivial consequence. Namely, for all \(f\)-defined Fisher metrics (27), solving geodesics on the set of physical states (\(r=q^{2}=1\)), is equivalent to solving geodesics on the set of positive operators (\(r\in(0,\infty)\)). 
Formally speaking, one defines the geodesic length on the normalised (\(\rho\)) and unnormalised (\(\rho^{\prime}\)) set as \[(\Delta l_{f})^{2} :=\min_{\rho^{\prime}(t)|\rho^{\prime}(0)=r_{0}\rho_{0},\rho^{ \prime}(1)=r_{1}\rho_{1}}\int_{0}^{1}\mathrm{d}t\;K_{f,\rho^{\prime}(t)}(\dot{ \rho}^{\prime}(t),\dot{\rho}^{\prime}(t))\;, \tag{301}\] \[(\Delta\theta_{f})^{2} :=\min_{\rho(t)|\rho(0)=\rho_{0},\rho(1)=\rho_{1}}\frac{1}{4}\int_ {0}^{1}\mathrm{d}t\;K_{f,\rho(t)}(\dot{\rho}(t),\dot{\rho}(t))\;. \tag{302}\] The direct consequence of the above discussion resulting in Eq. (300) is the following relation, that holds independently from the choice of \(f\), \[(\Delta l_{f})^{2}=(r_{0}+r_{1})-2\sqrt{r_{0}r_{1}}\cos(\Delta\theta_{f})\;. \tag{303}\] For example, in the case of the Bures metric V.1, one has that the angle \(\Delta\theta_{\rm Bures}\equiv d_{B}/2\) is the Bures distance given in Eq. (195) and \(\cos(\Delta\theta_{\rm Bures})={\rm Tr}\left[\sqrt{\sqrt{\rho_{0}}\rho_{1} \sqrt{\rho_{0}}}\right]\) corresponds to the fidelity between initial and final state, leading to the length \(D_{B}\) in (197). Similarly, for the Wigner-Yanase metric one has \(\Delta\theta_{\rm WY}\equiv d_{\rm WY}/2\) (256), and therefore \(\cos(\Delta\theta_{\rm WY})={\rm Tr}\left[\sqrt{\rho_{0}}\sqrt{\rho_{1}}\right]\), from which it follows immediately that the unnormalised geodesics length coincides with the contrast function in (247), i.e. \(\Delta l_{\rm WY}\equiv\sqrt{H_{\rm WY}}\). The above equality (303) shows that integrating any \(f\)-metrics on the set of normalised and unnormalised states is equivalent. This consideration might help in finding geodesics for other Fisher metrics by suitably choosing the problem between (301) and (302) that is easier to solve. Finally, notice that in the classical, diagonal case [1, 34], \(\Delta\theta\) and \(\Delta l\) are known in the literature respectively as the _Bhattacharyya angle_ (or Bhattacharyya distance) and the _Hellinger distance_. ## Acknowledgements M. S. acknowledges support from the European Union's Horizon 2020 research and innova- tion programme under the Marie Sklodowska-Curie grant agreement No 713729, and from the Government of Spain (FIS2020-TRANQI and Severo Ochoa CEX2019-000910- S), Fundacio Cellex, Fundacio Mir-Puig, Generalitat de Catalunya (SGR 1381 and CERCA Programme). P.A. is supported by the QuantERA II programme that has received funding from the European Union's Horizon 2020 research and innovation programme under Grant Agreement No 101017733, and from the Austrian Science Fund (FWF), project I-6004. D.D.S. is supported by the research project "Dynamics and Information Research Institute - Quantum Information, Quantum Technologies" within the agreement between UniCredit Bank and Scuola Normale Superiore di Pisa (CI14 UNICREDIT MARMI), the Spanish Government (FIS2020-TRANQI and Severo Ochoa CEX2019-000910-S), the ERC AdG CERQUTE, the AXA Chair in Quantum Information Science, Fundacio Cellex, Fundacio Mir-Puig and Generalitat de Catalunya (CERCA, AGAUR SGR 1381). ## Appendix A Derivations from Sec. II We present here some derivations that were left implicit in Sec. II. 
First, it should be noticed that the canonical expression of matrix convex functions such that \(g(1)=0\) is given by [6]: \[g(x)=a\,(x-1)+b\,(x-1)^{2}+c\,\frac{(x-1)^{2}}{x}+\int_{0}^{\infty}\mathrm{d} \mu_{g}(s)\;\frac{(x-1)^{2}}{x+s}\,, \tag{104}\] where \(b\) and \(c\) are positive constants, and \(\mathrm{d}\mu_{g}(s)\) is a positive measure with finite mass (i.e., \(\int_{0}^{\infty}\mathrm{d}\mu_{g}(s)<\infty\)). It should be noticed that we can ignore the linear term since, as it is discussed in the main text, it does not contribute to the contrast function. Moreover, we can redefine the measure \(\delta\mu_{g}(s)\) as \(\mathrm{d}\tilde{\mu}_{g}(s):=\mathrm{d}\mu_{g}(s)+c\,\delta(s)\), so to take care of the third term in Eq. (104). Finally, the integral term can be rewritten as: \[\int_{0}^{\infty}\mathrm{d}\mu_{g}(s)\;\frac{(x-1)^{2}}{x+s} =\int_{0}^{\infty}\frac{\mathrm{d}\mu_{g}(s^{-1})}{s}\;\frac{(x-1)^{2}}{1+sx} =\frac{1}{2}\int_{0}^{\infty}\mathrm{d}\mu_{g}(s)\;\frac{(x-1)^{2}}{x+s}+ \frac{1}{2}\int_{0}^{\infty}\frac{\mathrm{d}\mu_{g}(s^{-1})}{s}\;\frac{(x-1)^ {2}}{1+sx}\,, \tag{105}\] where we performed a change of variables \(s\to s^{-1}\). Hence, we can rewrite Eq. (104) (up to linear terms) as: \[g(x) =b\,(x-1)^{2}+\int_{0}^{\infty}\mathrm{d}\tilde{\mu}_{g}(s)\; \frac{(x-1)^{2}}{x+s}= \tag{106}\] \[=b\,(x-1)^{2}+\frac{1}{2}\int_{0}^{\infty}\mathrm{d}\tilde{\mu}_ {g}(s)\;\frac{(x-1)^{2}}{x+s}+\frac{1}{2}\int_{0}^{\infty}\frac{\mathrm{d} \tilde{\mu}_{g}(s^{-1})}{s}\;\frac{(x-1)^{2}}{1+sx}=\] (107) \[=\frac{1}{2}\int_{0}^{\infty}\mathrm{d}\nu_{g}(s)\;\frac{(x-1)^{ 2}}{x+s}+\frac{1}{2}\int_{0}^{\infty}\frac{\mathrm{d}\nu_{g}(s^{-1})}{s}\; \frac{(x-1)^{2}}{1+sx}\,, \tag{108}\] where in the last step we redefined the measure as \(\mathrm{d}\nu_{g}(s^{-1})/s=\mathrm{d}\tilde{\mu}_{g}(s^{-1})/s+b\,\delta(s)\). This proves Eq. (7). Another expression that was not proved in the main text is Eq. (10). As it is explained there, this follows from the two identities: \[(\mathbb{L}_{\sigma}\mathbb{R}_{\rho}^{-1}-\mathbb{1})\left[\rho ^{1/2}\right] =(\sigma-\rho)\rho^{-1/2}=\mathbb{R}_{\rho}^{-1/2}(\sigma-\rho)\,, \tag{109}\] \[(\mathbb{R}_{\sigma}\mathbb{L}_{\rho}^{-1}-\mathbb{1})\left[\rho ^{1/2}\right] =\rho^{-1/2}(\sigma-\rho)=\mathbb{L}_{\rho}^{-1/2}(\sigma-\rho)\,, \tag{110}\] which can be verified by explicit computation. Then, one can rewrite the contrast functions in Eq. 
(4) as: \[H_{g}(\rho||\sigma) =\mathrm{Tr}\left[g(\mathbb{L}_{\sigma}\mathbb{R}_{\rho}^{-1}) \left[\rho\right]\right]=\mathrm{Tr}\left[\rho^{1/2}\,g(\mathbb{L}_{\sigma} \mathbb{R}_{\rho}^{-1})\left[\rho^{1/2}\right]\right]= \tag{111}\] \[=\mathrm{Tr}\left[(\mathbb{R}_{\sigma}\mathbb{L}_{\rho}^{-1}-1)^ {-1}[(\rho-\sigma)]g(\mathbb{L}_{\sigma}\mathbb{R}_{\rho}^{-1})\left[( \mathbb{L}_{\sigma}\mathbb{R}_{\rho}^{-1}-\mathbb{1})^{-1}[(\rho-\sigma)] \right]\rho^{-1}\right]=\] (112) \[=\mathrm{Tr}\left[(\rho-\sigma)g(\mathbb{L}_{\sigma}\mathbb{R}_{ \rho}^{-1})\left[(\mathbb{L}_{\sigma}\mathbb{R}_{\rho}^{-1}-1)^{-2}[(\rho- \sigma)]\right]\rho^{-1}\right]=\] (113) \[=\mathrm{Tr}\left[(\rho-\sigma)\,\mathbb{R}_{\rho}^{-1}h(\mathbb{ L}_{\sigma}\mathbb{R}_{\rho}^{-1})[(\rho-\sigma)]\right], \tag{114}\] where in the first line we used the fact that since the superoperator \(g(\mathbb{L}_{\sigma}\mathbb{R}_{\rho}^{-1})\) acts on \(\rho\) on the right only through \(\mathbb{R}_{\rho}^{-1}\) one can extract \(\rho^{1/2}\) from the square parenthesis, and use the cyclicity of the trace to put it in front; in the second line we used both Eq. (109) and Eq. (110), in the third line the fact that \((\mathbb{R}_{\sigma}\mathbb{L}_{\rho}^{-1}-1)^{\dagger}=(\mathbb{L}_{\sigma} \mathbb{R}_{\rho}^{-1}-1)\) and finally the definition of the function \(h(x):=g(x)/(x-1)^{2}\). This proves Eq. (10). Hence, from the integral expression in Eq. (108) it follows that we can rewrite generic contrast functions as: \[H_{g}(\rho||\sigma)= \frac{1}{2}\int_{0}^{\infty}\mathrm{d}\nu_{g}(s)\,\mathrm{Tr} \left[(\rho-\sigma)\,\mathbb{R}_{\rho}^{-1}(\mathbb{L}_{\sigma}\mathbb{R}_{\rho }^{-1}+s)^{-1}[(\rho-\sigma)]\right]+ \tag{115}\] \[+\frac{1}{2}\int_{0}^{\infty}\frac{\mathrm{d}\nu_{g}(s^{-1})}{s} \,\mathrm{Tr}\left[(\rho-\sigma)\,\mathbb{R}_{\rho}^{-1}(1+s\mathbb{L}_{ \sigma}\mathbb{R}_{\rho}^{-1})^{-1}[(\rho-\sigma)]\right]\,. \tag{116}\] Let us first focus on the first trace. In particular, it should be noticed that: \[\mathbb{R}_{\rho}^{-1}(\mathbb{L}_{\sigma}\mathbb{R}_{\rho}^{-1}+s)^{-1}=(( \mathbb{L}_{\sigma}\mathbb{R}_{\rho}^{-1}+s)\mathbb{R}_{\rho})^{-1}=(\mathbb{L}_ {\sigma}+s\mathbb{R}_{\rho})^{-1}\,, \tag{117}\] proving that the first integral coincides with the first integral of Eq. (11). Doing the same transformation on the second trace we obtain: \[\operatorname{Tr}\left[(\rho-\sigma)\,\mathbb{R}_{\rho}^{-1}(1+s \mathbb{L}_{\sigma}\mathbb{R}_{\rho}^{-1})^{-1}[(\rho-\sigma)]\right] =\operatorname{Tr}\left[(\rho-\sigma)\,(\mathbb{R}_{\rho}+s \mathbb{L}_{\sigma})^{-1}[(\rho-\sigma)]\right]= \tag{111}\] \[=\operatorname{Tr}\left[(\mathbb{L}_{\rho}+s\mathbb{R}_{\sigma}) ^{-1}[(\rho-\sigma)]\,(\rho-\sigma)\right]\,, \tag{112}\] where in the last step we used the fact that \((\mathbb{R}_{\rho}+s\mathbb{L}_{\sigma})^{\dagger}=(\mathbb{L}_{\rho}+s \mathbb{R}_{\sigma})\). Hence, putting everything together, and using the cyclicity of the trace in Eq. (112), we finally obtain: \[H_{g}(\rho||\sigma)=\frac{1}{2}\int_{0}^{\infty}\mathrm{d}\nu_{g}(s)\, \operatorname{Tr}\left[(\rho-\sigma)(\mathbb{L}_{\sigma}+s\mathbb{R}_{\rho}) ^{-1}[(\rho-\sigma)]\right]+\frac{1}{2}\int_{0}^{\infty}\frac{\mathrm{d}\nu_{ g}(s^{-1})}{s}\,\operatorname{Tr}\left[(\rho-\sigma)(\mathbb{L}_{\rho}+s \mathbb{R}_{\sigma})^{-1}[(\rho-\sigma)]\right]\,. \tag{113}\] proving Eq. (11). ## Appendix B Derivation of the flux of Fisher information We present here the derivation of Thm. 4. 
In particular, we want to study the evolution of the Fisher information: \[\mathcal{F}_{f,t}:=\operatorname{Tr}\left[\delta\rho_{t}\,\mathbb{J}_{f}^{-1} \big{|}_{\pi_{t}}[\delta\rho_{t}]\right]\,, \tag{114}\] where \(\pi_{t}:=\Phi_{t}(\pi)\) and \(\delta\rho_{t}:=\Phi_{t}(\delta\rho)\). Using the integral expression in Eq. (13), we can rewrite the Fisher information as: \[\mathcal{F}_{f,t}:=2\operatorname{Re}\int_{0}^{1}\mathrm{d}N_{g}(s)\, \operatorname{Tr}\left[\delta\rho_{t}\,(\mathbb{L}_{\pi_{t}}+s\,\mathbb{R}_{ \pi_{t}})^{-1}[\delta\rho_{t}]\right]\,, \tag{115}\] where the real part comes from the fact that \((\mathbb{L}_{\pi}+s\mathbb{R}_{\pi})^{\dagger}=(\mathbb{R}_{\pi}+s\mathbb{L}_ {\pi})\). This expression is particularly convenient for calculations, due to the simple form that the derivative of \((\mathbb{L}_{\pi_{t}}+s\,\mathbb{R}_{\pi_{t}})^{-1}\) takes. In fact, this is given by: \[\frac{\mathrm{d}}{\mathrm{d}t}(\mathbb{L}_{\pi_{t}}+s\,\mathbb{R}_{\pi_{t}})^ {-1}=-(\mathbb{L}_{\pi_{t}}+s\,\mathbb{R}_{\pi_{t}})^{-1}(\mathbb{L}_{\dot{ \pi}_{t}}+s\,\mathbb{R}_{\dot{\pi}_{t}})(\mathbb{L}_{\pi_{t}}+s\,\mathbb{R}_{ \pi_{t}})^{-1}\,, \tag{116}\] where \(\dot{\pi}_{t}\) is simply the derivative of the state. This expression can be proved by noticing that \(\frac{\mathrm{d}}{\mathrm{d}t}\mathbb{L}_{\pi_{t}}=\mathbb{L}_{\dot{\pi}_{t}}\) (and similarly for \(\mathbb{R}_{\pi_{t}}\)) and by taking the derivative of: \[\frac{\mathrm{d}}{\mathrm{d}t}\left((\mathbb{L}_{\pi_{t}}+s\, \mathbb{R}_{\pi_{t}})(\mathbb{L}_{\pi_{t}}+s\,\mathbb{R}_{\pi_{t}})^{-1}\right) =(\mathbb{L}_{\dot{\pi}_{t}}+s\,\mathbb{R}_{\pi_{t}})(\mathbb{L}_ {\pi_{t}}+s\,\mathbb{R}_{\pi_{t}})^{-1}+(\mathbb{L}_{\pi_{t}}+s\,\mathbb{R}_{ \pi_{t}})\frac{\mathrm{d}}{\mathrm{d}t}(\mathbb{L}_{\pi_{t}}+s\,\mathbb{R}_{ \pi_{t}})^{-1}= \tag{117}\] \[=\frac{\mathrm{d}}{\mathrm{d}t}\,\mathbb{I}=0\,, \tag{118}\] which directly implies Eq. (116). Given this technical tool, we can start analysing the evolution of \(\mathcal{F}_{f,t}\) under the dynamics generated by the Lindbladian: \[\mathcal{L}_{t}[\rho]=-i[H_{t},\rho]+\sum_{\alpha}^{d^{2}}\,\lambda_{\alpha}(t) \,\left(A_{\alpha}(t)\,\rho\,A_{\alpha}(t)^{\dagger}-\frac{1}{2}\{A_{\alpha}(t )^{\dagger}\,A_{\alpha}(t),\rho\}\right)\,. \tag{119}\] Notice that, since \(\mathcal{F}_{f,t}\) is invariant under unitary transformations, there is no contribution coming from the commutator in the previous equation. Moreover, since the derivative is linear, it decomposes into a sum of the form: \[\mathcal{F}_{f,t}^{\prime}=\sum_{\alpha}\,\,\lambda_{\alpha}(t)\,\,\mathcal{I }_{\alpha}^{f}(t)\,, \tag{120}\] where each current \(\mathcal{I}_{\alpha}^{f}(t)\) only contains the corresponding jump operator \(A_{\alpha}(t)\), together with its adjoint. For this reason, without loss of generality, we consider here Lindblad operators generated by a single jump operator. In order to shorten the notation we also assume that the jump operator, denoted by \(A\), is time independent, again without loss of generality. We start by rewriting the derivative of the Fisher information as: \[\mathcal{F}_{f,t}^{\prime}=2\operatorname{Re}\int_{0}^{1}\mathrm{d}N_{g}(s)\, \left(2\operatorname{Tr}\left[\delta\dot{\rho}_{t}(\mathbb{L}_{\pi_{t}}+s\, \mathbb{R}_{\pi_{t}})^{-1}[\delta\rho_{t}]\right]+\operatorname{Tr}\left[ \delta\rho_{t}\left(\frac{\mathrm{d}}{\mathrm{d}t}(\mathbb{L}_{\pi_{t}}+s\, \mathbb{R}_{\pi_{t}})^{-1}\right)[\delta\rho_{t}]\right]\right). 
\tag{121}\] The second term in the integral can be expanded as: \[\mathrm{Tr}\left[\delta\rho_{t}\left(\frac{\mathrm{d}}{\mathrm{d}t} \left(\mathbb{L}_{\pi_{t}}+s\,\mathbb{R}_{\pi_{t}}\right)^{-1}\right)\left[ \delta\rho_{t}\right]\right] =-\mathrm{Tr}\left[\delta\rho_{t}(\mathbb{L}_{\pi_{t}}+s\, \mathbb{R}_{\pi_{t}})^{-1}(\mathbb{L}_{\pi_{t}}+s\,\mathbb{R}_{\pi_{t}})( \mathbb{L}_{\pi_{t}}+s\,\mathbb{R}_{\pi_{t}})^{-1}[\delta\rho_{t}]\right]= \tag{111}\] \[=-\mathrm{Tr}\left[B_{s}(t)^{\dagger}\,\hat{\pi}_{t}\,B_{s}(t) \right]-s\,\mathrm{Tr}\left[B_{s}(t)^{\dagger}B_{s}(t)\,\hat{\pi}_{t}\right]= \tag{112}\] where we introduced the notation \(B_{s}(t):=(\mathbb{L}_{\pi_{t}}+s\,\mathbb{R}_{\pi_{t}})^{-1}[\delta\rho_{t}]\). On the other hand, the first term in Eq. (100) simply gives: \[2\,\mathrm{Tr}\left[\delta\dot{\rho}_{t}(\mathbb{L}_{\pi_{t}}+s \,\mathbb{R}_{\pi_{t}})^{-1}[\delta\rho_{t}]\right]= =2\,\mathrm{Tr}\left[\mathcal{L}((\mathbb{R}_{\pi_{t}}+s\, \mathbb{L}_{\pi_{t}})[B_{s}(t)^{\dagger}])B_{s}(t)\right]= \tag{113}\] \[=2\,\mathrm{Tr}\left[\mathcal{L}(B_{s}(t)^{\dagger}\pi_{t})B_{s}( t)\right]+2s\,\mathrm{Tr}\left[\mathcal{L}(\pi_{t}B_{s}(t)^{\dagger})B_{s}(t) \right]\,, \tag{114}\] where in the first line we have multiplied and divided by \((\mathbb{R}_{\pi_{t}}+s\,\mathbb{L}_{\pi_{t}})\) to obtain \(B_{s}(t)^{\dagger}\). We can now proceed in summing up Eq. (112) and Eq. (114). Due to the number of terms that will appear, though, we first consider the first traces in both equations, and then the second ones. Hence, summing the first term in Eq. (112) and the first of Eq. (114), and explicitly expanding the Lindbladian, we obtain: \[2\,\mathrm{Tr}\left[A\,B_{s}(t)^{\dagger}\pi_{t}A^{\dagger}B_{s }(t)\right]-\mathrm{Tr}\left[B_{s}(t)^{\dagger}\pi_{t}A^{\dagger}A\overline{B _{s}(t)}\right]-\mathrm{Tr}\left[A^{\dagger}A\,B_{s}(t)^{\dagger}\pi_{t}B_{s }(t)\right]+\] \[-\mathrm{Tr}\left[B_{s}(t)^{\dagger}A\,\pi_{t}A^{\dagger}\,B_{s }(t)\right]+\frac{1}{2}\frac{\mathrm{Tr}\left[B_{s}(t)^{\dagger}\pi_{t}A^{ \dagger}A\overline{B_{s}(t)}\right]}{\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! ## Appendix C Derivations from Sec. III.3 In this appendix we first prove Thm. 13, and then proceed to derive the structural characterisation of Fisher detailed balance Lindbladians presented in Eq. (146). Due to the amount of different notions of adjoints used in the following, we remind the reader about the notation used. 
There are three different scalar product used, namely the Hilbert-Schmidt one, \(K_{\pi}^{o}\) and \(K_{f,\pi}\), which induce the following adjoints: Hilbert-Schmidt: \[\operatorname{Tr}\left[AX(B)\right]=\operatorname{Tr}\left[X^{ \dagger}(A)B\right]\,;\] (112) \[K_{\pi}^{o}: \operatorname{Tr}\left[A\,\mathcal{J}_{\pi}[OB]\right]= \operatorname{Tr}\left[\widetilde{O}^{o}(A)\mathcal{J}_{\pi}[B]\right]\,;\] (113) \[K_{f,\pi}: \operatorname{Tr}\left[A\,\mathbb{J}_{f}^{-1}\big{|}_{\pi}[ \mathcal{O}B]\right]=\operatorname{Tr}\left[(\widetilde{\mathcal{O}}_{f}A) \mathbb{J}_{f}^{-1}\big{|}_{\pi}[B]\right]\,,\] (114) where \(O\) and \(\mathcal{O}\) are a superoperator on the space of observables or on the state space, respectively, while \(X\) is a generic bounded operator. We also remind the reader that we use the notation \(\mathcal{J}_{\pi}:=\mathbb{R}_{\pi}\). Then, the adjoint with respect to \(K_{\pi}^{o}\) or \(K_{f,\pi}\) are related to the Hilbert-Schmidt one by the relation: \[\widetilde{O}^{o}=\mathcal{J}_{\pi}^{-1}\circ O^{\dagger}\circ\mathcal{J}_{ \pi}\,; \widetilde{\mathcal{O}}_{f}=\mathbb{J}_{f}\big{|}_{\pi}\circ\mathcal{O}^{ \dagger}\circ\mathbb{J}_{f}^{-1}\big{|}_{\pi}\,, \tag{115}\] as it can be verified directly from the definition. In this context, self-adjointness with respect to \(K_{\pi}^{o}\) is equivalent to the condition \(\mathcal{J}_{\pi}\circ O=O^{\dagger}\circ\mathcal{J}_{\pi}\), while for the Fisher scalar product it can be expressed by the equality \(\mathcal{O}\circ\mathbb{J}_{f}\big{|}_{\pi}=\mathbb{J}_{f}\big{|}_{\pi}\circ \mathcal{O}^{\dagger}\). Thanks to this characterisation we can prove the following useful result: **Lemma 2**.: _Suppose \(\mathcal{O}\) and \(O\) are adjoint of each other, \(\mathcal{O}^{\dagger}\equiv O\). Then, if \([\mathcal{O},\mathbb{L}_{\pi}\mathbb{R}_{\pi}^{-1}]=0\), the two conditions of self-adjointness and skew-self-adjointness with respect to \(K_{f,\pi}\) and \(K_{\pi}^{o}\) coincide._ Proof.: First of all, it should be noticed that the adjoint of the modular operator takes the form \((\mathbb{L}_{\pi}\mathbb{R}_{\pi}^{-1})^{\dagger}=\mathbb{L}_{\pi}^{-1} \mathbb{R}_{\pi}=(\mathbb{L}_{\pi}\mathbb{R}_{\pi}^{-1})^{-1}\) and for the right multiplication operator one has \((\mathbb{R}_{\pi})^{\dagger}=\mathbb{L}_{\pi}\). Using these property we can show that the (skew-)self-adjointness with respect to \(K_{f,\pi}\) is equivalent to the corresponding notion for \(K_{\pi}^{o}\). In fact, the following relations are equivalent: \[\widetilde{\mathcal{O}}_{f}=\pm\mathcal{O}\iff(\mathcal{O} \mathbb{J}_{f}\big{|}_{\pi})^{\dagger}=\pm\,\mathcal{O}\mathbb{J}_{f}\big{|}_ {\pi} \iff f((\mathbb{L}_{\pi}\mathbb{R}_{\pi}^{-1})^{-1})\mathbb{L}_{\pi}O=\pm \,\mathcal{O}\mathbb{R}_{\pi}f(\mathbb{L}_{\pi}\mathbb{R}_{\pi}^{-1})\iff \tag{116}\] \[\iff\,\,\underline{f}(\mathbb{L}_{\pi}\mathbb{R}_{\pi}^{-1}) \mathbb{R}_{\pi}O=\pm\,\underline{f}(\mathbb{L}_{\pi}\mathbb{R}_{\pi}^{-1})O ^{\dagger}\mathbb{R}_{\pi}\iff\widetilde{O}^{o}=\pm O\,, \tag{117}\] where in the last line we used the property \(f(x)=xf(x^{-1})\), together with the commutation between \(\mathcal{O}\) and \(\mathbb{L}_{\pi}\mathbb{R}_{\pi}^{-1}\) to push \(f(\mathbb{L}_{\pi}\mathbb{R}_{\pi}^{-1})\) to the left of both equations. Finally, the last step is simply the definition of \(\mathcal{O}^{\dagger}\equiv O\). This lemma is particularly useful because it allows to reduce the question about the equivalence of Def. 1 and Def. 
3 to the decision about the commutation of the Lindbladian with the modular operator \(\mathbb{L}_{\pi}\mathbb{R}_{\pi}^{-1}\). ### Proof of Theorem 13 The aim of this section is to prove Thm. 13, which we repeat here for convenience: **Theorem**.: _The following conditions are equivalent:_ 1. _the generator of the dynamics in the Heisenberg picture_ \(\mathcal{L}^{\dagger}\) _satisfies the adjointness relations in Def._ 1_;_ 2. _the Lindbladian_ \(\mathcal{L}\) _satisfies the structural characterisation in Def._ 2_._ _These conditions imply the condition:_ 1. _the generator of the dynamics in the Schroedinger picture_ \(\mathcal{L}\) _satisfies the adjointness relations in Def._ 3_._ _Moreover, if the Hamiltonian \(H\) is non-degenerate the three conditions are equivalent._ First, it should be noticed that the equivalence between condition 1 and 2 was already proven by Alicki in [25], so we postpone the proof to App. 11 where we characterise the Lindbladians satisfying condition 3. Then, if \(H\) is non-degenerate, this provides a proof of the structural definition of condition 2. Proof.: First it should be noticed that if condition 2 is satisfied, the Lindbladian commutes with the modular operator \(\mathbb{L}_{\pi}\mathbb{R}_{\pi}^{-1}\). In fact, starting from the characterisation: \[\mathcal{L}(\rho)=-i[H,\rho]+\sum_{\omega,i}\ \lambda_{i}^{\omega}\left(A_{i}^{ \omega}\,\rho\,(A_{i}^{\omega})^{\dagger}-\frac{1}{2}\{(A_{i}^{\omega})^{ \dagger}A_{i}^{\omega},\rho\}\right)\,, \tag{100}\] it is a matter of straightforward calculations to verify that: \[\mathbb{L}_{\pi}\mathbb{R}_{\pi}^{-1}(\mathcal{L}(\rho)) =-i\pi[H,\rho]\pi^{-1}+\sum_{\omega,i}\ \lambda_{i}^{\omega}\left((\pi\,A_{i}^{\omega}\pi^{-1})\,\pi\,\rho\,\pi^{-1}( \pi^{-1}A_{i}^{\omega}\,\pi)^{\dagger}-\frac{1}{2}\pi\{(A_{i}^{\omega})^{ \dagger}A_{i}^{\omega},\rho\}\pi^{-1}\right)= \tag{101}\] \[=-i[H,\pi\rho\pi^{-1}]+\sum_{\omega,i}\ \lambda_{i}^{\omega}\left( \mathscr{E}\mathscr{E}\mathscr{E}\mathscr{A}_{i}^{\omega}\,\pi\,\rho\,\pi^{- 1}(A_{i}^{\omega})^{\dagger}-\frac{1}{2}\{(A_{i}^{\omega})^{\dagger}A_{i}^{ \omega},\pi\,\rho\pi^{-1}\}\right)=\] (102) \[=\mathcal{L}(\mathbb{L}_{\pi}\mathbb{R}_{\pi}^{-1}(\rho)) \tag{103}\] where we used the condition \([H,\pi]=0\), together with \(\mathbb{L}_{\pi}\mathbb{R}_{\pi}^{-1}(A_{i}^{\omega})=e^{\omega}\,A_{i}^{\omega}\) and \((A_{i}^{\omega})^{\dagger}=A_{i}^{-\omega}\). Since condition 2 is equivalent to condition 1, and \([\mathcal{L},\mathbb{L}_{\pi}\mathbb{R}_{\pi}^{-1}]=0\), thanks to Lemma 2 this means that \(\mathcal{L}\) has the same self-adjointness properties with respect to \(K_{\pi}^{\circ}\) and \(K_{f,\pi}\). This proves the forward implication. Let us prove the reverse, namely that condition 3 is equivalent to condition 1 for non-degenerate Hamiltonians. Let us first focus on the unitary part. Then, using \(\mathcal{U}^{\dagger}=-\mathcal{U}\) we can rewrite the skew-adjointness condition as: \[\mathcal{U}\circ\mathbb{J}_{f}\big{|}_{\pi}=-\mathbb{J}_{f}\big{|}_{\pi} \circ\mathcal{U}^{\dagger}\qquad\Longleftrightarrow\qquad\mathcal{U}\circ \mathbb{J}_{f}\big{|}_{\pi}=\mathbb{J}_{f}\big{|}_{\pi}\circ\mathcal{U}\,. \tag{104}\] Then, applying the two operators in the last equation to the identity, we can verify that: \[\mathcal{U}\circ\mathbb{J}_{f}\big{|}_{\pi}(\mathbb{1})=\mathbb{J}_{f}\big{|} _{\pi}\circ\mathcal{U}(\mathbb{1})\implies\mathcal{U}(\pi)=-i\,\mathbb{J}_{f }\big{|}_{\pi}([H,\mathbb{1}])\implies[H,\pi]=0\,. 
\tag{105}\] Notice that this result is generic, i.e., no assumptions on the spectrum of \(H\) need to be made. This directly implies the commutation \([\mathcal{U},\mathbb{L}_{\pi}\mathbb{R}_{\pi}^{-1}]=0\), so again thanks to Lemma 2 we have that for the unitary part condition 3 and 1 are always equivalent. Let us now focus on Eq. (105). Since \(H\) and \(\pi\) commute, we can find a common set of eigenvectors, which we denote by \(\{\ket{\alpha}\bra{\beta}\ket{}\) gives a set of common eigenvectors to \(\mathcal{U}\) and \(\mathbb{L}_{\pi}\mathbb{R}_{\pi}^{-1}\), as it can be verified by direct calculation: \[\mathcal{U}(\ket{\alpha}\bra{\beta})=-i(H_{\alpha}-H_{\beta}) \ket{\alpha}\bra{\beta}\,; \mathbb{L}_{\pi}\mathbb{R}_{\pi}^{-1}(\ket{\alpha}\bra{\beta})=\frac{\pi_{ \alpha}}{\pi_{\beta}}\ket{\alpha}\bra{\beta}\,. \tag{106}\] Both superoperators have constant eigenvalues for all eigenvectors of the form \(\ket{\alpha}\bra{\alpha}\). Under the assumption of continuity under small perturbations, we can also assume that each eigenvector such that \(\alpha\neq\beta\) has a different eigenvalue (non-degenerate gap condition). These two observations together then imply that any superoperator commuting with \(\mathcal{U}\) needs to commute with \(\mathbb{L}_{\pi}\mathbb{R}_{\pi}^{-1}\) as well. In the main text we showed how normality of \(\mathcal{L}\) implies \([\mathcal{U},\mathcal{L}_{\mathcal{D}}]=0\). Then, thanks to the considerations above, we also have that: \[[\mathcal{L}_{\mathcal{D}},\mathbb{L}_{\pi}\mathbb{R}_{\pi}^{-1}]=0. \tag{107}\] Thus, thanks once again to Lemma 2, we have that \((\widetilde{\mathcal{L}_{\mathcal{D}}})_{f}=\mathcal{L}_{\mathcal{D}}\) implies \((\widetilde{\mathcal{L}_{\mathcal{D}}^{\dagger}})^{o}=\mathcal{L}_{\mathcal{D}}^ {\dagger}\). This concludes the proof. It should be noticed that the non-degeneracy of the spectrum is needed to prove the commutation relation in Eq. (107). The same equivalence can be proven if \(\mathbb{L}_{\pi}\mathbb{R}_{\pi}^{-1}\) has some functional dependence on \(\mathcal{U}\). Take for example the thermal scenario, i.e., \(\pi\propto\exp[-\beta H]\). Then the modular operator takes the form \(\mathbb{L}_{\pi}\mathbb{R}_{\pi}^{-1}=\exp[-i\beta\mathcal{U}]\). Due to normality of the Lindbladian (which implies \([\mathcal{U},\mathcal{L}_{\mathcal{D}}]=0\)), we directly obtain Eq. (107), proving the equivalence without further assumptions on the spectrum of \(\pi\) or \(H\). ### Def. 3 is weaker in general Whereas the constraints coming from Def. 1 imply the ones in Def. 3, the reverse does not hold in general. In fact, this is connected with the commutation between \(\mathcal{L}_{\mathcal{D}}\) and \(\mathbb{L}_{\pi}\mathbb{R}_{\pi}^{-1}\). Whereas in the first definition of detailed balance these two operators always commute, this is not the case for the Fisher detailed balance dissipators. This leads to a less constrained evolution of the coherences, as it will be shown in the following. First, as it was discussed in the proof in the previous section, if \(\{\,|\alpha\,\rangle\}\) is an eigenbasis for \(\pi\), then \(\{\,|\alpha\,\rangle\,\rangle\,\langle\beta\,|\,\}\) are eigenvectors for \(\mathbb{L}_{\pi}\mathbb{R}_{\pi}^{-1}\), with eigenvalues: \[\mathbb{L}_{\pi}\mathbb{R}_{\pi}^{-1}(\,|\alpha\,\rangle\,\,\langle\beta\,|)= \frac{\pi_{\alpha}}{\pi_{\beta}}\,\,|\alpha\,\rangle\,\,\langle\beta|. 
\tag{101}\] Since the steady state \(\pi\) is always assumed to be full rank, proving that \(\mathcal{L}_{\mathcal{D}}\) commutes with \(\mathbb{L}_{\pi}\mathbb{R}_{\pi}^{-1}\) is equivalent to requiring that the matrix elements \[(\mathcal{L}_{\mathcal{D}})_{\delta|\beta}^{\gamma|\alpha}\,=\,\langle\gamma \,|\,\mathcal{L}_{\mathcal{D}}(\,|\alpha\,\rangle\langle\beta\,|)\,\,|\delta\,\rangle \tag{102}\] satisfy the following condition \[[\mathcal{L}_{\mathcal{D}},\Phi_{\pi}]=0\quad\iff\quad(\mathcal{L}_{\mathcal{ D}})_{\delta|\beta}^{\gamma|\alpha}\left(\frac{\pi_{\alpha}}{\pi_{\beta}}- \frac{\pi_{\gamma}}{\pi_{\delta}}\right)=0\,. \tag{103}\] Equivalently, this means that matrix elements of \(\mathcal{D}\) can be nonzero only if: \[(\mathcal{L}_{\mathcal{D}})_{\delta|\beta}^{\gamma|\alpha}\neq 0\quad\implies \quad\frac{\pi_{\alpha}}{\pi_{\beta}}=\frac{\pi_{\gamma}}{\pi_{\delta}}\,. \tag{104}\] In the following we show that a slightly more general condition follows from the requirement that \((\widetilde{\mathcal{L}_{\mathcal{D}}})_{f}=\mathcal{L}_{\mathcal{D}}\) for all standard monotone functions \(f\). Indeed, this condition can be written in coordinates as: \[\mathcal{L}_{\mathcal{D}}\circ\mathbb{J}_{f}\big{|}_{\pi}=\mathbb{J}_{f} \big{|}_{\pi}\circ\mathcal{L}_{\mathcal{D}}^{\dagger}\quad\iff\quad( \mathcal{L}_{\mathcal{D}})_{\delta|\beta}^{\gamma|\alpha}\,f\left(\frac{\pi_{ \alpha}}{\pi_{\beta}}\right)\pi_{\beta}=(\mathcal{L}_{\mathcal{D}})_{\alpha| \gamma}^{\beta|\delta}\,f\left(\frac{\pi_{\gamma}}{\pi_{\delta}}\right)\pi_{ \delta}\,. \tag{105}\] At this point it is useful to introduce the notation \(e^{-\omega_{1}}:=\pi_{\alpha}/\pi_{\beta}\) and \(e^{-\omega_{2}}:=\pi_{\gamma}/\pi_{\delta}\). Grouping the functional dependence on one side of the equation we obtain: \[\frac{(\mathcal{L}_{\mathcal{D}})_{\delta|\beta}^{\gamma|\alpha}\,\pi_{\beta }}{(\mathcal{L}_{\mathcal{D}})_{\alpha|\gamma}^{\beta|\delta}\,\pi_{\delta}} =\frac{f\left(e^{-\omega_{2}}\right)}{f\left(e^{-\omega_{1}}\right)}\,. \tag{106}\] It should be noticed that the left hand side of the equation does not depend on the function \(f\), so the coordinates of \(\mathcal{L}_{\mathcal{D}}\) are zero unless \(\omega_{1}=\pm\omega_{2}\). Notice that one cannot rule out the case \(\omega_{1}=-\omega_{2}\), since this follows from the symmetry of standard monotone functions \(f(x)=x\,f(x^{-1})\). Then, the only non-zero elements of a Fisher self-adjoint \(\mathcal{L}_{\mathcal{D}}\) are the ones for which either of the conditions: \[(\mathcal{L}_{\mathcal{D}})_{\delta|\beta}^{\gamma|\alpha}\neq 0\implies \left(\frac{\pi_{\alpha}}{\pi_{\beta}}=\frac{\pi_{\gamma}}{\pi_{\delta}} \right)\vee\left(\frac{\pi_{\alpha}}{\pi_{\beta}}=\frac{\pi_{\delta}}{\pi_{ \gamma}}\right)\,, \tag{107}\] are satisfied. Comparing this result with Eq. (104) directly shows that in general \(\mathcal{L}_{\mathcal{D}}\) does not commute with \(\mathbb{L}_{\pi}\mathbb{R}_{\pi}^{-1}\), so Def. 3 is weaker than Def. 1 (at the end of the section we present an explicit example showing this). For this reason, it is interesting to explore which constraints Eq. (107) imposes on the Lindbladian. First, it should be noticed that since \(\mathcal{L}_{\mathcal{D}}\) is adjoint preserving, its coordinates satisfy \[\mathcal{L}_{\mathcal{D}}(A^{\dagger})=\mathcal{L}_{\mathcal{D}}(A)^{\dagger} \quad\iff\quad(\mathcal{L}_{\mathcal{D}})_{\delta|\beta}^{\gamma|\alpha}= \overline{(\mathcal{L}_{\mathcal{D}})_{\gamma|\alpha}^{\delta|\beta}}\,. \tag{108}\] Then, combining Eq. 
(106), Eq. (107), and Eq. (108), we can see that: * populations and coherences do not mix, and the populations on the diagonal satisfy the classical detailed balance condition. In fact, from Eq. (107) we see that from \(\alpha=\beta\) it follows that \(\gamma=\delta\) (assuming that \(\pi\) is non-degenerate), and the transition probabilities are related by the standard detailed balance condition: \[(\mathcal{L}_{\mathcal{D}})_{\gamma|\alpha}^{\gamma|\alpha}\,\pi_{\alpha}=( \mathcal{L}_{\mathcal{D}})_{\alpha|\gamma}^{\alpha|\gamma}\,\pi_{\gamma}\,.\] (109) Moreover, the dynamics of the coherences can be split in two cases: * the one for which \(\frac{\pi_{\alpha}}{\pi_{\beta}}=\frac{\pi_{\gamma}}{\pi_{\delta}}\), implying the following relation: \[(\mathcal{L}_{\mathcal{D}})^{\gamma|\alpha}_{\delta|\beta}\,\pi_{\beta}=( \mathcal{L}_{\mathcal{D}})^{\beta|\delta}_{\alpha|\gamma}\,\pi_{\delta}=\overline {(\mathcal{L}_{\mathcal{D}})^{\alpha|\gamma}_{\beta|\delta}}\,\pi_{\delta}\,.\] (104) This property is satisfied also in the Alicki's definition of detailed balanced generator. * the additional transitions between coherences, corresponding to the case \(\frac{\pi_{\alpha}}{\pi_{\beta}}=\frac{\pi_{\delta}}{\pi_{\gamma}}\). In this case the rate are given by: \[(\mathcal{L}_{\mathcal{D}})^{\gamma|\alpha}_{\delta|\beta}\,\pi_{\beta}=( \mathcal{L}_{\mathcal{D}})^{\beta|\delta}_{\alpha|\gamma}\,\pi_{\gamma}= \overline{(\mathcal{L}_{\mathcal{D}})^{\alpha|\gamma}_{\beta|\delta}}\,\pi_{ \gamma}\,.\] (105) These rates are the only novelty compared with the ones coming from Def. 1, and are the cause of the failure of \(\mathcal{L}_{\mathcal{D}}\) from commuting with \(\mathbb{L}_{\pi}\mathbb{R}_{\pi}^{-1}\). In order to justify the preference for Def. 3 as the quantum generalisation of detailed balance, we argue here that this last case is still physically sensible. Consider indeed two coherences terms \(\ket{\alpha}\bra{\beta}\) and \(\ket{\gamma}\bra{\delta}\) such that \(\frac{\pi_{\alpha}}{\pi_{\beta}}=\frac{\pi_{\gamma}}{\pi_{\delta}}\). Then, from Eq. (104) it follows that the ratio between the currents induced between the two coherences is given by: \[\frac{\left|(\mathcal{L}_{\mathcal{D}})^{\gamma|\alpha}_{\delta|\beta}\right| }{\left|(\mathcal{L}_{\mathcal{D}})^{\alpha|\gamma}_{\beta|\delta}\right|}= \frac{\pi_{\delta}}{\pi_{\beta}}=\frac{\pi_{\gamma}}{\pi_{\alpha}}\,. \tag{106}\] The additional freedom given by Eq. (105) corresponds to the possibility of the matrix element \((\mathcal{L}_{\mathcal{D}})^{\delta|\alpha}_{\gamma|\beta}\) to be non-zero. It should be noticed, though, that the current between the two coherences is consistent with Eq. (106): \[\frac{\left|(\mathcal{L}_{\mathcal{D}})^{\delta|\alpha}_{\gamma|\beta}\right| }{\left|(\mathcal{L}_{\mathcal{D}})^{\alpha|\delta}_{\beta|\gamma}\right|}= \frac{\pi_{\delta}}{\pi_{\beta}}=\frac{\pi_{\gamma}}{\pi_{\alpha}}\,. \tag{107}\] Thus, the difference between the detailed balance condition in Def. 1 and the Fisher one (i.e., Def. 3), is that the first allows the coherences \(\ket{\alpha}\bra{\beta}\) and \(\ket{\gamma}\bra{\delta}\) to communicate but prohibits interactions between \(\ket{\alpha}\bra{\beta}\) and \(\ket{\delta}\bra{\gamma}\), while the latter allows the dynamics to connect both off-diagonal elements, while still keeping the ratio between the two currents detailed balanced. Since there is no clear argument to disregard this second set of transitions, it seems that Def. 3 should be preferred. 
Finally, before moving on to the characterisation of the Lindbladian operators satisfying Def. 3 we present here an example of \(\mathcal{L}\) which is detailed balance in the Fisher sense, but not according to Def. 1. Consider a two-levels system equilibrating to the state \[\pi_{\beta}=\frac{\ket{0}\bra{0}+e^{-\beta}\ket{1}\bra{1}}{1+e^{-\beta}}\,. \tag{108}\] We consider the Hamiltonian of the system to be completely degenerate, i.e., \(H\propto\mathbb{1}\), which implies \(\mathcal{U}=0\). Then, it follows from an explicit calculation that the Lindbladian \[\mathcal{L}(\rho)=A\rho A^{\dagger}-\frac{1}{2}\{\rho,A^{\dagger}A\}\,, \tag{109}\] with the jump operator given by \(A=\ket{0}\bra{1}+\sqrt{e^{-\beta}}\ket{1}\bra{0}\) satisfies \(\widetilde{\mathcal{L}}_{f}=\mathcal{L}\), but not \(\widetilde{\mathcal{L}}^{o}=\mathcal{L}\). ### Structural characterisation of Def. 3 This section is devoted to the derivation of the structural form of Lindbladians satisfying the detailed balance condition in Def. 3. Restricting the derivation to the case in which \(\mathcal{L}\) commutes with \(\mathbb{L}_{\pi}\mathbb{R}_{\pi}^{-1}\) also proves the equivalence between Def. 1 and Def. 2. With the hindsight of the previous section it is useful to expand the Lindbladian operator in terms of the following eigenbasis: \[F_{\alpha}^{\omega}:=\left\{\ket{\gamma}\bra{\alpha}\bigg{|}\,\,\frac{\pi_{ \gamma}}{\pi_{\alpha}}=e^{\omega}\right\}\,. \tag{110}\] It is also useful to introduce the function \(\beta_{\alpha}(\omega):=\{\beta|\,\pi_{\beta}=\pi_{\alpha}e^{\omega}\}\), namely a function that returns the index \(\beta\) such that \(\frac{\pi_{\beta}}{\pi_{\alpha}}=e^{\omega}\). In order to keep the notation clear we also define \(\gamma_{\alpha}(\omega)\) and \(\delta_{\alpha}(\omega)\) exactly in the same way. The elements of the eigenbasis in Eq. (105) have the property that: \[\left(F^{\omega}_{\alpha}\right)^{\dagger}=F^{-\omega}_{\gamma_{ \alpha}(\omega)}\,;\qquad\qquad\pi F^{\omega}_{\alpha}=e^{\omega}\,F^{\omega}_ {\alpha}\,\pi\,. \tag{106}\] Generically, one can express the action of the dissipator as: \[\mathcal{L}_{\mathcal{D}}(\rho): =\sum_{\begin{subarray}{c}\alpha,\beta,\\ \gamma,\delta\end{subarray}}\left(\mathcal{L}_{\mathcal{D}}\right)^{\gamma| \alpha}_{\delta|\beta}\,|\,\gamma\rangle\langle\alpha|\,\,\rho\,(|\delta \rangle\langle\beta|)^{\dagger}=\sum_{\begin{subarray}{c}\alpha,\omega,\\ \omega_{1},\omega_{2}\end{subarray}}\left(\mathcal{L}_{\mathcal{D}}\right)^{ \gamma_{\alpha}(\omega)|\alpha}_{\delta_{\alpha}(\omega+\omega_{2})|\beta_{ \alpha}(\omega_{1})}\,F^{\omega}_{\alpha}\,\rho\,(F^{\omega-\omega_{1}+ \omega_{2}}_{\beta_{\alpha}(\omega_{1})})^{\dagger}\,, \tag{107}\] where we implicitly defined \(\pi_{\gamma}/\pi_{\alpha}=:e^{\omega}\), \(\pi_{\beta}/\pi_{\alpha}=:e^{\omega_{1}}\) and \(\pi_{\delta}/\pi_{\gamma}=:e^{\omega_{2}}\). This expression is particularly useful because it allows for a straightforward application of the constraints in Eq. (104). In fact, we have that: \[\frac{\pi_{\alpha}}{\pi_{\beta}} =\frac{\pi_{\gamma}}{\pi_{\delta}} \Longleftrightarrow\qquad\omega_{1}=\omega_{2}\,; \tag{108}\] \[\frac{\pi_{\alpha}}{\pi_{\beta}} =\frac{\pi_{\delta}}{\pi_{\gamma}} \Longleftrightarrow\qquad\omega_{1}=-\omega_{2}\,. 
\tag{109}\] Hence, the sum above can be restricted to the case \(\omega_{1}=\pm\omega_{2}\), giving: \[\mathcal{L}_{\mathcal{D}}(\rho)= \sum_{\begin{subarray}{c}\alpha,\omega\\ \omega_{1}\end{subarray}}\left(\mathcal{L}_{\mathcal{D}}\right)^{\gamma_{ \alpha}(\omega)|\alpha}_{\delta_{\alpha}(\omega+\omega_{1})|\beta_{\alpha}( \omega_{1})}\,F^{\omega}_{\alpha}\,\rho\,(F^{\omega}_{\beta_{\alpha}(\omega_{ 1})})^{\dagger}+\sum_{\begin{subarray}{c}\alpha,\omega,\\ \omega_{1}\neq\emptyset\end{subarray}}\left(\mathcal{L}_{\mathcal{D}}\right)^{ \gamma_{\alpha}(\omega)|\alpha}_{\delta_{\alpha}(\omega-\omega_{1})|\beta_{ \alpha}(\omega_{1})}\,F^{\omega}_{\alpha}\,\rho\,(F^{\omega-2\omega_{1}}_{ \beta_{\alpha}(\omega_{1})})^{\dagger}\,. \tag{110}\] It should be noticed that the case \(\omega_{1}=0\) is included in the first sum, so that we have to impose the constraint \(\omega_{1}\neq 0\) in the second sum. In this way \(\mathcal{L}_{\mathcal{D}}\) naturally splits in two parts, \(\mathcal{L}_{\mathcal{D}_{1}}\) (i.e., the operator in the first sum) corresponding to the component of the dissipator commuting with \(\mathbb{L}_{\pi}\mathbb{R}_{\pi}^{-1}\), and \(\mathcal{L}_{\mathcal{D}_{2}}\). Interestingly, the condition \(\widetilde{\mathcal{L}}^{\circ}_{\mathcal{D}}=\mathcal{L}_{\mathcal{D}}\) implies \(\mathcal{L}_{\mathcal{D}}\equiv\mathcal{L}_{\mathcal{D}_{1}}\), so characterising the latter provides the structural form of Lindbladians detailed balance according to Def. 1. Whereas \(\mathcal{L}_{\mathcal{D}_{1}}\) directly commutes with \(\mathbb{L}_{\pi}\mathbb{R}_{\pi}^{-1}\), we can apply a transformation to \(\mathcal{L}_{\mathcal{D}_{2}}\) to make it commuting. In particular, it should be noticed that \(\mathcal{L}_{\mathcal{D}_{2}}\) transforms under the transposition superoperator \(\Theta\) as: \[[\Theta\,\mathcal{L}_{\mathcal{D}_{2}}](\rho) =\sum_{\begin{subarray}{c}\alpha,\omega,\\ \omega_{1}\neq\emptyset\end{subarray}}\left(\mathcal{L}_{\mathcal{D}}\right)^{ \gamma_{\alpha}(\omega)|\alpha}_{\delta_{\alpha}(\omega-\omega_{1})|\beta_{ \alpha}(\omega_{1})}\,\left(F^{\omega}_{\alpha}\,\rho\,(F^{\omega-2\omega_{1}}_ {\beta_{\alpha}(\omega_{1})})^{\dagger}\right)^{T}= \tag{111}\] \[=\sum_{\begin{subarray}{c}\alpha,\omega,\\ \omega_{1}\neq\emptyset\end{subarray}}\left(\mathcal{L}_{\mathcal{D}}\right)^{ \gamma_{\alpha}(\omega)|\alpha}_{\delta_{\alpha}(\omega-\omega_{1})|\beta_{ \alpha}(\omega_{1})}\,\left|\pi_{\alpha}e^{\omega-\omega_{1}}\right.\big{\langle} \pi_{\alpha}\,\big{|}\,\rho\,|\pi_{\alpha}e^{\omega_{1}}\big{\rangle}\langle \pi_{\alpha}e^{\omega}|=\] (112) \[=\sum_{\begin{subarray}{c}\alpha,\omega,\\ \omega_{1}\neq\emptyset\end{subarray}}\left(\mathcal{L}_{\mathcal{D}}\right)^{ \gamma_{\alpha}(\omega)|\alpha}_{\delta_{\alpha}(\omega-\omega_{1})|\beta_{ \alpha}(\omega_{1})}\,F^{\omega-\omega_{1}}_{\alpha}\,\rho\,(F^{\omega-\omega _{1}}_{\beta_{\alpha}(\omega_{1})})^{\dagger}=\sum_{\begin{subarray}{c}\alpha, \omega,\\ \omega_{1}\neq\emptyset\end{subarray}}\left(\mathcal{L}_{\mathcal{D}}\right)^{ \gamma_{\alpha}(\omega+\omega_{1})|\alpha}_{\delta_{\alpha}(\omega)|\beta_{ \alpha}(\omega_{1})}\,F^{\omega}_{\alpha}\,\rho\,(F^{\omega}_{\beta_{\alpha}( \omega_{1})})^{\dagger}\,, \tag{113}\] where in the second line we used the abuse of notation \(\,|\pi_{\alpha}\rangle\) for \(\,|\alpha\rangle\), and in the last line we made an implicit change of variables. From Eq. 
(111) we can see that \(\Theta\mathcal{L}_{\mathcal{D}_{2}}\) takes a form completely analogous to \(\mathcal{L}_{\mathcal{D}_{1}}\). For this reason, it directly follows that: \[[\Theta\mathcal{L}_{\mathcal{D}_{2}},\mathbb{L}_{\pi}\mathbb{R}_{\pi}^{-1}]=0\,. \tag{114}\] This condition allows to lift the characterisation for \(\mathcal{L}_{\mathcal{D}_{1}}\) to \(\mathcal{L}_{\mathcal{D}_{2}}\), as it will be shown in the following. We begin by studying \(\mathcal{L}_{\mathcal{D}_{1}}\). Notice that the coordinates of \(\mathcal{L}_{\mathcal{D}}\) are related by: \[(\mathcal{L}_{\mathcal{D}})^{\gamma_{\alpha}(\omega)|\alpha}_{\delta_{\alpha}( \omega+\omega_{1})|\beta_{\alpha}(\omega_{1})}=e^{\omega}\left(\mathcal{L}_{ \mathcal{D}}\right)^{\beta_{\alpha}(\omega_{1})|\delta_{\alpha}(\omega+\omega_ {1})}_{\alpha|\gamma_{\alpha}(\omega)}\,, \tag{115}\] as it can be verified from Eq. (104). Then, we can proceed with the standard approach to diagonalise \(\mathcal{L}_{\mathcal{D}_{1}}\)[25, 46]. To this end, it is useful to introduce a new basis of operators, given by \(\{X^{\omega}_{m}\}=\{\Delta_{i}\}_{1\leq i\leq d}\cup\{F^{\omega}_{\alpha}\}_{ \omega\neq 0}\), where \(\{\Delta_{i}\}_{1\leq i\leq d}\) is an orthonormal basis for the diagonal matrices, and \(\Delta_{1}=1/\sqrt{d}\). Then, we can rewrite \(\mathcal{L}_{\mathcal{D}_{1}}\) in this basis as: \[\mathcal{L}_{\mathcal{D}_{1}}(\rho) =\sum_{\alpha,\beta}\,(\mathcal{L}_{\mathcal{D}})^{\gamma_{ \alpha}(\omega)|\alpha}_{\delta_{\beta}(0)|\beta}\,F^{\omega}_{\alpha}\,\rho \,(F^{\omega}_{\beta})^{\dagger}+\sum_{\begin{subarray}{c}\alpha,\beta,\\ \omega\neq\emptyset\end{subarray}}\left(\mathcal{L}_{\mathcal{D}}\right)^{ \gamma_{\alpha}(\omega)|\alpha}_{\delta_{\alpha}(\omega)|\beta}\,F^{\omega}_{ \alpha}\,\rho\,(F^{\omega}_{\beta})^{\dagger}=\] (116) \[=\sum_{i,j}\,D^{0}_{i,j}\,\Delta_{i}\,\rho\,(\Delta_{j})^{\dagger}+ \sum_{\begin{subarray}{c}\alpha,\beta,\\ \omega\neq\emptyset\ where we introduced the coefficients \(D^{\omega}_{\alpha,\beta}:=(\mathcal{L}_{\mathcal{D}})^{\gamma_{\alpha}(\omega)| \alpha}_{\delta_{\beta}(\omega)|\beta}\) and \[D^{0}_{i,j}=\sum_{\alpha,\beta}\,\left(\mathcal{L}_{\mathcal{D}} \right)^{a|\alpha}_{\beta|\beta}\bar{U}_{\alpha,i}\,U_{j,\beta}\,, \tag{112}\] where \(U\) is the unitary defined by \(\Delta_{i}:=\sum_{\alpha}U_{i,\alpha}F_{\alpha}\). In order to make the dissipator explicitly trace preserving, we highlight the terms containing the identity operator. This leads to the form: \[\mathcal{L}_{\mathcal{D}_{1}}(\rho)=\{K,\rho\}+\sum_{\begin{subarray}{c}i,j \neq(1,1)\\ \omega\end{subarray}}D^{\omega}_{i,j}\;X^{\omega}_{i}\,\rho\left(X^{\omega}_{ j}\right)^{\dagger}, \tag{113}\] where \(K:=\frac{D^{0}_{i,1}}{2d}\mathbbm{1}+\frac{1}{\sqrt{d}}\sum_{i}D^{0}_{i,1} \Delta_{i}\), and there is no additional term thanks to the reality of \(D^{0}_{i,j}\). It should be noticed that \(\mathcal{L}_{\mathcal{D}_{2}}\) only gives contributions out of the diagonal, so one can impose trace preservation on \(\mathcal{L}_{\mathcal{D}_{1}}\) alone, i.e., \(\mathcal{L}^{\dagger}_{\mathcal{D}_{1}}(\mathbbm{1})=0\). 
This directly implies that \(K=-\frac{1}{2}\left(\sum_{i,j\neq(1,1)}D^{\omega}_{i,j}(X^{\omega}_{j})^{ \dagger}X^{\omega}_{i}\right)\), so the dissipator can be rewritten in Lindblad form: \[\mathcal{L}_{\mathcal{D}_{1}}(\rho)=\sum_{\begin{subarray}{c}i,j\neq(1,1)\\ \omega\end{subarray}}D^{\omega}_{i,j}\;\left(X^{\omega}_{i}\,\rho\left(X^{ \omega}_{j}\right)^{\dagger}-\frac{1}{2}\left\{(X^{\omega}_{j})^{\dagger}X^{ \omega}_{i},\rho\right\}\right)\,. \tag{114}\] The property of the dissipator of being adjoint preserving implies that the matrix \(D^{\omega}_{i,j}\) is Hermitian. Then, there exists a unitary matrix \(V\), such that \(D^{\omega}_{i,j}=\sum_{m,n}V^{\omega}_{i,m}(\lambda^{\omega}_{m}\delta^{m}_{n })(V^{\omega})^{\dagger}_{n,j}\). We can then define the jump operators as \(A^{\omega}_{m}:=\sum_{i}X^{\omega}_{i}V^{\omega}_{i,m}\). This allows us to recast Eq. (114) in the form: \[\mathcal{L}_{\mathcal{D}_{1}}(\rho) =\sum_{\begin{subarray}{c}i,j\neq(1,1)\\ \omega\end{subarray}}\lambda^{\omega}_{m}\;V^{\omega}_{i,m}(V^{\omega})^{ \dagger}_{m,j}\left(X^{\omega}_{i}\,\rho\left(X^{\omega}_{j}\right)^{\dagger} -\frac{1}{2}\left\{(X^{\omega}_{j})^{\dagger}X^{\omega}_{i},\rho\right\}\right)= \tag{115}\] \[=\sum_{m,\omega}\;\lambda^{\omega}_{m}\;\left(A^{\omega}_{m}\, \rho\left(A^{\omega}_{m}\right)^{\dagger}-\frac{1}{2}\left\{(A^{\omega}_{m})^{ \dagger}A^{\omega}_{m},\rho\right\}\right)\,. \tag{116}\] We can now characterise the properties of the jump operators and of the rates in the previous equation. First, it should be noticed that since \(X^{\omega}_{i}\) are eigenoperators of the modular operator \(\mathbbm{L}_{\pi}\mathbbm{R}_{\pi}^{-1}\), the same holds for \(A^{\omega}_{m}\), as the unitary does not mix \(X^{\omega}_{i}\)s with different \(\omega\)s. Moreover, Eq. (112) implies that \(D^{\omega}_{\alpha,\beta}=e^{\omega}\,D^{-\omega}_{\delta_{\beta}(\omega), \gamma_{\alpha}(\omega)}\), where we used the same indices as in the equation. This relation shows that \(D^{-\omega}_{\delta_{\beta}(\omega),\gamma_{\alpha}(\omega)}\) can be diagonalised as \(e^{-\omega}\lambda^{\omega}_{m}\delta^{m}_{n}=\sum_{\alpha,\beta}(V^{\omega})^ {\dagger}_{m,\alpha}D^{-\omega}_{\delta_{\beta}(\omega),\gamma_{\alpha}(\omega )}(V^{\omega})_{\beta,n}\). This allows to deduce the following two facts: first, the spectrum of \(D^{\omega}_{\alpha,\beta}\) satisfies \(\lambda^{\omega}_{i}=e^{\omega}\,\lambda^{-\omega}_{i}\); second, since \((X^{\omega}_{\alpha})^{\dagger}=X^{-\omega}_{\gamma_{\alpha}(\omega)}\), it also holds that: \[(A^{\omega}_{i})^{\dagger}=\sum_{\alpha}\;X^{-\omega}_{\gamma_{\alpha}(\omega)}(V ^{\omega}_{\alpha,i})^{\dagger}=\sum_{\alpha}\;X^{-\omega}_{\gamma_{\alpha}( \omega)}V^{-\omega}_{\gamma_{\alpha}(\omega),i}=A^{-\omega}_{i}\,. \tag{117}\] Thus, the dissipator \(\mathcal{L}_{\mathcal{D}_{1}}\) satisfies the same conditions of Def. 2, namely: 1. \((A^{\omega}_{i})^{\dagger}=A^{-\omega}_{i}\); 2. \(\pi\,A^{\omega}_{i}\,\pi^{-1}=e^{\omega}\,A^{\omega}_{i}\) ; 3. \(\lambda^{\omega}_{i}=e^{\omega}\,\lambda^{-\omega}_{i}\). Since \(\mathcal{L}_{\mathcal{D}_{1}}\) is the only component of the dissipator if one uses Def. 1, this proves the equivalence between this notion of detailed balance and the structural characterisation in Def. 2. We can now pass to characterise \(\mathcal{L}_{\mathcal{D}_{2}}\). Thanks to Eq. 
(109) we can rewrite it as:

\[\mathcal{L}_{\mathcal{D}_{2}}(\rho)=\sum_{\begin{subarray}{c}\alpha,\omega_{1}\\ \omega_{1}\neq 0\end{subarray}}\left(\mathcal{L}_{\mathcal{D}}\right)^{\gamma_{\alpha}(\omega+\omega_{1})|\alpha}_{\delta_{\alpha}(\omega)|\beta}(F^{\omega}_{\alpha}\,\rho\,(F^{\omega}_{\beta_{\alpha}(\omega_{1})})^{\dagger})^{T}=\sum_{\begin{subarray}{c}\alpha\neq\beta,\\ \omega\end{subarray}}\left(\mathcal{L}_{\mathcal{D}}\right)^{\gamma_{\beta}(\omega)|\alpha}_{\delta_{\alpha}(\omega)|\beta}F^{\omega}_{\beta}\,\rho^{T}\,(F^{\omega}_{\alpha})^{\dagger}=\tag{118}\]
\[=\sum_{\begin{subarray}{c}\alpha\neq\beta,\\ \omega\end{subarray}}\left(\mathcal{L}_{\mathcal{D}}\right)^{\delta_{\beta}(\omega)|\alpha}_{\gamma_{\alpha}(\omega)|\beta}F^{\omega}_{\beta}\,\rho^{T}\,(F^{\omega}_{\alpha})^{\dagger}\,,\tag{119}\]

where we eliminated the dependence on \(\omega_{1}\) by enforcing the constraint \(\alpha\neq\beta\). Finally, in the last line we exchanged the dummy indices \(\gamma\) and \(\delta\) to highlight the analogy with the other part of the Lindbladian. Indeed, define the matrix \(\widetilde{D}^{\omega}_{\alpha,\beta}:=(\mathcal{L}_{\mathcal{D}})^{\delta_{\beta}(\omega)|\alpha}_{\gamma_{\alpha}(\omega)|\beta}\) for any \(\alpha\neq\beta\), and zero on the diagonal. It is interesting to compare it to the off-diagonal elements of \(D^{\omega}_{\alpha,\beta}=(\mathcal{L}_{\mathcal{D}})^{\gamma_{\alpha}(\omega)|\alpha}_{\delta_{\beta}(\omega)|\beta}\): as can be seen, the two are related by the exchange \(\gamma_{\alpha}(\omega)\leftrightarrow\delta_{\beta}(\omega)\). Moreover, thanks to Eq. (100) it also holds that:

\[\widetilde{D}^{\omega}_{\alpha,\beta}=e^{\omega}\,\widetilde{D}^{-\omega}_{\delta_{\beta}(\omega),\gamma_{\alpha}(\omega)}\,,\tag{101}\]

which shows the analogy with \(D^{\omega}_{\alpha,\beta}\) even further. Using Eq. (100) it also follows that \(\widetilde{D}^{\omega}_{\alpha,\beta}\) is Hermitian, so there exists a unitary matrix \(W\) such that \(\widetilde{D}^{\omega}_{\alpha,\beta}=\sum_{m,n}W^{\omega}_{\alpha,m}(\mu^{\omega}_{m}\delta^{m}_{n})(W^{\omega})^{\dagger}_{n,\beta}\). We can then define the jump operators \(B^{\omega}_{m}:=\sum_{\alpha}(W^{\omega}_{m,\alpha})^{\dagger}F^{\omega}_{\alpha}\). This allows us to rewrite \(\mathcal{L}_{\mathcal{D}_{2}}\) as:

\[\mathcal{L}_{\mathcal{D}_{2}}(\rho)=\sum_{\begin{subarray}{c}\alpha\neq\beta,\,m,n\\ \omega\end{subarray}}\,W^{\omega}_{\alpha,m}(\mu^{\omega}_{m}\delta^{m}_{n})(W^{\omega})^{\dagger}_{n,\beta}\,F^{\omega}_{\beta}\,\rho^{T}\,(F^{\omega}_{\alpha})^{\dagger}=\sum_{m,\omega}\;\mu^{\omega}_{m}\,B^{\omega}_{m}\,\rho^{T}(B^{\omega}_{m})^{\dagger}\,.\tag{102}\]

In analogy with the previous case it also holds that \(\mu^{\omega}_{i}=e^{\omega}\mu^{-\omega}_{i}\) and \((B^{\omega}_{m})^{\dagger}=B^{-\omega}_{m}\), together with the fact that \(B^{\omega}_{m}\) is an eigenoperator of \(\mathbb{L}_{\pi}\mathbb{R}_{\pi}^{-1}\). Still, there is one crucial difference with \(\mathcal{L}_{\mathcal{D}_{1}}\): since all the diagonal elements of \(\widetilde{D}^{\omega}_{\alpha,\beta}\) are zero, this matrix is traceless, meaning that the sum of its eigenvalues is also zero. Hence, whereas \(\lambda^{\omega}_{i}\geq 0\) for all \(i\) and \(\omega\), we have the extra constraint \(\sum_{i}\mu^{\omega}_{i}=0\), implying the negativity of some of the \(\mu^{\omega}_{i}\). Putting everything together, we finally obtain the characterisation in Eq. (146).

## Appendix D Derivations from Sec. IV.2
We report here the explicit computations that were omitted in Sec. IV.2. We start by discussing how the defining properties of the standard monotone functions are reflected on the Fisher information operators.

First, the property \(f(1)=1\) implies that \(\mathbb{J}_{f}\big{|}_{\pi}\) reduces to multiplication by \(\pi\) for commuting operators, as was shown in Eq. (25). We repeat the derivation here for completeness: it should be noticed that the modular operator acts trivially on the commutant of \(\pi\), that is, \(\mathbb{L}_{\pi}\mathbb{R}_{\pi}^{-1}[A]=A\) for \([\pi,A]=0\). For this reason \(f(\mathbb{L}_{\pi}\mathbb{R}_{\pi}^{-1})[A]=f(\mathbb{I})[A]=A\), where in the last step we used the fact that \(f(1)=1\). Hence, if \([\pi,A]=0\), then \(\mathbb{J}_{f}\big{|}_{\pi}[A]=\mathbb{R}_{\pi}[A]=A\pi\).

We can now pass to the second condition that standard monotone functions satisfy, namely \(f(x)=xf(x^{-1})\). This enforces \(\mathbb{J}_{f}\) to be adjoint preserving. Indeed, if \(A=A^{\dagger}\), it follows that:

\[(\mathbb{J}_{f}\big{|}_{\pi}[A])^{\dagger}=(\mathbb{R}_{\pi}f(\mathbb{L}_{\pi}\mathbb{R}_{\pi}^{-1})[A])^{\dagger}=\mathbb{L}_{\pi}f(\mathbb{R}_{\pi}\mathbb{L}_{\pi}^{-1})[A^{\dagger}]=\mathbb{R}_{\pi}f(\mathbb{L}_{\pi}\mathbb{R}_{\pi}^{-1})[A]=\mathbb{J}_{f}\big{|}_{\pi}[A]\,,\tag{103}\]

where we used the fact that \((\mathbb{L}_{\pi}[A])^{\dagger}=\mathbb{R}_{\pi}[A^{\dagger}]\) (and similarly for \(\mathbb{R}_{\pi}\)), the commutation \([\mathbb{L}_{\pi},\mathbb{R}_{\pi}]=0\) and the property \(f(x)=xf(x^{-1})\).

Finally, from the monotonicity of \(f\) it follows that (see Thm. 2):

\[\operatorname{Tr}\left[\Phi(A)\,\mathbb{J}_{f}^{-1}\big{|}_{\Phi(\pi)}[\Phi(A)]\right]\leq\operatorname{Tr}\left[A\,\mathbb{J}_{f}^{-1}\big{|}_{\pi}[A]\right]\,,\tag{104}\]

for every self-adjoint operator \(A\) and CPTP map \(\Phi\). By polarisation this means that \(\Phi^{\dagger}\left(\mathbb{J}_{f}^{-1}\big{|}_{\Phi(\pi)}\right)\Phi\leq\mathbb{J}_{f}^{-1}\big{|}_{\pi}\), which is the first of the two conditions in Eq. (153). By multiplying the two sides of the inequality by \(\mathbb{J}_{f}^{1/2}\big{|}_{\pi}\) on the right and on the left we obtain:

\[\mathbb{J}_{f}^{1/2}\big{|}_{\pi}\Phi^{\dagger}\left(\mathbb{J}_{f}^{-1}\big{|}_{\Phi(\pi)}\right)\Phi\mathbb{J}_{f}^{1/2}\big{|}_{\pi}=\left(\mathbb{J}_{f}^{1/2}\big{|}_{\pi}\Phi^{\dagger}\mathbb{J}_{f}^{-1/2}\big{|}_{\Phi(\pi)}\right)\left(\mathbb{J}_{f}^{1/2}\big{|}_{\pi}\Phi^{\dagger}\mathbb{J}_{f}^{-1/2}\big{|}_{\Phi(\pi)}\right)^{\dagger}\leq\mathbb{I}\,.\tag{105}\]

This inequality is of the form \(AA^{\dagger}\leq\mathbb{I}\), for \(A=\mathbb{J}_{f}^{1/2}\big{|}_{\pi}\Phi^{\dagger}\mathbb{J}_{f}^{-1/2}\big{|}_{\Phi(\pi)}\). But then it follows that also \(A^{\dagger}A\leq\mathbb{I}\), which expands to:

\[\mathbb{J}_{f}^{-1/2}\big{|}_{\Phi(\pi)}\Phi\,\mathbb{J}_{f}\big{|}_{\pi}\Phi^{\dagger}\mathbb{J}_{f}^{-1/2}\big{|}_{\Phi(\pi)}\leq\mathbb{I}\,.\tag{106}\]

Multiplying both sides on the left and on the right by \(\mathbb{J}_{f}^{1/2}\big{|}_{\Phi(\pi)}\) we finally obtain \(\Phi\left(\mathbb{J}_{f}\big{|}_{\pi}\right)\Phi^{\dagger}\leq\mathbb{J}_{f}\big{|}_{\Phi(\pi)}\), which proves the second condition in Eq. (153).

Having settled the properties of \(\mathbb{J}_{f}\big{|}_{\pi}\), we present here the explicit derivation of Eq. (160) and Eq. (161).
First, it should be noticed that we can rewrite \(\mathbb{J}_{f_{\lambda}}\big{|}_{\pi}\) as:

\[\mathbb{J}_{f_{\lambda}}\big{|}_{\pi}[A]=\mathbb{R}_{\pi}\,f_{\lambda}(\mathbb{L}_{\pi}\mathbb{R}_{\pi}^{-1})[A]=\left(\frac{1+\lambda}{2}\right)\left(\frac{\mathbb{L}_{\pi}\mathbb{R}_{\pi}^{-1}}{(\mathbb{L}_{\pi}\mathbb{R}_{\pi}^{-1}+\lambda)}+\frac{\mathbb{L}_{\pi}\mathbb{R}_{\pi}^{-1}}{(1+\lambda\mathbb{L}_{\pi}\mathbb{R}_{\pi}^{-1})}\right)[A\pi]=\tag{165}\]
\[=\left(\frac{1+\lambda}{2}\right)\left(\frac{\mathbb{L}_{\pi}\mathbb{R}_{\pi}^{-1}}{\mathbb{R}_{\pi}^{-1}(\mathbb{L}_{\pi}+\lambda\mathbb{R}_{\pi})}+\frac{\mathbb{L}_{\pi}\mathbb{R}_{\pi}^{-1}}{\mathbb{R}_{\pi}^{-1}(\mathbb{R}_{\pi}+\lambda\mathbb{L}_{\pi})}\right)[A\pi]=\tag{166}\]
\[=\left(\frac{1+\lambda}{2}\right)\left((\mathbb{L}_{\pi}+\lambda\mathbb{R}_{\pi})^{-1}+(\mathbb{R}_{\pi}+\lambda\mathbb{L}_{\pi})^{-1}\right)[\pi A\pi]\,,\tag{167}\]

where we used the commutation between \(\mathbb{L}_{\pi}\) and \(\mathbb{R}_{\pi}\) to simplify the manipulations: in the second line the factors \(\mathbb{R}_{\pi}^{-1}\) cancel, and the remaining \(\mathbb{L}_{\pi}\) acts on \(A\pi\) to give \(\pi A\pi\). Integrating over \(\lambda\) then gives Eq. (160). A similar manipulation can be carried out for \(\mathbb{J}_{f}^{-1}\big{|}_{\pi}\), which gives:

\[\mathbb{J}_{f}^{-1}\big{|}_{\pi}[A]=\mathbb{J}_{Tf}\big{|}_{\pi^{-1}}[A]=\int_{0}^{1}\mathrm{d}\mu_{f}(\lambda)\;\left(\frac{1+\lambda}{2}\right)\left((\mathbb{L}_{\pi}^{-1}+\lambda\mathbb{R}_{\pi}^{-1})^{-1}+(\mathbb{R}_{\pi}^{-1}+\lambda\mathbb{L}_{\pi}^{-1})^{-1}\right)[\pi^{-1}A\pi^{-1}]=\tag{168}\]
\[=\int_{0}^{1}\mathrm{d}\mu_{f}(\lambda)\;\left(\frac{1+\lambda}{2}\right)\left(\frac{\mathbb{L}_{\pi}^{-1}\mathbb{R}_{\pi}^{-1}}{(\mathbb{L}_{\pi}^{-1}+\lambda\mathbb{R}_{\pi}^{-1})}+\frac{\mathbb{L}_{\pi}^{-1}\mathbb{R}_{\pi}^{-1}}{(\mathbb{R}_{\pi}^{-1}+\lambda\mathbb{L}_{\pi}^{-1})}\right)[A]=\tag{169}\]
\[=\int_{0}^{1}\mathrm{d}\mu_{f}(\lambda)\;\left(\frac{1+\lambda}{2}\right)\left((\mathbb{L}_{\pi}+\lambda\mathbb{R}_{\pi})^{-1}+(\mathbb{R}_{\pi}+\lambda\mathbb{L}_{\pi})^{-1}\right)[A]\,,\tag{170}\]

which proves Eq. (161).

## Appendix E Ordering of symmetrised contrast functions

In this section we prove that symmetrised contrast functions satisfy the ordering:

\[H_{f_{B}}^{\mathrm{symm}}(\rho||\sigma)\leq H_{g}^{\mathrm{symm}}(\rho||\sigma)\leq H_{f_{H}}^{\mathrm{symm}}(\rho||\sigma)\,.\tag{171}\]

The two main ingredients we use are the integral expression in Eq. (12), namely:

\[H_{g}^{\mathrm{symm}}(\rho||\sigma)=\frac{1}{2}\int_{0}^{1}\mathrm{d}N_{g}(s)\;\mathrm{Tr}\left[(\rho-\sigma)\left((\mathbb{L}_{\sigma}+s\mathbb{R}_{\rho})^{-1}+(\mathbb{L}_{\rho}+s\mathbb{R}_{\sigma})^{-1}\right)[(\rho-\sigma)]\right]\tag{172}\]

and the following Lemma:

**Lemma 3**.: _For any two full-rank states \(\rho\) and \(\sigma\) and any \(s\in[0,1]\) the following chain of operator inequalities holds:_

\[2\left(\mathbb{L}_{\sigma}+\mathbb{R}_{\rho}\right)^{-1}\leq\left(\frac{1+s}{2}\right)\left((\mathbb{L}_{\sigma}+s\mathbb{R}_{\rho})^{-1}+(\mathbb{R}_{\rho}+s\mathbb{L}_{\sigma})^{-1}\right)\leq\frac{1}{2}\left(\mathbb{L}_{\sigma}^{-1}+\mathbb{R}_{\rho}^{-1}\right).\tag{173}\]

Proof.: The three operators in Eq.
(173) are all diagonal in the basis given by \(\{\left|\sigma_{j}\right\rangle\!\left\langle\rho_{i}\right|\}\), as can be readily verified from the equalities:

\[2\left(\mathbb{L}_{\sigma}+\mathbb{R}_{\rho}\right)^{-1}\!\left[\left|\sigma_{j}\right\rangle\!\left\langle\rho_{i}\right|\right]=\frac{2}{\sigma_{j}+\rho_{i}}\left|\sigma_{j}\right\rangle\!\left\langle\rho_{i}\right|\,;\tag{174}\]
\[\left(\frac{1+s}{2}\right)\left((\mathbb{L}_{\sigma}+s\mathbb{R}_{\rho})^{-1}+(\mathbb{R}_{\rho}+s\mathbb{L}_{\sigma})^{-1}\right)\left[\left|\sigma_{j}\right\rangle\!\left\langle\rho_{i}\right|\right]=\left(\frac{1+s}{2}\right)\left(\frac{1}{\sigma_{j}+s\rho_{i}}+\frac{1}{s\,\sigma_{j}+\rho_{i}}\right)\left|\sigma_{j}\right\rangle\!\left\langle\rho_{i}\right|\,;\tag{175}\]
\[\frac{1}{2}\left(\mathbb{L}_{\sigma}^{-1}+\mathbb{R}_{\rho}^{-1}\right)\!\left[\left|\sigma_{j}\right\rangle\!\left\langle\rho_{i}\right|\right]=\frac{1}{2}\,\left(\frac{1}{\sigma_{j}}+\frac{1}{\rho_{i}}\right)\left|\sigma_{j}\right\rangle\!\left\langle\rho_{i}\right|\,.\tag{176}\]

Then the claim follows from the chain of inequalities:

\[\frac{2}{1+x}\leq\left(\frac{1+s}{2}\right)\left(\frac{1}{1+s\,x}+\frac{1}{s+x}\right)\leq\frac{1}{2}+\frac{1}{2x}\,,\tag{177}\]

which holds for any positive \(x\) and \(s\in[0,1]\). Moreover, the function in the middle is monotonically decreasing in \(s\).

We are now ready to prove the claim. Indeed, starting from Eq. (144), and using Eq. (164) we have:

\[H_{g}^{\text{symm}}(\rho||\sigma)=\frac{1}{2}\int_{0}^{1}\text{d}N_{g}(s)\;\text{Tr}\left[(\rho-\sigma)\left((\mathbb{L}_{\sigma}+s\mathbb{R}_{\rho})^{-1}+(\mathbb{L}_{\rho}+s\mathbb{R}_{\sigma})^{-1}\right)[(\rho-\sigma)]\right]=\tag{145}\]
\[=\frac{1}{2}\int_{0}^{1}\text{d}\mu_{f}(s)\;\left(\frac{1+s}{2}\right)\text{Tr}\left[(\rho-\sigma)\left((\mathbb{L}_{\sigma}+s\mathbb{R}_{\rho})^{-1}+(\mathbb{R}_{\rho}+s\mathbb{L}_{\sigma})^{-1}\right)[(\rho-\sigma)]\right]\,,\tag{146}\]

where in the second line we took the adjoint of the second term in the trace. Since \(\mathrm{d}\mu_{f}(s)\) is a probability distribution and thanks to Lemma 3, any contrast function is smaller than the one corresponding to \(\mathrm{d}\mu_{f_{H}}(s)=\delta(s)\,\mathrm{d}s\) and larger than the one corresponding to \(\mathrm{d}\mu_{f_{B}}(s)=\delta(s-1)\,\mathrm{d}s\). Hence, the ordering in Eq. (171) holds for any symmetrised contrast function.

## Appendix F Integral expression of \((\mathbb{L}_{\sigma}+\mathbb{R}_{\rho})^{-1}\)

Consider the equation:

\[(\mathbb{L}_{\sigma}+\mathbb{R}_{\rho})[A]=\sigma A+A\rho=X\,.\tag{147}\]

One can implicitly solve for \(A\) as \(A=(\mathbb{L}_{\sigma}+\mathbb{R}_{\rho})^{-1}[X]\). In this section, we prove that this solution can be rewritten as:

\[A=\int_{0}^{\infty}\text{d}t\;\,e^{-t\sigma}\,X\,e^{-t\rho}\,.\tag{148}\]

To verify this, it is sufficient to show that applying \((\mathbb{L}_{\sigma}+\mathbb{R}_{\rho})\) to the right-hand side of Eq. (148) gives back \(X\).
Indeed, this can be straightforwardly verified as: \[(\mathbb{L}_{\sigma}+\mathbb{R}_{\rho})\int_{0}^{\infty}\text{d }t\;\,e^{-t\sigma}\,X\,e^{-t\rho} =\int_{0}^{\infty}\text{d}t\;\,\left(\sigma\,e^{-t\sigma}\,X\,e^ {-t\rho}+e^{-t\sigma}\,X\,e^{-t\rho}\,\rho\right)= \tag{149}\] \[=-\int_{0}^{\infty}\text{d}t\;\,\left(\frac{\text{d}}{\text{d}t }\,e^{-t\sigma}\,X\,e^{-t\rho}\right)=\left(e^{-t\sigma}\,X\,e^{-t\rho}\right) \bigg{|}_{t=\infty}^{t=0}=X\,, \tag{150}\] which proves the claim.
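As a numerical aside (ours, not part of the original derivation), the integral in Eq. (148) can be evaluated in closed form in the eigenbases of \(\sigma\) and \(\rho\), since \(\int_{0}^{\infty}e^{-t(\sigma_{i}+\rho_{j})}\,\mathrm{d}t=1/(\sigma_{i}+\rho_{j})\), and cross-checked against an independent Sylvester-equation solver; NumPy and SciPy are assumed:

```python
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(0)
d = 4

def random_full_rank_state(d):
    # Random positive-definite matrix with unit trace (a full-rank "state").
    M = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    P = M @ M.conj().T + np.eye(d)
    return P / np.trace(P).real

sigma, rho = random_full_rank_state(d), random_full_rank_state(d)
X = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))

# Closed form of A = \int_0^\infty e^{-t sigma} X e^{-t rho} dt:
# in the eigenbases of sigma and rho the integral gives 1/(s_i + r_j).
s, U = np.linalg.eigh(sigma)
r, V = np.linalg.eigh(rho)
X_tilde = U.conj().T @ X @ V
A = U @ (X_tilde / (s[:, None] + r[None, :])) @ V.conj().T

# Check that (L_sigma + R_rho)[A] = sigma A + A rho reproduces X,
# and compare with SciPy's Sylvester solver (a x + x b = q).
print(np.allclose(sigma @ A + A @ rho, X))             # True
print(np.allclose(A, solve_sylvester(sigma, rho, X)))  # True
```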
2304.05651
Study on discrete degenerate Bell distributions with two parameters
Recently, Freud-Rodriguez proposed a new counting process which is called the Bell-Touchard process and based on the Bell-Touchard probability distribution. This process was developed to solve the problem of rare events hypothesis which is one of the limitations of the Poisson process. In this paper, we consider the discrete degenerate Bell distributions and the degenerate Bell process which are 'degenerate versions' of the Bell-Touchard probability distributions and the Bell-Touchard process, respectively. We investigate several properties of the degenerate Bell distribution. We introduce the degenerate Bell process by giving two equivalent definitions and show one method of constructing a new infinite family of degenerate Bell process out of a given infinite family of degenerate Bell process.
Taekyun Kim, Dae San Kim, Hye Kyung Kim
2023-04-12T07:04:39Z
http://arxiv.org/abs/2304.05651v1
# Study on discrete degenerate Bell distributions with two parameters ###### Abstract. Recently, Freud-Rodriguez proposed a new counting process which is called the Bell-Touchard process and based on the Bell-Touchard probability distribution. This process was developed to solve the problem of rare events hypothesis which is one of the limitations of the Poisson process. In this paper, we consider the discrete degenerate Bell distributions and the degenerate Bell process which are 'degenerate versions' of the Bell-Touchard probability distributions and the Bell-Touchard process, respectively. We investigate several properties of the degenerate Bell distribution. We introduce the degenerate Bell process by giving two equivalent definitions and show one method of constructing a new infinite family of degenerate Bell process out of a given infinite family of degenerate Bell process.

Key words and phrases: degenerate Bell distribution; degenerate Bell process; counting process; Poisson process

2020 Mathematics Subject Classification: 05A15; 11B73; 60G51

\({}^{*}\) Corresponding author

## 1. Introduction

We have witnessed in recent years that studying various degenerate versions of some special polynomials and numbers yields many interesting and fruitful results. This exploration of degenerate versions started from the pioneering work of Carlitz on the degenerate Bernoulli and degenerate Euler polynomials. It turns out that the Bell-Touchard distribution with parameters \((\alpha,\theta)\) can be used quite effectively for modeling count data (see [2]). In [5], Freud and Rodriguez defined the Bell-Touchard process, which is based on the Bell-Touchard probability distribution. It was introduced to contribute to the formulation of mathematical models where the rare events hypothesis is not suitable.

The aim of this paper is to study the discrete degenerate Bell distributions and the degenerate Bell process which are degenerate versions of the Bell-Touchard probability distributions and the Bell-Touchard process, respectively. We investigate several properties of the degenerate Bell-Touchard probability distributions. Then we give two equivalent definitions of the degenerate Bell-Touchard process. We show that the infinite family of degenerate Bell processes formed by the partial sums of an infinite family of degenerate Bell processes with the same \(\theta\) parameter is of the same nature.

In more detail, the outline of this paper is as follows. In Section 1, we recall the degenerate exponential functions, the degenerate Stirling numbers of the second kind and the degenerate Bell polynomials. Then we remind the reader of the counting process, independent increments and stationary increments. We also recall the Poisson process. Section 2 contains the main results of this paper. We derive the Dobinski-like formula for the degenerate Bell polynomials. We introduce the degenerate Bell distributions with two parameters \((\alpha,\theta)\). Then we find the probability generating function of such distributions in Theorem 1. We deduce the parameters for the sum of a finite number of independent degenerate Bell distributions with the same \(\theta\) parameter in Theorem 2. We determine the moment generating function of the degenerate Bell distributions in Theorem 3. In Theorem 4, we compute the expectation and the variance of the degenerate Bell distributions. We define the degenerate Bell process with parameters \((\alpha,\theta)\).
We introduce the degenerate Bell process \(\{N_{\lambda}(t)|t\geq 0\}\) with parameters \((\alpha,\theta)\) as a counting process satisfying conditions (i), (ii) and (iii). Here the condition (iii) specifies only the linear term of \(P\{N_{\lambda}(t)=k\}\), the probability of \(k\) events occurring in the interval \([0,t)\). However, we show that the conditions (i), (ii) and (iii) together determine \(P\{N_{\lambda}(t)=k\}\) completely in Theorem 5, so that we come up with another (equivalent) definition for the degenerate Bell process in Definition 6. In Theorem 7, we show that, for a given infinite family of degenerate Bell processes with the same \(\theta\) parameter, the infinite family of degenerate Bell processes formed by their partial sums is also of the same nature. Finally, we conclude our paper in Section 3.

In the rest of this section, we recall the necessary facts that are needed throughout this paper. For any nonzero \(\lambda\in\mathbb{R}\), the degenerate exponentials are defined by

\[e_{\lambda}^{x}(t)=(1+\lambda t)^{\frac{x}{\lambda}}=\sum_{n=0}^{\infty}\frac{(x)_{n,\lambda}}{n!}t^{n},\qquad e_{\lambda}(t)=e_{\lambda}^{1}(t),\tag{1}\]

where \((x)_{0,\lambda}=1\) and \((x)_{n,\lambda}=x(x-\lambda)(x-2\lambda)\cdots(x-(n-1)\lambda)\) for \(n\geq 1\).

A stochastic process \(\{N(t)|t\geq 0\}\) is said to be a _counting process_ if \(N(t)\) represents the total number of 'events' that occur by time \(t\). From its definition, we note that a counting process \(N(t)\) must satisfy (see [1, 2, 5, 6, 12]):

1. \(N(t)\geq 0\),
2. \(N(t)\) is integer valued,
3. if \(s<t\), then \(N(s)\leq N(t)\),
4. for \(s<t\), \(N(t)-N(s)\) equals the number of events that occur in the interval \((s,\,t]\).

A counting process is said to _possess independent increments_ if the numbers of events that occur in disjoint time intervals are independent. A counting process is said to _possess stationary increments_ if the distribution of the number of events that occur in any interval of time depends only on the length of the time interval. The counting process \(\{N(t)|\,t\geq 0\}\) is said to be a _Poisson process having rate \(\alpha\)\((\alpha>0)\)_ if

1. \(N(0)=0\),
2. the process has independent increments,
3. the number of events in any interval of length \(t\) is Poisson distributed with mean \(\alpha t\), (see [1]).

That is, for all \(s,\ t\geq 0\),

\[P\{N(t+s)-N(s)=n\}=e^{-\alpha t}\frac{(\alpha t)^{n}}{n!}.\]

In this paper, we introduce a new degenerate Bell discrete distribution with two parameters and propose a new counting process based on the degenerate Bell probability distribution, naming it the degenerate Bell process.

## 2. Discrete degenerate Bell distributions with two parameters

In this section, we assume that \(\lambda\in(0,1]\). From (7), we note that

\[\sum_{n=0}^{\infty}\phi_{n,\lambda}(x)\frac{t^{n}}{n!}=e^{x(e_{\lambda}(t)-1)}=e^{-x}e^{xe_{\lambda}(t)}=e^{-x}\sum_{k=0}^{\infty}\frac{x^{k}}{k!}e_{\lambda}^{k}(t)=\sum_{n=0}^{\infty}\left(e^{-x}\sum_{k=0}^{\infty}\frac{(k)_{n,\lambda}}{k!}x^{k}\right)\frac{t^{n}}{n!}.\tag{10}\]

Comparing the coefficients on both sides of (10), we get the Dobinski-like formula:

\[\phi_{n,\lambda}\left(x\right)=e^{-x}\sum_{k=0}^{\infty}\frac{(k)_{n,\lambda}}{k!}x^{k},\quad(n\geq 0).\tag{11}\]

In particular, for \(x=1\), we have

\[\phi_{n,\lambda}=\phi_{n,\lambda}\left(1\right)=\frac{1}{e}\sum_{k=0}^{\infty}\frac{(k)_{n,\lambda}}{k!},\quad(n\geq 0),\tag{12}\]

which are called the degenerate Bell numbers.
Now, we consider the degenerate Bell random variable with parameters \(\alpha\) and \(\theta\). A discrete random variable \(X\) has a _degenerate Bell distribution_ with parameters \((\alpha,\theta)\in\mathbb{R}^{2}\) if its probability mass function is given by

\[p(k)=P\{X=k\}=e^{-\alpha(e_{\lambda}(\theta)-1)}\frac{\theta^{k}}{k!}\phi_{k,\lambda}(\alpha),\quad(k\geq 0),\tag{13}\]

which is denoted by \(X\sim DB_{\lambda}(\alpha,\theta)\). Note that

\[\sum_{k=0}^{\infty}p(k)=\sum_{k=0}^{\infty}P\{X=k\}=e^{-\alpha(e_{\lambda}(\theta)-1)}\sum_{k=0}^{\infty}\frac{\phi_{k,\lambda}(\alpha)}{k!}\theta^{k}=e^{-\alpha(e_{\lambda}(\theta)-1)}e^{\alpha(e_{\lambda}(\theta)-1)}=1.\]

Let \(X\sim DB_{\lambda}(\alpha,\theta)\). Then the probability generating function \(G_{X}(t)\) of \(X\) is given by

\[G_{X}(t)=E[t^{X}]=\sum_{n=0}^{\infty}P\{X=n\}t^{n}=e^{-\alpha(e_{\lambda}(\theta)-1)}\sum_{n=0}^{\infty}\frac{(\theta t)^{n}}{n!}\phi_{n,\lambda}(\alpha)=e^{-\alpha(e_{\lambda}(\theta)-1)}e^{\alpha(e_{\lambda}(\theta t)-1)}=e^{\alpha(e_{\lambda}(\theta t)-e_{\lambda}(\theta))}.\tag{14}\]

Therefore, by (14), we obtain the following theorem.

**Theorem 1**.: _Let \(X\sim DB_{\lambda}(\alpha,\theta)\). Then the probability generating function \(G_{X}(t)\) of \(X\) is given by_

\[G_{X}(t)=e^{\alpha(e_{\lambda}(\theta t)-e_{\lambda}(\theta))}.\]

Let \(\{X_{i}\}_{i=1}^{n}\) be a sequence of independent random variables with \(X_{i}\sim DB_{\lambda}(\alpha_{i},\theta)\), and let \(Y=\sum_{i=1}^{n}X_{i}\). Then we have

\[G_{Y}(t)=E[t^{Y}]=E[t^{\sum_{i=1}^{n}X_{i}}]=\Pi_{i=1}^{n}E[t^{X_{i}}]=\Pi_{i=1}^{n}e^{\alpha_{i}(e_{\lambda}(\theta t)-e_{\lambda}(\theta))}=e^{\sum_{i=1}^{n}\alpha_{i}(e_{\lambda}(\theta t)-e_{\lambda}(\theta))}.\tag{15}\]

Therefore, by (14) and (15), we obtain the following theorem.

**Theorem 2**.: _Let \(\{X_{i}\}_{i=1}^{n}\) be a sequence of independent random variables with \(X_{i}\sim DB_{\lambda}(\alpha_{i},\theta)\). Then we have_

\[\sum_{i=1}^{n}X_{i}\sim DB_{\lambda}\Big{(}\sum_{i=1}^{n}\alpha_{i},\theta\Big{)}.\]

Let \(X\) be a discrete random variable with probability mass function given by \(p(k)=P\{X=k\}\). Then the moments of \(X\) are defined by

\[E[X^{n}]=\sum_{k=0}^{\infty}k^{n}p(k)=\sum_{k=0}^{\infty}k^{n}P\{X=k\},\quad(n\geq 0).\]

The moment generating function of \(X\) is given by

\[F_{X}(t)=E[e^{Xt}]=\sum_{n=0}^{\infty}E[X^{n}]\frac{t^{n}}{n!}=\sum_{n=0}^{\infty}e^{nt}P\{X=n\}.\]

Let \(X\sim DB_{\lambda}(\alpha,\theta)\). Then, by (7) and (13), the moment generating function of \(X\) is given by

\[F_{X}(t)=E[e^{Xt}]=\sum_{n=0}^{\infty}e^{nt}P\{X=n\}=e^{-\alpha(e_{\lambda}(\theta)-1)}\sum_{n=0}^{\infty}e^{nt}\frac{\theta^{n}\phi_{n,\lambda}(\alpha)}{n!}=e^{-\alpha(e_{\lambda}(\theta)-1)}e^{\alpha(e_{\lambda}(e^{t}\theta)-1)}=e^{\alpha(e_{\lambda}(e^{t}\theta)-e_{\lambda}(\theta))}.\tag{16}\]

Therefore, by (16), we obtain the following theorem.

**Theorem 3**.: _For \(X\sim DB_{\lambda}(\alpha,\theta)\), let \(F_{X}(t)=\sum_{n=0}^{\infty}E[X^{n}]\frac{t^{n}}{n!}\) be the moment generating function of \(X\). Then we have_

\[F_{X}(t)=e^{\alpha(e_{\lambda}(e^{t}\theta)-e_{\lambda}(\theta))}.\]

Let \(X\sim DB_{\lambda}(\alpha,\theta)\).
Then we have

\[E[X]=\sum_{k=0}^{\infty}kP\{X=k\}=\sum_{k=0}^{\infty}k\cdot e^{-\alpha(e_{\lambda}(\theta)-1)}\frac{\theta^{k}}{k!}\phi_{k,\lambda}(\alpha)=e^{-\alpha(e_{\lambda}(\theta)-1)}\theta\frac{\partial}{\partial\theta}\sum_{k=0}^{\infty}\frac{\theta^{k}}{k!}\phi_{k,\lambda}(\alpha)=e^{-\alpha(e_{\lambda}(\theta)-1)}\theta\frac{\partial}{\partial\theta}\Big{(}e^{\alpha(e_{\lambda}(\theta)-1)}\Big{)}=e^{-\alpha(e_{\lambda}(\theta)-1)}\,\theta\alpha e_{\lambda}^{1-\lambda}(\theta)\,e^{\alpha(e_{\lambda}(\theta)-1)}=\theta\alpha e_{\lambda}^{1-\lambda}(\theta),\tag{17}\]

and

\[E[X^{2}]=\sum_{k=0}^{\infty}k^{2}e^{-\alpha(e_{\lambda}(\theta)-1)}\frac{\theta^{k}}{k!}\phi_{k,\lambda}(\alpha)=e^{-\alpha(e_{\lambda}(\theta)-1)}\Big{(}\theta\frac{\partial}{\partial\theta}\Big{)}^{2}e^{\alpha(e_{\lambda}(\theta)-1)}=e^{-\alpha(e_{\lambda}(\theta)-1)}\,\theta\frac{\partial}{\partial\theta}\Big{(}\theta\alpha e_{\lambda}^{1-\lambda}(\theta)\,e^{\alpha(e_{\lambda}(\theta)-1)}\Big{)}=\theta\alpha\big{(}1+\theta(1-\lambda)e_{\lambda}^{-\lambda}(\theta)\big{)}e_{\lambda}^{1-\lambda}(\theta)+\theta^{2}\alpha^{2}e_{\lambda}^{2(1-\lambda)}(\theta).\tag{18}\]

From (17) and (18), we have

\[\mathrm{Var}(X)=E[X^{2}]-(E[X])^{2}=\theta\alpha\big{(}1+\theta(1-\lambda)e_{\lambda}^{-\lambda}(\theta)\big{)}e_{\lambda}^{1-\lambda}(\theta)+\theta^{2}\alpha^{2}e_{\lambda}^{2(1-\lambda)}(\theta)-\theta^{2}\alpha^{2}e_{\lambda}^{2(1-\lambda)}(\theta)=\theta\alpha\big{(}1+\theta(1-\lambda)e_{\lambda}^{-\lambda}(\theta)\big{)}e_{\lambda}^{1-\lambda}(\theta).\tag{19}\]

Therefore, by (19), we obtain the following theorem.

**Theorem 4**.: _Let \(X\sim DB_{\lambda}(\alpha,\theta)\). Then we have_

\[E\left[X\right]=\theta\alpha e_{\lambda}^{1-\lambda}(\theta),\ \ \text{and}\ \ \ \mathrm{Var}(X)=\theta\alpha\big{(}1+\theta(1-\lambda)e_{\lambda}^{-\lambda}(\theta)\big{)}e_{\lambda}^{1-\lambda}(\theta).\]

A counting process \(\{N_{\lambda}(t)|t\geq 0\}\) is said to be a _degenerate Bell process_ with parameters \((\alpha,\theta)\in\mathbb{R}_{+}^{2}\) if the following assumptions hold:

* \(N_{\lambda}(0)=0\),
* \(\{N_{\lambda}(t)|t\geq 0\}\) has stationary and independent increments,
* \(P\{N_{\lambda}(t+s)-N_{\lambda}(t)=k\}=\alpha s\frac{(1)_{k,\lambda}}{k!}\theta^{k}+o(s)\), where \(k\in\mathbb{N}\) and \(s,\ t\geq 0\).

Let \(g(t)=E[\exp(-xN_{\lambda}(t))]\). Then

\[g(t+s)=E[\exp(-xN_{\lambda}(t+s))]=E[\exp(-x(N_{\lambda}(t+s)-N_{\lambda}(t)+N_{\lambda}(t)))]=E[\exp(-xN_{\lambda}(t))\exp(-x(N_{\lambda}(t+s)-N_{\lambda}(t)))]=E[\exp(-xN_{\lambda}(t))]\,E[\exp(-x(N_{\lambda}(t+s)-N_{\lambda}(t)))]=g(t)E[\exp(-xN_{\lambda}(s))],\tag{20}\]

where we used the independence and the stationarity of the increments. Now, we note that

\[\sum_{k=0}^{\infty}P\{N_{\lambda}(s)=k\}=1=P\{N_{\lambda}(s)=0\}+\sum_{k=1}^{\infty}P\{N_{\lambda}(s)=k\}.
\tag{21}\]

Thus, by (21), we get

\[P\{N_{\lambda}(s)=0\}=1-\sum_{k=1}^{\infty}P\{N_{\lambda}(s)=k\}=1-\sum_{k=1}^{\infty}\alpha s\frac{(1)_{k,\lambda}}{k!}\theta^{k}+o(s)=1-\alpha s(e_{\lambda}(\theta)-1)+o(s).\tag{22}\]

For \(N_{\lambda}(s)=k\ (k=0,1,2,\cdots)\), we have

\[E\left[\exp(-xN_{\lambda}(s))\right]=\sum_{k=0}^{\infty}e^{-xk}P\{N_{\lambda}(s)=k\}=P\{N_{\lambda}(s)=0\}+\sum_{k=1}^{\infty}e^{-xk}P\{N_{\lambda}(s)=k\}=1-\alpha s(e_{\lambda}(\theta)-1)+o(s)+\alpha s\sum_{k=1}^{\infty}e^{-xk}\frac{(1)_{k,\lambda}}{k!}\theta^{k}=1-\alpha s(e_{\lambda}(\theta)-1)+o(s)+\alpha s(e_{\lambda}(e^{-x}\theta)-1)=1+\alpha s(e_{\lambda}(e^{-x}\theta)-e_{\lambda}(\theta))+o(s).\tag{23}\]

From (20) and (23), we have

\[g(t+s)=g(t)E[\exp(-xN_{\lambda}(s))]=g(t)\big{(}1+\alpha s(e_{\lambda}(e^{-x}\theta)-e_{\lambda}(\theta))\big{)}+o(s).\tag{24}\]

Thus, by (24), we get

\[g(t+s)-g(t)=g(t)\,\alpha s\,(e_{\lambda}(e^{-x}\theta)-e_{\lambda}(\theta))+o(s).\tag{25}\]

From (25), we note that

\[\frac{dg(t)}{dt}=\lim_{s\to 0}\frac{g(t+s)-g(t)}{s}=g(t)\,\alpha(e_{\lambda}(e^{-x}\theta)-e_{\lambda}(\theta)).\tag{26}\]

Thus, by (26) and the initial condition \(g(0)=1\), we get

\[\log g(t)=\alpha t(e_{\lambda}(e^{-x}\theta)-e_{\lambda}(\theta)).\tag{27}\]

From (27), we note that

\[g(t)=\exp(\alpha t(e_{\lambda}(e^{-x}\theta)-e_{\lambda}(\theta))).\tag{28}\]

Thus, by (28), we get

\[E[\exp(-xN_{\lambda}(t))]=g(t)=\exp(\alpha t(e_{\lambda}(e^{-x}\theta)-e_{\lambda}(\theta))).\tag{29}\]

Therefore, by Theorem 3 and (29), we obtain the following theorem.

**Theorem 5**.: _If \(\{N_{\lambda}(t)|\,t\geq 0\}\) is a degenerate Bell process with parameters \((\alpha,\theta)\), then_

\[N_{\lambda}(t)\sim DB_{\lambda}(\alpha t,\theta),\ \ \text{for all}\ \ \ t\geq 0.\]

In view of Theorem 5, we may give another definition for the degenerate Bell process.

**Definition 6**.: _A counting process \(\{N_{\lambda}(t)|\,t\geq 0\}\) is called a degenerate Bell process with parameters \((\alpha,\theta)\) if the following assumptions hold:_

1. \(N_{\lambda}(0)=0\),
2. \(\{N_{\lambda}(t)|\,t\geq 0\}\) _has stationary and independent increments,_
3. _for all_ \(k\in\mathbb{N}\cup\{0\}\) _and_ \(t>0\), \(P\{N_{\lambda}(t)=k\}=e^{-\alpha t(e_{\lambda}(\theta)-1)}\frac{\theta^{k}}{k!}\phi_{k,\lambda}(\alpha t)\)_,_

where \(\phi_{k,\lambda}(x)\) are the degenerate Bell polynomials.

Let \(\{N_{i,\lambda}(t)|\,t\geq 0\}_{i\in\mathbb{N}}\) be a family of independent degenerate Bell processes with parameters \((\alpha_{i},\theta)_{i\in\mathbb{N}}\). Let \(\widetilde{N}_{n,\lambda}(t)=\sum_{i=1}^{n}N_{i,\lambda}(t)\), for all \(t\geq 0\), and \(\beta_{n}=\sum_{i=1}^{n}\alpha_{i}\). Recalling (8), we obtain

\[P[N_{1,\lambda}(t)+N_{2,\lambda}(t)=k]=\sum_{i=0}^{k}P[N_{1,\lambda}(t)=i]P[N_{2,\lambda}(t)=k-i]=\sum_{i=0}^{k}e^{-\alpha_{1}t(e_{\lambda}(\theta)-1)}\frac{\theta^{i}}{i!}\phi_{i,\lambda}(\alpha_{1}t)\,e^{-\alpha_{2}t(e_{\lambda}(\theta)-1)}\frac{\theta^{k-i}}{(k-i)!}\phi_{k-i,\lambda}(\alpha_{2}t)=e^{-(\alpha_{1}+\alpha_{2})t(e_{\lambda}(\theta)-1)}\frac{\theta^{k}}{k!}\sum_{i=0}^{k}\binom{k}{i}\phi_{i,\lambda}(\alpha_{1}t)\phi_{k-i,\lambda}(\alpha_{2}t)=e^{-(\alpha_{1}+\alpha_{2})t(e_{\lambda}(\theta)-1)}\frac{\theta^{k}}{k!}\phi_{k,\lambda}((\alpha_{1}+\alpha_{2})t).\tag{30}\]

From (30), we have

\[P[\widetilde{N}_{2,\lambda}(t)=k]=e^{-\beta_{2}t(e_{\lambda}(\theta)-1)}\frac{\theta^{k}}{k!}\phi_{k,\lambda}(\beta_{2}t).
\tag{31}\]

Proceeding by induction, assume that for some \(n\geq 2\) the probability mass function of \(\widetilde{N}_{n,\lambda}(t)\) is given by

\[P[\widetilde{N}_{n,\lambda}(t)=k]=e^{-\beta_{n}t(e_{\lambda}(\theta)-1)}\frac{\theta^{k}}{k!}\phi_{k,\lambda}(\beta_{n}t).\tag{32}\]

Then we obtain

\[P[\widetilde{N}_{n,\lambda}(t)+N_{(n+1),\lambda}(t)=k]=\sum_{i=0}^{k}P[\widetilde{N}_{n,\lambda}(t)=i]P[N_{(n+1),\lambda}(t)=k-i]=\sum_{i=0}^{k}e^{-\beta_{n}t(e_{\lambda}(\theta)-1)}\frac{\theta^{i}}{i!}\phi_{i,\lambda}(\beta_{n}t)\,e^{-\alpha_{n+1}t(e_{\lambda}(\theta)-1)}\frac{\theta^{k-i}}{(k-i)!}\phi_{k-i,\lambda}(\alpha_{n+1}t)=e^{-(\beta_{n}+\alpha_{n+1})t(e_{\lambda}(\theta)-1)}\frac{\theta^{k}}{k!}\phi_{k,\lambda}\big{(}(\beta_{n}+\alpha_{n+1})t\big{)}.\tag{33}\]

By (33), we get

\[P[\widetilde{N}_{n+1,\lambda}(t)=k]=e^{-\beta_{n+1}t\left(e_{\lambda}\left(\theta\right)-1\right)}\frac{\theta^{k}}{k!}\phi_{k,\lambda}\left(\beta_{n+1}t\right).\tag{34}\]

From (31), (33) and (34), by induction we have the following result.

**Theorem 7**.: _Let \(\{N_{i,\lambda}\left(t\right)|\,t\geq 0\}_{i\in\mathbb{N}}\) be a family of independent degenerate Bell processes with parameters \(\left(\alpha_{i},\theta\right)_{i\in\mathbb{N}}\). Let \(\widetilde{N}_{n,\lambda}\left(t\right)=\sum_{i=1}^{n}N_{i,\lambda}\left(t\right)\), for all \(t\geq 0\), and let \(\beta_{n}=\sum_{i=1}^{n}\alpha_{i}\). Then \(\{\widetilde{N}_{n,\lambda}\left(t\right)|\,t\geq 0\}_{n\in\mathbb{N}}\) is a family of degenerate Bell processes with parameters \(\left(\beta_{n},\theta\right)_{n\in\mathbb{N}}\)._

**Remark 8**.: _We note that for two Bell processes \(N_{1}(t)\) and \(N_{2}(t)\) with respective parameters \(\left(\alpha_{1},\theta_{1}\right)\) and \(\left(\alpha_{2},\theta_{2}\right)\), \(\theta_{1}\neq\theta_{2}\), the sum \(N_{1}(t)+N_{2}(t)\) is not a Bell process (see [6])._

## 3. Conclusion

In recent years, the exploration of degenerate versions has been carried out for many special numbers and polynomials. It is remarkable that this led to the discovery of the degenerate gamma functions, the degenerate umbral calculus and the degenerate \(q\)-umbral calculus. Here the central role is played by the degenerate exponentials in all of these quests (see (1)). In this paper, we considered the discrete degenerate Bell distributions and the degenerate Bell process, which are degenerate versions of the Bell-Touchard probability distributions and the Bell-Touchard process, respectively. Several properties were derived for the degenerate Bell distribution. The degenerate Bell process was introduced by giving two equivalent definitions. Then we showed one method of constructing a new infinite family of degenerate Bell processes out of a given infinite family of degenerate Bell processes. We would like to continue to explore degenerate versions and to find their applications to physics, science and engineering as well as to mathematics.

#### Availability of data and material

Not applicable.

#### Funding

The third author is supported by the Basic Science Research Program, the National Research Foundation of Korea, (NRF-2021R1F1A1050151).

#### Ethics approval and consent to participate

The authors declare that there is no ethical problem in the production of this paper.
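As a closing numerical sanity check on the results of Section 2 (our own sketch, plain Python assumed; `falling_deg`, `phi_deg` and `e_deg` are helper names we introduce here), the probability mass function of Eq. (13) can be verified to sum to one, and its mean can be compared against Theorem 4:

```python
from math import factorial, exp

def falling_deg(x, n, lam):
    # (x)_{n,lambda} = x(x - lam)...(x - (n-1)lam), with (x)_{0,lambda} = 1.
    out = 1.0
    for j in range(n):
        out *= x - j * lam
    return out

def phi_deg(n, x, lam, terms=120):
    # Degenerate Bell polynomial via the Dobinski-like formula, Eq. (11).
    return exp(-x) * sum(falling_deg(k, n, lam) * x**k / factorial(k) for k in range(terms))

def e_deg(x, t, lam):
    return (1.0 + lam * t) ** (x / lam)

alpha, theta, lam = 1.3, 0.4, 0.5
norm = exp(-alpha * (e_deg(1, theta, lam) - 1.0))
pmf = [norm * theta**k / factorial(k) * phi_deg(k, alpha, lam) for k in range(40)]

print(abs(sum(pmf) - 1.0) < 1e-8)  # the pmf of Eq. (13) sums to 1
mean = sum(k * p for k, p in enumerate(pmf))
print(abs(mean - theta * alpha * e_deg(1 - lam, theta, lam)) < 1e-8)  # E[X], Theorem 4
```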
2303.01490
Language Variety Identification with True Labels
Language identification is an important first step in many IR and NLP applications. Most publicly available language identification datasets, however, are compiled under the assumption that the gold label of each instance is determined by where texts are retrieved from. Research has shown that this is a problematic assumption, particularly in the case of very similar languages (e.g., Croatian and Serbian) and national language varieties (e.g., Brazilian and European Portuguese), where texts may contain no distinctive marker of the particular language or variety. To overcome this important limitation, this paper presents DSL True Labels (DSL-TL), the first human-annotated multilingual dataset for language variety identification. DSL-TL contains a total of 12,900 instances in Portuguese, split between European Portuguese and Brazilian Portuguese; Spanish, split between Argentine Spanish and Castilian Spanish; and English, split between American English and British English. We trained multiple models to discriminate between these language varieties, and we present the results in detail. The data and models presented in this paper provide a reliable benchmark toward the development of robust and fairer language variety identification systems. We make DSL-TL freely available to the research community.
Marcos Zampieri, Kai North, Tommi Jauhiainen, Mariano Felice, Neha Kumari, Nishant Nair, Yash Bangera
2023-03-02T18:51:58Z
http://arxiv.org/abs/2303.01490v1
# Language Variety Identification with True Labels ###### Abstract Language identification is an important first step in many IR and NLP applications. Most publicly available language identification datasets, however, are compiled under the assumption that the gold label of each instance is determined by where texts are retrieved from. Research has shown that this is a problematic assumption, particularly in the case of very similar languages (e.g., Croatian and Serbian) and national language varieties (e.g., Brazilian and European Portuguese), where texts may contain no distinctive marker of the particular language or variety. To overcome this important limitation, this paper presents DSL True Labels (DSL-TL), the first human-annotated multilingual dataset for language variety identification. DSL-TL contains a total of 12,900 instances in Portuguese, split between European Portuguese and Brazilian Portuguese; Spanish, split between Argentine Spanish and Castilian Spanish; and English, split between American English and British English. We trained multiple models to discriminate between these language varieties, and we present the results in detail. The data and models presented in this paper provide a reliable benchmark toward the development of robust and fairer language variety identification systems. We make DSL-TL freely available to the research community.

1George Mason University, USA, 2University of Helsinki, Finland, 3Cambridge University, UK, 4Rochester Institute of Technology, USA [email protected]

## 1 Introduction

Language identification is the task of automatically identifying the language of a given text or document (Jauhiainen et al., 2019). The task is a vital pre-processing step integrated into many IR and NLP applications. Language identification is commonly modeled as a supervised text classification task where the archetypal language identification system typically follows these four main steps (Lui, 2014):

1. Representation: selects a text representation (e.g., characters, words, or a combination of the two);
2. Language Modelling: derives a model from texts for each language;
3. Classification: defines a function that best represents the similarity between a text and each language model;
4. Prediction: computes the highest-scoring model to determine the language of the given text.

In the early 2000s, language identification was widely considered a solved task (McNamee, 2005), since character n-gram language models achieve perfect performance on discriminating between sets of dissimilar languages (e.g., Arabic, English, Finnish, and Japanese) in standard contemporary texts (e.g., newspaper texts). Renewed interest in the task has emerged in the last decade with more challenging scenarios of particular interest to IR applications. This includes identifying the language of very short non-standard texts from user-generated content (e.g., microblogs) (Tromp and Pechenizkiy, 2011) and web queries (Anand, 2014; Ceylan and Kim, 2009). Other challenges to state-of-the-art language identification systems arise from linguistic phenomena such as code-mixing and code-switching, where two or more languages are mixed in texts or social media posts (Solorio et al., 2014; Molina et al., 2016).

Discriminating between very similar languages, dialects, and national varieties of the same language is another important, challenging language identification scenario that has been addressed by several studies (Tiedemann and Ljubesic, 2012; Lui and Cook, 2013; Bouamor et al., 2019).
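To illustrate the four steps above (our own minimal sketch, not a system described in this paper; the training sentences are placeholders), a character-trigram identifier fits in a few lines of Python:

```python
from collections import Counter
import math

def trigrams(text):
    # Step 1, representation: overlapping character trigrams.
    t = f"  {text.lower()} "
    return [t[i:i + 3] for i in range(len(t) - 2)]

def train(corpus):
    # Step 2, language modelling: trigram counts per language.
    return {lang: Counter(g for s in sents for g in trigrams(s))
            for lang, sents in corpus.items()}

def score(models, text, lang):
    # Step 3, classification: add-one smoothed log-likelihood under one model.
    m = models[lang]
    total = sum(m.values()) + len(m)
    return sum(math.log((m[g] + 1) / total) for g in trigrams(text))

def identify(models, text):
    # Step 4, prediction: the highest-scoring language model wins.
    return max(models, key=lambda lang: score(models, text, lang))

corpus = {"en": ["this is a short english sentence"],
          "fi": ["tämä on lyhyt suomenkielinen lause"]}
models = train(corpus)
print(identify(models, "another english sentence"))  # en
```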
In this scenario, systems need to model fine distinctions between a set of closely-related languages (e.g., Bulgarian and Macedonian), dialects (e.g., the different dialects of Arabic), or national varieties of the same language (e.g., Brazilian and European Portuguese) to accurately discriminate between them. This challenge has been the main topic of the workshop series on NLP for Similar Languages, Varieties, and Dialects (VarDial) [1, 13, 14] and their associated benchmark competitions, which have been organized yearly since 2014. The VarDial competitions have been providing the community with multiple datasets containing a wide variety of languages and dialects, helping to establish important language identification benchmarks.

As discussed in Section 2, the main limitation of the datasets collected for VarDial and similar competitions is that the gold labels for each instance are not obtained through human annotation. The most widely used one, the DSL Corpus Collection (DSLCC) [15], for example, contains news texts retrieved from multiple newspaper websites, considering the domain of the website as a proxy for language variety. For example, all content retrieved from news websites hosted under country-specific domains such as .br and .pt is labeled as Brazilian and European Portuguese, respectively. While this is a straightforward assumption that results in a high number of accurate gold labels, it has proved to be problematic in cases of republication of articles in different countries, particularly for languages that are widely spoken throughout the world, most notably English [13]. Furthermore, multiple studies [12, 14] have evaluated native speakers' performance in identifying language varieties using the DSLCC, concluding that many instances do not include any marker that allows humans to discriminate between varieties.

To address this limitation, in this paper, we introduce DSL True Labels (DSL-TL), the first human-annotated language variety identification dataset. To the best of our knowledge, no manually annotated dataset with true labels is available for language variety identification or language identification in general, and ours fills this gap. We collect instances available in the DSLCC and in other news corpora and gather multiple human judgments for each instance through a crowdsourcing platform. Finally, we train and evaluate multiple machine-learning models on this new dataset. The contributions of this paper are the following:

1. A novel problem formulation for language variety identification and language identification in general.
2. The release of DSL-TL, the first human-annotated language identification dataset.1

Footnote 1: [https://github.com/LanguageTechnologyLab/DSL-TL](https://github.com/LanguageTechnologyLab/DSL-TL)

3. An evaluation of multiple language identification models on this new dataset.

The remainder of this paper is organized as follows. Section 2 discusses prior research in language variety identification, including the VarDial competitions and available datasets. Section 3 details the steps taken in the construction of the DSL-TL dataset, from data collection to annotation. Section 4 describes the language identification models used in our experiments, while Section 5 presents their results on the new DSL-TL dataset. Finally, Section 6 summarizes our research and discusses avenues for future work.
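For illustration, the domain-as-proxy assumption criticized above amounts to a heuristic like the following (our sketch; the `TLD_TO_VARIETY` mapping is hypothetical, not DSLCC code):

```python
from urllib.parse import urlparse

# Hypothetical mapping from country-code TLD to variety label.
TLD_TO_VARIETY = {"br": "pt-BR", "pt": "pt-PT", "ar": "es-AR", "es": "es-ES"}

def proxy_label(url):
    # Label a document by the TLD of the site it was crawled from --
    # exactly the assumption that breaks down for republished articles.
    tld = urlparse(url).netloc.rsplit(".", 1)[-1]
    return TLD_TO_VARIETY.get(tld)

print(proxy_label("https://www.publico.pt/some-article"))  # pt-PT
```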
## 2 Related Work

As discussed in a recent survey [1], several language identification studies have reported achieving near-perfect performances in a variety of scenarios. Ljubesic and Kranjcic (2014) trained a selection of traditional machine learning classifiers to discriminate between social media posts (tweets) written in four related languages: Bosnian, Croatian, Montenegrin, and Serbian. Their best model, a Gaussian naive Bayes (GNB) classifier, achieved an accuracy of 97.1%. Martadinata et al. (2016) used a Markov model to identify which extracts were taken from Wikipedia articles in Indonesian, Javanese, Sundanese, or Minangkabau. Their model achieved an accuracy of 95.75%. Li et al. (2018) trained a convolutional neural network (CNN) on multiple datasets containing 97 languages. Their model consistently achieved performance of over 95% accuracy when training was conducted across several datasets. Language variety identification systems that discriminate between varieties of the same language, however, achieve more varied performances, as discussed in the VarDial shared task reports [13, 14, 15]. Since 2014, the VarDial workshop has hosted several shared tasks for language variety identification, as discussed next.

### VarDial Shared Tasks

The Discriminating between Similar Languages (DSL) shared task at VarDial-2014 [23] saw eight teams produce systems for distinguishing between similar languages and language varieties of several language groups. The best-performing model used a probabilistic model similar to a Naive Bayes classifier combined with several SVMs. They reported an accuracy of 91% for differentiating between European and Brazilian Portuguese and an accuracy of 95.6% for Castilian and Argentine Spanish [11]. The same model achieved an accuracy of 52.2% for differentiating between British and American English [23]. DSL continued in VarDial-2015 and 2016 [23, 24]. Both iterations of this shared task expanded upon the original dataset. The 2015 edition added several additional languages and removed named entities to determine their influence on performance [23]. The 2016 edition included varieties of French and challenged 18 teams with Arabic dialect identification [25]. The highest-performing systems [24, 26, 27] achieved accuracies ranging from 49.7% to 51.2% when differentiating between Egyptian, Gulf, Levantine, Modern Standard, and Maghreb dialects. The two best-performing systems were an ensemble of SVMs or a single SVM trained on character and word-level n-grams [24, 25]. Since 2017, VarDial has continued to host shared tasks for identifying other language varieties [23, 25, 26, 27, 28, 29]. Performances on these shared tasks were consistent with those of 2014 to 2016, with SVMs and models trained on character and word-level n-grams often outperforming other approaches. An ensemble of SVMs or a single SVM trained on character n-grams achieved the highest performance for language variety identification for German in 2017 with an F1 of 0.662 [24], for Dutch and Flemish in 2018 with an F1 of 0.660 [28], and for Romanian in 2020 with an F1 of 0.787 [27]. Naive Bayes trained on character n-grams also reported the highest F1s of 0.908 for Chinese in 2019 [24], 0.777 for Romanian in 2021 [24], and 0.9 for Italian in 2022 [25]. Language identification is, therefore, far from being a solved task, with performances varying greatly between groups of dialects and language varieties. Dataset quality and the similarity between language varieties are responsible for such varied performances.
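The recurring winning recipe in these shared tasks, an SVM over combined character and word n-grams, can be sketched in scikit-learn as follows (our illustration, not any participant's actual system; the training sentences are placeholders):

```python
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

model = Pipeline([
    ("features", FeatureUnion([
        ("char", TfidfVectorizer(analyzer="char_wb", ngram_range=(1, 4))),
        ("word", TfidfVectorizer(analyzer="word", ngram_range=(1, 2))),
    ])),
    ("clf", LinearSVC()),
])

X = ["the lift was out of order", "the elevator was out of order"]
y = ["en-GB", "en-US"]
model.fit(X, y)
print(model.predict(["we took the lift upstairs"]))  # expected: ['en-GB']
```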
### Available Datasets

The datasets used in the VarDial shared tasks, as well as in other similar shared tasks [25, 26], contain thousands of sentences in groups of languages or dialects sampled mostly from local newspapers and social media. Examples include Portuguese, Spanish, and English [23, 24], Arabic [24], Chinese [23], Romanian [25], and Italian [1]. However, as discussed in the introduction, these datasets consist of instances assigned a ground truth label determined by where the text was published (e.g., UK, USA, etc.). Each sentence within these datasets is, therefore, either, for example, American or British English, and only one of these labels is considered correct according to the gold labels included in these datasets. The problem with formulating automatic language identification in this way is that many sentences do not necessarily belong to a single language variety [11]. This is true for varieties of English and varieties of other languages. The DSL dataset from 2014 [23], for example, contained instances with incorrect ground truth labels. Articles containing features characteristic of British English were published in American newspapers and vice-versa, resulting in mislabeled data. [11] went on to show that human annotators were unable to achieve competitive performances on the DSL 2014 and 2015 datasets [23]. On average, accuracies achieved by single human annotators were just over 50%, with performances varying between languages. Relying on the source of extracts or on an individual annotator to determine binary ground truth labels is, therefore, problematic. We address this limitation with DSL-TL by collecting the first human-annotated language identification dataset containing multiple annotations per instance.

## 3 The Dataset: DSL-TL

### Rationale and Motivation

The main limitation of the datasets used in the aforementioned benchmark competitions is that each instance (a sentence or a paragraph) contains only one ground truth label, which is assigned depending on the country where the text was published (e.g., UK, USA, etc.). Therefore, each sentence in the datasets is either, say, American or British English, and only one of the labels is considered correct when evaluating the language identification system. The problem with this task formulation is that, as demonstrated in previous research (Goutte et al., 2016), many sentences are simply impossible to identify because they may belong to multiple language varieties. For example, not all sentences published in a British newspaper contain features that are exclusive to British English, such as spelling conventions (e.g., analyse, neighbour) or lexical choices (e.g., trousers, rubbish), that would make it possible for an English native speaker to recognize a sentence as British. The same is true for other English varieties and varieties of other languages. Goutte et al. (2016) showed that native speakers perform very poorly in identifying the language variety when only one label needs to be assigned to each text. As human performance is often below chance in this task, it is unfair to expect that automatic systems will ever be able to achieve 100% performance when discriminating between national varieties of the same language. To cope with this important limitation, we introduce the use of true labels in language variety identification.
The true labels are designed to capture the presence or absence of variety-specific features in each sentence by collecting and aggregating multiple human judgments per data point using Amazon Mechanical Turk (AMT). The annotators were shown sentences from the dataset and asked to assign one of the following three labels to a given sentence:

* variety X when at least one feature of variety X is present in the instance;
* variety Y when at least one feature of variety Y is present in the instance;
* both/neither when no features, or the same number of features, of varieties X and Y are present in the instance.

The annotators were paid between 3 and 5 US cents per annotation. More detail on the data collection and annotation is provided next.

### Data

DSL-TL contains 12,900 instances split between several language varieties, as shown in Table 1. These instances vary between 1 and 3 sentences in length. They consist of short extracts taken from newspaper articles. The English articles have been sourced from a collection of news articles made available by Zellers et al. (2019) (henceforth True News), while the Portuguese and Spanish articles have been sourced from the DSLCC (Tan et al., 2014). Both datasets feature data retrieved from multiple newspapers from each country. We randomly selected instances from the original datasets with an even split between each pair of language varieties: 2,500/2,500 for Portuguese and Spanish, and 1,500/1,500 for English. The final 12,900 instances in DSL-TL have been randomly split into training, development, and testing partitions in a 70%, 20%, 10% split, as shown in Table 2. Finally, example instances from DSL-TL are provided in Table 3.

| **Language** | **Variety A** | **Variety B** | **Both or Neither** | **Total** |
|---|---|---|---|---|
| Portuguese | 1,317 (pt-PT) | 3,023 (pt-BR) | 613 (pt) | 4,953 |
| Spanish | 2,131 (es-ES) | 1,211 (es-AR) | 1,605 (es) | 4,947 |
| English | 1,081 (en-GB) | 1,540 (en-US) | 379 (en) | 3,000 |
| **Total** | | | | **12,900** |

Table 1: DSL-TL's class splits and the total number of instances.

| **Variety** | **Train** | **Dev** | **Test** | **Total** |
|---|---|---|---|---|
| Portuguese | 3,467 | 991 | 495 | 4,953 |
| Spanish | 3,467 | 985 | 495 | 4,947 |
| English | 2,097 | 603 | 300 | 3,000 |
| Total | | | | 12,900 |

Table 2: DSL-TL's train, dev, and test splits are 70/20/10% of the total number of instances, respectively.

### Annotation

The annotators were crowd-sourced using AMT. They were based in the six countries where the language varieties were spoken, namely Argentina, Brazil, Portugal, Spain, the United Kingdom, and the United States. The annotators were requested to label instances in their own native or non-native language variety. They labeled instances as being either European (pt-PT) or Brazilian Portuguese (pt-BR), Castilian (es-ES) or Argentine Spanish (es-AR), and British (en-GB) or American English (en-US). Label distributions are shown in Table 1. We asked annotators to label each instance with what they believed to be the most representative variety label. They were presented with three choices: (1) language variety A, (2) language variety B, or (3) both or neither. We initially collected three annotations for each of the 12,900 instances in the dataset.
We considered the gold label correct in cases in which the three annotators agreed on the same label, or when two annotators agreed with the original gold label (from the DSLCC for Spanish and Portuguese, and from True News for English). This resulted in 6,426 instances annotated by three annotators. For the remaining 6,474 instances, we collected two additional human annotations, targeting an agreement of at least three annotators on the label, or of two annotators with the original dataset's gold label. Finally, the annotators were also asked to identify the linguistic markers or named entities that influenced their decision. From the total of 12,900 instances, 3,386 instances were provided with linguistic markers. The number of markers for each language is 270 for English, 2,378 for Spanish, and 738 for Portuguese.

| **Language** | **Sentence** | **Old** | **DSL-TL** | **Markers** |
|---|---|---|---|---|
| Portuguese | desde a **cracolândia** até as grandes mansões que existem à beira-mar... | pt-BR | pt-BR | cracolândia |
| Portuguese | O **reajuste** ... pela Celpe, empresa do Grupo Neoenergia, previa o efeito médio de 8,67%... | pt-BR | pt-BR | reajuste |
| Portuguese | Esta **equipa** do Athletic Bilbau é muito diferente da que, em 1976-77, jogou e perdeu a final. | pt-PT | pt-PT | equipa |
| Spanish | Le dejé la llave a un **baqueano** y debo volver en poco tiempo, porque en abril las condiciones... | es-AR | es-AR | baqueano |
| Spanish | Aún así, la alimentación sana consigue más **adeptos** cada día gracias... restaurantes | es-ES | es-ES | adeptos |
| Spanish | Estas irregularidades fueron planteadas por legisladores del **PSOE** y, sobre todo, de Izquierda... | es-AR | es-ES | PSOE |
| English | A Rose Hill funeral director who prides herself on delivering the perfect **personalised** send-off | en-GB | en-GB | personalised |
| English | Ten symbolic silhouettes are on display around the **county** as part of the... campaign. | en-GB | both/neither | county |
| English | It seems very un-**Saintslike**, almost unnatural, to go into the last day of the season with nothing... | en-GB | en-GB | Saintslike |

Table 3: Example instances in English, Portuguese, and Spanish from DSL-TL. The 'Old' column is the original dataset gold label and 'DSL-TL' is the new label in the DSL-TL corpus. The linguistic markers identified by the annotators are shown in bold. Only a snapshot of these instances is shown.

## 4 Models

We trained classic machine learning models as well as transformer-based models on the DSL-TL corpus, as presented in the next sections. As discussed by Medvedeva et al. (2017) and Jauhiainen et al. (2019), deep learning models have not been shown to clearly outperform traditional machine learning models in language identification, so we take this opportunity to test methods from different machine learning paradigms. The models were evaluated in two tracks:

* Track 1: the six language varieties plus the 'both or neither' class for each language.
* Track 2: only the six language varieties.

### Naive Bayes

We describe experiments using the newest version of the Naive Bayes system previously used by Jauhiainen et al. (2022) and Jauhiainen et al. (2022). For each language pair, we used the common instances as a third "language" in a usual classification setup on track one.
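A minimal version of this three-class setup could look as follows in scikit-learn (our illustration, not the authors' exact system; the sentences and labels are placeholders):

```python
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Character trigram Naive Bayes over three labels: the two varieties
# plus the common ("both or neither") instances as a third "language".
model = Pipeline([
    ("vec", CountVectorizer(analyzer="char", ngram_range=(3, 3))),
    ("nb", MultinomialNB()),
])

X = ["we analysed the colour scheme",
     "we analyzed the color scheme",
     "the meeting starts at noon"]
y = ["en-GB", "en-US", "en"]
model.fit(X, y)
print(model.predict(["a colour we analysed before"]))  # likely ['en-GB']
```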
On our first run on the test data, with the optimized parameters and the development data added as additional training data, the macro F1 was 0.505 for track one. Only character trigrams were used, as they performed best on the development data. Table 5 shows the statistics for individual language varieties for track one. The macro F1 is clearly affected by the low F1 scores on the common instances. For track two, we modified the system to ignore the common instances when evaluating during optimization. The common instances in the training and the development sets were added to both varieties in each language. This time the optimal character n-gram range was from two to five. The system attained a macro F1 score of 0.794, which also outperformed the deep learning systems with pre-trained language models. For track two, we also experimented with an open-data setting by adding the training data from the DSLCC v1.0 corpus. Adding the English instances from the first DSL shared task made the results worse on the development set, whereas adding the Portuguese and the Spanish instances improved the results. This is probably due to the fact that the results attained for the English varieties were already much higher than those of the best systems on DSL 2014. The resulting macro F-score for the open NB run was 0.803. ### Adaptive Naive Bayes In addition to the traditional Naive Bayes identifier, we used it with adaptive language models (Jauhiainen et al., 2019), in a similar manner to Jauhiainen et al. (2022) in the winning system of the ITDI shared task (Aepli et al., 2022). For the adaptive version of the Naive Bayes classifier, we use the same penalty modifier and character n-gram range as in the non-adaptive version. Using the development data, we optimize the number of splits used in adaptation as well as the number of learning epochs. The number of splits determines how large a portion of the most confidently identified test data is added to the training data after each identification run. The number of splits was chosen to be 512, with four epochs of adaptation. Table 5 shows the results for track one. The macro-averaged F1-score of 0.503 is slightly higher than the 0.501 attained by the identical system without adaptation. On track two, a similar increase in performance was observed, with 0.799 attained by the adaptive Naive Bayes. ### Deep Learning Models We also experimented with several pre-trained large language models (LLMs). These LLMs were multilingual and consisted of multilingual BERT2 (mBERT) (Devlin et al., 2019), XLM-RoBERTa3 (XLM-R) (Liu et al., 2019), and XLM-R-Language Detection4 (XLM-R-LD). XLM-R-LD is an XLM-R model fine-tuned on the language identification dataset5 (LID) containing 90k instances in 20 languages. These instances were taken from a range of sources, including Amazon reviews and SemEval tasks from 2012 to 2017. The three models were trained on the train and dev sets with no bleed between sets (Table 2). The train sets for English, Spanish, and Portuguese consisted of 2,097, 3,467, and 3,467 instances, respectively. The dev sets contained 599 instances for English, 989 instances for Spanish, and 991 instances for Portuguese. Models were trained with a learning rate of 2e-5 over 5 epochs. Our models are summarized in Table 4. 
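For the transformer models, a minimal fine-tuning sketch with the Hugging Face transformers Trainer, using the learning rate and epoch count stated above, could look as follows; the dataset objects and the batch size are assumptions for illustration and are not specified in the paper:

```python
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

MODEL = "bert-base-multilingual-cased"  # mBERT; swap in "xlm-roberta-base" for XLM-R
tokenizer = AutoTokenizer.from_pretrained(MODEL)
# Three classes per language in track one: variety A, variety B, both/neither.
model = AutoModelForSequenceClassification.from_pretrained(MODEL, num_labels=3)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length")

# `train_ds` and `dev_ds` are assumed to be datasets.Dataset objects holding
# the DSL-TL train/dev splits with "text" and "label" columns.
args = TrainingArguments(
    output_dir="dsl-tl-mbert",
    learning_rate=2e-5,                  # as stated in the paper
    num_train_epochs=5,                  # as stated in the paper
    per_device_train_batch_size=16,      # assumed; not given in the paper
)
trainer = Trainer(model=model, args=args,
                  train_dataset=train_ds.map(tokenize, batched=True),
                  eval_dataset=dev_ds.map(tokenize, batched=True))
trainer.train()
```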
Footnote 2: [https://huggingface.co/bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) Footnote 3: [https://huggingface.co/xlm-roberta-base](https://huggingface.co/xlm-roberta-base) Footnote 4: [https://huggingface.co/papluca/xlm-roberta-base-language-detection](https://huggingface.co/papluca/xlm-roberta-base-language-detection) ## 5 Results In this section, we present the results obtained by all models in tracks one and two. \begin{table} \begin{tabular}{c|c|c|c} \hline & **mBERT** & **XLM-R** & **XLM-R-LD** \\ \hline type & BERT-base & RoBERTa-base & RoBERTa-base \\ corpus & Wikipedia & CC data & LID \\ size & 3.3B (102 lang.) & 2.5TB (100 lang.) & 70k (20 lang.) \\ \#layers & 12 & 12 & 12 \\ \#heads & 12 & 16 & 16 \\ \#lay.size & 768 & 768 & 768 \\ \#para & 110M & 250M & 250M \\ \hline \end{tabular} \end{table} Table 4: Comparison of mBERT, XLM-R, and XLM-R-LD models. Lang. is short for languages. CC data refers to CommonCrawl data. Table 5 presents the results of the models on track one in terms of Precision, Recall, and F1-score, as well as the macro average for all languages. In track one, mBERT achieves the best results with a 0.540 average F1-score, followed by XLM-R with a 0.536 average F1-score. In terms of the performance for individual languages, we observe that all models obtained their best results for the two English varieties and, in particular, for en-US, with results as high as a 0.829 F1-score obtained by the mBERT model. This is somewhat surprising given that the English dataset is the smallest among the three languages. Finally, for all languages, the results obtained by all models for the 'both or neither' class (en, es, and pt) were very low compared to the scores obtained for the varieties. This suggests that the class is very difficult to model due to the absence of any variety-specific features. Previously released language identification datasets were not manually annotated and did not contain such a class. Therefore, the results on 'both or neither' require further investigation. To further understand the predictions of our best-performing model, mBERT, in Figure 1 we plot a confusion matrix of the model in the track-one setting. The confusion matrix shows that confusion does not occur outside the three labels of each language, which evidences the high performance of the model in discriminating between the different languages. We observe that the predictions for the 'both or neither' Portuguese class behave differently from those for the corresponding classes of the other languages, with no correct prediction at all.

Figure 1: Confusion matrix showing the class predictions of mBERT in track one.

In Table 6, we present the results of all models in track two. We include the same five models as in track one, plus a variation of Naive Bayes (NB, open) that has been trained using additional original data retrieved from the DSLCC. Unsurprisingly, the use of additional training data boosted this system's performance and helped it achieve the best average score among all models, with a 0.803 average F1-score.
\begin{table} \begin{tabular}{l l c c c} \hline **Variety** & **Model** & **Recall** & **Prec.** & **F1** \\ \hline \multirow{4}{*}{**en-GB**} & NB & 0.754 & 0.705 & 0.729 \\ & ANB & 0.772 & 0.721 & 0.746 \\ & mBERT & 0.760 & 0.807 & 0.783 \\ & XLM-R & 0.750 & 0.842 & 0.793 \\ & XLM-R-LD & 0.771 & 0.798 & 0.784 \\ \hline \multirow{4}{*}{**en-US**} & NB & 0.750 & 0.848 & 0.796 \\ & ANB & 0.731 & 0.851 & 0.786 \\ & mBERT & 0.867 & 0.795 & 0.829 \\ & XLM-R & 0.829 & 0.776 & 0.801 \\ & XLM-R-LD & 0.797 & 0.782 & 0.790 \\ \hline \multirow{4}{*}{**en**} & NB & 0.267 & 0.190 & 0.222 \\ & ANB & 0.233 & 0.146 & 0.179 \\ & mBERT & 0.278 & 0.333 & 0.303 \\ & XLM-R & 0.231 & 0.200 & 0.214 \\ & XLM-R-LD & 0.233 & 0.233 & 0.233 \\ \hline \multirow{4}{*}{**es-AR**} & NB & 0.481 & 0.427 & 0.452 \\ & ANB & 0.579 & 0.458 & 0.512 \\ & mBERT & 0.551 & 0.489 & 0.518 \\ & XLM-R-LD & 0.511 & 0.519 & 0.515 \\ \hline \multirow{4}{*}{**es-ES**} & NB & 0.636 & 0.679 & 0.657 \\ & ANB & 0.612 & 0.716 & 0.660 \\ & mBERT & 0.651 & 0.670 & 0.660 \\ & XLM-R & 0.689 & 0.752 & 0.719 \\ & XLM-R-LD & 0.684 & 0.694 & 0.689 \\ \hline \multirow{4}{*}{**es**} & NB & 0.327 & 0.336 & 0.331 \\ & ANB & 0.340 & 0.351 & 0.345 \\ & mBERT & 0.442 & 0.468 & 0.455 \\ & XLM-R & 0.454 & 0.442 & 0.448 \\ & XLM-R-LD & 0.444 & 0.429 & 0.436 \\ \hline \multirow{4}{*}{**pt-BR**} & NB & 0.662 & 0.762 & 0.708 \\ & ANB & 0.609 & 0.795 & 0.689 \\ & mBERT & 0.718 & 0.799 & 0.756 \\ & XLM-R & 0.753 & 0.786 & 0.769 \\ & XLM-R-LD & 0.739 & 0.796 & 0.767 \\ \hline \multirow{4}{*}{**pt-PT**} & NB & 0.533 & 0.442 & 0.483 \\ & ANB & 0.555 & 0.442 & 0.492 \\ & mBERT & 0.459 & 0.496 & 0.477 \\ & XLM-R & 0.492 & 0.657 & 0.562 \\ & XLM-R-LD & 0.488 & 0.613 & 0.544 \\ \hline \multirow{4}{*}{**pt**} & NB & 0.136 & 0.118 & 0.126 \\ & ANB & 0.153 & 0.100 & 0.121 \\ & mBERT & 0.214 & 0.051 & 0.082 \\ & XLM-R & 0.000 & 0.000 & 0.000 \\ & XLM-R-LD & 0.000 & 0.000 & 0.000 \\ \hline \multirow{4}{*}{**Macro**} & NB & 0.505 & 0.501 & 0.501 \\ & ANB & 0.509 & 0.509 & 0.503 \\ \cline{1-1} & mBERT & **0.549** & **0.545** & **0.540** \\ \cline{1-1} & XLM-R & 0.528 & 0.549 & 0.536 \\ \cline{1-1} & XLM-R-LD & 0.519 & 0.541 & 0.529 \\ \hline \end{tabular} \end{table} Table 5: The scores for individual language varieties on the test set with Naive Bayes, Adaptive Naive Bayes, mBERT, XLM-R, and XLM-R-LD on track one in terms of Recall, Precision, and F1-score. The macro average is reported as the average. The best average results are in bold. That said, we observed that in track two the three Naive Bayes variations outperformed the deep learning systems, corroborating the findings of previous studies (Medvedeva et al., 2017; Jauhiainen et al., 2019). Language identification is essentially a pattern-matching task rather than a semantic understanding one. We believe that this often favors relatively simpler character n-gram models when compared to more sophisticated text embedding-based representations. Both the average and the individual class results are substantially higher in track two than in track one, once again evidencing the challenge of modeling the 'both or neither' class and its impact on the overall performance of the models. In this task formulation, however, track two brings the most important baseline results for DSL-TL, as those are obtained when discriminating only between the language varieties. We believe that the 'both or neither' class is of much less importance to real-world language identification systems. 
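For reference, per-class and macro-averaged scores of the kind reported in Tables 5 and 6 can be computed from per-instance predictions with scikit-learn. The following minimal sketch on toy labels shows how an unpredicted 'both or neither' class drags the macro average down:

```python
from sklearn.metrics import classification_report, f1_score

# y_true / y_pred are assumed lists of track-one labels for one language,
# e.g. {"pt-PT", "pt-BR", "pt"}. The macro average weights the hard
# 'both or neither' class ("pt") equally with the two varieties.
y_true = ["pt-PT", "pt-BR", "pt", "pt-BR", "pt"]
y_pred = ["pt-PT", "pt-BR", "pt-BR", "pt-BR", "pt-PT"]

print(classification_report(y_true, y_pred, digits=3))
# "pt" is never predicted, so its F1 is 0, pulling the macro F1 well below
# the per-variety scores (here roughly 0.49).
print("macro F1:", f1_score(y_true, y_pred, average="macro"))
```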
## 6 Conclusion and Future Research This paper presented DSL-TL, the first human-annotated dataset for language variety identification and, more broadly, for language identification. The dataset includes newspaper texts written in three languages and annotated with six language variety labels and a 'both or neither' class. We evaluated the performance of multiple models on this dataset, including variations of a classical machine learning approach (Naive Bayes) and multiple deep learning systems (mBERT, XLM-R, and XLM-R-LD). In terms of performance, we observed that the Naive Bayes system delivers performance on par with the deep learning models, corroborating the findings of previous research (Medvedeva et al., 2017; Jauhiainen et al., 2019); in certain scenarios, Naive Bayes even surpassed the deep learning models. Our findings indicate that there is room for improvement in the treatment and computational modeling of the 'both or neither' class. Although this class is of less importance to real-world applications than the variety labels, the low results for this class evidence the challenge of modeling it in this novel language identification setting. This new dataset opens several avenues for research in language identification. It allows the community to perform a much fairer evaluation of language identification systems, mitigating potential biases. We anticipate that the true-labels strategy presented in DSL-TL will become a new standard in language variety identification, helping to improve the performance of IR and NLP applications that struggle to deal with language variation, such as virtual assistants (e.g., Alexa, Siri), machine translation systems, text and multimedia retrieval systems, and many others. In the future, we would like to expand the size of this dataset to further investigate the impact of dataset size on performance. 
\begin{table} \begin{tabular}{l l c c c} \hline **Variety** & **Model** & **Recall** & **Prec.** & **F1** \\ \hline \multirow{3}{*}{**en-GB**} & NB & 0.921 & 0.761 & 0.833 \\ & NB, open & 0.877 & 0.758 & 0.813 \\ & ANB & 0.930 & 0.752 & 0.831 \\ & mBERT & 0.828 & 0.842 & 0.835 \\ & XLM-R & 0.795 & 0.921 & 0.854 \\ & XLM-R-LD & 0.802 & 0.851 & 0.826 \\ \hline \multirow{3}{*}{**en-US**} & NB & 0.808 & 0.933 & 0.866 \\ & NB, open & 0.821 & 0.901 & 0.859 \\ & ANB & 0.795 & 0.939 & 0.861 \\ & mBERT & 0.889 & 0.872 & 0.880 \\ & XLM-R & 0.935 & 0.827 & 0.878 \\ & XLM-R-LD & 0.886 & 0.846 & 0.866 \\ \hline \multirow{3}{*}{**es-AR**} & NB & 0.857 & 0.726 & 0.786 \\ & NB, open & 0.789 & 0.789 & 0.789 \\ & ANB & 0.887 & 0.724 & 0.797 \\ & mBERT & 0.772 & 0.662 & 0.713 \\ & XLM-R & 0.750 & 0.654 & 0.699 \\ & XLM-R-LD & 0.765 & 0.684 & 0.722 \\ \hline \multirow{3}{*}{**es-ES**} & NB & 0.791 & 0.896 & 0.840 \\ & NB, open & 0.864 & 0.864 & 0.864 \\ & ANB & 0.782 & 0.915 & 0.843 \\ & mBERT & 0.800 & 0.874 & 0.835 \\ & XLM-R & 0.794 & 0.859 & 0.825 \\ & XLM-R-LD & 0.809 & 0.864 & 0.836 \\ \hline \multirow{3}{*}{**pt-BR**} & NB & 0.716 & 0.873 & 0.787 \\ & NB, open & 0.696 & 0.924 & 0.794 \\ & ANB & 0.702 & 0.897 & 0.788 \\ & mBERT & 0.766 & 0.779 & 0.773 \\ & XLM-R & 0.823 & 0.809 & 0.816 \\ & XLM-R-LD & 0.810 & 0.796 & 0.803 \\ \hline \multirow{3}{*}{**pt-PT**} & NB & 0.774 & 0.564 & 0.652 \\ & NB, open & 0.876 & 0.580 & 0.698 \\ & ANB & 0.825 & 0.568 & 0.673 \\ & mBERT & 0.504 & 0.489 & 0.496 \\ & XLM-R & 0.599 & 0.620 & 0.609 \\ & XLM-R-LD & 0.570 & 0.591 & 0.581 \\ \hline \multirow{3}{*}{**Macro**} & NB & 0.811 & 0.792 & 0.794 \\ & NB, open & **0.820** & **0.803** & **0.803** \\ & ANB & 0.820 & 0.799 & 0.799 \\ & mBERT & 0.760 & 0.753 & 0.755 \\ & XLM-R & 0.783 & 0.782 & 0.780 \\ & XLM-R-LD & 0.774 & 0.772 & 0.772 \\ \hline \end{tabular} \end{table} Table 6: The scores for individual language varieties on the test set with Naive Bayes, open-data Naive Bayes, Adaptive Naive Bayes, mBERT, XLM-R, and XLM-R-LD on track two in terms of Recall, Precision, and F1-score. The macro average is reported as the average. The best average results are in bold. We would also like to carry out the same annotation on groups of very similar languages, such as Bosnian, Croatian, and Serbian. DSL-TL is the official dataset of a homonymous ongoing competition at the 2023 edition of the VarDial workshop.6 The results presented in this paper will serve as baseline results for the competition. Footnote 6: [https://sites.google.com/view/vardial-2023/shared-tasks](https://sites.google.com/view/vardial-2023/shared-tasks) ## Acknowledgements We would like to thank the creators of the DSLCC for making the data available. We further thank the VarDial DSL-TL shared task participants for the feedback provided. This work has been partially supported by a GWBC seed fund awarded by RIT, by the Academy of Finland (funding decision no. 341798), and by the Finnish Research Impact Foundation through its Tandem Industry Academia funding in cooperation with Lingsoft.
2306.03459
A Combinatorial Model of Numerical Semigroup
Let $A=(a_1, a_2, ..., a_n)$ be relatively prime positive integers with $a_i\geq 2$. The Frobenius number $F(A)$ is the largest integer not belonging to the numerical semigroup $\langle A\rangle$ generated by $A$. The genus $g(A)$ is the number of positive integer elements that are not in $\langle A\rangle$. The Frobenius problem is to find $F(A)$ and $g(A)$ for a given sequence $A$. In this paper, we study the Frobenius problem of $A=\left(a,h_1a+b_1d,h_2a+b_2d,...,h_ka+b_kd\right)$ with some restrictions. An innovation is that $d$ can be a negative integer. In particular, when $A=\left(a,ba+d,b^2a+\frac{b^2-1}{b-1}d,...,b^ka+\frac{b^k-1}{b-1}d\right)$, we obtain formulas for $F(A)$ and $g(A)$ when $a\geq k-1-\frac{d-1}{b-1}$. Our formulas simplify further for some special cases, such as Mersenne, Thabit and repunit numerical semigroups. We obtain explicit closed formulas for generalized Mersenne, Thabit and repunit numerical semigroups and some more general numerical semigroups. Finally, we partially solve an open problem for the Proth numerical semigroup.
Feihu Liu, Guoce Xin, Suting Ye, Jingjing Yin
2023-06-06T07:27:02Z
http://arxiv.org/abs/2306.03459v3
# A generalization of Mersenne, Thabit and Repunit Numerical semigroups ###### Abstract. Let \(A=(a_{1},a_{2},...,a_{n})\) be relatively prime positive integers with \(a_{i}\geq 2\). The Frobenius number \(F(A)\) is the largest integer not belonging to the numerical semigroup \(\langle A\rangle\) generated by \(A\). The genus \(g(A)\) is the number of positive integer elements that are not in \(\langle A\rangle\). The Frobenius problem is to find \(F(A)\) and \(g(A)\) for a given sequence \(A\). In this paper, we study the Frobenius problem of \(A=(a,h_{1}a+b_{1}d,h_{2}a+b_{2}d,...,h_{k}a+b_{k}d)\) with some restrictions. In particular, when \(A=\left(a,ba+d,b^{2}a+\frac{b^{2}-1}{b-1}d,...,b^{k}a+\frac{b^{k}-1}{b-1}d\right)\), we obtain formulas for \(F(A)\) and \(g(A)\) when \(a\geq k-1\). Our formulas simplify further for some special cases, such as Mersenne, Thabit and repunit numerical semigroups. We obtain explicit formulas for generalized Mersenne, Thabit and repunit numerical semigroups and some more general numerical semigroups. _Mathematics Subject Classification_: Primary 11D07; Secondary 11A67, 11B75, 20M14. _Keywords_: Numerical semigroup; Apery set; Frobenius number; Genus; Pseudo-Frobenius number. \({}^{*}\) This work was partially supported by NSFC(12071311). Let \(A=(a_{1},a_{2},...,a_{n})\) be relatively prime positive integers. We will use the following notions: 1. _The numerical semigroup_ \(\langle A\rangle\) _generated by_ \(A\): the set \(\left\{\sum_{i=1}^{n}a_{i}x_{i}\mid x_{i}\in\mathbb{N}\right\}\). 2. _The Frobenius number_ \(F(A)\): the largest integer not belonging to \(\langle A\rangle\). 3. _The genus_ \(g(A)\): the number of positive integers not belonging to \(\langle A\rangle\). 4. _The pseudo-Frobenius numbers_ \(PF(A)\): the set \(\{x\in\mathbb{Z}\setminus\langle A\rangle\mid x+s\in\langle A\rangle\ \text{for all}\ s\in\langle A\rangle\backslash\{0\}\}\). 5. _The type_ \(t(A)\) of \(\langle A\rangle\): The cardinality of \(PF(A)\). For more knowledge about numerical semigroups, see [2, 26]. The Frobenius number \(F(A)\) has been widely studied. For \(A=(a_{1},a_{2})\), Sylvester [30] obtained \(F(A)=a_{1}a_{2}-a_{1}-a_{2}\) in 1882. For \(n\geq 3\), F. Curtis [6] proved that \(F(A)\) cannot be given by closed formulas of a certain type. However, many special cases have been studied, such as arithmetic progressions in [3, 19, 27], geometric sequences in [17], and triangular and tetrahedral sequences in [20]. For more special sequences, see [12, 13, 18, 21, 31, 32]. Many special numerical semigroups have also been considered, such as Fibonacci in [16], Mersenne in [25], repunit in [24], squares and cubes in [11], Thabit in [23] and other numerical semigroups in [9, 28, 29]. The motivation of this paper comes from the repunit numerical semigroup: \(\langle\{\frac{b^{n+i}-1}{b-1}\mid i\in\mathbb{N}\}\rangle\). Essentially, its minimal system of generators is \(S(b,n)=\left(\frac{b^{n}-1}{b-1},\frac{b^{n+1}-1}{b-1},...,\frac{b^{2n-1}-1}{b-1}\right)\) ([24]), that is, the embedding dimension is \(e(S(b,n))=n\). In this paper, we consider the following more general model \[A=(a,Ha+dB)=\left(a,ba+d,b^{2}a+\frac{b^{2}-1}{b-1}d,...,b^{k}a+\frac{b^{k}-1}{b-1}d\right), \tag{1}\] where \(a,d,b,k\in\mathbb{P},b\geq 2\) and \(\gcd(a,d)=1\). Here we have the embedding dimension \(e(A)\leq k+1\). At this point, we can observe that: 1. If \(a=2^{n}-1\), \(b=2\), \(d=1\), \(k=n-1\), then \(\langle A\rangle\) is the Mersenne numerical semigroup \(S(n)\) in [25]. 2. If \(a=3\cdot 2^{n}-1\), \(b=2\), \(d=1\), \(k=n+1\), then \(\langle A\rangle\) is the Thabit numerical semigroup \(T(n)\) in [23]. 3. If \(a=(2^{m}-1)\cdot 2^{n}-1\), \(b=2\), \(d=1\), \(k=n+m-1\), then \(\langle A\rangle\) is a class of numerical semigroups in [8]. 4. If \(a=(2^{m}+1)\cdot 2^{n}-(2^{m}-1)\), \(b=2\), \(d=2^{m}-1\), \(k\in\{n+1,n+m-1,n+m\}\), then \(\langle A\rangle\) is a class of numerical semigroups in [29]. 5. If \(a=\frac{b^{n}-1}{b-1}\), \(b\geq 2\), \(d=1\), \(k=n-1\), then \(\langle A\rangle\) is the repunit numerical semigroup \(S(b,n)\) in [24]. It reduces to case (1) when \(b=2\). 6. 
If \(a=b^{n+1}+\frac{b^{n}-1}{b-1}\), \(b\geq 2\), \(d=1\), \(k=n+1\), then \(\langle A\rangle\) is a class of numerical semigroups in [9]. 7. If \(a=(b+1)b^{n}-1\), \(b\geq 2\), \(d=b-1\), \(k=n+1\), then \(\langle A\rangle\) is the Thabit numerical semigroup \(T_{b,1}(n)\) of the first kind base \(b\) in [28]. It reduces to case (2) when \(b=2\). The main purpose of this paper is to give a unified approach to the above 7 numerical semigroups, which are special cases of the model (1). Inspired by the model (1), we study the following more general model \[A=(a,Ha+dB)=(a,h_{1}a+db_{1},h_{2}a+db_{2},...,h_{k}a+db_{k}), \tag{2}\] where \(b_{1}=1,b_{i+1}=s_{i}b_{i}+1\), \(s_{i}\geq s_{i-1}\) for \(1\leq i\leq k-1\) and \(h_{i}=ub_{i}+1,u\in\mathbb{P}\) for \(1\leq i\leq k\). For this model, we obtain formulas for \(F(A)\) and \(g(A)\) in Theorem 2.12. In some specializations, we can further obtain \(PF(A)\) and the type \(t(A)\). This approach also applies to several new numerical semigroups: i) The case \(b_{i}=\frac{b^{i}-1}{b-1}\), \(u=b-1\), \(b\geq 2\), \(k=n\), \(a=\frac{m(b^{n}-1)}{b-1}\), which reduces to (5) when \(m=1,d=1\); ii) The case \(b_{i}=2^{i}-1\), \(u=1\), \(k=n+1,a=3\cdot 2^{n}-1\), which reduces to (2) when \(d=1\); iii) The case \(b_{i}=2^{i}-1\), \(u=1\), \(a=m(2^{k}-1)+2^{k-1}-1\), \(m\geq 1\) and \(k\geq 3\). To compute \(F(A)\) and \(g(A)\) for the above general model, we introduce a much simpler minimization problem \(O_{B}^{H}(M)\): \[O_{B}^{H}(M)=\min\left\{uM+\sum_{i=1}^{k}x_{i}\;\big{|}\;\sum_{i=1}^{k}b_{i}x_{i}=M,\ x_{i}\in\mathbb{N},1\leq i\leq k\right\}.\] It turns out that \(O_{B}^{H}(M)\) can be solved by the greedy algorithm. This is based on the fact that \(B=(1,b_{2},...,b_{k})\) is an orderly sequence, a notion that we shall introduce in Section 2. The paper is organized as follows. In Section 2, we provide some necessary lemmas and related results. We then establish Theorem 2.12, which gives formulas for the Frobenius number \(F(A)\) and the genus \(g(A)\) for \(A\) of the form (2), provided \(ua+d+k-2\geq\sum_{i=1}^{k-1}s_{i}\) and \(s_{i}\leq u+1\). As special cases, we obtain \(F(A)\) and \(g(A)\) for \(A=\left(a,ba+d,b^{2}a+\frac{b^{2}-1}{b-1}d,...,b^{k}a+\frac{b^{k}-1}{b-1}d\right)\) and \(a\geq k-1\). Sections 3 and 4 focus on applications of Theorem 2.12 for some special \(a\) and \(k\); they are related to Mersenne, Thabit and repunit numerical semigroups, and the formulas simplify in these cases. In Section 5, we mention other numerical semigroups and consider the Frobenius problem for a new class of numerical semigroups. Section 6 is a concluding remark. ## 2. The Model \(A=(a,h_{1}a+b_{1}d,h_{2}a+b_{2}d,...,h_{k}a+b_{k}d)\) with Some Restrictions ### Crude Formula for Frobenius Numbers It is convenient to use the shorthand notation \(A:=(a,B)=(a,b_{1},b_{2},...,b_{k})\). Let \(\gcd(A)=1\), \(a,b_{i}\in\mathbb{P}\). The set \[\langle A\rangle=\left\{ax+\sum_{i=1}^{k}b_{i}x_{i}\ \mid x,x_{i}\in\mathbb{N}\right\}\] is a numerical semigroup. Let \(w\in\langle A\rangle\backslash\{0\}\). The _Apery set_ of \(w\) in \(\langle A\rangle\) is \(Ape(A,w)=\{s\in\langle A\rangle\mid s-w\notin\langle A\rangle\}\). From [26], we have \[Ape(A,w)=\{N_{0},N_{1},N_{2},...,N_{w-1}\},\] where \(N_{r}:=\min\{a_{0}\mid a_{0}\equiv r\mod w,\ a_{0}\in\langle A\rangle\}\), \(0\leq r\leq w-1\). We usually take \(w:=a\). A. Brauer and J. E. Shockley [4] and E. S. Selmer [27] gave the following results, respectively. **Lemma 2.1** ([4], [27]).: _Suppose \(A:=(a,B)=(a,b_{1},b_{2},...,b_{k})\). 
The Apery set of \(a\) in \(\langle A\rangle\) is \(Ape(A,a)=\{N_{0},N_{1},N_{2},...,N_{a-1}\}\). Then the Frobenius number and genus of \(A\) are, respectively:_ \[F(A)=F(a,B)=\max_{r\in\{0,1,...,a-1\}}N_{r}-a,\] \[g(A)=g(a,B)=\frac{1}{a}\sum_{r=1}^{a-1}N_{r}-\frac{a-1}{2}.\] Now we define the following order relation in \(\mathbb{Z}\): \(a\preceq_{\langle A\rangle}b\) if \(b-a\in\langle A\rangle\). It is proved in [26] that the relation \(\preceq_{\langle A\rangle}\) is a partial order. **Lemma 2.2** (Proposition 2.20, [26]).: _Let \(\langle A\rangle\) be a numerical semigroup. Then_ \[PF(A)=\left\{w-a\mid w\in\max_{\preceq_{\langle A\rangle}}Ape(A,a)\right\},\] _where \(Ape(A,a)=\{N_{0},N_{1},...,N_{a-1}\}\)._ Note that \(N_{0}=0\) for all \(A\). The following result is easy by the definition of \(N_{r}\). **Proposition 2.3**.: _Let \(A=(a,b_{1},...,b_{k})\), \(\gcd(a,d)=1,\ d\in\mathbb{P}\). Then we have_ \[\{N_{0},N_{1},N_{2},...,N_{a-1}\}=\{N_{d\cdot 0},N_{d\cdot 1},N_{d\cdot 2},...,N_{d\cdot(a-1)}\}. \tag{3}\] We observe that the argument in [13] for \(A=(a,a+dB)\) naturally extends to general \(A=(a,Ha+dB)=(a,h_{1}a+db_{1},h_{2}a+db_{2},...,h_{k}a+db_{k})\) with \(\gcd(a,d)=1\). We find that \(F(A)\) and \(g(A)\) are closely related to a minimization problem defined by: \[O_{B}^{H}(M):=\min\left\{\sum_{i=1}^{k}h_{i}x_{i}\mid\sum_{i=1}^{k}b_{i}x_{i}=M,\ x_{i}\in\mathbb{N},1\leq i\leq k\right\}.\] It reduces to the \(O_{B}(M)\) in [13] when \(H=(1,1,\ldots,1)\). For the sake of convenience, in what follows we shall always assume \(x_{i}\in\mathbb{N},1\leq i\leq k\) unless specified otherwise. We have **Lemma 2.4**.: _Let \(A=(a,h_{1}a+db_{1},...,h_{k}a+db_{k})\), \(k,d\in\mathbb{P}\), \(\gcd(A)=1\) and \(\gcd(a,d)=1\). For a given \(0\leq r\leq a-1\), we have_ \[N_{dr}=\min\left\{O_{B}^{H}(ma+r)\cdot a+(ma+r)d\mid m\in\mathbb{N}\right\}. \tag{4}\] Proof.: We have the following equalities: \[N_{dr} =\min\{a_{0}\mid a_{0}\equiv dr\mod a;\ a_{0}\in\langle A\rangle\}\] \[=\min\left\{\sum_{i=1}^{k}(h_{i}a+db_{i})x_{i}\mid\sum_{i=1}^{k}(h_{i}a+db_{i})x_{i}\equiv dr\mod a,\ x_{i}\in\mathbb{N},1\leq i\leq k\right\}\] \[=\min\left\{\left(\sum_{i=1}^{k}h_{i}x_{i}\right)\cdot a+d\cdot\sum_{i=1}^{k}b_{i}x_{i}\mid d\sum_{i=1}^{k}b_{i}x_{i}\equiv dr\mod a,\ x_{i}\in\mathbb{N},1\leq i\leq k\right\}\] \[=\min\left\{\left(\sum_{i=1}^{k}h_{i}x_{i}\right)\cdot a+d\cdot\sum_{i=1}^{k}b_{i}x_{i}\mid\sum_{i=1}^{k}b_{i}x_{i}\equiv r\mod a,\ x_{i}\in\mathbb{N},1\leq i\leq k\right\}\] \[=\min\left\{\left(\sum_{i=1}^{k}h_{i}x_{i}\right)\cdot a+d(ma+r)\mid\sum_{i=1}^{k}b_{i}x_{i}=ma+r,\ m,x_{i}\in\mathbb{N},1\leq i\leq k\right\}.\] Now for fixed \(m\), and hence fixed \(M=ma+r\), \(\sum_{i=1}^{k}h_{i}x_{i}\) is minimized to \(O_{B}^{H}(ma+r)\). This completes the proof. By Lemma 2.4, we can define an intermediate function with respect to \(m\in\mathbb{N}\), namely: \[N_{dr}(m):=O_{B}^{H}(ma+r)\cdot a+(ma+r)d.\] In our cases, it is not hard to show that \(N_{dr}(m)\) is increasing with respect to \(m\), so that \(N_{dr}=N_{dr}(0)\). This allows us to further obtain the formulas for \(F(A)\) and \(g(A)\). Before proceeding further, we also need the following definition. For a given sequence of positive integers \(B=(b_{1},b_{2},...,b_{k})\), \(1=b_{1}<b_{2}<\cdots<b_{k}\) and \(M\in\mathbb{N}\), let \[opt_{B}(M):=O_{B}(M)=\min\left\{\sum_{i=1}^{k}x_{i}\mid\sum_{i=1}^{k}b_{i}x_{i}=M,\ x_{i}\in\mathbb{N},1\leq i\leq k\right\}. \tag{5}\]
The problem \(opt_{B}(M)\) is called _the change-making problem_ [1]. The greedy strategy uses as many copies of the largest element as possible, then as many of the next largest as possible, and so on. We denote by \(grd_{B}(M)\) the number of elements of \(B\) used by the greedy strategy. Then we have \(opt_{B}(M)\leq grd_{B}(M)\). If the greedy solution is always optimal, i.e., \(opt_{B}(M)=grd_{B}(M)\) for all \(M>0\), then we call the sequence \(B\) _orderly_; otherwise, we call it _non-orderly_. For example, the sequence \(B=(1,5,16)\) is non-orderly: because \(20=16+1+1+1+1=5+5+5+5\), we have \(opt_{B}(20)=4<5=grd_{B}(20)\). In general, it is hard to determine whether a sequence is orderly. The following lemma gives a nice sufficient condition. **Lemma 2.5** (One-Point Theorem, [5, 10, 15]).: _Suppose \(B^{\prime}=(1,b_{1},...,b_{k})\) is orderly and \(b_{k+1}>b_{k}\). Let \(s=\lceil b_{k+1}/b_{k}\rceil\). Then the sequence \(B=(1,b_{1},...,b_{k},b_{k+1})\) is orderly if and only if \(opt_{B}(sb_{k})=grd_{B}(sb_{k})\)._ Using the _One-Point Theorem_, we construct a class of orderly sequences as follows. **Lemma 2.6**.: _Let \(k\in\mathbb{P}\), \(b_{1}=1\), \(b_{i+1}=s_{i}b_{i}+1\) and \(s_{i}\geq s_{i-1}\), \(1\leq i\leq k-1\). Then the sequence \(B=(b_{1},b_{2},...,b_{k})\) is orderly, i.e., \(opt_{B}(M)=grd_{B}(M)\) for all \(M\in\mathbb{P}\)._ Proof.: We prove by induction on \(k\). The lemma clearly holds for \(k\leq 2\), since the sequences \((1)\) and \((1,b_{2})\) are orderly. Suppose \((1,b_{2},...,b_{k-1})\) is orderly. By assumption, \(\left\lceil\frac{b_{k}}{b_{k-1}}\right\rceil=s_{k-1}+1\). By \(\left\lceil\frac{b_{k}}{b_{k-1}}\right\rceil\cdot b_{k-1}=(s_{k-1}+1)b_{k-1}=b_{k}+b_{k-1}-1=b_{k}+s_{k-2}b_{k-2}\) and \(s_{k-2}+1\leq s_{k-1}+1\), we have \(opt_{B}\left(\left\lceil\frac{b_{k}}{b_{k-1}}\right\rceil\cdot b_{k-1}\right)=grd_{B}\left(\left\lceil\frac{b_{k}}{b_{k-1}}\right\rceil\cdot b_{k-1}\right)\). By Lemma 2.5, the sequence \(B\) is orderly. This completes the proof. For \(B\) as in Lemma 2.6, we can further study the Frobenius problem for \(A=(a,Ha+dB)=(a,h_{1}a+b_{1}d,h_{2}a+b_{2}d,...,h_{k}a+b_{k}d)\). **Proposition 2.7**.: _Suppose \(B=(b_{1},b_{2},...,b_{k})\), with \(b_{1}=1\), \(b_{i+1}=s_{i}b_{i}+1\) and \(s_{i}\geq s_{i-1}\), \(1\leq i\leq k-1\). Then for any \(M\in\mathbb{P}\), \(grd_{B}(M)=(x_{1},\ldots,x_{k})\) is uniquely characterized by the following three properties._ 1. \(x_{k}=\left\lfloor\frac{M}{b_{k}}\right\rfloor\)_._ 2. \(x_{i}\in\{0,1,...,s_{i}\}\) _for every_ \(1\leq i\leq k-1\)_._ 3. _if_ \(2\leq i\leq k-1\) _and_ \(x_{i}=s_{i}\)_, then_ \(x_{1}=\cdots=x_{i-1}=0\)_._ Proof.: By \(b_{i+1}=s_{i}b_{i}+1\) for \(1\leq i\leq k-1\) and Lemma 2.6, the proof is obvious. If a solution \(X=(x_{1},x_{2},...,x_{k})\) of \(\sum_{i}x_{i}b_{i}=M\) satisfies the above conditions (1), (2) and (3), then we call \(X=X(M)\) the _greedy presentation_ of \(M\). Define \(R(M)=\{X(r)\mid 0\leq r\leq M\}\) and define a colexicographic order on \(R(M)\) as follows: \[(x_{1}^{\prime},x_{2}^{\prime},...,x_{k}^{\prime})\preceq(x_{1},x_{2},...,x_{k})\Longleftrightarrow x_{i}^{\prime}=x_{i}\ \text{ for all }\ i>0,\ \text{ or}\] \[x_{j}^{\prime}<x_{j},\ x_{i}^{\prime}=x_{i}\ \text{ for a certain }\ j>0\ \text{ and all }\ i>j.\] Obviously, the order relation \(\preceq\) is a total order on \(R(M)\). 
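Both \(grd_{B}(M)\) and \(opt_{B}(M)\) are easy to compute mechanically. The following Python sketch (ours, not part of the paper) builds greedy presentations and checks orderliness against a dynamic-programming solution of the change-making problem; \(B=(1,3,7,15)\) is the orderly sequence given by Lemma 2.6 with \(s_{1}=s_{2}=s_{3}=2\) (it reappears in Example 2.9 below), while \(B=(1,5,16)\) is the non-orderly example above.

```python
def greedy_presentation(B, M):
    """Greedy presentation grd_B(M): use as many copies of the largest
    element as possible, then of the next largest, and so on.
    Returns (x_1, ..., x_k) with sum(x_i * b_i) == M."""
    x = [0] * len(B)
    for i in range(len(B) - 1, -1, -1):
        x[i], M = divmod(M, B[i])
    return tuple(x)

def opt_size(B, M):
    """opt_B(M) via dynamic programming (classic change-making),
    used here to test whether the greedy solution is optimal."""
    INF = float("inf")
    dp = [0] + [INF] * M
    for m in range(1, M + 1):
        dp[m] = min((dp[m - b] + 1 for b in B if b <= m), default=INF)
    return dp[M]

# Orderly sequence from Lemma 2.6 with s_1 = s_2 = s_3 = 2: B = (1, 3, 7, 15).
B = (1, 3, 7, 15)
for M in range(1, 100):
    X = greedy_presentation(B, M)
    assert sum(x * b for x, b in zip(X, B)) == M
    assert sum(X) == opt_size(B, M)  # greedy is optimal, so B is orderly

# Non-orderly example from the text: for B = (1, 5, 16), greedy fails at M = 20.
B2 = (1, 5, 16)
print(sum(greedy_presentation(B2, 20)), opt_size(B2, 20))  # prints: 5 4
```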
**Remark 2.8**.: _Lemma 2.6 essentially provides the construction process of the greedy presentations \(X=X(r)=(x_{1},x_{2},...,x_{k})\) in \(R(M)\)._ **Example 2.9**.: _Let \(B=(1,3,7,15)\) and \(M=23\). Elements in \(R(M)\) are listed as follows._ \[X(0)=(0,0,0,0), X(1)=(1,0,0,0), X(2)=(2,0,0,0), X(3)=(0,1,0,0),\] \[X(4)=(1,1,0,0), X(5)=(2,1,0,0), X(6)=(0,2,0,0), X(7)=(0,0,1,0),\] \[X(8)=(1,0,1,0), X(9)=(2,0,1,0), X(10)=(0,1,1,0), X(11)=(1,1,1,0),\] \[X(12)=(2,1,1,0), X(13)=(0,2,1,0), X(14)=(0,0,2,0), X(15)=(0,0,0,1),\] \[X(16)=(1,0,0,1), X(17)=(2,0,0,1), X(18)=(0,1,0,1), X(19)=(1,1,0,1),\] \[X(20)=(2,1,0,1), X(21)=(0,2,0,1), X(22)=(0,0,1,1), X(23)=(1,0,1,1).\] ### The Frobenius Problem Now we consider \(O_{B}^{H}(M)\) for \(B=(b_{1},b_{2},...,b_{k})\), \(H=(h_{1},h_{2},...,h_{k})\) and any \(M\in\mathbb{P}\), where \[b_{1}=1,b_{i+1}=s_{i}b_{i}+1\ \text{ and }\ s_{i}\geq s_{i-1}\ \text{ for }\ 1\leq i\leq k-1 \tag{6}\] \[h_{i}=ub_{i}+1,u\in\mathbb{P}\ \text{ for }\ 1\leq i\leq k. \tag{7}\] We have \[O_{B}^{H}(M) =\min\left\{\sum_{i=1}^{k}(ub_{i}+1)x_{i}\mid\sum_{i=1}^{k}b_{i}x_{i}=M,\ x_{i}\in\mathbb{N},1\leq i\leq k\right\}\] \[=\min\left\{uM+\sum_{i=1}^{k}x_{i}\mid\sum_{i=1}^{k}b_{i}x_{i}=M,\ x_{i}\in\mathbb{N},1\leq i\leq k\right\}\] \[=uM+opt_{B}(M).\] **Lemma 2.10**.: _Let \(A=(a,Ha+dB)=(a,h_{1}a+b_{1}d,h_{2}a+b_{2}d,...,h_{k}a+b_{k}d)\), \(a,d,k\in\mathbb{P}\), \(k\geq 2\) and \(\gcd(a,d)=1\). The sequences \(B\) and \(H\) satisfy conditions (6) and (7) respectively. If \(ua+d+k-2\geq\sum_{i=1}^{k-1}s_{i}\), then \(N_{dr}(m)\) is increasing with respect to \(m\in\mathbb{N}\). More precisely, if \(X(r)=(x_{1},x_{2},...,x_{k})\) is the greedy presentation of \(r\), then_ \[N_{dr}=\left(\sum_{i=1}^{k}x_{i}\right)a+r(ua+d)=\left(\sum_{i=1}^{k}(ub_{i}+1)x_{i}\right)a+rd. \tag{8}\] Proof.: Recall that if \(X(ma+r)=(y_{1},y_{2},...,y_{k})\) then \[N_{dr}(m)=\left(u(ma+r)+\sum_{i=1}^{k}y_{i}\right)a+(ma+r)d.\] Write \(X((m+1)a+r)=(z_{1},...,z_{k})\). Then we see that \(z_{k}\geq y_{k}\), and \[\begin{split} N_{dr}(m+1)-N_{dr}(m)&=ua^{2}+ad+\left(\sum_{i=1}^{k}z_{i}-\sum_{i=1}^{k}y_{i}\right)a\\ &\geq\left(ua+d+\sum_{i=1}^{k-1}z_{i}-\sum_{i=1}^{k-1}y_{i}\right)a\\ (\text{by Proposition 2.7})&\geq\left(ua+d-\sum_{i=1}^{k-1}s_{i}+k-2\right)a\geq 0.\end{split}\] Thus \(N_{dr}(m)\) is increasing, so that \(N_{dr}=N_{dr}(0)\). This completes the proof. In Equation (8) with \(X(r)=(x_{1},x_{2},...,x_{k})\), we call \(w(r)=\sum_{i=1}^{k}(ub_{i}+1)x_{i}\) the _weight_ of \(r\). Now we have the following result. **Lemma 2.11**.: _Let \(M\in\mathbb{N}\). The sequences \(B\) and \(H\) satisfy conditions (6) and (7) respectively, and \(s_{i}\leq u+1\) for \(1\leq i\leq k-1\). Suppose \(X(r_{1})=(x^{\prime}_{1},...,x^{\prime}_{k})\in R(M)\) and \(X(r_{2})=(x_{1},...,x_{k})\in R(M)\). If \((x^{\prime}_{1},...,x^{\prime}_{k})\preceq(x_{1},...,x_{k})\), then \(w(r_{1})\leq w(r_{2})\)._ Proof.: We consider the following two cases. If \(x^{\prime}_{i}=x_{i}\) for all \(1\leq i\leq k\), then \(w(r_{1})=w(r_{2})\). 
If there is a certain \(1\leq t\leq k\) such that \(x^{\prime}_{t}<x_{t}\) and \(x^{\prime}_{t+1}=x_{t+1},...,x^{\prime}_{k}=x_{k}\), then \(x^{\prime}_{t}+1\leq x_{t}\) and \[\begin{split} w(r_{1})&=\sum_{i=1}^{k}(ub_{i}+1)x^{\prime}_{i}=\sum_{i=1}^{t}(ub_{i}+1)x^{\prime}_{i}+\sum_{j=t+1}^{k}(ub_{j}+1)x^{\prime}_{j}\\ &\leq(ub_{1}+1)s_{1}+\sum_{i=2}^{t-1}(ub_{i}+1)(s_{i}-1)+(ub_{t}+1)x^{\prime}_{t}+\sum_{j=t+1}^{k}(ub_{j}+1)x_{j}\\ &=\sum_{i=1}^{t-2}\left((ub_{i}+1)s_{i}-ub_{i+1}-1\right)+(ub_{t-1}+1)s_{t-1}+(ub_{t}+1)x^{\prime}_{t}+\sum_{j=t+1}^{k}(ub_{j}+1)x_{j}\\ &\leq\sum_{i=1}^{t-2}(s_{i}-u-1)+(ub_{t}+1)+(ub_{t}+1)x^{\prime}_{t}+\sum_{j=t+1}^{k}(ub_{j}+1)x_{j}\\ &\leq(ub_{t}+1)x_{t}+\sum_{j=t+1}^{k}(ub_{j}+1)x_{j}\leq w(r_{2}).\end{split}\] This completes the proof. Now we can obtain the main result of this section. **Theorem 2.12**.: _Let \(A=(a,Ha+dB)=(a,h_{1}a+b_{1}d,h_{2}a+b_{2}d,...,h_{k}a+b_{k}d)\), \(a,d,k\in\mathbb{P}\), \(k\geq 2\) and \(\gcd(a,d)=1\). The sequences \(B\) and \(H\) satisfy conditions (6) and (7) respectively._ _If \(ua+d+k-2\geq\sum_{i=1}^{k-1}s_{i}\) and \(s_{i}\leq u+1\) for \(1\leq i\leq k-1\), then we have_ \[F(A) =\left(\sum_{i=1}^{k}x_{i}\right)_{a-1}\cdot a+(a-1)(ua+d)-a,\] \[g(A) =\sum_{r=1}^{a-1}\left(\sum_{i=1}^{k}x_{i}\right)_{r}+\frac{(a-1)(ua+d-1)}{2},\] _where \(\left(\sum_{i=1}^{k}x_{i}\right)_{r}\) is the sum of the elements in the greedy presentation of \(r\). Obviously \(x_{k}=\left\lfloor\frac{r}{b_{k}}\right\rfloor\)._ Proof.: By Lemma 2.10, Lemma 2.11 and Equation (8), we have \[\max N_{dr}=N_{d(a-1)}=\left(\sum_{i=1}^{k}x_{i}\right)_{a-1}\cdot a+(a-1)(ua+d).\] Then from Lemma 2.1 we obtain the formula for the Frobenius number \(F(A)\). From Equation (8) and Lemma 2.1 again, we obtain the genus \(g(A)\). This completes the proof. **Example 2.13**.: _Let \(B=(1,3,7,29)\) and \(u=3\); we have \(s_{1}=2,s_{2}=2,s_{3}=4\) and \(H=(4,10,22,88)\). Therefore we have \(A=(a,4a+d,10a+3d,22a+7d,88a+29d)\). If \(a=21\) and \(d=2\), then the greedy presentation of \(a-1\) is \((0,2,2,0)\) and the Frobenius number is \(F(A)=1363\). Similarly, we can obtain the genus \(g(A)=694\). We can use the_ numericalsgps _GAP package ([7]) to verify the correctness of the above results._ From Theorem 2.12, we can easily obtain the following results. **Corollary 2.14**.: _Let \(A=\left(a,ba+d,b^{2}a+\frac{b^{2}-1}{b-1}d,...,b^{k}a+\frac{b^{k}-1}{b-1}d\right)\), \(a,d,b,k\in\mathbb{P}\), \(\gcd(a,d)=1\), \(b\geq 2\) and \(a\geq k-1\). We have_ \[F(A) =\left((b-1)a-b+d+\left(\sum_{i=1}^{k}x_{i}\right)_{a-1}\right)a-d,\] \[g(A) =\sum_{r=1}^{a-1}\left(\sum_{i=1}^{k}x_{i}\right)_{r}+\frac{(a-1)((b-1)a+d-1)}{2},\] _where \(\left(\sum_{i=1}^{k}x_{i}\right)_{r}\) is the sum of the elements in the greedy presentation of \(r\)._ Proof.: Apply Theorem 2.12 to the special case \(s_{1}=s_{2}=\cdots=s_{k-1}=b\) and \(u=b-1\). We give the following two examples; the proofs are left to the reader. **Example 2.15**.: _Let \(A=(a,2a+d,4a+3d)\), \(a,d\in\mathbb{P}\), \(\gcd(a,d)=1\) and \(a\geq 2\). We have the Frobenius number:_ \[F(A)=2a^{2}-\left(3-d+2\left\lfloor\frac{a-1}{3}\right\rfloor\right)a-d.\] _The genus is as follows:_ \[g(A)=\left\{\begin{array}{ll}\frac{(a-1)(2a+d-1)}{2}-\frac{a(a-3)}{3}&\text{if }\ a\equiv 0\mod 3;\\ \frac{(a-1)(2a+d-1)}{2}-\frac{(a-1)(a-2)}{3}&\text{if }\ a\equiv 1,2\mod 3.\end{array}\right.\] **Example 2.16**.: _Let \(A=(a,2a+d,4a+3d,8a+7d)\), \(a,d\in\mathbb{P}\), \(\gcd(a,d)=1\) and \(a\geq 7\). 
We have the Frobenius number:_ \[F(A)=a^{2}+\left(\left\lfloor\frac{a-1}{7}\right\rfloor+d+v(a\mod 7)\right)a-d,\] _where \((v(r))_{0\leq r\leq 6}=(0,-2,-1,0,-1,0,1).\) The genus is as follows:_ \[g(A)=\frac{(a-1)(a+d-1)}{2}+\frac{a^{2}+15a+c(a\mod 7)}{14},\] _where \((c(r))_{0\leq r\leq 6}=(0,-16,-20,-12,-20,-16,0).\)_ ## 3. A Generalization of Repunit Numerical Semigroup In this section we focus on the case \(b_{i}=\frac{b^{i}-1}{b-1}\), \(u=b-1\), \(k=n\), \(a=\frac{m(b^{n}-1)}{b-1}\), \(n,b\geq 2\) and \(m\geq 1\) of our model. That is, we consider the numerical semigroup \(\langle A\rangle\) generated by \[A=\Bigg{(}\frac{m(b^{n}-1)}{b-1},b\cdot\frac{m(b^{n}-1)}{b-1}+d,b^{2}\cdot\frac{m(b^{n}-1)}{b-1}+\frac{b^{2}-1}{b-1}d,...,b^{n}\cdot\frac{m(b^{n}-1)}{b-1}+\frac{b^{n}-1}{b-1}d\Bigg{)}.\] This is a generalization of the repunit numerical semigroup. By Proposition 2.7 and \(a-1=\frac{m(b^{n}-1)}{b-1}-1=(m-1)\cdot\frac{b^{n}-1}{b-1}+b\cdot\frac{b^{n-1}-1}{b-1}\), we have \[R(a-1)=\Big{\{}(x_{1},x_{2},...,x_{n})\mid 0\leq x_{n}\leq m-1;\ 0\leq x_{i}\leq b\ \ \text{for}\ \ 1\leq i\leq n-1;\] \[\text{if}\ \ x_{i}=b,\ \ \text{then}\ \ x_{j}=0\ \ \text{for}\ \ j\leq i-1\leq n-2\Big{\}}. \tag{9}\] One can check that \(\#R(a-1)\) is \(\frac{m(b^{n}-1)}{b-1}=a\). **Theorem 3.1**.: _For \(A=\Big{(}a,ba+d,b^{2}a+\frac{b^{2}-1}{b-1}d,...,b^{k}a+\frac{b^{k}-1}{b-1}d\Big{)}\), suppose \(k=n\), \(a=\frac{m(b^{n}-1)}{b-1}\), \(b,n\geq 2\), \(m\geq 1\) and \(\gcd(a,d)=1\). We have_ \[F(A)=(mb^{n}+d-1)\cdot\frac{m(b^{n}-1)}{b-1}-d,\] \[g(A)=\frac{m(b^{n}-1)(mb^{n}+d-1)}{2(b-1)}+\frac{mb^{n}(n-2)+1-d}{2}.\] Proof.: By Corollary 2.14 and \(a-1=(m-1)\cdot\frac{b^{n}-1}{b-1}+b\cdot\frac{b^{n-1}-1}{b-1}\), we have \[F(A) =\Bigg{(}(b-1)\cdot\frac{m(b^{n}-1)}{b-1}-b+d+(m-1+b)\Bigg{)}\cdot\frac{m(b^{n}-1)}{b-1}-d\] \[=(mb^{n}+d-1)\cdot\frac{m(b^{n}-1)}{b-1}-d.\] Now we consider the genus \(g(A)\). Based on the composition of \(R(a-1)\) in Equation (9), we can get \[\sum_{r=1}^{a-1}\Bigg{(}\sum_{i=1}^{n}x_{i}\Bigg{)}_{r}=\sum_{i=0}^{m-1}i\cdot(b^{n-1}+b^{n-2}+\cdots+1)+m\left((n-1)b^{n-2}\cdot\sum_{i=0}^{b-1}i\right)\] \[+m\left(\sum_{j=1}^{n-1}\left(b\cdot b^{n-1-j}+(n-1-j)\cdot b^{n-2-j}\cdot\sum_{i=0}^{b-1}i\right)\right)\] \[= \frac{m(m-1)(b^{n}-1)}{2(b-1)}+\frac{m(n-1)b^{n-1}(b-1)}{2}\] \[+\sum_{j=1}^{n-1}m\left(b^{n-j}+\frac{b^{n-1-j}(b-1)(n-j-1)}{2}\right)\] \[= \frac{m(m-1)(b^{n}-1)}{2(b-1)}+\frac{m(b^{n}-b)}{2(b-1)}+\frac{mb^{n}(n-1)}{2}.\] By Corollary 2.14, we have \[g(A) =\frac{m(m(b^{n}-1)-(b-1))}{2(b-1)}+\frac{mb^{n}(n-1)}{2}+\left(\frac{m(b^{n}-1)}{2(b-1)}-\frac{1}{2}\right)(m(b^{n}-1)+d-1)\] \[=\frac{m(b^{n}-1)(mb^{n}+d-1)}{2(b-1)}+\frac{mb^{n}(n-2)+1-d}{2}.\] This completes the proof. From Theorem 3.1, we know that the embedding dimension satisfies \(e(A)\leq n+1\). Let \(m=d=1\) in Theorem 3.1; then we have \(b^{n}\cdot\frac{m(b^{n}-1)}{b-1}+\frac{b^{n}-1}{b-1}d=(b^{n}+1)a\). In fact, we have \(e(A)=n\) when \(m=d=1\) (see [24]). This does not affect the fact that \(\langle A\rangle\) is the repunit numerical semigroup ([24]). Clearly, we have the following corollary. **Corollary 3.2** (Theorem 20, Theorem 25, [24]).: _Let \(n,b\geq 2\), \(m=d=1\) in Theorem 3.1. Then \(S(b,n)=\langle A\rangle\) is the repunit numerical semigroup. Furthermore, we have_ \[F(S(b,n)) =\frac{b^{n}-1}{b-1}b^{n}-1,\] \[g(S(b,n)) =\frac{b^{n}}{2}\left(\frac{b^{n}-1}{b-1}+n-2\right).\] If \(b=2\) in Corollary 3.2, then \(\langle A\rangle\) becomes the Mersenne numerical semigroup ([25]). 
**Corollary 3.3** (Theorem 16, Theorem 19, [25]).: _Let \(n\geq 2\), \(b=2\) in Corollary 3.2. Then \(S(n)=\langle A\rangle\) is the Mersenne numerical semigroup. Furthermore, we have_ \[F(S(n))=2^{2n}-2^{n}-1,\] \[g(S(n))=2^{n-1}(2^{n}+n-3).\] **Theorem 3.4**.: _For \(A=\left(a,ba+d,b^{2}a+\frac{b^{2}-1}{b-1}d,...,b^{k}a+\frac{b^{k}-1}{b-1}d\right)\), suppose \(k=n\), \(a=\frac{m(b^{n}-1)}{b-1}\), \(b,n\geq 2\), \(m\geq 1\) and \(\gcd(a,d)=1\). Then \(t(A)=n-1\). Furthermore_ \[PF(A)=\{F(A),F(A)-d,...,F(A)-(n-2)d\}.\] Proof.: Let \(X(r)=(x_{1},x_{2},...,x_{n})\) be the greedy presentation of \(r\), \(0\leq r\leq a-1\). By Equation (8), we have two representations of \(N_{dr}\): \[N_{dr}=\left(\sum_{i=1}^{n}b^{i}x_{i}\right)a+\sum_{i=1}^{n}\frac{b^{i}-1}{b-1}x_{i}d \tag{10}\] \[=\left(\sum_{i=1}^{n}b^{i}x_{i}\right)a+\sum_{i=1}^{n}\frac{b^{i}x_{i}}{b-1}d-\sum_{i=1}^{n}\frac{x_{i}}{b-1}d. \tag{11}\] By Equation (9), the set \(R(a-1)\) consists of all greedy presentations of \(0\leq r\leq a-1\). We recall that the order relation \(a\preceq_{\langle A\rangle}b\) holds if and only if \(b-a\in\langle A\rangle\). In the Apery set \(Ape(A,a)=\{N_{d0},N_{d1},...,N_{d(a-1)}\}\), by comparing the \(x_{i}\)'s in Equation (10), we see that if \(N_{dr}\) is maximal (under \(\preceq_{\langle A\rangle}\)) in \(Ape(A,a)\), then its corresponding \((x_{1},\ldots,x_{n})\) has to be one of the following greedy presentations: \((b,b-1,...,b-1,m-1),\;(0,b,b-1,...,b-1,m-1),\;(0,0,b,...,b-1,m-1),\;...,\;(0,0,0,...,b,m-1)\). For the above candidates, we always have \[\sum_{i=1}^{n}b^{i}x_{i}=mb^{n}.\] For each candidate \(X=(x_{1},...,x_{n})\in R(a-1)\) associated to \(N_{dr}\), to show that \(N_{dr}\) is maximal, it suffices to show that there is no candidate \(X^{\prime}=(x_{1}^{\prime},...,x_{n}^{\prime})\in R(a-1)\) associated to \(N_{dr^{\prime}}\) such that \(0\neq N_{dr^{\prime}}-N_{dr}\in\langle A\rangle\). Assume to the contrary the existence of such an \(X^{\prime}\). By Equation (11), we have \[N_{dr^{\prime}}-N_{dr}=\sum_{i=1}^{n}\frac{x_{i}}{b-1}d-\sum_{i=1}^{n}\frac{x_{i}^{\prime}}{b-1}d=td,\;\;t=1,2,...,n-2.\] Then \(td\in\langle A\rangle\), so that there exist nonnegative integers \(y_{0},y_{1},...,y_{n}\), not all zero, such that \[td=y_{0}a+y_{1}(ba+d)+\cdots+y_{n}\left(b^{n}a+\frac{b^{n}-1}{b-1}d\right).\] Then we have \[(n-2)d\geq\left(t-y_{1}-y_{2}\frac{b^{2}-1}{b-1}-\cdots-y_{n}\frac{b^{n}-1}{b-1}\right)d=(y_{0}+by_{1}+\cdots+b^{n}y_{n})a>0.\] By \(\gcd(a,d)=1\), \(a\) has to divide \(t-y_{1}-y_{2}\frac{b^{2}-1}{b-1}-\cdots-y_{n}\frac{b^{n}-1}{b-1}\leq t\leq n-2<a\). Thus \((y_{0}+by_{1}+\cdots+b^{n}y_{n})a=0\), a contradiction. Therefore we have \[\max_{\preceq_{\langle A\rangle}}Ape(A,a)=\{N_{dr}\mid r\in a-\{1,2,...,n-1\}\}.\] By Lemma 2.2, we have \[PF(A) =\{N_{dr}-a\mid r\in a-\{1,2,...,n-1\}\}\] \[=\{F(A),F(A)-d,...,F(A)-(n-2)d\}.\] This completes the proof. Similarly, we can obtain the following corollary. **Corollary 3.5** (Theorem 23, [24]).: _Let \(n,b\geq 2\), \(m=d=1\) in Theorem 3.4. Then \(S(b,n)=\langle A\rangle\) is the repunit numerical semigroup. We have \(t(S(b,n))=n-1\) and_ \[PF(S(b,n))=\{F(S(b,n)),F(S(b,n))-1,...,F(S(b,n))-(n-2)\}.\] **Corollary 3.6** (Theorem 18, [25]).: _Let \(n\geq 2\), \(b=2\) in Corollary 3.5. Then \(S(n)=\langle A\rangle\) is the Mersenne numerical semigroup. We have \(t(S(n))=n-1\) and_ \[PF(S(n))=\{F(S(n)),F(S(n))-1,...,F(S(n))-(n-2)\}.\] ## 4. A Generalization of Thabit Numerical Semigroup
In this section we focus on the case \(b_{i}=2^{i}-1\), \(u=1\), \(k=n+1\), \(a=3\cdot 2^{n}-1\), \(n\geq 1\). That is, we consider the numerical semigroup generated by \[A=\big{(}3\cdot 2^{n}-1,3\cdot 2^{n+1}-2+d,3\cdot 2^{n+2}-4+3d,...,3\cdot 2^{2n+1}+2^{n+1}(d-1)-d\big{)}.\] This generalizes the Thabit numerical semigroup. By Proposition 2.7, \(a-1=3\cdot 2^{n}-2=(2^{n+1}-1)+(2^{n}-1)\) and \(2^{n+1}-2=2(2^{n}-1)\), we have \[R(a-1)=\Big{\{}(x_{1},x_{2},...,x_{n+1})\mid(x_{n},x_{n+1})=(0,0),(1,0),(0,1);\] \[0\leq x_{i}\leq 2\ \ \text{for}\ \ 1\leq i\leq n-1;\ \ \text{if}\ \ x_{i}=2,\ \ \text{then}\ \ x_{j}=0\ \ \text{for}\ \ j\leq i-1\Big{\}}\] \[\biguplus\Big{\{}(0,0,...,0,x_{n},x_{n+1})\mid(x_{n},x_{n+1})=(2,0),(1,1)\Big{\}}. \tag{12}\] One can check that \(\#R(a-1)\) is \(3\cdot 2^{n}-1=a\). **Theorem 4.1**.: _Let \(A=\big{(}3\cdot 2^{n}-1,3\cdot 2^{n+1}-2+d,3\cdot 2^{n+2}-4+3d,...,3\cdot 2^{2n+1}-2^{n+1}+(2^{n+1}-1)d\big{)}\), \(n,d\in\mathbb{P}\), \(n\geq 1\) and \(\gcd(3\cdot 2^{n}-1,d)=1\). We have_ \[F(A)=9\cdot 2^{2n}+3(d-2)\cdot 2^{n}-2d+1,\] \[g(A)=9\cdot 2^{2n-1}+(3n-8)\cdot 2^{n-1}+(3\cdot 2^{n-1}-1)d+1.\] Proof.: By Corollary 2.14 and \(a-1=(2^{n+1}-1)+(2^{n}-1)\), we have \[F(A)=(a+d-2+2)a-d=9\cdot 2^{2n}+3(d-2)\cdot 2^{n}-2d+1.\] Now we consider the genus \(g(A)\). Based on the composition of \(R(a-1)\) in Equation (12) and Corollary 2.14, we have \[g(A) =\sum_{r=1}^{a-1}\left(\sum_{i=1}^{n+1}x_{i}\right)_{r}+\frac{(a-1)(a+d-1)}{2}\] \[=4+3\left(\sum_{i=0}^{n-1}i\binom{n-1}{i}+\sum_{j=0}^{n-1}\left(\sum_{i=1}^{n-j-1}(i+2)\binom{n-j-1}{i}\right)\right)\] \[\quad+2\left(\sum_{i=1}^{n-1}\binom{n-1}{i}+\sum_{j=1}^{n-1}\sum_{i=0}^{n-j-1}\binom{n-j-1}{i}+1\right)+\frac{(a-1)(a+d-1)}{2}\] \[=6+3\left(2^{n-2}(n-1)+2^{n-2}(n-3)+1+2^{n}-2\right)\] \[\quad+2\left(2^{n-1}-1+\sum_{j=1}^{n-1}2^{n-j-1}\right)+\frac{(a-1)(a+d-1)}{2}\] \[=6+3(2^{n-1}\cdot n-1)+2^{n}-2+2^{n}-2+(3\cdot 2^{n-1}-1)(3\cdot 2^{n}-2+d)\] \[=9\cdot 2^{2n-1}+(3n-8)\cdot 2^{n-1}+(3\cdot 2^{n-1}-1)d+1.\] This completes the proof. Letting \(d=1\) in Theorem 4.1 gives the following corollary. **Corollary 4.2** (Corollary 20, Theorem 29, [23]).: _Let \(d=1\) in Theorem 4.1, so that \(T(n)=\langle A\rangle\) is the Thabit numerical semigroup in [23]. Then we have_ \[F(T(n))=9\cdot 2^{2n}-3\cdot 2^{n}-1,\] \[g(T(n))=9\cdot 2^{2n-1}+(3n-5)2^{n-1}.\] Let \(X(r)=(x_{1},x_{2},...,x_{n+1})\) be the greedy presentation of \(r\), \(0\leq r\leq a-1\). By Equation (8), we have \[N_{dr}=\left(\sum_{i=1}^{n+1}2^{i}x_{i}\right)a+\sum_{i=1}^{n+1}(2^{i}-1)x_{i}d=\left(\sum_{i=1}^{n+1}2^{i}x_{i}\right)(a+d)-\sum_{i=1}^{n+1}x_{i}d. \tag{13}\] Under the order relation \(\preceq_{\langle A\rangle}\) and Equation (12), we know that the greedy presentations of the maximal elements in \(Ape(A,a)\) are contained among the following elements: \[(2,1,1,...,1,1,0),\ (0,2,1,...,1,1,0),\ (0,0,2,...,1,1,0),\ ...,\ (0,0,0,...,2,1,0)\] \[(2,1,1,...,1,0,1),\ (0,2,1,...,1,0,1),\ (0,0,2,...,1,0,1),\ ...,\ (0,0,0,...,2,0,1)\] \[(0,0,0,...,0,2,0),\ (0,0,0,...,0,1,1).\] For the greedy presentations in the first two lines above, we always have \(\sum_{i=1}^{n-1}2^{i}x_{i}=2^{n}\). By Equation (13), we have \[(2^{n}+2^{n+1})(a+d)-(t+1)d-((2^{n}+2^{n})(a+d)-td)\] \[= 2^{n}(a+d)-d=2^{n}a+(2^{n}-1)d\in\langle A\rangle,\] where \(2\leq t\leq n\). 
Therefore we have \[\max_{\preceq_{\langle A\rangle}}Ape(A,a)\subseteq\left\{(2^{n}+2^{n+1})(a+d)-\{2d,3d,...,(n+1)d\}\right\}\biguplus\{2^{n+1}(a+d)-(n+1)d\}.\] Similar to the proof of Theorem 3.4, we can obtain \(s\cdot d\notin\langle A\rangle\) for \(1\leq s\leq n-1\). Furthermore, we have \[\left\{(2^{n}+2^{n+1})(a+d)-\{2d,3d,...,(n+1)d\}\right\}\subseteq\max_{\preceq_{\langle A\rangle}}Ape(A,a).\] **Theorem 4.3**.: _Let \(A=\left(3\cdot 2^{n}-1,3\cdot 2^{n+1}-2+d,3\cdot 2^{n+2}-4+3d,...,3\cdot 2^{2n+1}-2^{n+1}+(2^{n+1}-1)d\right)\), \(n,d\in\mathbb{P}\), \(n\geq 2\) and \(\gcd(3\cdot 2^{n}-1,d)=1\). Then_ \[\max_{\preceq_{\langle A\rangle}}Ape(A,a)=\left\{(2^{n}+2^{n+1})(a+d)-\{2d,3d,...,(n+1)d\}\right\}\biguplus\{2^{n+1}(a+d)-(n+1)d\},\] _where \(a=3\cdot 2^{n}-1\)._ Proof.: We just need to prove \(\{2^{n+1}(a+d)-(n+1)d\}\subseteq\max_{\preceq_{\langle A\rangle}}Ape(A,a)\). The proof is similar to that of Theorem 24 in [23]; we omit it. By applying Theorem 4.3 and Lemma 2.2, we get the following result. **Theorem 4.4**.: _Let \(A=\left(3\cdot 2^{n}-1,3\cdot 2^{n+1}-2+d,3\cdot 2^{n+2}-4+3d,...,3\cdot 2^{2n+1}-2^{n+1}+(2^{n+1}-1)d\right)\), \(n,d\in\mathbb{P}\), \(n\geq 2\) and \(\gcd(3\cdot 2^{n}-1,d)=1\). Then \(t(A)=n+1\). Furthermore_ \[PF(A)=\{F(A),F(A)-d,...,F(A)-(n-1)d\}\biguplus\{6\cdot 2^{2n}+(2d-5)2^{n}-(n+1)d+1\}.\] **Corollary 4.5** (Corollary 25, [23]).: _Let \(d=1\) in Theorem 4.4. Then \(T(n)=\langle A\rangle\) is the Thabit numerical semigroup in [23]. We have \(t(T(n))=n+1\) and_ \[PF(T(n))=\{F(T(n)),F(T(n))-1,...,F(T(n))-(n-1)\}\biguplus\{6\cdot 2^{2n}-3\cdot 2^{n}-n\}.\] ## 5. Other Numerical Semigroups Our model \(A=\left(a,ba+d,b^{2}a+\frac{b^{2}-1}{b-1}d,...,b^{k}a+\frac{b^{k}-1}{b-1}d\right)\) in Corollary 2.14 can also be used to explain the following five numerical semigroups. The first four have been considered in the literature; the fifth one seems new, and we will give a proof. 1. If \(a=(2^{m}-1)\cdot 2^{n}-1\), \(b=2\), \(d=1\), \(k=n+m-1\), \(n\geq 1\) and \(2\leq m\leq 2^{n}\), then \[A=((2^{m}-1)\cdot 2^{n}-1,(2^{m}-1)\cdot 2^{n+1}-1,...,(2^{m}-1)\cdot 2^{2n+m-1}-1),\] and \(\langle A\rangle\) is a class of numerical semigroups \(S(m,n)\) in [8]. 2. Let \(m,n\in\mathbb{N}\) with \(m\geq 2\), and \[\delta=\left\{\begin{array}{ll}1&\mbox{if }\ n=0;\\ m&\mbox{if }\ n\neq 0,m\leq n;\\ m-1&\mbox{if }\ n\neq 0,m>n.\end{array}\right.\] If \(a=(2^{m}+1)\cdot 2^{n}-(2^{m}-1)\), \(b=2\), \(d=2^{m}-1\) and \(k=n+\delta\), then \(\langle A\rangle\) is a class of numerical semigroups \(GT(n,m)\) in [29]. 3. If \(a=b^{n+1}+\frac{b^{n}-1}{b-1}\), \(b\geq 2\), \(d=1\), \(k=n+1\), then \[A=\left(b^{n+1}+\frac{b^{n}-1}{b-1},b^{n+2}+\frac{b^{n+1}-1}{b-1},...,b^{2n+2}+\frac{b^{2n+1}-1}{b-1}\right),\] and \(\langle A\rangle\) is a class of numerical semigroups \(S(b,n)\) in [9]. 4. If \(a=(b+1)b^{n}-1\), \(b\geq 2\), \(d=b-1\), \(k=n+1\), then \[A=((b+1)b^{n}-1,(b+1)b^{n+1}-1,...,(b+1)b^{2n+1}-1),\] and \(\langle A\rangle\) is the Thabit numerical semigroup \(T_{b,1}(n)\) of the first kind base \(b\) in [28]. 5. If \(b=2\), \(a=m(2^{k}-1)+2^{k-1}-1\), \(m\geq 1\) and \(k\geq 3\), then \(a-1=m(2^{k}-1)+2(2^{k-2}-1)\). We solve this case as follows. By Corollary 2.14, we have \[F(A) =\left(m(2^{k}-1)+2^{k-1}-1+d-2+m+2\right)\cdot(m(2^{k}-1)+2^{k-1}-1)-d\] \[=\left((2m+1)2^{k-1}-1\right)^{2}+(d-m)\left((2m+1)2^{k-1}-1\right)-md-d.\] Now we consider the genus \(g(A)\). 
Let \[R_{1} =\left\{(x_{1},x_{2},...,x_{k})\mid 0\leq x_{k}\leq m-1;\quad 0\leq x_{i} \leq 2\ \ \mbox{for }\ 1\leq i\leq k-1,\right.\] \[\qquad\qquad\mbox{if }\ x_{i}=2,\ \ \mbox{then }\ x_{j}=0\ \ \mbox{for }\ j<i\leq k-1\right\}\] \[R_{2}=\Big{\{}(x_{1},x_{2},...,x_{k})\mid x_{k}=m;\ x_{k-1}=0;\ \ 0\leq x_{i}\leq 2\ \text{for}\ \ 1\leq i\leq k-2,\] \[\text{if}\ \ x_{i}=2,\ \ \text{then}\ \ x_{j}=0\ \ \text{for}\ \ j<i\leq k-2 \Big{\}}.\] Readers can easily verify that \(R(a-1)=R_{1}\biguplus R_{2}\). Similar to the proof of Theorem 3.1, we have \[g(A) =\sum_{r=1}^{a-1}\left(\sum_{i=1}^{k}x_{i}\right)_{r}+\frac{(a-1) (a+d-1)}{2}\] \[=\sum_{i=1}^{m-1}i(2^{k}-1)+m(2^{k-1}-1)+\sum_{r=1}^{a-1}\left( \sum_{i=1}^{k-1}x_{i}\right)_{r}+\frac{(a-1)(a+d-1)}{2}.\] Now \[\sum_{r=1}^{a-1}\left(\sum_{i=1}^{k-1}x_{i}\right)_{r}= m\left(\sum_{i=0}^{k-1}i\binom{k-1}{i}+\sum_{j=1}^{k-1}\left(\sum_{i=0} ^{k-j-1}(i+2)\binom{k-j-1}{i}\right)\right)\] \[\ \ +\left(\sum_{i=0}^{k-2}i\binom{k-2}{i}+\sum_{j=1}^{k-2}\left( \sum_{i=0}^{k-j-2}(i+2)\binom{k-j-2}{i}\right)\right)\] \[= m(2^{k-1}k-1)+2^{k-2}(k-1)-1.\] Therefore, we have \[g(A)= 2^{k-1}(2^{k}-1)m^{2}+\frac{1}{2}(d-1)(2^{k}-1)m+\left(2^{2k-1}+ k2^{k-1}-2^{k+1}\right)m\] \[\ +2^{2k-3}+(d+k)2^{k-2}-5\cdot 2^{k-2}-d+1.\] The above discussion is summarized as follows. **Theorem 5.1**.: _Let \(A=(a,2a+d,2^{2}a+3d,...,2^{k}a+(2^{k}-1)d)\), \(a=m(2^{k}-1)+2^{k-1}-1\), \(m\geq 1\), \(k\geq 3\) and \(\gcd(a,d)=1\). Then we have_ \[F(A)= \left((2m+1)2^{k-1}-1\right)^{2}+(d-m)\left((2m+1)2^{k-1}-1\right) -md-d,\] \[g(A)= 2^{k-1}(2^{k}-1)m^{2}+\frac{1}{2}(d-1)(2^{k}-1)m+(2^{2k-1}+k2^{k -1}-2^{k+1})m\] \[\ \ +2^{2k-3}+(d+k)2^{k-2}-5\cdot 2^{k-2}-d+1.\] ## 6. Concluding Remark This paper combines the contents of the previous version and a subsequent draft [14]. In the previous version, we mainly deal with the case \(A=(a,2a+d,2^{2}a+3d,...,2^{k}a+(2^{k}-1)d)\). We soon realized that our idea extends for the case \(A=\left(a,ba+d,b^{2}a+\frac{b^{2}-1}{b-1}d,...,b^{k}a+\frac{b^{k}-1}{b-1}d \right),\) which was discussed in [14]. In this version, we extends the idea further to solve the Frobenius problem of \(A=(a,Ha+dB)\) as Theorem 2.12. The same idea also applies to the case like \[A=(a,ha+d,h^{2}a+(h^{2}-h+1)d,...,h^{k}a+(h^{k}-h+1)d),\] where \(h\geq 2\), \(\gcd(a,d)=1\), but solving the corresponding Frobenius problem is still hard. We explain as follows. Firstly, the _One-Point Theorem_ also shows that \((1,h^{2}-h+1,h^{3}-h+1,...,h^{k}-h+1)\) is an orderly sequence, and we have \[\begin{split} O_{B}^{H}(M)&=\min\left\{\sum_{i=1}^{ k}h^{i}x_{i}\mid\sum_{i=1}^{k}(h^{i}-h+1)x_{i}=M,\ x_{i}\in\mathbb{N},1\leq i\leq k \right\}\\ &=\min\left\{M+(h-1)\cdot\sum_{i=1}^{k}x_{i}\mid\sum_{i=1}^{k}(h^ {i}-h+1)x_{i}=M,\ x_{i}\in\mathbb{N},1\leq i\leq k\right\}\\ &=M+(h-1)\cdot opt_{B}(M).\end{split}\] In the solution of \(O_{B}^{H}(M)\) mentioned above, by \(h(h^{i}-h+1)+(h^{2}-2h+1)=h^{i+1}-h+1\), we know that \((x_{1},x_{2},...,x_{k})\) satisfies the following conditions. 1. \(x_{k}=\left\lfloor\frac{M}{h^{k}-h+1}\right\rfloor\). 2. \(0\leq x_{1}\leq h^{2}-h\) and \(0\leq x_{i}\leq h\) for every \(2\leq i\leq k-1\). 3. if \(2\leq i\leq k-1\) and \(x_{i}=h\), then \(0\leq x_{1}\leq h^{2}-2h\) and \(x_{2}=\cdots=x_{i-1}=0\). Similarly, we can define the \(R(M)\), a colexicographic order on \(R(M)\) and the weight \(w(r)=\sum_{i=1}^{k}h^{i}x_{i}\) of \(r\). However, the \(w(r)\) no longer has the increasing property similar to Lemma 2.11. 
This makes the Frobenius problem quite difficult to solve. **Acknowledgements:** This work was partially supported by the National Natural Science Foundation of China [12071311].
2301.07698
Quark-hadron duality at work: lifetimes of bottom baryons
In the 1990s, very low experimental values for the lifetime ratio $\tau (\Lambda_b)/\tau(B_d)$ triggered a considerable amount of doubt in the applicability of the heavy quark expansion (HQE), which is based on the assumption of quark-hadron duality (QHD) for inclusive total decay rates. However, these low values turned out to be the result of purely experimental problems, and the current HFLAV average reads $\tau (\Lambda_b)/\tau(B_d) = 0.969(6)$. In this work, we present the Standard Model predictions for the $b$-baryon lifetimes within the framework of the HQE. In particular, we include for the first time the contribution of the Darwin term and we update the estimates for the matrix elements of the dimension-six four-quark operators. Within experimental and theoretical uncertainties, we find excellent agreement between the data and the HQE predictions, and thus no indication for any visible violation of QHD. Our numerical results can be summarised by the ratios $\tau (\Lambda_b)/\tau(B_d) = 0.955(14)$, $\tau (\Omega_b^-)/\tau(B_d) = 1.081(42)$, and $\tau (\Xi_b^0)/\tau (\Xi_b^-) = 0.929(28)$.
James Gratrex, Alexander Lenz, Blaženka Melić, Ivan Nišandžić, Maria Laura Piscopo, Aleksey V. Rusov
2023-01-18T18:42:33Z
http://arxiv.org/abs/2301.07698v1
# Quark-hadron duality at work: lifetimes of bottom baryons ###### Abstract In the 1990s, very low experimental values for the lifetime ratio \(\tau(\Lambda_{b})/\tau(B_{d})\) triggered a considerable amount of doubt in the applicability of the heavy quark expansion (HQE), which is based on the assumption of quark-hadron duality (QHD) for inclusive total decay rates. However, these low values turned out to be the result of purely experimental problems, and the current HFLAV average reads \(\tau(\Lambda_{b})/\tau(B_{d})=0.969(6)\). In this work, we present the Standard Model predictions for the \(b\)-baryon lifetimes within the framework of the HQE. In particular, we include for the first time the contribution of the Darwin term and we update the estimates for the matrix elements of the dimension-six four-quark operators. Within experimental and theoretical uncertainties, we find excellent agreement between the data and the HQE predictions, and thus no indication for any visible violation of QHD. Our numerical results can be summarised by the ratios \(\tau(\Lambda_{b})/\tau(B_{d})=0.955(14)\), \(\tau(\Omega_{b}^{-})/\tau(B_{d})=1.081(42)\), and \(\tau(\Xi_{b}^{0})/\tau(\Xi_{b}^{-})=0.929(28)\). ## 1 Introduction Lifetimes are among the most fundamental properties of particles. For weakly decaying hadrons containing a heavy \(b\)-quark, the lifetimes can be determined theoretically within the framework of the heavy quark expansion (HQE), whose origin goes back to the 1980s [1]; see [2] for a review. According to the HQE, the total decay rate of a bottom hadron can be described as an expansion in inverse powers of the heavy quark mass, i.e. in \(\Lambda_{\rm QCD}/m_{b}\), with \(\Lambda_{\rm QCD}\) being a typical non-perturbative hadronic scale much smaller than the mass of the \(b\)-quark. The leading term in this expansion is given by the decay of a free \(b\)-quark, and is completely independent of the decaying hadron. Taking only this contribution into account would therefore lead to the expectation of equal lifetimes for different \(b\)-hadrons. Corrections to this picture, and thus deviations of the lifetime ratios from one, are suppressed by at least two powers of the \(b\)-quark mass. Without knowing the size of higher-order QCD corrections, and with only rough estimates for the matrix elements arising in the HQE, the naive expectation in 1986 [1] was \[\frac{\tau(B^{+})}{\tau(B_{d})}\Bigg{|}^{\rm HQE\,1986}\approx 1.1\,, \hskip 28.452756pt\frac{\tau(B_{s})}{\tau(B_{d})}\Bigg{|}^{\rm HQE\,1986}\approx 1\,,\hskip 28.452756pt\frac{\tau(\Lambda_{b}^{0})}{\tau(B_{d})}\Bigg{|}^{\rm HQE\,1986}\approx 0.96\,. \tag{1}\] Surprisingly, early measurements of the \(\Lambda_{b}^{0}\) lifetime resulted in values which were considerably lower than the first theory expectations, as shown in figure 1.1 Footnote 1: The \(\Lambda_{b}^{0}\) baryon was discovered in 1991 in proton-antiproton collisions by the UA1 collaboration, based on data taken in 1988/89 [3]. In e.g. 
1996, the world average for the \(\Lambda_{b}\) lifetime read [5] \[\tau(\Lambda_{b})=(1.18\pm 0.07)\,{\rm ps}\,, \tag{2}\] which corresponded to a lifetime ratio of \[\frac{\tau(\Lambda_{b})}{\tau(B_{d})}=(0.75\pm 0.05)\,, \tag{3}\] when using the 1996 world average for the \(B_{d}\) lifetime [5]. As these experimental results were more than four standard deviations below the naive expectation in eq. (1), a considerable amount of interest was triggered in the theory community, with various efforts made to accommodate the result (3) within the HQE. In [7], the possibility of anomalously large matrix elements of dimension-six four-quark operators in the HQE was suggested, which was, however, in conflict with the results of [5; 6; 14]; while large contributions from dimension-seven four-quark operators were considered in [11]. Separately, the validity of the HQE itself was questioned e.g. in [15; 16; 17], with [15; 16] suggesting a violation of local quark-hadron duality (QHD), see e.g. [18] for a brief introduction to the concept of QHD. However, the proposal in [15; 16] was heavily criticised since it would have required huge \(1/m_{b}\) corrections, which cannot be reconciled with the operator product expansion approach, see e.g. [19]. The notion of QHD was introduced in 1975 by Poggio, Quinn, and Weinberg [20] to equate the hadronic process \(e^{+}+e^{-}\to\text{hadrons}\) with the quark-level process \(e^{+}+e^{-}\to\text{quarks}\). In the case of the total decay rate of a \(B\)-hadron, we can write \[\Gamma^{\text{tot}}(B)=\sum_{\text{all possible hadrons}}\Gamma(B\to\text{hadrons})\overset{\text{QHD}}{=}\sum_{\text{all possible quarks}}\Gamma(b\to\text{quarks})\,. \tag{4}\]

Figure 1: History of the lifetime ratio \(\tau(\Lambda_{b})/\tau(B_{d})\): experiment (lilac) vs. selected theory predictions: _Shifman, Voloshin_ (1986) [1], _Colangelo, De Fazio_ (1996) [5], _Di Pierro, Sachrajda, Michael_ (1999) [6], _Huang, Liu, Zhu_ (1999) [7], _Guberina, Melic, Stefancic_ (1999, 2000) [8; 9], _Franco et al_ (2002) [10], _Gabbiani, Onishchenko, Petrov_ (2004) [11], _Lenz_ (2015) [2], _Cheng_ (2018) [13], and this work.

In this paper, we present theory predictions for the lifetimes of baryons containing a heavy \(b\)-quark, as a continuation of our work on the study of lifetimes of \(D\) mesons [52; 53], \(B\) mesons [54], and charmed baryons [53].
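As a quick numerical aside, the size of the historical discrepancy can be reproduced directly from eqs. (1)-(3). The short Python sketch below propagates the uncertainties; the 1996 \(B_{d}\) world average of roughly \(1.57\pm 0.04\) ps is our assumption, chosen to be consistent with the quoted ratio.

```python
# Reproduce the 1996 lifetime ratio in eq. (3) and its tension with the 1986
# HQE expectation in eq. (1). tau(Lambda_b) is the world average in eq. (2);
# the B_d average (~1.57 +- 0.04 ps) is an assumed input.
tau_Lb, dtau_Lb = 1.18, 0.07   # ps, eq. (2)
tau_Bd, dtau_Bd = 1.57, 0.04   # ps, assumed 1996 world average

ratio = tau_Lb / tau_Bd
dratio = ratio * ((dtau_Lb / tau_Lb) ** 2 + (dtau_Bd / tau_Bd) ** 2) ** 0.5
print(f"tau(Lambda_b)/tau(B_d) = {ratio:.2f} +- {dratio:.2f}")   # 0.75 +- 0.05

# Tension with the naive 1986 HQE expectation of ~0.96:
print(f"tension: {(0.96 - ratio) / dratio:.1f} sigma")           # ~4.3 sigma
```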
Besides implementing for the first time the recently determined Wilson coefficient of the Darwin operator [55; 56; 57; 58], we include radiative QCD corrections to the Wilson coefficients, where available, and update all the relevant numerical inputs, including new estimates for the non-perturbative matrix elements. We present predictions for the decay rates of the \(\Lambda_{b}^{0}\), \(\Xi_{b}^{0}\), \(\Xi_{b}^{-}\), and \(\Omega_{b}^{-}\) baryons, and their lifetime ratios, as well as lifetime ratios of these baryons with the \(B_{d}^{0}\) meson. Within uncertainties, our results are in excellent agreement with the experimental data. Moreover, we give predictions for the inclusive \(b\)-baryon semileptonic branching fractions, although in this case there are no current experimental determinations. The remainder of the paper is structured as follows. In section 2 we present the theoretical framework. Specifically, in section 2.1, we briefly describe the structure of the HQE, followed by the discussion of short-distance contributions in section 2.2, and the analysis of non-perturbative matrix elements in section 2.3. Section 3 contains the description of the numerical analysis and our predictions for the \(b\)-baryon lifetimes, lifetime ratios, and semileptonic branching fractions. We conclude in section 4. Appendix A contains numerical values of the input parameters used in the analysis, while in appendix B, we provide the analytical expressions at LO-QCD for the dimension-six four-quark operator contributions.

## 2 Theoretical framework

### Effective Hamiltonian and HQE

Weak \(b\)-quark decays can be described by the effective Hamiltonian [59] \[\mathcal{H}_{\rm eff}=\mathcal{H}_{\rm eff}^{\rm NL}+\mathcal{H}_{\rm eff}^{\rm SL}+\mathcal{H}_{\rm eff}^{\rm rare}\,. \tag{1}\] In the above equation, \(\mathcal{H}_{\rm eff}^{\rm NL}\) parametrises the contribution of non-leptonic \(b\)-quark transitions \[\mathcal{H}_{\rm eff}^{\rm NL}=\frac{G_{F}}{\sqrt{2}}\sum_{q_{3}=d,s}\Bigg{[}\sum_{q_{1,2}=u,c}\!\!\!\lambda_{q_{1}q_{2}q_{3}}\Big{(}C_{1}(\mu_{1})\,Q_{1}^{q_{1}q_{2}q_{3}}+C_{2}(\mu_{1})\,Q_{2}^{q_{1}q_{2}q_{3}}\Big{)}-\lambda_{q_{3}}\!\!\!\sum_{j=3,\ldots,6,8}\!\!\!\!C_{j}(\mu_{1})\,Q_{j}^{q_{3}}\Bigg{]}+{\rm h.c.}\,, \tag{2}\] where \(\lambda_{q_{1}q_{2}q_{3}}=V_{q_{1}b}^{*}V_{q_{2}q_{3}}\) and \(\lambda_{q_{3}}=V_{tb}^{*}V_{tq_{3}}\) stand for the corresponding CKM factors, while \(C_{i}(\mu_{1})\) denote the Wilson coefficients of the \(\Delta B=1\) effective operators evaluated at the renormalisation scale \(\mu_{1}\sim m_{b}\). \(Q_{1,2}^{q_{1}q_{2}q_{3}}\) and \(Q_{j}^{q_{3}}\), with \(j=3,\ldots,6\), and \(Q_{8}^{q}\), respectively denote the current-current,2 penguin, and chromomagnetic operators, and are explicitly given by Footnote 2: Note that \(Q_{1}^{q_{1}q_{2}q_{3}}\) in our notation is the colour-singlet operator, following [52; 54] and contrary to e.g. [53; 59].
\[Q_{1}^{q_{1}q_{2}q_{3}}=\left(\bar{b}^{i}\,\Gamma_{\mu}\,q_{1}^{i}\right)\left(\bar{q}_{2}^{j}\,\Gamma^{\mu}\,q_{3}^{j}\right)\,,\qquad Q_{2}^{q_{1}q_{2}q_{3}}=\left(\bar{b}^{i}\,\Gamma_{\mu}\,q_{1}^{j}\right)\left(\bar{q}_{2}^{j}\,\Gamma^{\mu}\,q_{3}^{i}\right)\,, \tag{3}\] \[Q_{3}^{q_{3}} =(\bar{b}^{i}\,\Gamma_{\mu}\,q_{3}^{i})\sum_{q}(\bar{q}^{j}\,\Gamma^{\mu}\,q^{j})\,, Q_{4}^{q_{3}} =(\bar{b}^{i}\,\Gamma_{\mu}\,q_{3}^{j})\sum_{q}(\bar{q}^{j}\,\Gamma^{\mu}\,q^{i})\,,\] \[Q_{5}^{q_{3}} =(\bar{b}^{i}\,\Gamma_{\mu}\,q_{3}^{i})\sum_{q}(\bar{q}^{j}\,\Gamma_{+}^{\mu}\,q^{j})\,, Q_{6}^{q_{3}} =(\bar{b}^{i}\,\Gamma_{\mu}\,q_{3}^{j})\sum_{q}(\bar{q}^{j}\,\Gamma_{+}^{\mu}\,q^{i})\,, \tag{4}\] \[Q_{8}^{q_{3}} =\frac{g_{s}}{8\pi^{2}}m_{b}\left(\bar{b}^{i}\,\sigma^{\mu\nu}(1-\gamma_{5})t_{ij}^{a}\,q_{3}^{j}\right)G_{\mu\nu}^{a}\,, \tag{5}\] with \(\Gamma_{\mu}=\gamma_{\mu}(1-\gamma_{5})\), \(\Gamma_{+}^{\mu}=\gamma^{\mu}(1+\gamma_{5})\), and \(\sigma_{\mu\nu}=(i/2)[\gamma_{\mu},\gamma_{\nu}]\), while \(i,j=1,2,3\), are SU(3)\({}_{c}\) indices for the quark fields. Moreover, in eq. (5), \(g_{s}\) denotes the strong coupling, and \(G_{\mu\nu}=G_{\mu\nu}^{a}t^{a}\) for \(a=1,\ldots,8\) is the gluon field strength tensor. A comparison of the values of the Wilson coefficients for different choices of the scale \(\mu_{1}\) at LO- and NLO-QCD [59] is shown in table 8 in appendix A. The second term in eq. (1) describes the contribution to the effective Hamiltonian due to semileptonic \(b\)-quark decays, i.e. \[{\cal H}_{\rm eff}^{\rm SL}=\frac{G_{F}}{\sqrt{2}}\sum_{q_{1}=u,c}\,\sum_{\ell=e,\mu,\tau}V_{q_{1}b}^{*}\,Q^{q_{1}\ell}+{\rm h.c.}\,, \tag{6}\] with the semileptonic operator \[Q^{q_{1}\ell}=\left(\bar{b}^{i}\,\Gamma_{\mu}\,q_{1}^{i}\right)(\bar{\nu}_{\ell}\,\Gamma^{\mu}\,\ell). \tag{7}\] Finally, \({\cal H}_{\rm eff}^{\rm rare}\) in eq. (1) encodes the contribution due to suppressed \(b\)-quark transitions, which are only relevant for the study of rare decays such as \(\Lambda_{b}\to\Lambda\gamma\) or \(\Lambda_{b}\to\Lambda\ell^{+}\ell^{-}\). These modes have very small branching fractions, below the current theoretical sensitivity for lifetimes, and so the effect of \({\cal H}_{\rm eff}^{\rm rare}\) is neglected in this work. The total decay width of a \(b\)-baryon \({\cal B}\), with mass \(M_{\cal B}\) and four-momentum \(p_{\cal B}\), reads \[\Gamma({\cal B})=\frac{1}{2M_{\cal B}}\sum_{X}\int_{\rm PS}(2\pi)^{4}\delta^{(4)}(p_{\cal B}-p_{X})\,\,|\langle X(p_{X})|{\cal H}_{\rm eff}|{\cal B}(p_{\cal B})\rangle|^{2}, \tag{8}\] where a summation over all possible final states \(X\) into which the \(b\)-baryon can decay has been performed, with PS denoting the corresponding phase space integration. Using the optical theorem, \(\Gamma({\cal B})\) can be related to the imaginary part of the forward scattering matrix element of the time-ordered product of the double insertion of the effective Hamiltonian, i.e. \[\Gamma({\cal B})=\frac{1}{2M_{\cal B}}{\rm Im}\langle{\cal B}|{\cal T}|{\cal B}\rangle\,, \tag{9}\] with the transition operator defined as \[{\cal T}=i\int d^{4}x\,T\left\{{\cal H}_{\rm eff}(x)\,,{\cal H}_{\rm eff}(0)\right\}\,. \tag{10}\] The non-local operator in eq. (10) can then be evaluated by exploiting the fact that the \(b\)-quark is heavy, i.e. \(m_{b}\gg\Lambda_{\rm QCD}\), where \(\Lambda_{\rm QCD}\) defines a typical non-perturbative hadronic scale.
In the framework of the HQE [60; 61; 62; 63; 64; 65; 66; 67], the \(b\)-quark momentum is decomposed as \[p_{b}^{\mu}=m_{b}v^{\mu}+k^{\mu}\,, \tag{11}\] where \(v=p_{\cal B}/M_{\cal B}\) is the four-velocity of the \(b\)-baryon. The residual momentum \(k\) in (11) accounts for non-perturbative interactions of the \(b\)-quark with the light degrees of freedom, i.e. soft gluons and quarks, inside the hadron, so \(k\sim\Lambda_{\rm QCD}\). Moreover, the heavy \(b\)-quark field is parametrised as \[b(x)=e^{-im_{b}v\cdot x}b_{v}(x)\,, \tag{12}\] by factoring out the large component of its momentum and introducing a rescaled field \(b_{v}(x)\), which contains only low oscillation frequencies of order \(k\). This field satisfies \[iD_{\mu}b(x)=e^{-im_{b}v\cdot x}(m_{b}v_{\mu}+iD_{\mu})b_{v}(x)\,, \tag{13}\] so that the action of the covariant derivative \(D_{\mu}=\partial_{\mu}-ig_{s}A_{\mu}^{a}\,t^{a}\) also contains a large contribution proportional to the heavy quark mass alongside a residual term of order \(\Lambda_{\rm QCD}\). The rescaled field \(b_{v}(x)\) is related to the heavy quark effective theory (HQET) field \(h_{v}(x)\), see e.g. [68], by \[b_{v}(x)=h_{v}(x)+\frac{i\not{D}_{\perp}}{2m_{b}}h_{v}(x)+{\cal O}\left(\frac{ 1}{m_{b}^{2}}\right)\,, \tag{14}\] with \(D_{\perp}^{\mu}=D^{\mu}-(v\cdot D)\,v^{\mu}\). Finally, taking into account eqs. (11)-(13), the total decay width in eq. (9) can be systematically expanded in inverse powers of the heavy \(b\)-quark mass, leading to the HQE series, which schematically reads \[\Gamma({\cal B})=\Gamma_{3}+\Gamma_{5}\frac{\langle{\cal O}_{5}\rangle}{m_{b} ^{2}}+\Gamma_{6}\frac{\langle{\cal O}_{6}\rangle}{m_{b}^{3}}+...+16\pi^{2} \left(\tilde{\Gamma}_{6}\frac{\langle\tilde{\cal O}_{6}\rangle}{m_{b}^{3}}+ \tilde{\Gamma}_{7}\frac{\langle\tilde{\cal O}_{7}\rangle}{m_{b}^{4}}+... \right). \tag{15}\] Here, the \(\Gamma_{d}\) are short-distance functions, which can be computed perturbatively in QCD, i.e. \[\Gamma_{d}=\Gamma_{d}^{(0)}+\frac{\alpha_{s}}{4\pi}\Gamma_{d}^{(1)}+\left( \frac{\alpha_{s}}{4\pi}\right)^{2}\Gamma_{d}^{(2)}+\ldots\,, \tag{16}\] while \(\langle{\cal O}_{d}\rangle\equiv\langle{\cal B}|{\cal O}_{d}|{\cal B}\rangle/( 2M_{\cal B})\) denote the matrix elements of the corresponding \(\Delta B=0\) operators \({\cal O}_{d}\) of dimension \(d\) in the effective theory. Note that, starting from order \(1/m_{b}^{3}\), both two- and four-quark operator contributions appear. The latter originate from loop-enhanced diagrams, as reflected by the explicit factor of \(16\pi^{2}\) in eq. (15), and, to avoid confusion in the notation, we use a tilde to label them. ### Short-distance contributions In this section, we give a brief summary of the short-distance contributions, cf. eqs. (15, 16), included in our analysis. For more details we refer to the recent studies [52; 53; 54].3 Footnote 3: There are some differences in the structure of the HQE for charmed hadrons [52; 53] as opposed to the \(b\) sector; see also [69; 70]. The coefficients \(\Gamma_{d},\tilde{\Gamma}_{d}\) are analytic functions of the masses of the internal fermions running in the loops. In our analysis, we only include the contribution of the charm-quark and tau-lepton masses, expressed in terms of the two dimensionless parameters \[x_{c}=\frac{m_{c}^{2}}{m_{b}^{2}}\,,\qquad x_{\tau}=\frac{m_{\tau}^{2}}{m_{b}^ {2}}\,. 
\tag{17}\] As \(m_{s}^{2}/m_{b}^{2}\approx m_{\mu}^{2}/m_{b}^{2}\sim 0.05\%\), the effect of non-vanishing strange-quark and muon masses is far below the current theoretical accuracy, and hence can be safely neglected.4 The leading contribution to the \(b\)-baryon total width, \(\Gamma_{3}\) in eq. (15), is obtained by computing the free \(b\)-quark decay, and can be compactly expressed as Footnote 4: However, we do include strange quark mass corrections in the non-perturbative input, where these effects are much more pronounced, in order to account for \(\mathrm{SU}(3)_{F}\)-breaking. \[\Gamma_{3}=\Gamma_{0}\,c_{3}=\Gamma_{0}\left(c_{3}^{(0)}+\frac{\alpha_{s}}{4 \pi}c_{3}^{(1)}+\ldots\right)\,, \tag{18}\] where \[\Gamma_{0}=\frac{G_{F}^{2}\,m_{b}^{5}}{192\pi^{3}}|V_{cb}|^{2}\,, \tag{19}\] and \[c_{3}=\mathcal{C}_{3,\mathrm{SL}}+3\,C_{1}^{2}\,\mathcal{C}_{3,11}+2\,C_{1}C _{2}\,\mathcal{C}_{3,12}+3\,C_{2}^{2}\,\mathcal{C}_{3,22}+C_{i}\,C_{j}\, \mathcal{C}_{3,ij}^{P}\,. \tag{20}\] Above, a summation over all possible non-leptonic and semileptonic modes of the \(b\)-quark is implicitly assumed, and we have denoted by \(\mathcal{C}_{3,ij}^{P}\), with \(i=1,2\), and \(j=3,\ldots,6,8\), the contribution due to the mixed insertion of the current-current and penguin or chromomagnetic operators. For semileptonic modes, \(\alpha_{s}^{3}\)-corrections have been computed [71; 72]; however, as the accuracy for non-leptonic modes reaches only NLO-QCD, we perform our analysis consistently at this order and do not include the new results for \(\mathcal{C}_{3,\mathrm{SL}}\). Moreover, following a common counting adopted in the literature [73; 74], the contribution of the penguin and chromomagnetic operators is treated as a next-to-leading order effect, i.e. \(\mathcal{C}_{3,ij}^{P}=0\) at LO-QCD, owing to the small size of the corresponding Wilson coefficients. The result for \(c_{3}\) at LO can be found e.g. in [53; 56]. As for the NLO corrections, the analytical expressions for \(\mathcal{C}_{3,11}\), \(\mathcal{C}_{3,22}\), and \(\mathcal{C}_{3,\mathrm{SL}}\) can be extracted from [75], where the computation has been performed for three different final state masses, while those for \(\mathcal{C}_{3,12}\) are derived from the results presented in [76] in the case of the \(b\to c\bar{c}s\) transition, and in [77] for the remaining modes. Finally the results for \(\mathcal{C}_{3,ij}^{P}\) are taken from [76]. Power corrections due to two-quark operators are obtained by including the effect of soft gluons as well as the \(1/m_{b}\)-expansion of lower-dimensional matrix elements. At order \(1/m_{b}^{2}\), the corresponding contribution can be schematically written as \[\Gamma_{5}\frac{\langle\mathcal{O}_{5}\rangle}{m_{b}^{2}}=\Gamma_{0}\left[c_{ \pi}\frac{\langle\mathcal{O}_{\pi}\rangle}{m_{b}^{2}}+c_{G}\,\frac{\langle \mathcal{O}_{G}\rangle}{m_{b}^{2}}\right]\,, \tag{21}\] where the matrix elements of the kinetic and chromomagnetic operators5, given explicitly in eqs. (56, 57), are discussed in section 2.3. In our analysis, again for consistency, we include the coefficients \(c_{\pi}\) and \(c_{G}\) only at LO-QCD, since \(\alpha_{s}\)-corrections have so far been determined only for the semileptonic channels [78]. 
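To give a feeling for the scales involved, the following sketch evaluates the prefactor \(\Gamma_{0}\) of eq. (19) numerically. The value \(m_{b}^{\rm kin}=4.57\) GeV is the kinetic-scheme mass quoted later in the text, while \(|V_{cb}|\approx 0.0422\) is an assumed input (the actual values used are collected in appendix A, which is not reproduced here).

```python
import math

# Order-of-magnitude evaluation of Gamma_0 = G_F^2 m_b^5 |V_cb|^2 / (192 pi^3),
# cf. eq. (19). All inputs are rounded; |V_cb| is an assumed value.
GF   = 1.1663787e-5   # GeV^-2
m_b  = 4.57           # GeV, kinetic scheme (see section 2.3)
Vcb  = 0.0422         # assumed
hbar = 6.582e-13      # GeV * ps

Gamma0 = GF**2 * m_b**5 * Vcb**2 / (192 * math.pi**3) / hbar   # in ps^-1
print(f"Gamma_0 ~ {Gamma0:.3f} ps^-1")                         # ~0.12 ps^-1

# With c_3 ~ 5.97 - 0.44 ~ 5.5 at NLO (cf. the decompositions in section 3),
# the free-quark estimate tau ~ 1/(Gamma_0 c_3) ~ 1.5 ps already sets the
# right scale for the observed b-hadron lifetimes.
print(f"tau ~ {1 / (Gamma0 * 5.53):.2f} ps")
```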
The coefficient of the kinetic operator is related to that of dimension-three by a purely numerical factor, \(c_{\pi}=-c_{3}^{(0)}/2\), while the coefficient \(c_{G}\) can be decomposed as \[c_{G}=\mathcal{C}_{G,\text{SL}}+3\,C_{1}^{2}\,\mathcal{C}_{G,11}+2\,C_{1}C_{2}\,\mathcal{C}_{G,12}+3\,C_{2}^{2}\,\mathcal{C}_{G,22}\,, \tag{22}\] where again a summation over all possible \(b\)-quark modes is implied. The expressions for the non-leptonic channels \(\mathcal{C}_{G,ij}\), originally computed in [64; 65; 79], can be found e.g. in [53; 56], while the semileptonic coefficient \(\mathcal{C}_{G,SL}\) is taken from the general result for two different final state masses presented e.g. in the appendix of [80], and first determined in [81; 82]. At order \(1/m_{b}^{3}\), both two- and four-quark operators contribute, cf. eq. (16). For the former, we can compactly write6 Footnote 6: Formally, at dimension-six the basis also includes the spin-orbit operator \(\mathcal{O}_{\text{LS}}\). However, by adopting definitions in terms of full covariant derivatives rather than transversal ones, the contribution of \(\mathcal{O}_{\text{LS}}\) to the total decay width vanishes. For more detail, see e.g. [83]. \[\Gamma_{6}\frac{\langle\mathcal{O}_{6}\rangle}{m_{b}^{3}}=\Gamma_{0}\,c_{\rho_{D}}\frac{\langle\mathcal{O}_{D}\rangle}{m_{b}^{3}}\,, \tag{23}\] where the matrix element of the Darwin operator is defined in eq. (58), while the corresponding short-distance coefficient can be decomposed as \[c_{\rho_{D}}=\mathcal{C}_{\rho_{D},\text{SL}}+3\,C_{1}^{2}\,\mathcal{C}_{\rho_{D},11}+2\,C_{1}C_{2}\,\mathcal{C}_{\rho_{D},12}+3\,C_{2}^{2}\,\mathcal{C}_{\rho_{D},22}\,, \tag{24}\] summing again over all \(b\)-quark decay modes. As NLO-QCD corrections are only available for semileptonic decays [84; 85; 86], the accuracy in our analysis again extends to only LO-QCD, identically to the dimension-five contributions. The complete expressions of \(\mathcal{C}_{\rho_{D},ij}\) for all non-leptonic channels have been obtained recently in [55; 56; 57; 58], while the coefficient \(\mathcal{C}_{\rho_{D},SL}\), first computed in [87], can be read off the general results for the case of two different final state masses presented e.g. in [86; 88]. It is worth emphasising that the coefficient of the Darwin operator is one order of magnitude larger than the corresponding ones at dimension-five. However, as shown in detail in [56], this in fact follows from an accidental suppression of the dimension-five coefficients, rather than an enhancement of the Darwin term. Therefore, the contribution of the Darwin operator, neglected in previous phenomenological studies, turns out to be an important ingredient in the theoretical prediction of the \(b\)-baryon lifetimes, see section 3, and of \(B\) meson lifetimes [54]. The short-distance coefficients due to four-quark operators are obtained by computing, at LO-QCD, the discontinuity of the one-loop diagrams shown in figure 2, commonly denoted in the literature as destructive Pauli interference (\(\text{int}^{-}\)), weak-exchange (exc), and constructive Pauli interference (\(\text{int}^{+}\)), respectively.7 Taking into account the different topologies, the dimension-six contribution from four-quark operators can be compactly written as \[16\pi^{2}\,\tilde{\Gamma}_{6}\frac{\langle\tilde{\mathcal{O}}_{6}\rangle}{m_{b}^{3}}=\sum_{T=\text{int}^{-},\,\text{exc},\,\text{int}^{+}}\sum_{q}\tilde{\Gamma}^{q}_{6,T}(x_{f_{1}},x_{f_{2}})\,. \tag{25}\]
For non-leptonic modes, the coefficients \(c^{i}_{6,\text{int}^{+}}(x_{q_{1}},x_{q_{2}})\) can be obtained, up to a Fierz transformation, from the corresponding ones for \(c^{i}_{6,\text{int}^{-}}(x_{q_{1}},x_{q_{2}})\) by replacing \(C_{1}\leftrightarrow C_{2}\), while for semileptonic modes, the NLO-corrections to the coefficients \(c^{i}_{6,\text{int}^{+}}(x_{\ell},x_{\nu_{\ell}})\) have been determined in [92]. Because of the different terminology used to denote the same loop diagrams in baryons and mesons, in appendix B we present the LO-QCD expressions for the functions \(\tilde{\Gamma}^{q}_{6,T}(x_{f_{1}},x_{f_{2}})\) given in eq. (25). Considering all possible contractions in the time-ordered product in eq. (10), the complete dimension-six four-quark operator contributions to \(\Gamma(\mathcal{B})\), included in our analysis, respectively read \[16\pi^{2}\,\tilde{\Gamma}_{6}\frac{\langle\tilde{\mathcal{O}}_{6}\rangle_{\Lambda_{b}^{0}}}{m_{b}^{3}} =\left[\tilde{\Gamma}^{u}_{6,\text{exc}}(x_{c},x_{d})+\tilde{\Gamma}^{u}_{6,\text{exc}}(x_{c},x_{s})+\tilde{\Gamma}^{u}_{6,\text{exc}}(x_{u},x_{d})+\tilde{\Gamma}^{u}_{6,\text{exc}}(x_{u},x_{s})\right.\] \[+\tilde{\Gamma}^{d}_{6,\text{int}^{-}}(x_{c},x_{u})+\tilde{\Gamma}^{d}_{6,\text{int}^{-}}(x_{c},x_{c})+\tilde{\Gamma}^{d}_{6,\text{int}^{-}}(x_{u},x_{u})+\tilde{\Gamma}^{d}_{6,\text{int}^{-}}(x_{u},x_{c})\] \[+\tilde{\Gamma}^{u}_{6,\text{int}^{+}}(x_{u},x_{d})+\tilde{\Gamma}^{u}_{6,\text{int}^{+}}(x_{c},x_{s})+\tilde{\Gamma}^{u}_{6,\text{int}^{+}}(x_{u},x_{s})+\tilde{\Gamma}^{u}_{6,\text{int}^{+}}(x_{c},x_{d})\] \[+\tilde{\Gamma}^{u}_{6,\text{int}^{+}}(x_{\tau},x_{\nu_{\tau}})+\tilde{\Gamma}^{u}_{6,\text{int}^{+}}(x_{\mu},x_{\nu_{\mu}})+\tilde{\Gamma}^{u}_{6,\text{int}^{+}}(x_{e},x_{\nu_{e}})\Big{]}_{\Lambda_{b}^{0}}\,, \tag{27}\] \[16\pi^{2}\,\tilde{\Gamma}_{6}\frac{\langle\tilde{\mathcal{O}}_{6}\rangle_{\Xi_{b}^{0}}}{m_{b}^{3}} =\left[\tilde{\Gamma}^{u}_{6,\text{exc}}(x_{c},x_{d})+\tilde{\Gamma}^{u}_{6,\text{exc}}(x_{c},x_{s})+\tilde{\Gamma}^{u}_{6,\text{exc}}(x_{u},x_{d})+\tilde{\Gamma}^{u}_{6,\text{exc}}(x_{u},x_{s})\right.\] \[+\tilde{\Gamma}^{s}_{6,\text{int}^{-}}(x_{c},x_{c})+\tilde{\Gamma}^{s}_{6,\text{int}^{-}}(x_{c},x_{u})+\tilde{\Gamma}^{s}_{6,\text{int}^{-}}(x_{u},x_{c})+\tilde{\Gamma}^{s}_{6,\text{int}^{-}}(x_{u},x_{u})\] \[+\tilde{\Gamma}^{u}_{6,\text{int}^{+}}(x_{u},x_{d})+\tilde{\Gamma}^{u}_{6,\text{int}^{+}}(x_{c},x_{s})+\tilde{\Gamma}^{u}_{6,\text{int}^{+}}(x_{u},x_{s})+\tilde{\Gamma}^{u}_{6,\text{int}^{+}}(x_{c},x_{d})\]
\[+\tilde{\Gamma}^{u}_{6,\text{int}^{+}}(x_{\tau},x_{\nu_{\tau}})+\tilde{\Gamma}^{u}_{6,\text{int}^{+}}(x_{\mu},x_{\nu_{\mu}})+\tilde{\Gamma}^{u}_{6,\text{int}^{+}}(x_{e},x_{\nu_{e}})\Big{]}_{\Xi_{b}^{0}}\,, \tag{28}\] \[16\pi^{2}\,\tilde{\Gamma}_{6}\frac{\langle\tilde{\mathcal{O}}_{6}\rangle_{\Xi_{b}^{-}}}{m_{b}^{3}} =\left[\tilde{\Gamma}^{d}_{6,\text{int}^{-}}(x_{c},x_{u})+\tilde{\Gamma}^{s}_{6,\text{int}^{-}}(x_{c},x_{c})+\tilde{\Gamma}^{s}_{6,\text{int}^{-}}(x_{c},x_{u})+\tilde{\Gamma}^{d}_{6,\text{int}^{-}}(x_{c},x_{c})\right.\] \[+\tilde{\Gamma}^{d}_{6,\text{int}^{-}}(x_{u},x_{u})+\tilde{\Gamma}^{s}_{6,\text{int}^{-}}(x_{u},x_{c})+\tilde{\Gamma}^{s}_{6,\text{int}^{-}}(x_{u},x_{u})+\tilde{\Gamma}^{d}_{6,\text{int}^{-}}(x_{u},x_{c})\Big{]}_{\Xi_{b}^{-}}\,, \tag{29}\] \[16\pi^{2}\,\tilde{\Gamma}_{6}\frac{\langle\tilde{\mathcal{O}}_{6}\rangle_{\Omega_{b}^{-}}}{m_{b}^{3}} =\left[\tilde{\Gamma}^{s}_{6,\text{int}^{-}}(x_{c},x_{c})+\tilde{\Gamma}^{s}_{6,\text{int}^{-}}(x_{c},x_{u})+\tilde{\Gamma}^{s}_{6,\text{int}^{-}}(x_{u},x_{c})+\tilde{\Gamma}^{s}_{6,\text{int}^{-}}(x_{u},x_{u})\right]_{\Omega_{b}^{-}}\,, \tag{30}\] where we have now explicitly indicated the specific baryon appearing in the corresponding matrix elements. We stress that the results in eqs. (27)-(30) do not take into account contributions in which the light quark in the four-quark operators differs from the spectator quarks in the \(b\)-baryon, the so-called 'eye contractions'. These have been recently computed for mesons in [93], but they are still unknown for baryons. However, as they constitute subleading corrections to the dimension-six contribution, we expect their effect to lie beyond the current accuracy of our study. Moreover, in our numerical analysis we keep non-vanishing only the masses of the charm quark and of the tau-lepton, i.e. we set \(x_{u,d,s}=x_{\mu,e,\nu_{\ell}}=0\), cf. eq. (17). Note that, in eqs. (27)-(30), the non-leptonic contributions have been ordered by topology, and within each topology we have listed the terms in order of their CKM hierarchy. In particular, the leading contributions to the \(\Lambda_{b}^{0}\) and \(\Xi_{b}^{0}\) decay widths arise from the \(\mathrm{int}^{-}\) and exc topologies. As for the semileptonic contributions, they can only arise in the \(\mathrm{int}^{+}\) topology, and, since we do not include the eye contractions, they only enter the decay width of the \(\Lambda_{b}^{0}\) and \(\Xi_{b}^{0}\) baryons, see the last line of eqs. (27), (28), and not that of the \(\Xi_{b}^{-}\) or \(\Omega_{b}^{-}\). However, the semileptonic contributions have a negligible numerical effect on the total widths, and in particular do not generate any significant splitting between the semileptonic branching fractions of \(b\)-baryons, as expected because of the strong CKM suppression \(|V_{ub}|^{2}\ll|V_{cb}|^{2}\). Thus, within our current sensitivity, any difference between the semileptonic branching fractions of \(b\)-baryons can arise only from \(\mathrm{SU}(3)_{F}\) effects in the matrix elements of the two-quark operators. In section 3, we present our predictions for the lifetime ratios of the \(b\)-baryons with the \(B_{d}\) meson.
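Purely as a bookkeeping aid, the sketch below encodes the topology content of eqs. (27)-(30) as a small data structure (the dictionary keys and labels are ours, not the paper's notation) and confirms, for instance, that only the \(\text{int}^{-}\) topology with an \(s\)-quark spectator enters the \(\Omega_{b}^{-}\) width.

```python
# Topology content of the dimension-six contributions in eqs. (27)-(30):
# tuples are (topology, spectator quark, internal line pair). Labels are
# illustrative only; orderings follow the equations above only loosely.
EXC  = [("exc", "u", p) for p in [("c", "d"), ("c", "s"), ("u", "d"), ("u", "s")]]
INTP = [("int+", "u", p) for p in [("u", "d"), ("c", "s"), ("u", "s"), ("c", "d")]]
SL   = [("int+", "u", (l, "nu_" + l)) for l in ("tau", "mu", "e")]

def intm(q):
    # destructive Pauli interference with spectator q
    return [("int-", q, p) for p in [("c", "c"), ("c", "u"), ("u", "c"), ("u", "u")]]

contributions = {
    "Lambda_b0": EXC + intm("d") + INTP + SL,   # eq. (27)
    "Xi_b0":     EXC + intm("s") + INTP + SL,   # eq. (28)
    "Xi_b-":     intm("d") + intm("s"),         # eq. (29)
    "Omega_b-":  intm("s"),                     # eq. (30)
}
for baryon, terms in contributions.items():
    print(f"{baryon}: {len(terms)} terms, topologies {sorted({t for t, _, _ in terms})}")
```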
For completeness, in order to facilitate the comparison, the corresponding leading dimension-six four-quark contribution for the latter is [54] \[16\pi^{2}\,\tilde{\Gamma}_{6}\frac{\langle\tilde{\mathcal{O}}_{6}\rangle_{B_{ d}}}{m_{b}^{3}}=\left[\tilde{\Gamma}_{6,\mathrm{WE}}^{d}(x_{c},x_{u})+\tilde{ \Gamma}_{6,\mathrm{WE}}^{d}(x_{c},x_{c})+\tilde{\Gamma}_{6,\mathrm{WE}}^{d}(x _{u},x_{u})+\tilde{\Gamma}_{6,\mathrm{WE}}^{d}(x_{u},x_{c})\right]_{B_{d}}. \tag{31}\] Finally, at order \(1/m_{b}^{4}\), the short-distance contributions due to four-quark operators are also known in the literature, albeit only at LO-QCD, see e.g. [52]. They have been determined in [94; 11] for operators defined in QCD10 and also in [92; 54] for the HQET operators. However, as compared with our previous studies [52; 53; 54], we do not include the subleading \(1/m_{b}\) corrections to the four-quark matrix elements in our central values for the total widths, preferring instead to treat these contributions as part of the uncertainty estimate. The reason for this is the absence of a consistent procedure to determine the corresponding matrix elements for baryons, particularly in HQET, due to a proliferation of the dimension-seven operators, see e.g. [68], and in contrast to the case of mesons, where the vacuum insertion approximation (VIA) provides a first estimate. This problem was extensively discussed in [53]. Moreover, other \(1/m_{b}^{4}\) corrections are also missing, namely those due to two-quark operators, which so far are known only for semileptonic \(b\)-quark decays [95; 96; 97; 83], and those to the dimension-six matrix elements, see section 2.3. As a result, a complete analysis of the \(b\)-baryon lifetimes up to this order is currently not possible. Given that, in the \(b\)-system, power corrections prove to be well under control, we consider it more justified in this work to treat the \(1/m_{b}^{4}\) contributions as an additional source of uncertainty, rather than trying to include them in the central values for lifetimes with only partial, and potentially misleading, estimates for the dimension-seven matrix elements. Footnote 10: Some inconsistencies in the expressions of [94; 11] were identified in [54], cf. footnote 8 therein. ### Non-perturbative Matrix Elements In this section, we present our determinations of the hadronic parameters. It is convenient to first consider the matrix elements of the four-quark operators, followed by the discussion of the two-quark matrix elements \(\mu_{\pi}^{2}(\mathcal{B})\), \(\mu_{G}^{2}(\mathcal{B})\), \(\rho_{D}^{3}(\mathcal{B})\). A basis of dimension-six four-quark operators in HQET suitable for the \(b\)-baryons is [90]11 Footnote 11: The recent study [53] made use of the QCD basis of operators instead. \[\mathcal{O}_{1}^{q}=(\bar{h}_{v}^{i}\gamma_{\mu}(1-\gamma_{5})q^{ i})(\bar{q}^{j}\gamma^{\mu}(1-\gamma_{5})h_{v}^{j})\,,\qquad\mathcal{O}_{2}^{q}=( \bar{h}_{v}^{i}(1-\gamma_{5})q^{i})(\bar{q}^{j}(1+\gamma_{5})h_{v}^{j})\,, \tag{32}\] \[\tilde{\mathcal{O}}_{1}^{q}=(\bar{h}_{v}^{i}\gamma_{\mu}(1-\gamma _{5})q^{j})(\bar{q}^{j}\gamma^{\mu}(1-\gamma_{5})h_{v}^{i})\,,\qquad\tilde{ \mathcal{O}}_{2}^{q}=(\bar{h}_{v}^{i}(1-\gamma_{5})q^{j})(\bar{q}^{j}(1+\gamma _{5})h_{v}^{i})\,, \tag{33}\] with \(q\) labeling the light quark in the corresponding operator, i.e. \(q=u,d,s\). Note that the colour-rearranged operators \(\tilde{\mathcal{O}}_{1,2}^{q}\) are related to the colour-octet ones commonly adopted in studies of heavy meson lifetimes, see e.g. 
[52; 53; 54], by the completeness property of the SU(3)\({}_{c}\) generators \[t_{ij}^{a}t_{lm}^{a}=\frac{1}{2}\Big{(}\delta_{im}\delta_{jl}-\frac{1}{N_{c}} \delta_{ij}\delta_{lm}\Big{)}\,. \tag{34}\] The usefulness of the choice of basis in eqs. (32), (33) is exhibited by the relations \[\langle\mathcal{B}|\tilde{\mathcal{O}}_{i}^{q}|\mathcal{B}\rangle=-\tilde{B}_ {i}^{q}\langle\mathcal{B}|\mathcal{O}_{i}^{q}|\mathcal{B}\rangle\,,\qquad i=1, 2\,, \tag{35}\] where, assuming the valence quark approximation, the total colour antisymmetry of the baryon wave function imposes \(\tilde{B}_{i}^{q}=1\)[90]. In our study, we consider a universal parameter \(\tilde{B}_{i}^{q}\equiv\tilde{B}\),12 with \(\tilde{B}=1\) valid at a typical hadronic scale \(\mu_{h}\ll m_{b}\). When performing the numerical analysis, we vary this scale in the range \(1\,\mathrm{GeV}\leq\mu_{h}\leq 1.5\,\mathrm{GeV}\), while taking as our reference value \(\mu_{h}=1.5\,\) GeV. Footnote 12: In general, \(\tilde{B}_{1}^{q}=\tilde{B}_{2}^{q}+\mathcal{O}(1/m_{b})\), and \(\tilde{B}^{u,d}\neq\tilde{B}^{s}\), but we neglect these subleading corrections. In order to estimate the matrix elements on the r.h.s. of eq. (35), we adopt the non-relativistic constituent quark model (NRCQM), according to which the matrix elements of the colour-singlet four-quark operators can be expressed, in terms of baryon wave functions evaluated at the origin, as \[\frac{\langle\mathcal{T}_{b}|\mathcal{O}_{1}^{q}|\mathcal{T}_{b} \rangle}{2M_{\mathcal{T}_{b}}}=-|\Psi^{\mathcal{T}_{b}}(0)|^{2}\,,\qquad\quad \frac{\langle\mathcal{T}_{b}|\mathcal{O}_{2}^{q}|\mathcal{T}_{b}\rangle}{2M_ {\mathcal{T}_{b}}}=\frac{1}{2}|\Psi^{\mathcal{T}_{b}}(0)|^{2}\,, \tag{36}\] for the SU(3)\({}_{F}\) triplet \(\mathcal{T}_{b}=(\Lambda_{b}^{0},\Xi_{b}^{-},\Xi_{b}^{0})\), and \[\frac{\langle\Omega_{b}^{-}|\mathcal{O}_{1}^{s}|\Omega_{b}^{-} \rangle}{2M_{\Omega_{b}^{-}}}=-6|\Psi^{\Omega_{b}^{-}}(0)|^{2}\,,\qquad\quad \frac{\langle\Omega_{b}^{-}|\mathcal{O}_{2}^{s}|\Omega_{b}^{-}\rangle}{2M_{ \Omega_{b}^{-}}}=-|\Psi^{\Omega_{b}^{-}}(0)|^{2}\,, \tag{37}\] for the \(\Omega_{b}^{-}\). It should be emphasised that the constituent quark picture provides access only to the valence quark contributions, for which the field of a light quark within the operator matches at least one of the baryon valence quarks. The missing non-valence contributions are, however, expected to provide subleading corrections. Hence, in (36), it should be understood that the relations are valid only when the light quark \(q\) in the operator \({\cal O}_{i}^{q}\) matches one of the valence quarks in the baryon \({\cal T}_{b}\), and the matrix element is otherwise taken to be zero, and similarly in (37) for the \(\Omega_{b}^{-}\). We stress that, apart from the exploratory study in [6], which has never been followed up, there are no lattice determinations for the four-quark baryonic matrix elements available. A computation for the \(\Lambda_{b}^{0}\), within HQET sum rules, was performed in [5]. In contrast to the case of \(B\) mesons, where one can set up a sum rule for the small deviation of the bag parameter from one [98, 99, 100, 93], for baryons one can only write down sum rules for the whole matrix element. Thus, the baryon case may be sensitive to stability issues often associated with three-point sum rules [101]. Moreover, the sum rule work in [5] does not yet include NLO-QCD effects. 
These corrections can be large, as was shown in the HQET sum rule calculation of the two-point correlator [102], entering also the computation of the four-quark matrix element, where the \(\alpha_{s}\)-contributions appear to be of a similar size to the leading contribution. Very recently, the four-quark \(\Lambda_{b}^{0}\) matrix elements were also determined with QCD sum rules [103], confirming the relatively small values obtained by the HQET sum rules in [5].13 We are not aware of sum rule determinations of the matrix elements of the \(\Xi_{b}^{0}\), \(\Xi_{b}^{-}\), or \(\Omega_{b}^{-}\) baryons. Therefore, in this work, we choose to consistently apply the NRCQM to calculate the matrix elements of the dimension-six four-quark operators for all the baryons considered. For comparison, however, we briefly discuss the numerical impact of the sum rule determination from [5] on the \(\Lambda_{b}^{0}\) lifetime in section 3. Footnote 13: A separate sum rule calculation in [7] was able to accommodate the then very low experimental values of the \(\tau(\Lambda_{b}^{0})/\tau(B_{d})\) lifetime ratio, at the expense of an anomalously large four-quark contribution; see figure 1. Following the standard approach proposed by de Rujula, Georgi, and Glashow [104], the baryon wave functions can be extracted from the known values of hyperfine mass splittings [105, 106, 107, 14]. In the NRCQM, the hyperfine splittings are controlled by the short-distance gluon exchange between the constituent quarks. For a generic hadron \(H\), the mass \(M_{H}\) can be expressed as \[M_{H}=M_{0}+\langle H_{\rm spin}\rangle\,, \tag{38}\] where \(M_{0}\) contains the spin-independent contributions, including the constituent quark masses and the binding energies. The spin-dependent terms are, for the ground state (\(L=0\)) hadrons, given as \[H_{\rm spin,\,baryons} =\sum_{i>j}\frac{16\pi\alpha_{s}}{9}\frac{(\vec{s}_{i}\cdot\vec {s}_{j})}{m_{i}^{\,b}\,m_{j}^{\,b}}\delta^{3}(\vec{r}_{ij})\,, \tag{39}\] \[H_{\rm spin,\,mesons} =\frac{32\pi\alpha_{s}}{9}\frac{(\vec{s}_{i}\cdot\vec{s}_{j})}{m _{i}^{\,m}\,m_{j}^{\,m}}\delta^{3}(\vec{r}_{ij})\,, \tag{40}\] where \(i,j\), label the constituent quarks in the hadron, with masses \(m_{i}^{\,b}\) and \(m_{i}^{\,m}\) respectively for baryons and mesons, while \(\vec{s}_{i}\) denotes the corresponding quark spin operator. When evaluating the expectation value in eq. (38) for a given hadronic state, the delta functions in eqs. (39), (40) result in the modulus squared of the hadron wave function at the origin, \(|\Psi^{H}(0)|^{2}\), and the light quarks in \(b\)-baryons are taken to form a diquark spin state. Note that we do not assume the constituent quark masses within mesons and baryons to be equal, i.e. \(m_{i}^{\,m}\neq m_{i}^{\,b}\), but instead take their values as used in the fit to hadronic masses [108]. Following the approach of Rosner [14], the wave functions appearing in eqs. (36), (37) are extracted using the hyperfine splittings between the positive-parity spin-3/2 and spin-1/2 bottom baryons. For example, for the \(\Lambda_{b}\) baryon, this results in the relation14 Footnote 14: Applying a similar relation for mesons, and using this to estimate the decay constant, would lead to the estimates \(f_{B}=0.188(14)\,\text{GeV}\), \(f_{B_{s}}=0.241(18)\,\text{GeV}\), where the uncertainty arises from varying the scale of \(\alpha_{s}(\mu_{h})\) between 1.0 and 1.5 GeV. 
These values are consistent with those obtained from lattice computations, supporting the applicability of the NRCQM to baryons. \[M_{\Sigma_{b}^{\star}}-M_{\Sigma_{b}}=\frac{16\pi\alpha_{s}}{9\,m_{b}^{\,b}\,m_{\tilde{q}}^{\,b}}\,\frac{3}{2}|\Psi^{\Lambda_{b}}(0)|^{2}\,, \tag{41}\] with \(\tilde{q}=u,d\), while the corresponding relations for \(|\Psi^{\Xi_{b}}(0)|^{2}\) and \(|\Psi^{\Omega_{b}}(0)|^{2}\) involve the hyperfine splittings \(M_{\Xi_{b}^{\star}}-M_{\Xi_{b}^{\prime}}\), and \(M_{\Omega_{b}^{\star}}-M_{\Omega_{b}}\), respectively. After normalising these relations to the analogous expressions involving the meson mass splittings, we can express the matrix elements in eqs. (36), (37), in terms of \(B\)-meson wave functions15 as Footnote 15: Note that we use interchangeably the notation \(B_{u}=B_{d}\equiv B\), in the limit of exact isospin symmetry. \[\frac{\langle\Lambda_{b}|\mathcal{O}_{1}^{q}|\Lambda_{b}\rangle}{2M_{\Lambda_{b}}} =-y_{\tilde{q}}\,\frac{4}{3}\frac{M_{\Sigma_{b}^{\star}}-M_{\Sigma_{b}}}{M_{B^{\star}}-M_{B}}|\Psi^{B}(0)|^{2}\,, \tag{42}\] \[\frac{\langle\Xi_{b}^{0}|\mathcal{O}_{1}^{u}|\Xi_{b}^{0}\rangle}{2M_{\Xi_{b}}} =\frac{\langle\Xi_{b}^{-}|\mathcal{O}_{1}^{d}|\Xi_{b}^{-}\rangle}{2M_{\Xi_{b}}}=-y_{\tilde{q}}\,\frac{4}{3}\frac{M_{\Xi_{b}^{\star}}-M_{\Xi_{b}^{\prime}}}{M_{B^{\star}}-M_{B}}|\Psi^{B}(0)|^{2}\,,\] (43) \[\frac{\langle\Xi_{b}^{-}|\mathcal{O}_{1}^{s}|\Xi_{b}^{-}\rangle}{2M_{\Xi_{b}}} =\frac{\langle\Xi_{b}^{0}|\mathcal{O}_{1}^{s}|\Xi_{b}^{0}\rangle}{2M_{\Xi_{b}}}=-y_{s}\,\frac{4}{3}\frac{M_{\Xi_{b}^{\star}}-M_{\Xi_{b}^{\prime}}}{M_{B_{s}^{\star}}-M_{B_{s}}}|\Psi^{B_{s}}(0)|^{2}\,,\] (44) \[\frac{\langle\Omega_{b}^{-}|\mathcal{O}_{1}^{s}|\Omega_{b}^{-}\rangle}{2M_{\Omega_{b}}} =-y_{s}\,6\,\frac{4}{3}\frac{M_{\Omega_{b}^{\star}}-M_{\Omega_{b}}}{M_{B_{s}^{\star}}-M_{B_{s}}}|\Psi^{B_{s}}(0)|^{2}\,, \tag{45}\] where \(y_{\tilde{q}}\), \(y_{s}\), denote ratios of the constituent quark masses in baryons and mesons [108] \[y_{\tilde{q}}=\frac{m_{b}^{\,b}\,m_{\tilde{q}}^{\,b}}{m_{b}^{\,m}\,m_{\tilde{q}}^{\,m}}\simeq 1.18\,,\qquad\qquad y_{s}=\frac{m_{b}^{\,b}\,m_{s}^{\,b}}{m_{b}^{\,m}\,m_{s}^{\,m}}\simeq 1.12\,. \tag{46}\] The ratios of the mass splittings, \[r_{q}(\mathcal{B})\equiv\frac{4}{3}\frac{M_{\mathcal{B}^{\star}}-M_{\mathcal{B}}}{M_{B_{q}^{\star}}-M_{B_{q}}}\,, \tag{47}\] are key inputs for the evaluation of the matrix elements.16 In our numerical analysis we use the experimental values of meson and baryon mass splittings, when available [111], and assume exact isospin symmetry within the hyperfine splittings, i.e. \(M_{B_{d}^{*}}-M_{B_{d}}=M_{B_{u}^{*}}-M_{B_{u}}\). As for the ratio \(r_{s}(\Omega_{b}^{-})\), since the mass of the \(\Omega_{b}^{*}\) has not yet been measured, we employ the result for the splitting \(M_{\Omega_{b}^{*}}-M_{\Omega_{b}}\) from [108], consistently with the use of the values of the constituent mass ratios in eq. (46). This leads to \[r_{s}(\Omega_{b})=0.66\pm 0.22\,. \tag{48}\] A comparison between the predictions for \(r_{q}\), based both on NRCQM fits and lattice QCD evaluations, alongside the corresponding available experimental results, is shown in table 3. We note that for the \(B\)-meson mass splittings, we use the averages of the experimental values reported in [111]. Having the ratios of hadron mass splittings under control, we proceed by relating the meson wave functions in eqs.
(42)-(45) to the static decay constants via \[|\Psi^{B_{q}}(0)|^{2}=\frac{F_{B_{q}}^{2}(\mu_{0})}{12}\,, \tag{49}\] with \[\langle 0|\bar{q}\gamma^{\mu}\gamma_{5}h_{v}|B_{q}\rangle_{\rm HQET}=i\,F_{B_{q}}(\mu_{0})\sqrt{M_{B_{q}}}\,v^{\mu}\,, \tag{50}\] following the conventions for the HQET states used in [52]. Assuming the constituent-quark relations for the matrix elements of the operators \({\cal O}_{i}^{q}\) in eqs. (36), (37), as well as the valence quark approximation result \(\tilde{B}=1\) in (35), to be satisfied at a low hadronic scale \(\mu_{h}\), in eq. (49) we set \(\mu_{0}=\mu_{h}=1.5\,\text{GeV}\), the same hadronic scale that was used in the HQET sum rule derivation of the corresponding bag parameters in \(B\) mesons [93]. The value of the static decay constant at the scale \(\mu_{h}\) can be extracted using its relation [113] to the QCD decay constant in the static limit \(\hat{f}_{B_{q}}\), \[\hat{f}_{B_{q}}=\frac{F_{B_{q}}(\mu_{0})}{\sqrt{M_{B_{q}}}}\bigg{[}1+\frac{\alpha_{s}(\mu_{0})}{2\pi}\bigg{(}\ln\frac{\mu_{b}^{2}}{\mu_{0}^{2}}-\frac{4}{3}\bigg{)}\bigg{]}\,, \tag{51}\] where \(\hat{f}_{B_{q}}\) differs from the full QCD decay constant \(f_{B_{q}}\) used for meson lifetimes by terms of order \({\cal O}(1/m_{b})\), and \(\mu_{b}=4.5\,\text{GeV}\).

\begin{table} \begin{tabular}{|c||c|c||c|} \hline Quantity & Experiments [111] & Lattice QCD [112] & NRCQM [108] \\ \hline \(r_{\tilde{q}}(\Sigma_{b})\) & \(0.58\pm 0.01\) & \(0.62\pm 0.26\) & \(0.63\pm 0.24\) \\ \hline \(r_{\tilde{q}}(\Xi_{b}^{\prime})\) & \(0.60\pm 0.00\) & \(0.79\pm 0.27\) & \(0.67\pm 0.24\) \\ \hline \(r_{s}(\Xi_{b}^{\prime})\) & \(0.56\pm 0.02\) & \(0.74\pm 0.25\) & \(0.63\pm 0.22\) \\ \hline \(r_{s}(\Omega_{b})\) & unknown & \(0.78\pm 0.22\) & \(0.66\pm 0.22\) \\ \hline \end{tabular} \end{table} Table 3: Comparisons of the NRCQM results for the \(r_{q}({\cal B})\) to available experimental data and lattice QCD evaluations. For the \(B\)-meson mass splittings, we use the measured values reported in [111].

The parameter \(\hat{f}_{B_{q}}\) is available from lattice QCD simulations [114], from which we take the numerical values \[\hat{f}_{B}=(219\pm 17)\,\text{MeV}\,,\qquad\qquad\hat{f}_{B_{s}}=(264\pm 19)\,\text{MeV}\,, \tag{52}\] which result in \[F_{B}(\mu_{h}=1.5\,\text{GeV})=(0.48\pm 0.04)\,\text{GeV}^{3/2}\,, \tag{53}\] \[F_{B_{s}}(\mu_{h}=1.5\,\text{GeV})=(0.58\pm 0.04)\,\text{GeV}^{3/2}\,,\] as compared to \(F_{B}(\mu_{b})=(0.53\pm 0.04)\,\text{GeV}^{3/2}\) and \(F_{B_{s}}(\mu_{b})=(0.64\pm 0.05)\,\text{GeV}^{3/2}\). With this ingredient in place, we list in table 4 the numerical values of the relevant matrix elements of the operator \(\mathcal{O}_{1}^{q}\) at the scale \(\mu_{h}\). Using the results for the renormalisation group evolution of the matrix elements of the dimension-six four-quark operators within HQET [115; 116; 61; 90], for \(\mu_{h}=1.5\) GeV and \(\mu_{b}=4.5\) GeV, we obtain \[\begin{pmatrix}\langle\mathcal{O}_{1}^{q}\rangle\\ \langle\mathcal{O}_{2}^{q}\rangle\\ \langle\tilde{\mathcal{O}}_{1}^{q}\rangle\\ \langle\tilde{\mathcal{O}}_{2}^{q}\rangle\end{pmatrix}(\mu_{b})=\begin{pmatrix}1.29&0&-0.09&0\\ 0&1.29&0&-0.09\\ 0&0&1&0\\ 0&0&0&1\end{pmatrix}\begin{pmatrix}\langle\mathcal{O}_{1}^{q}\rangle\\ \langle\mathcal{O}_{2}^{q}\rangle\\ \langle\tilde{\mathcal{O}}_{1}^{q}\rangle\\ \langle\tilde{\mathcal{O}}_{2}^{q}\rangle\end{pmatrix}(\mu_{h})\,.
\tag{54}\] Then, at the scale \(\mu_{b}\), the matrix elements for the triplet \(\mathcal{T}_{b}\) and the \(\Omega_{b}\) baryon read respectively \[\begin{pmatrix}\langle\mathcal{O}_{1}^{q}\rangle\\ \langle\mathcal{O}_{2}^{q}\rangle\\ \langle\tilde{\mathcal{O}}_{1}^{q}\rangle\\ \langle\tilde{\mathcal{O}}_{2}^{q}\rangle\end{pmatrix}(\mu_{b})=\begin{pmatrix} 1.38\,\langle\mathcal{O}_{1}^{q}\rangle\\ -0.69\,\langle\mathcal{O}_{1}^{q}\rangle\\ -\langle\mathcal{O}_{1}^{q}\rangle\\ \frac{1}{2}\langle\mathcal{O}_{1}^{q}\rangle\end{pmatrix}(\mu_{h})\,,\qquad \qquad\begin{pmatrix}\langle\mathcal{O}_{1}^{s}\rangle\\ \langle\mathcal{O}_{2}^{s}\rangle\\ \langle\tilde{\mathcal{O}}_{1}^{s}\rangle\\ \langle\tilde{\mathcal{O}}_{2}^{s}\rangle\end{pmatrix}(\mu_{b})=\begin{pmatrix} 1.38\,\langle\mathcal{O}_{1}^{s}\rangle\\ 0.23\,\langle\mathcal{O}_{1}^{s}\rangle\\ -\langle\mathcal{O}_{1}^{s}\rangle\\ -\frac{1}{6}\langle\mathcal{O}_{1}^{s}\rangle\end{pmatrix}(\mu_{h})\,, \tag{55}\] which amounts, for both the triplet and the \(\Omega_{b}\), to a modification of the parameter \(\tilde{B}\) from the value \(\tilde{B}(\mu_{h})=1\) to \(\tilde{B}(\mu_{b})=1.38\).17 At the same time, the one-loop running preserves the ratios between the matrix elements of the operators \(\mathcal{O}_{1}^{q}\) and \(\mathcal{O}_{2}^{q}\). Footnote 17: Choosing the value for the initial scale \(\mu_{h}=1\,\text{GeV}\) instead of \(\mu_{h}=1.5\,\text{GeV}\) results in \(\tilde{B}(\mu_{b})=1.66\). We now turn to discuss the remaining, non-spectator matrix elements [117, 66, 83, 118], \[\mu_{\pi}^{2}(\mathcal{B}) =-\frac{1}{2M_{\mathcal{B}}}\langle\mathcal{B}|\bar{b}_{v}(iD_{ \mu})(iD^{\mu})b_{v}|\mathcal{B}\rangle\,, \tag{56}\] \[\mu_{G}^{2}(\mathcal{B}) =\frac{1}{2M_{\mathcal{B}}}\langle\mathcal{B}|\bar{b}_{v}(iD_{ \mu})(iD_{\nu})(-i\sigma^{\mu\nu})b_{v}|\mathcal{B}\rangle\,,\] (57) \[\rho_{D}^{3}(\mathcal{B}) =\frac{1}{2M_{\mathcal{B}}}\langle\mathcal{B}|\bar{b}_{v}(iD_{ \mu})(iv\cdot D)(iD^{\mu})b_{v}|\mathcal{B}\rangle\,, \tag{58}\] which correspond to the kinetic, chromomagnetic, and Darwin parameters respectively. Following [83], we define the operators in terms of the field \(b_{v}(x)\), rather than the HQET field \(h_{v}(x)\), with differences due to this choice arising only at order \(1/m_{b}^{4}\). These parameters can be further related to the heavy-quark expansion of the hadron mass [117, 119, 120, 121], \[M_{\mathcal{B}}=m_{b}+\bar{\Lambda}+\frac{\mu_{\pi}^{2}(\mathcal{B})}{2m_{b}} -\frac{\mu_{G}^{2}(\mathcal{B})}{2m_{b}}+\mathcal{O}\left(\frac{1}{m_{b}^{2} }\right)\,, \tag{59}\] where \(\bar{\Lambda}\sim 0.5\,\text{GeV}\). Applying the expansion (59) to the mass difference between hyperfine partners, and taking into account the proportionality of the chromomagnetic parameter to the spin factor \(d_{\mathcal{B}}\), we have \[\mu_{G}^{2}(\mathcal{B})=d_{\mathcal{B}}\frac{M_{\mathcal{B}^{*}}^{2}-M_{ \mathcal{B}}^{2}}{d_{\mathcal{B}}-d_{\mathcal{B}^{*}}}\,, \tag{60}\] with \[d_{\mathcal{B}}=-2\left(S_{\mathcal{B}}(S_{\mathcal{B}}+1)-S_{b}(S_{b}+1)-S_{ l}(S_{l}+1)\right)\,, \tag{61}\] and \(S_{X}\) denoting the spin of the particle \(X\). As only \(d_{\Omega_{b}^{-}}\) is non-zero, with \(d_{\Omega_{b}^{-}}=4\) and \(d_{\Omega_{b}^{-*}}=-2\), it follows that \(\mu_{G}^{2}(\mathcal{B})=0\) for the triplet \(\mathcal{T}_{b}\), while, using the masses and splitting from [108], we obtain \[\mu_{G}^{2}(\Omega_{b}^{-})=(0.193\pm 0.065\pm 0.019)\,\text{GeV}^{2}\,. 
\tag{62}\] Here, the first uncertainty is parametric, while the second one corresponds to our 10% uncertainty estimate from missing higher-order \(1/m_{b}\) corrections. Concerning the kinetic parameter, one can relate \(\mu_{\pi}^{2}(\Lambda_{b}^{0})\) to \(\mu_{\pi}^{2}(B)\) via \[\overline{M}_{B}-M_{\Lambda_{b}^{0}}=\bar{\Lambda}_{B}-\bar{\Lambda}_{\Lambda_{b}^{0}}+\frac{\mu_{\pi}^{2}(B)-\mu_{\pi}^{2}(\Lambda_{b}^{0})}{2m_{b}}+\mathcal{O}\left(\frac{1}{m_{b}}\right)\,, \tag{63}\] where we differentiate between the parameter \(\bar{\Lambda}\) for mesons and baryons, and \(\overline{M}_{B}\) denotes the spin-averaged meson mass \(\overline{M}_{B}=(M_{B}+3M_{B^{*}})/4\). To proceed, we assume the equality of the difference \(\bar{\Lambda}_{B_{q}}-\bar{\Lambda}_{\mathcal{B}}\) in the bottom and charmed sectors, as well as \(\mu_{\pi}^{2}(B)=\mu_{\pi}^{2}(D)\) and \(\mu_{\pi}^{2}(\Lambda_{b}^{0})=\mu_{\pi}^{2}(\Lambda_{c}^{+})\), resulting in the expression [122; 63] \[\left(\overline{M}_{D}-M_{\Lambda_{c}^{+}}\right)-\left(\overline{M}_{B}-M_{\Lambda_{b}^{0}}\right)=\left(\frac{1}{2m_{c}}-\frac{1}{2m_{b}}\right)\left(\mu_{\pi}^{2}(B)-\mu_{\pi}^{2}(\Lambda_{b}^{0})\right)\,+\mathcal{O}\left(\frac{1}{m_{b}},\frac{1}{m_{c}}\right)\,, \tag{64}\] where \(\overline{M}_{D}=(M_{D}+3M_{D^{*}})/4\), and for the inputs on the left-hand side, we have used the isospin-averaged hadron masses. Unlike in the charm sector, however, there have been analyses of inclusive semileptonic \(B\to X_{c}\,\ell\nu_{\ell}\) decays [123; 124; 125; 96] in order to extract the values of the parameter \(\mu_{\pi}^{2}(B)\) from fits to experimental data. For our numerical analysis, we use the value obtained in [124]: \[\mu_{\pi}^{2}(B)=(0.477\pm 0.056)\,\,\text{GeV}^{2}\,. \tag{65}\] Furthermore, we adopt the spectroscopic estimate of the size of SU(3)\({}_{F}\)-breaking from [126; 54]: \[\mu_{\pi}^{2}(B_{s})-\mu_{\pi}^{2}(B)=(0.04\pm 0.02)\,\,\text{GeV}^{2}\,. \tag{66}\] For the \(\Omega_{b}\), the analogous relation, derived for the first time in [53], is \[\mu_{\pi}^{2}(\Omega_{b}^{-})\left(\frac{1}{2m_{b}}-\frac{1}{2m_{c}}\right)\simeq m_{c}\!-\!m_{b}\!+\!\frac{1}{3}\left(\left(M_{\Omega_{b}^{-}}+2M_{\Omega_{b}^{-*}}\right)-\left(M_{\Omega_{c}^{0}}+2M_{\Omega_{c}^{0*}}\right)\right)\!+\!\mathcal{O}\left(\frac{1}{m_{b}},\frac{1}{m_{c}}\right)\,. \tag{67}\] This can be recast, using a similar relation for the \(B_{s}\) meson, as \[\left(\mu_{\pi}^{2}(\Omega_{b}^{-})-\mu_{\pi}^{2}(B_{s})\right)\left(\frac{1}{2m_{b}}-\frac{1}{2m_{c}}\right)\simeq\overline{M}_{D_{s}}\!-\!\overline{M}_{B_{s}}\!+\!\frac{1}{3}\left(\left(M_{\Omega_{b}^{-}}+2M_{\Omega_{b}^{-*}}\right)-\left(M_{\Omega_{c}^{0}}+2M_{\Omega_{c}^{0*}}\right)\right)\,, \tag{68}\] up to corrections of order \(1/m_{b,c}\), and where \(\overline{M}_{B_{s}}\), \(\overline{M}_{D_{s}}\) are again spin-averaged meson masses. Concerning the quark masses, we use their values in the kinetic scheme, i.e. \(m_{b}^{\text{kin}}(\mu_{\text{cut}}=1\,\,\text{GeV})=4.57\,\,\text{GeV}\), as extracted from the fit in [124; 126], and \(m_{c}^{\text{kin}}(\mu_{\text{cut}}=0.5\,\text{GeV})=1.40\,\text{GeV}\) [128; 129; 130].
\begin{table} \begin{tabular}{|l||c|c|c|} \hline & \(\Lambda_{b}^{0}\) & \(\Xi_{b}^{0,-}\) & \(\Omega_{b}^{-}\) \\ \hline \(\mu_{G}^{2}(\mathcal{B})/\,\text{GeV}^{2}\) & \(0\) & \(0\) & \(0.193\pm 0.068\) \\ \(\mu_{\pi}^{2}(\mathcal{B})/\,\text{GeV}^{2}\) & \(0.50\pm 0.06\) & \(0.54\pm 0.06\) & \(0.56\pm 0.06\) \\ \(\rho_{D}^{3}(\mathcal{B})/\,\text{GeV}^{3}\) & \(0.031\pm 0.009\) & \(0.037\pm 0.009\) & \(0.050\pm 0.021\) \\ \hline \end{tabular} \end{table} Table 5: Non-perturbative parameters for the non-spectator contributions used in our analysis. Values for \(\mu_{\pi}^{2}(\mathcal{B})\) follow from the relations derived in eqs. (64) and (68), with \(\mu_{\pi}^{2}(B)\) taken from the fit value in [124] and \(\mu_{\pi}^{2}(B_{s})\) obtained using the SU(3)\({}_{F}\)-breaking estimate from [127]. Values for \(\rho_{D}^{3}(\mathcal{B})\) follow from employing the equation of motion (73). The errors quoted here are obtained by combining in quadrature the parametric uncertainty and the uncertainty due to missing power corrections.

The values for the differences of the kinetic parameters turn out to be small, and we obtain18 Footnote 18: This small separation between \(\mu_{\pi}^{2}(B)\) and \(\mu_{\pi}^{2}(\Lambda_{b}^{0})\) is consistent with the sum rules calculation in [131]. \[\mu_{\pi}^{2}(\Lambda_{b}^{0})-\mu_{\pi}^{2}(B) =(0.029\pm 0.001\pm 0.015)\,\text{GeV}^{2}\,, \tag{69}\] \[\mu_{\pi}^{2}(\Xi_{b})-\mu_{\pi}^{2}(B) =(0.061\pm 0.002\pm 0.030)\,\text{GeV}^{2}\,,\] (70) \[\mu_{\pi}^{2}(\Omega_{b}^{-})-\mu_{\pi}^{2}(B_{s}) =(0.040\pm 0.023\pm 0.020)\,\,\text{GeV}^{2}\,, \tag{71}\] where again the first quoted errors represent the parametric uncertainties, while the second ones follow from our assignment of 50% uncertainties to account for possibly sizeable \(1/m_{c}\) corrections. Combining the above results with those in (65), (66) leads to our estimates for the baryonic kinetic parameters presented in table 5. As for the Darwin parameter \(\rho_{D}^{3}(\mathcal{B})\), this can be related, up to \(\mathcal{O}(1/m_{b})\) corrections, to the four-quark matrix elements by the equation of motion for the gluon field strength tensor, \[[iD_{\mu},iD_{\nu}]=ig_{s}G_{\mu\nu}\,,\qquad[D^{\mu},G_{\mu\nu}]=-g_{s}t^{a}\sum_{q=u,d,s}\bar{q}\gamma_{\nu}t^{a}q\,, \tag{72}\] which leads to the relation \[2M_{\mathcal{B}}\,\rho_{D}^{3}(\mathcal{B})=g_{s}^{2}\sum_{q=u,d,s}\langle\mathcal{B}|\left(-\frac{1}{8}\mathcal{O}_{1}^{q}+\frac{1}{24}\tilde{\mathcal{O}}_{1}^{q}+\frac{1}{4}\mathcal{O}_{2}^{q}-\frac{1}{12}\tilde{\mathcal{O}}_{2}^{q}\right)|\mathcal{B}\rangle+\mathcal{O}\left(\frac{1}{m_{b}}\right)\,, \tag{73}\] in terms of the operator basis defined in (32). We evaluate the right-hand side of eq. (73) using the matrix elements of the four-quark operators renormalised at the scale \(\mu_{0}=\mu_{b}\), which, together with \(\alpha_{s}(\mu_{b})=0.22\), results in the values for the Darwin parameter shown in table 5.19 Footnote 19: For comparison, the values obtained using instead \(\alpha_{s}=1\), and the four-quark matrix elements at the low hadronic scale \(\mu_{h}\), read \((\rho_{D}^{3}(\Lambda_{b}),\rho_{D}^{3}(\Xi_{b}),\rho_{D}^{3}(\Omega_{b}))\simeq(0.11,0.13,0.18)\) GeV\({}^{3}\). However, the limit on the size of \(\rho_{D}^{3}(B)\) derived in [132] supports a lower value of \(\alpha_{s}\), consistent with the results in table 5.
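The chain of relations above can be cross-checked numerically. The Python sketch below reconstructs \(\langle\mathcal{O}_{1}^{q}\rangle(\mu_{h})\) for the \(\Lambda_{b}^{0}\) from eqs. (42), (46), (47), (49), and (53) (rather than taking it from table 4, which is not reproduced here), applies the evolution matrix of eq. (54), and then evaluates the equation of motion (73); it recovers both the pattern in eq. (55) and the central value \(\rho_{D}^{3}(\Lambda_{b})\simeq 0.031\) GeV\({}^{3}\) of table 5.

```python
import numpy as np

# <O_1^q>(mu_h) for the Lambda_b: -y_q * r_q(Sigma_b) * F_B(mu_h)^2 / 12,
# with y_q = 1.18, r = 0.58 and F_B(1.5 GeV) = 0.48 GeV^(3/2).
O1_muh = -1.18 * 0.58 * 0.48**2 / 12               # ~ -0.013 GeV^3

# Vector (<O1>, <O2>, <O1~>, <O2~>) at mu_h: eq. (36) gives <O2> = -<O1>/2
# for the triplet, and eq. (35) with B~ = 1 gives <Oi~> = -<Oi>.
v_muh = np.array([O1_muh, -O1_muh / 2, -O1_muh, O1_muh / 2])

# One-loop running to mu_b = 4.5 GeV, eq. (54):
U = np.array([[1.29, 0.00, -0.09, 0.00],
              [0.00, 1.29, 0.00, -0.09],
              [0.00, 0.00, 1.00, 0.00],
              [0.00, 0.00, 0.00, 1.00]])
v_mub = U @ v_muh
print(np.round(v_mub / O1_muh, 2))   # [1.38 -0.69 -1.0 0.5], cf. eq. (55)

# Darwin parameter from eq. (73): two valence light quarks (u, d) contribute,
# with g_s^2 = 4 pi alpha_s(mu_b) and alpha_s(mu_b) = 0.22.
gs2 = 4 * np.pi * 0.22
combo = -v_mub[0] / 8 + v_mub[2] / 24 + v_mub[1] / 4 - v_mub[3] / 12
print(f"rho_D^3(Lambda_b) ~ {2 * gs2 * combo:.3f} GeV^3")   # ~ 0.031
```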
## 3 Numerical Analysis and Results

In this section, we present our predictions for the total decay widths of \(b\)-baryons and their lifetime ratios, as well as for the values of their lifetimes normalised to \(\tau(B_{d})\), as summarised in table 6 and figure 3. We also provide results for the semileptonic decay widths and inclusive \(b\)-baryon semileptonic branching fractions, shown in eqs. (3.11) and (3.12), (3.13). The values of the non-perturbative parameters used in our numerical analysis are displayed in tables 4 and 5 of section 2.3, while all remaining inputs are collected in appendix A. Note that the renormalisation scales \(\mu_{1}\) and \(\mu_{0}\) are varied independently, both in the same interval \(\mu_{b}/2\leq\mu_{0,1}\leq 2\mu_{b}\), with \(\mu_{b}=4.5\) GeV, and using as central values \(\mu_{0}=\mu_{1}=\mu_{b}\). In addition, in order to account for possible uncertainties in our assumption for the "factorisation" scale \(\mu_{h}\), we vary this between \(1\) GeV and \(1.5\) GeV, fixing its central value to \(1.5\) GeV. As we present results for the lifetime ratios of \(b\)-baryons with the \(B_{d}\) meson, a couple of comments with respect to our recent study [54] are in order. Firstly, as discussed in section 2.2, in our analysis of the \(b\)-baryon total widths, we treat the dimension-seven contributions as an additional source of uncertainty, and do not provide any estimates for their central values. Hence, for consistency, here we have adopted the same treatment also for the total width of the \(B_{d}\) meson, differently from [54].20 Secondly, as the value of the Darwin parameter \(\rho_{D}^{3}(\mathcal{B})\) for baryons is obtained using the equations of motion for the gluon field strength tensor evaluated at the scale \(\mu_{b}\), see eq. (73), we again follow the same procedure for the \(B_{d}\) meson and use21 Footnote 20: However, for the \(B_{d}\) meson, the dimension-seven four-quark contribution turns out to be negligible, see eq. (3.4) of [54]. \[\rho_{D}^{3}(B_{d})=(0.028\pm 0.010)\,\text{GeV}^{3}. \tag{3.1}\] Our predictions for the total widths are determined from eq. (15), while the lifetime ratios are obtained using the relation \[\frac{\tau(H_{1})}{\tau(H_{2})}=1+[\Gamma(H_{2})-\Gamma(H_{1})]^{\text{HQE}}\,\tau(H_{1})^{\text{exp}}\,, \tag{3.2}\] where the difference \(\Gamma(H_{2})-\Gamma(H_{1})\) is computed from eq. (15), and we use as input the experimental value for the lifetime of the \(H_{1}\) hadron. In order to understand the size of each of the contributions in the HQE included in our analysis, below we show our results for the decomposition of the total widths of \(b\)-baryons, explicitly indicating the LO- and NLO-QCD corrections when the latter are present.
For central values of the input parameters, we obtain \[\Gamma(\Lambda_{b}^{0})=\Gamma_{0}\Bigg{[}(\underbrace{5.97}_{\text{LO}}-\underbrace{0.44}_{\Delta\text{NLO}})-0.14\,\frac{\mu_{\pi}^{2}(\Lambda_{b}^{0})}{\text{GeV}^{2}}-1.35\,\frac{\rho_{D}^{3}(\Lambda_{b}^{0})}{\text{GeV}^{3}}-\big{(}\underbrace{10.6}_{\text{LO}}+\underbrace{5.04}_{\Delta\text{NLO}}\big{)}\frac{\langle\mathcal{O}_{1}^{q}\rangle_{\Lambda_{b}^{0}}}{\text{GeV}^{3}}\Bigg{]}\,, \tag{3.3}\] \[\Gamma(\Xi_{b}^{0})=\Gamma_{0}\Bigg{[}(\underbrace{5.97}_{\text{LO}}-\underbrace{0.44}_{\Delta\text{NLO}})-0.14\,\frac{\mu_{\pi}^{2}(\Xi_{b}^{0})}{\text{GeV}^{2}}-1.35\,\frac{\rho_{D}^{3}(\Xi_{b}^{0})}{\text{GeV}^{3}}\] \[-\big{(}\underbrace{18.2}_{\text{LO}}+\underbrace{4.02}_{\Delta\text{NLO}}\big{)}\frac{\langle\mathcal{O}_{1}^{q}\rangle_{\Xi_{b}^{0}}}{\text{GeV}^{3}}-\big{(}\underbrace{-7.31}_{\text{LO}}+\underbrace{1.48}_{\Delta\text{NLO}}\big{)}\frac{\langle\mathcal{O}_{1}^{s}\rangle_{\Xi_{b}^{0}}}{\text{GeV}^{3}}\Bigg{]}\,, \tag{3.4}\] \[\Gamma(\Xi_{b}^{-})=\Gamma_{0}\Bigg{[}(\underbrace{5.97}_{\text{LO}}-\underbrace{0.44}_{\Delta\text{NLO}})-0.14\,\frac{\mu_{\pi}^{2}(\Xi_{b}^{-})}{\text{GeV}^{2}}-1.35\,\frac{\rho_{D}^{3}(\Xi_{b}^{-})}{\text{GeV}^{3}}\] \[-\big{(}\underbrace{-7.62}_{\text{LO}}+\underbrace{1.02}_{\Delta\text{NLO}}\big{)}\frac{\langle\mathcal{O}_{1}^{q}\rangle_{\Xi_{b}^{-}}}{\text{GeV}^{3}}-\big{(}\underbrace{-7.31}_{\text{LO}}+\underbrace{1.48}_{\Delta\text{NLO}}\big{)}\frac{\langle\mathcal{O}_{1}^{s}\rangle_{\Xi_{b}^{-}}}{\text{GeV}^{3}}\Bigg{]}\,, \tag{3.5}\] \[\Gamma(\Omega_{b}^{-})=\Gamma_{0}\Bigg{[}(\underbrace{5.97}_{\rm LO}-\underbrace{0.44}_{\Delta\rm NLO})-0.14\,\frac{\mu_{\pi}^{2}(\Omega_{b}^{-})}{\mbox{GeV}^{2}}-0.24\,\frac{\mu_{G}^{2}(\Omega_{b}^{-})}{\mbox{GeV}^{2}}-1.35\,\frac{\rho_{D}^{3}(\Omega_{b}^{-})}{\mbox{GeV}^{3}}\] \[-\big{(}\underbrace{-3.81}_{\rm LO}+\underbrace{0.72}_{\Delta\rm NLO}\big{)}\frac{\langle{\cal O}_{1}^{s}\rangle_{\Omega_{b}^{-}}}{\mbox{GeV}^{3}}\Bigg{]}\,, \tag{3.6}\] with \(q=u,d\). The total decay widths are clearly dominated by the dimension-three contribution, with the radiative corrections giving a \(\sim\)10% effect. Among the power-suppressed terms, the largest contribution comes from dimension-six four-quark operators, and in particular from the exc topology, which enters the \(\Lambda_{b}^{0}\) and \(\Xi_{b}^{0}\) widths. Radiative corrections also play an important role, and range from \(\sim\)10% to \(\sim\)50% of the four-quark contribution depending on the specific topology. The Darwin term gives the next-largest power correction, and in some cases partially compensates the contribution of four-quark operators, as for example in the \(\Lambda_{b}^{0}\), eq. (3.3). For completeness, we also show the decomposition for the total width of the \(B_{d}\) meson, cf. eq.
(12) of [54]:

\[\Gamma(B_{d}^{0})=\Gamma_{0}\Bigg[(\underbrace{5.97}_{\text{LO}}-\underbrace{0.44}_{\Delta\text{NLO}})-0.14\,\frac{\mu_{\pi}^{2}(B)}{\text{GeV}^{2}}-0.24\,\frac{\mu_{G}^{2}(B)}{\text{GeV}^{2}}-1.35\,\frac{\rho_{D}^{3}(B)}{\text{GeV}^{3}}-(\underbrace{0.012}_{\text{LO}}+\underbrace{0.022}_{\Delta\text{NLO}})\,\tilde{B}_{1}^{q}+(\underbrace{0.012}_{\text{LO}}+\underbrace{0.020}_{\Delta\text{NLO}})\,\tilde{B}_{2}^{q}-(\underbrace{0.74}_{\text{LO}}+\underbrace{0.03}_{\Delta\text{NLO}})\,\tilde{B}_{3}^{q}+(\underbrace{0.78}_{\text{LO}}-\underbrace{0.01}_{\Delta\text{NLO}})\,\tilde{B}_{4}^{q}-0.14\,\tilde{\delta}_{1}^{qq^{\prime}}+0.02\,\tilde{\delta}_{2}^{qq^{\prime}}-2.29\,\tilde{\delta}_{3}^{qq^{\prime}}+0.00\,\tilde{\delta}_{4}^{qq^{\prime}}-0.01\,\tilde{\delta}_{1}^{sq}+0.01\,\tilde{\delta}_{2}^{sq}-0.69\,\tilde{\delta}_{3}^{sq}+0.78\,\tilde{\delta}_{4}^{sq}\Bigg]\,, \tag{3.7}\]

where \(\tilde{B}_{i}^{q}\) and \(\tilde{\delta}_{i}^{qq^{\prime}},\tilde{\delta}_{i}^{sq}\) denote, respectively, the \(B\) meson dimension-six Bag parameters and the 'eye contractions', see [54] for details. Their numerical values, as well as those for \(\mu_{\pi}^{2}(B)\) and \(\mu_{G}^{2}(B)\), are taken to be the same as in [54].

Our HQE predictions for the \(b\)-baryon lifetimes and their ratios, together with the corresponding experimental values, are presented in table 6 and visualised in figure 3. The quoted theoretical errors are obtained by combining uncertainties due to the variation of the input parameters and of the renormalisation scales \(\mu_{0},\mu_{1}\), and \(\mu_{h}\), as well as an additional 15% uncertainty added to the dimension-six contribution to account for missing \(1/m_{b}^{4}\) corrections. Overall, we find excellent agreement between the HQE predictions and the experimental data for all the observables considered. It is important to point out that computing the lifetime ratios entirely within the HQE, i.e. without using the experimental values for \(\tau(H_{1})^{\rm exp}\) in eq. (3.2), leads to very similar results to those in table 6, albeit with slightly larger uncertainties. Furthermore, when using the HQET sum rules result for the four-quark matrix elements [5]

\[\langle{\cal O}_{1}^{u}\rangle_{\Lambda_{b}}=\langle{\cal O}_{1}^{d}\rangle_{\Lambda_{b}}=-(3.2\pm 1.6)\times 10^{-3}\,\text{GeV}^{3}\,, \tag{3.8}\]

we obtain a larger value for the lifetime ratio \(\tau(\Lambda_{b}^{0})/\tau(B_{d}^{0})\), namely

\[\tau(\Lambda_{b}^{0})/\tau(B_{d}^{0})=0.976\pm 0.012\,, \tag{3.9}\]

which however is consistent, within uncertainties, with the value shown in table 6.

Finally, we also present HQE predictions for the inclusive semileptonic decay rates \(\Gamma_{\rm SL}({\cal B})\), defined as

\[\Gamma_{\rm SL}({\cal B})\equiv\Gamma({\cal B}\to X_{c+u}\,\ell\,\bar{\nu}_{\ell})\,, \tag{3.10}\]

with a massless lepton \(\ell=e,\mu\).
We obtain

\[\Gamma_{\rm SL}({\cal T}_{b})=0.075^{+0.004}_{-0.003}\;{\rm ps}^{-1}\,,\qquad\Gamma_{\rm SL}(\Omega_{b})=0.073^{+0.004}_{-0.003}\;{\rm ps}^{-1}\,, \tag{3.11}\]

which leads to the following results for the inclusive semileptonic branching fractions \({\rm BR}_{\rm SL}({\cal B})\):

\[{\rm BR}_{\rm SL}(\Lambda_{b}^{0})=(11.0^{+0.6}_{-0.5})\,\%\,,\qquad{\rm BR}_{\rm SL}(\Xi_{b}^{-})=(11.7^{+0.7}_{-0.6})\,\%\,, \tag{3.12}\]

\[{\rm BR}_{\rm SL}(\Xi_{b}^{0})=(11.1^{+0.6}_{-0.6})\,\%\,,\qquad{\rm BR}_{\rm SL}(\Omega_{b}^{-})=(12.0^{+1.4}_{-1.4})\,\%\,, \tag{3.13}\]

where

\[{\rm BR}_{\rm SL}({\cal B})=\Gamma_{\rm SL}({\cal B})\,\tau({\cal B})^{\rm exp}\,. \tag{3.14}\]

Note that the value for \({\rm BR}_{\rm SL}(\Lambda_{b}^{0})\) in eq. (3.12) perfectly agrees with the result obtained in the recent study [133]. Although measurements of inclusive \(b\)-baryon semileptonic branching fractions are extremely difficult at present machines, the theoretical predictions might still prove useful in Monte Carlo simulations.
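As a simple arithmetic check of eq. (3.14), the sketch below recomputes the central values of eqs. (3.12)-(3.13) from the semileptonic widths in eq. (3.11), assuming that the triplet value \(\Gamma_{\rm SL}({\cal T}_{b})\) applies to \(\Lambda_{b}^{0}\) and both \(\Xi_{b}\) states, and using the experimental total widths of table 6; residual differences in the last digit reflect rounding of the quoted central values.

```python
# BR_SL = Gamma_SL * tau_exp = Gamma_SL / Gamma_exp, eq. (3.14); central values only.

inputs = {                       # (Gamma_SL [ps^-1], experimental Gamma [ps^-1])
    "Lambda_b^0": (0.075, 0.680),
    "Xi_b^0":     (0.075, 0.678),
    "Xi_b^-":     (0.075, 0.636),
    "Omega_b^-":  (0.073, 0.610),
}

for baryon, (gamma_sl, gamma_exp) in inputs.items():
    print(f"BR_SL({baryon}) = {100.0 * gamma_sl / gamma_exp:.1f}%")
```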
\begin{table}
\begin{tabular}{|c||c|c|}
\hline Observable & HQE prediction & Experimental value \\
\hline \hline \(\Gamma(\Lambda_{b}^{0})\) & \(0.671^{+0.108}_{-0.071}\,\text{ps}^{-1}\) & \((0.680\pm 0.004)\,\text{ps}^{-1}\) \\
\hline \(\Gamma(\Xi_{b}^{0})\) & \(0.670^{+0.108}_{-0.071}\,\text{ps}^{-1}\) & \((0.678\pm 0.014)\,\text{ps}^{-1}\) \\
\hline \(\Gamma(\Xi_{b}^{-})\) & \(0.622^{+0.104}_{-0.067}\,\text{ps}^{-1}\) & \((0.636\pm 0.016)\,\text{ps}^{-1}\) \\
\hline \(\Gamma(\Omega_{b}^{-})\) & \(0.591^{+0.108}_{-0.071}\,\text{ps}^{-1}\) & \(0.610^{+0.070}_{-0.066}\,\text{ps}^{-1}\) \\
\hline \hline \(\tau(\Lambda_{b}^{0})/\tau(B_{d}^{0})\) & \(0.955\pm 0.014\) & \(0.969\pm 0.006\) \\
\hline \(\tau(\Xi_{b}^{0})/\tau(B_{d}^{0})\) & \(0.956\pm 0.023\) & \(0.974\pm 0.020\,^{*}\) \\
\hline \(\tau(\Xi_{b}^{-})/\tau(B_{d}^{0})\) & \(1.029\pm 0.015\) & \(1.035\pm 0.027\,^{*}\) \\
\hline \(\tau(\Omega_{b}^{-})/\tau(B_{d}^{0})\) & \(1.081\pm 0.042\) & \(1.080^{+0.118}_{-0.112}\,^{*}\) \\
\hline \hline \(\tau(\Xi_{b}^{0})/\tau(\Lambda_{b}^{0})\) & \(1.002\pm 0.023\) & \(1.006\pm 0.021\,^{*}\) \\
\hline \(\tau(\Xi_{b}^{-})/\tau(\Lambda_{b}^{0})\) & \(1.078\pm 0.021\) & \(1.069\pm 0.028\,^{*}\) \\
\hline \(\tau(\Omega_{b}^{-})/\tau(\Lambda_{b}^{0})\) & \(1.132\pm 0.047\) & \(1.115^{+0.122}_{-0.116}\,^{*}\) \\
\hline \(\tau(\Xi_{b}^{0})/\tau(\Xi_{b}^{-})\) & \(0.929\pm 0.028\) & \(0.929\pm 0.028\) \\
\hline
\end{tabular}
\end{table}
Table 6: Comparison between our predictions based on the HQE and the data. The theoretical uncertainties are obtained by combining uncertainties due to input parameters, the renormalisation scales \(\mu_{0},\mu_{1}\), and \(\mu_{h}\), and missing \(1/m_{b}^{4}\) corrections. The experimental numbers marked with an asterisk are obtained by dividing the corresponding values shown in table 1, and do not take into account possible experimental correlations.

Figure 3: Graphical representation of the results presented in table 6.

## 4 Conclusions

We have performed a phenomenological study of the lifetimes of \(b\)-baryons, including for the first time the contribution of the Darwin operator and a new extraction of the matrix elements of the four-quark operators within the framework of the non-relativistic constituent quark model. Overall, we observe an excellent agreement between our predictions and the experimental data. For the lifetime ratios \(\tau(\Lambda_{b}^{0})/\tau(B_{d})\), \(\tau(\Xi_{b}^{0})/\tau(B_{d})\), and \(\tau(\Xi_{b}^{0})/\tau(\Xi_{b}^{-})\), the theoretical and experimental uncertainties are comparable, while for the total decay rates the theoretical errors dominate, although, in the case of the \(\Omega_{b}^{-}\) baryon, the experimental uncertainties are also still quite sizeable. In particular, we find

\[\frac{\tau(\Lambda_{b}^{0})}{\tau(B_{d}^{0})}^{\rm HQE}=1-(0.045\pm 0.014)\,,\quad\frac{\tau(\Lambda_{b}^{0})}{\tau(B_{d}^{0})}^{\rm Exp.}=1-(0.031\pm 0.006)\,, \tag{4.1}\]

showing that the measured suppression of the \(\Lambda_{b}^{0}\) lifetime by \((-3.1\pm 0.6)\%\) compared to \(\tau(B_{d})\) is impressively confirmed by the corresponding theory prediction of \((-4.5\pm 1.4)\%\). Therefore, we do not see any indications for visible violations of quark-hadron duality affecting the HQE, as applied to the \(\Lambda_{b}^{0}\) baryon.

It is interesting to note that the theory estimate from 1986 [1] led to almost exactly the same central value as the one obtained in our study. The authors of [1] included in their analysis: LO-QCD corrections to the free-quark decay, \(\Gamma_{3}^{(0)}\) in eq. (15), taking into account charm quark mass dependence; LO-QCD corrections to the spectator effects, \(\tilde{\Gamma}_{6}^{(0)}\) in eq. (15), without charm quark mass dependence; and estimates of the matrix elements of the four-quark operators based on a simplified version of the non-relativistic constituent quark model. They neglected corrections of order \(1/m_{b}^{2}\), i.e. \(\Gamma_{5}^{(0)}\) in eq. (15), as well as \(1/N_{c}\) corrections in the free quark decay. Furthermore, the NLO-QCD corrections to the \(\Delta B=1\) Wilson coefficients, to the free-quark decay, \(\Gamma_{3}^{(1)}\) in eq. (15), and to the spectator effects, \(\tilde{\Gamma}_{6}^{(1)}\) in eq. (15), as well as the contribution of the Darwin operator, \(\Gamma_{6}^{(0)}\) in eq. (15), were unknown in 1986. Shifman and Voloshin thus correctly predicted 36 years ago a small negative deviation of \(\tau(\Lambda_{b}^{0})/\tau(B_{d}^{0})\) from one; however, the perfect matching of their result with our post-diction from 2023 is a kind of numerical coincidence, since the effect of their approximations seems to have cancelled with the low value of the decay constant used in 1986, \(f_{B}=110\) MeV, resulting in \(\left(f_{B}^{(1986)}/f_{B}^{(2023)}\right)^{2}\approx 0.34\).

Moreover, we confirm the experimentally observed lifetime splitting of the \(\Xi_{b}^{0}\) and \(\Xi_{b}^{-}\) baryons

\[\frac{\tau(\Xi_{b}^{0})}{\tau(\Xi_{b}^{-})}^{\rm HQE}=1-(0.071\pm 0.028)\,,\quad\frac{\tau(\Xi_{b}^{0})}{\tau(\Xi_{b}^{-})}^{\rm Exp.}=1-(0.071\pm 0.028)\,, \tag{4.2}\]

coincidentally obtaining the same central value and uncertainty estimate. For the \(\Omega_{b}^{-}\) baryon we predict a larger lifetime compared to the \(B_{d}^{0}\) meson, although here a clear experimental confirmation is still missing. Our results also agree, within uncertainties, with the most recent estimate of \(b\)-baryon lifetimes presented in [13].
This agreement holds in spite of the fact that NLO corrections to the dimension-six four-quark contributions, as well as the Darwin contribution, which was at the time unknown, are missing from the theoretical expression in [13], while the uncertainties in [13] are artificially small, as they arise only from the variation of \(\mu_{h}\) and from the bag parameters entering the four-quark contribution to \(\tau(B_{d}^{0})\), and do not include other parametric and scale uncertainties.

Concerning the lifetime hierarchy, our calculations indicate

\[\tau(\Lambda_{b}^{0})\approx\tau(\Xi_{b}^{0})<\tau(\Xi_{b}^{-})\leq\tau(\Omega_{b}^{-})\,, \tag{4.3}\]

which is confirmed by data. Note that this hierarchy was already predicted in e.g. [8; 13]. Finally, we have presented numerical updates for the inclusive semileptonic branching fractions of the \(b\)-baryons, which currently seem to be difficult to measure at LHCb, and are not possible at \(\Upsilon(4S)\) runs with Belle II. However, they might be feasible for the flavour physics programme at the high luminosity upgrade of the LHC [134] or even further in the future at FCC-ee, see e.g. [135; 136].

In order to further improve the theoretical precision in the lifetime ratios, the following calculations can be performed in the future:

* Non-perturbative, and in particular lattice QCD, determinations of the matrix elements of the four-quark operators of dimension-six, \(\langle\tilde{\mathcal{O}}_{6}\rangle\), and of dimension-seven, \(\langle\tilde{\mathcal{O}}_{7}\rangle\).
* NNLO-QCD corrections to the dimension-six spectator contributions, \(\tilde{\Gamma}_{6}^{(2)}\).
* Complete determination of LO-QCD dimension-seven contributions, \(\Gamma_{7}^{(0)}\).

As for the total decay rates, the HQE prediction is dominated by the free-quark decay. In this case, the theoretical uncertainties could be significantly reduced if the complete NNLO-QCD contributions, i.e. \(\Gamma_{3}^{(2)}\), were available. Hence, the computation of the missing \(\alpha_s^{(2)}\)-corrections to non-leptonic \(b\)-quark decays is highly desirable.

In conclusion, combined with our recent studies on charmed hadrons [52; 53] and on \(B\) mesons [54], the results of this work confirm that the HQE provides a consistent framework to predict inclusive decay rates of heavy hadrons.

### Acknowledgements

We would like to thank Olivier Schneider for providing us with old HFAG averages for the \(\Lambda_{b}\) lifetime. We would also like to thank Johannes Albrecht and Tim Gershon for useful correspondence on the prospects for measuring inclusive semileptonic decay rates of \(b\)-baryons at LHCb. We acknowledge support from the Alexander von Humboldt Foundation in the framework of the Research Group Linkage Programme, funded by the German Federal Ministry of Education and Research. JG and BM have also been supported by the Croatian Science Foundation (HRZZ) project "Heavy hadron decays and lifetimes" IP-2019-04-7094. The work of MLP was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - project number 500314741. JG and IN wish to thank the theoretical particle physics group at the University of Siegen for the kind hospitality shown during their recent stay, where part of this work was undertaken.

## Appendix A Numerical inputs

Here we collect the values of the parameters (table 7) and of the \(\Delta B=1\) Wilson coefficients (table 8) used in our analysis.
\begin{table}
\begin{tabular}{|c|c|c||c|c|c|}
\hline Parameter & Value & Source & Parameter & Value & Source \\
\hline \hline \(M_{B^{+}}\) & \(5.27934\,\text{GeV}\) & & \(|V_{us}|\) & \(0.22500^{+0.00024}_{-0.00021}\) & \\
\(M_{B_{d}}\) & \(5.27965\,\text{GeV}\) & & \(|V_{ub}/V_{cb}|\) & \(0.08848^{+0.00224}_{-0.00219}\) & [137] \\
\(M_{B_{s}}\) & \(5.36688\,\text{GeV}\) & & \(|V_{cb}|\) & \(0.04145^{+0.00035}_{-0.00061}\) & \\
\(M_{\Lambda_{b}}\) & \(5.61960\,\text{GeV}\) & [111] & \(\delta\) & \(\left(65.5^{+1.3}_{-1.2}\right)^{\circ}\) & \\
\(M_{\Xi^{-}_{b}}\) & \(5.7970\,\text{GeV}\) & & \(m_{b}^{\text{kin}}\) & \(\left(4.573\pm 0.012\right)\text{GeV}\) & [124] \\
\(M_{\Xi^{0}_{b}}\) & \(5.7919\,\text{GeV}\) & & \(\bar{m}_{c}(\bar{m}_{c})\) & \(\left(1.27\pm 0.02\right)\text{GeV}\) & [111] \\
\(M_{\Omega_{b}}\) & \(6.0452\,\text{GeV}\) & & \(f_{B}\) & \(\left(0.1900\pm 0.0013\right)\text{GeV}\) & [138] \\
\(\alpha_{s}(M_{Z})\) & \(0.1179\pm 0.0010\) & & \(f_{B_{s}}\) & \(\left(0.2303\pm 0.0013\right)\text{GeV}\) & \\
\hline
\end{tabular}
\end{table}
Table 7: Summary of inputs used in the numerical analysis. Values of the non-perturbative parameters for \(b\)-baryons are presented in tables 4 and 5.

## Appendix B Dimension-six four-quark operator contributions at LO-QCD

The analytical expressions for the functions \(\tilde{\Gamma}^{q}_{6,T}(x_{f_{1}},x_{f_{2}})\), introduced in eq. (25), are provided explicitly below at LO-QCD. For non-leptonic transitions \(b\to q_{1}\bar{q}_{2}q_{3}\), with \(q_{1,2}=u,c\) and \(q_{3}=d,s\), they read respectively

\[\tilde{\Gamma}_{6,\text{int}^{-}}^{q_{3}}(x_{q_{1}},x_{q_{2}})=\frac{G_{F}^{2}}{12\pi}|V_{q_{1}b}|^{2}|V_{q_{2}q_{3}}|^{2}m_{b}^{2}\sqrt{\lambda(1,x_{q_{1}},x_{q_{2}})}\left\{k_{1}\Big[\omega_{1}(x_{q_{1}},x_{q_{2}})\,\langle{\cal O}_{1}^{q_{3}}\rangle-2\,\omega_{2}(x_{q_{1}},x_{q_{2}})\,\langle{\cal O}_{2}^{q_{3}}\rangle\Big]+k_{2}\Big[\omega_{1}(x_{q_{1}},x_{q_{2}})\,\langle\tilde{\cal O}_{1}^{q_{3}}\rangle-2\,\omega_{2}(x_{q_{1}},x_{q_{2}})\,\langle\tilde{\cal O}_{2}^{q_{3}}\rangle\Big]\right\}\,, \tag{B.1}\]

\[\tilde{\Gamma}_{6,\text{exc}}^{q_{2}}(x_{q_{1}},x_{q_{3}})=\frac{G_{F}^{2}}{2\pi}|V_{q_{1}b}|^{2}|V_{q_{2}q_{3}}|^{2}m_{b}^{2}\sqrt{\lambda(1,x_{q_{1}},x_{q_{3}})}\,(1-x_{q_{1}}-x_{q_{3}})\left[k_{3}\,\langle{\cal O}_{1}^{q_{2}}\rangle+k_{4}\,\langle\tilde{\cal O}_{1}^{q_{2}}\rangle\right], \tag{B.2}\]

\[\tilde{\Gamma}_{6,\text{int}^{+}}^{q_{1}}(x_{q_{2}},x_{q_{3}})=\frac{G_{F}^{2}}{12\pi}|V_{q_{1}b}|^{2}|V_{q_{2}q_{3}}|^{2}m_{b}^{2}\sqrt{\lambda(1,x_{q_{3}},x_{q_{2}})}\left\{k_{5}\Big[\omega_{1}(x_{q_{3}},x_{q_{2}})\,\langle{\cal O}_{1}^{q_{1}}\rangle-2\,\omega_{2}(x_{q_{3}},x_{q_{2}})\,\langle{\cal O}_{2}^{q_{1}}\rangle\Big]+k_{6}\Big[\omega_{1}(x_{q_{3}},x_{q_{2}})\,\langle\tilde{\cal O}_{1}^{q_{1}}\rangle-2\,\omega_{2}(x_{q_{3}},x_{q_{2}})\,\langle\tilde{\cal O}_{2}^{q_{1}}\rangle\Big]\right\}\,, \tag{B.3}\]

while for semileptonic transitions \(b\to q_{1}\bar{\nu}_{\ell}\ell\), with \(q_{1}=u,c\) and \(\ell=e,\mu,\tau\), the explicit expression is

\[\tilde{\Gamma}_{6,\text{int}^{+}}^{q_{1}}(x_{\ell},x_{\nu_{\ell}})=\frac{G_{F}^{2}}{12\pi}|V_{q_{1}b}|^{2}m_{b}^{2}\sqrt{\lambda(1,x_{\ell},x_{\nu_{\ell}})}\,\Big[\omega_{1}(x_{\ell},x_{\nu_{\ell}})\,\langle{\cal O}_{1}^{q_{1}}\rangle-2\,\omega_{2}(x_{\ell},x_{\nu_{\ell}})\,\langle{\cal O}_{2}^{q_{1}}\rangle\Big]\,, \tag{B.4}\]
where \(x_{f}=m_{f}^{2}/m_{b}^{2}\) and \(\lambda(a,b,c)=(a-b-c)^{2}-4bc\) is the Källén function. Moreover, in eqs. (B.1)-(B.4) we have introduced the functions \(\omega_{1,2}(a,b)\), symmetric in their arguments, with

\[\omega_{1}(a,b)=(a-b)^{2}+a+b-2\,,\qquad\omega_{2}(a,b)=2\,(a-b)^{2}-(1+a+b)\,, \tag{B.5}\]

while \(k_{1},\ldots,k_{6}\) denote the following combinations of Wilson coefficients:

\[k_{1}=2\,C_{1}C_{2}+N_{c}\,C_{2}^{2}\,,\qquad k_{2}=C_{1}^{2}\,, \tag{B.6}\]

\[k_{3}=2\,C_{1}C_{2}\,,\qquad k_{4}=C_{1}^{2}+C_{2}^{2}\,, \tag{B.7}\]

\[k_{5}=N_{c}\,C_{1}^{2}+2\,C_{1}C_{2}\,,\qquad k_{6}=C_{2}^{2}\,. \tag{B.8}\]

\begin{table}
\begin{tabular}{|c||c|c|c|c|c|}
\hline \(\mu_{1}\) [GeV] & 2.5 & 4.2 & 4.5 & 4.8 & 9 \\
\hline \(C_{1}(\mu_{1})\) & \(1.13\) & \(1.08\) & \(1.08\) & \(1.07\) & \(1.04\) \\
 & (1.17) & (1.12) & (1.11) & (1.11) & (1.07) \\
\hline \(C_{2}(\mu_{1})\) & \(-0.27\) & \(-0.19\) & \(-0.18\) & \(-0.17\) & \(-0.11\) \\
 & \((-0.36)\) & \((-0.27)\) & \((-0.26)\) & \((-0.25)\) & \((-0.17)\) \\
\hline \(C_{3}(\mu_{1})\) & \(0.02\) & \(0.01\) & \(0.01\) & \(0.01\) & \(0.01\) \\
 & (0.02) & (0.01) & (0.01) & (0.01) & (0.01) \\
\hline \(C_{4}(\mu_{1})\) & \(-0.05\) & \(-0.04\) & \(-0.03\) & \(-0.03\) & \(-0.02\) \\
 & \((-0.04)\) & \((-0.03)\) & \((-0.03)\) & \((-0.03)\) & \((-0.02)\) \\
\hline \(C_{5}(\mu_{1})\) & \(0.01\) & \(0.01\) & \(0.01\) & \(0.01\) & \(0.01\) \\
 & (0.01) & (0.01) & (0.01) & (0.01) & (0.01) \\
\hline \(C_{6}(\mu_{1})\) & \(-0.06\) & \(-0.04\) & \(-0.04\) & \(-0.04\) & \(-0.03\) \\
 & \((-0.05)\) & \((-0.03)\) & \((-0.03)\) & \((-0.03)\) & \((-0.02)\) \\
\hline \(C_{8}^{\rm eff}(\mu_{1})\) & \((-0.17)\) & \((-0.15)\) & \((-0.15)\) & \((-0.15)\) & \((-0.14)\) \\
\hline
\end{tabular}
\end{table}
Table 8: Values of the Wilson coefficients at NLO(LO)-QCD for different choices of \(\mu_{1}\).
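For readers who wish to evaluate eqs. (B.1)-(B.8) numerically, the sketch below collects the kinematic ingredients: the Källén function, the phase-space functions \(\omega_{1,2}\), and the combinations \(k_{1},\ldots,k_{6}\) for \(N_{c}=3\). The example point uses the LO Wilson coefficients from table 8 at \(\mu_{1}=4.5\) GeV and a rough mass ratio \(x_{c}\) built from \(\bar{m}_{c}(\bar{m}_{c})\) and \(m_{b}^{\rm kin}\) of table 7; this mixes mass schemes and is meant for illustration only.

```python
import math

def kallen(a, b, c):
    """Kallen function lambda(a, b, c) = (a - b - c)^2 - 4*b*c."""
    return (a - b - c) ** 2 - 4.0 * b * c

def omega1(a, b):
    """omega_1(a, b) of eq. (B.5)."""
    return (a - b) ** 2 + a + b - 2.0

def omega2(a, b):
    """omega_2(a, b) of eq. (B.5)."""
    return 2.0 * (a - b) ** 2 - (1.0 + a + b)

def k_coeffs(c1, c2, nc=3.0):
    """Wilson-coefficient combinations k_1..k_6 of eqs. (B.6)-(B.8)."""
    return {
        "k1": 2.0 * c1 * c2 + nc * c2 ** 2,
        "k2": c1 ** 2,
        "k3": 2.0 * c1 * c2,
        "k4": c1 ** 2 + c2 ** 2,
        "k5": nc * c1 ** 2 + 2.0 * c1 * c2,
        "k6": c2 ** 2,
    }

x_c = (1.27 / 4.573) ** 2        # rough (m_c/m_b)^2; mixes mass schemes
print(math.sqrt(kallen(1.0, x_c, 0.0)), omega1(x_c, 0.0), omega2(x_c, 0.0))
print(k_coeffs(c1=1.11, c2=-0.26))  # LO values at mu_1 = 4.5 GeV, table 8
```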
2307.03770
Establishing the impact of luminous AGN with multi-wavelength observations and simulations
Cosmological simulations fail to reproduce realistic galaxy populations without energy injection from active galactic nuclei (AGN) into the interstellar medium (ISM) and circumgalactic medium (CGM); a process called `AGN feedback'. Consequently, observational work searches for evidence that luminous AGN impact their host galaxies. Here, we review some of this work. Multi-phase AGN outflows are common, some with potential for significant impact. Additionally, multiple feedback channels can be observed simultaneously; e.g., radio jets from `radio quiet' quasars can inject turbulence on ISM scales, and displace CGM-scale molecular gas. However, caution must be taken comparing outflows to simulations (e.g., kinetic coupling efficiencies) to infer feedback potential, due to a lack of comparable predictions. Furthermore, some work claims limited evidence for feedback because AGN live in gas-rich, star-forming galaxies. However, simulations do not predict instantaneous, global impact on molecular gas or star formation. The impact is expected to be cumulative, over multiple episodes.
C. M. Harrison, A. Girdhar, S. R. Ward
2023-07-07T18:00:01Z
http://arxiv.org/abs/2307.03770v1
# Establishing the impact of luminous AGN with multi-wavelength observations and simulations

C.M. Harrison\({}^{1}\), A. Girdhar\({}^{1,2,3}\) and S.R. Ward\({}^{2,3,4}\)

_Proceedings of IAU Symposium No. 371 (2023), G. Bruni, M. Diaz-Trigo, K. Fukumura, S. Laha, eds._

###### Abstract

Cosmological simulations fail to reproduce realistic galaxy populations without energy injection from active galactic nuclei (AGN) into the interstellar medium (ISM) and circumgalactic medium (CGM); a process called 'AGN feedback'. Consequently, observational work searches for evidence that luminous AGN impact their host galaxies. Here, we review some of this work. Multi-phase AGN outflows are common, some with potential for significant impact. Additionally, multiple feedback channels can be observed simultaneously; e.g., radio jets from 'radio quiet' quasars can inject turbulence on ISM scales, and displace CGM-scale molecular gas. However, caution must be taken comparing outflows to simulations (e.g., kinetic coupling efficiencies) to infer feedback potential, due to a lack of comparable predictions. Furthermore, some work claims limited evidence for feedback because AGN live in gas-rich, star-forming galaxies. However, simulations do not predict instantaneous, global impact on molecular gas or star formation. The impact is expected to be cumulative, over multiple episodes.

Keywords: active galactic nuclei, quasars, feedback, galaxy evolution

## 1 Introduction

There is a consensus across galaxy formation models and simulations that considerable energy injection, into the interstellar medium (ISM) and beyond, from active galactic nuclei (AGN) is required to regulate star formation and to reproduce many of the observable properties of massive galaxies (e.g., Schaye et al., 2015; Pillepich et al., 2018; Davé et al., 2019). This theoretical work has motivated a multitude of observational studies, searching for 'direct' evidence that AGN impact upon the properties of the ISM, or circumgalactic medium (CGM), and/or the host galaxy's star formation (see e.g., Harrison et al., 2017). We provide a brief overview of some work studying the most luminous AGN (i.e., roughly with \(L_{\rm AGN}>10^{43}\) erg s\(^{-1}\)), which are associated with rapidly accreting supermassive black holes and are identified mostly using optical, infrared or X-ray wavelengths. Three popular approaches that observers take to investigate if luminous AGN impact their host galaxies are discussed: (1) measuring mass outflow rates and kinetic coupling efficiencies (Section 2); (2) comparing the location of jets and outflows to spatially-resolved ISM properties and sites of star formation (Section 3); and (3) investigating the star formation rates and molecular gas properties of AGN host galaxies (Section 4). Section 5 finishes this article by discussing what can be learnt about the impact of luminous AGN from these observations, within the context of galaxy formation simulations.

## 2 Outflow rates and kinetic coupling efficiencies

AGN are known to drive multi-phase 'outflows' into the interstellar medium (ISM) and circumgalactic medium (CGM), as evidenced by a variety of emission-line and absorption-line studies (see e.g., Cicone et al., 2018; Harrison et al., 2018; Veilleux et al., 2020). These AGN outflows can result in increased turbulence and/or can expel material from the host galaxy's ISM (at least temporarily).
A common approach to assess the potential impact of AGN outflows is to measure their properties, including the mass outflow rates and kinetic powers. These calculations typically involve: identifying which gas is outflowing; converting fluxes to gas masses; measuring the velocities and spatial distribution of the outflowing gas; and then applying models or basic assumptions to calculate outflow rates (Harrison et al., 2018). The inferred mass outflow rates in some sources are found to exceed the star formation rates (see discussion in e.g., Harrison, 2017; Fiore et al., 2017). This means that gas is being removed more rapidly than it is forming stars, implying the potential for a significant impact on the host galaxy. However, it is very challenging to infer the long-term impact of such outflows, including understanding what fraction of this material ultimately escapes the host galaxy. Furthermore, these extreme rates are not ubiquitous. The inferred outflow rates of some more 'typical' systems can be very low to negligible (e.g., Davies et al., 2020; Ramos Almeida et al., 2022; Villar Martín et al., 2023). There are also analysis challenges, with very different results possible when applying different assumptions or using different gas tracers (e.g., Harrison et al., 2018; Davies et al., 2020).

Another common approach in the literature is to measure the ratio of the outflow's kinetic power to the AGN luminosity to derive the 'kinetic coupling efficiency' (see e.g., Fiore et al., 2017). In some cases, these measurements have been compared to simulations, which invoke AGN feedback, to establish if the observations are consistent, or not, with feedback models. However, these comparisons are very challenging. It is important to note that the observations are typically measuring an outflow rate in one particular gas phase and often without information on how the properties vary spatially. Furthermore, the AGN luminosities are 'instantaneous' and do not contain information about earlier times, including when the outflows may have been initially launched. Therefore, these measured kinetic coupling efficiencies are not comparable quantities to the AGN coupling efficiencies invoked in sub-grid models of simulations (see discussion in Harrison et al., 2018). Indeed, if simulations track the kinetic powers of outflows as they propagate through the ISM, they can be significantly lower than the initially injected power, and it remains extremely challenging to simulate the phases of outflows that are captured by observations (e.g., Costa et al., 2020).
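To make the bookkeeping behind such measurements explicit, the sketch below implements one commonly used time-averaged estimator, \(\dot{M}_{\rm out}=M_{\rm out}\,v_{\rm out}/R_{\rm out}\), together with the corresponding kinetic power and coupling efficiency. The geometry prefactor (set to 1 here; factors of up to a few apply for other assumed geometries) and all input values are illustrative assumptions, not measurements from any of the cited studies.

```python
# Illustrative outflow energetics; estimator conventions differ (Harrison et al. 2018).

MSUN_KG = 1.989e30        # solar mass [kg]
KPC_M = 3.086e19          # kiloparsec [m]
YR_S = 3.156e7            # year [s]

def mass_outflow_rate(m_out_msun, v_out_kms, r_out_kpc, geometry=1.0):
    """Time-averaged outflow rate [Msun/yr] for gas mass m_out at radius r_out."""
    rate_kg_s = geometry * m_out_msun * MSUN_KG * (v_out_kms * 1e3) / (r_out_kpc * KPC_M)
    return rate_kg_s * YR_S / MSUN_KG

def kinetic_coupling_efficiency(m_dot_msun_yr, v_out_kms, l_agn_erg_s):
    """Ratio of outflow kinetic power, 0.5 * M_dot * v^2, to the AGN luminosity."""
    e_kin_w = 0.5 * (m_dot_msun_yr * MSUN_KG / YR_S) * (v_out_kms * 1e3) ** 2
    return e_kin_w / (l_agn_erg_s * 1e-7)      # erg/s -> W

m_dot = mass_outflow_rate(m_out_msun=1e7, v_out_kms=800.0, r_out_kpc=1.0)
print(f"{m_dot:.1f} Msun/yr")                                    # ~8 Msun/yr
print(f"{kinetic_coupling_efficiency(m_dot, 800.0, 1e45):.1e}")  # ~2e-3
```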
## 3 Spatially-resolved investigation of impact

Recent numerical simulations have highlighted that multiple feedback 'modes' can exist simultaneously, caused by AGN-driven winds, jets and/or radiation pressure. This can include: injecting turbulence to reduce the star formation efficiency; removing high-density nuclear gas (i.e., star-forming material); halting accretion from the halo (i.e., regulating future star formation); and even locally boosting star formation due to shocks compressing the gas to higher densities (e.g., Costa et al., 2020; Mandal et al., 2021). Observations now also show that AGN can impact upon the ISM, and the wider-scale environment, in multiple different ways.

For example, luminous 'radio quiet' quasars, which have considerable radiative output and are often associated with powerful winds, are also able to interact with the star-forming material in their host galaxies via low-power radio jets and lobes (e.g., Girdhar et al., 2022; Audibert et al., 2023; Cresci et al., 2023). An example of this is showcased in Figure 1, where broad and asymmetric CO emission-line profiles (tracing molecular outflows in the inner regions of galaxies) and molecular filaments entrained by expanding radio lobes are observed simultaneously (also see Morganti et al., 2023). These processes are typically searched for separately in observations of luminous AGN (e.g., Fluetsch et al., 2019) and brightest cluster galaxies (BCGs; e.g., Russell et al., 2019; Tamhane et al., 2022), respectively.

Figure 1: _(a)_ Molecular gas velocities and radial extents for two types of AGN-ISM interaction, traced via CO emission: (1) broad emission-line components for a sample of AGN and star-forming galaxies (Fluetsch et al., 2019); (2) filaments around radio lobes for BCGs (Tamhane et al., 2022). The larger symbols represent data for four \(z<0.2\) 'radio quiet' quasars, two of which exhibit both types of structure. _(b)_ Example CO emission-line profile for one of these quasars. _(c)_ Example CO velocity map of the same quasar, with radio contours, and highlighting CO structures associated with \(\approx\)10 kpc radio lobes. Figures adapted from Girdhar et al. (in prep).

The aforementioned observations are, at least qualitatively, similar to the expectations of simulations of individual galaxies, which predict that AGN will ultimately have a negative impact upon a host galaxy (e.g., Costa et al., 2020; Mandal et al., 2021). However, this global and long-term suppression of star formation may not easily be associated with a single AGN episode, or a localised ISM interaction. This could help explain why 'direct' observational evidence of star formation suppression continues to be scarce, tentative or inconclusive (e.g., Scholtz et al., 2021; Cresci et al., 2023). On the other hand, observations that show depleted and/or excited molecular gas within the ISM at the locations of low-power jets or outflows can indicate that the star-forming material is directly affected by the AGN (e.g., Rosario et al., 2019; Girdhar et al., 2022; Audibert et al., 2023). Nonetheless, the _long-term_ and _global, galaxy-wide impact_ of such processes remains more speculative. This motivates a statistical approach, as described below, that measures the global molecular gas and star formation properties of large samples of AGN host galaxies.

## 4 Gas content and star formation rates

One popular approach to investigate the impact of AGN on their host galaxies is to take galaxy samples and measure how the star formation properties and/or the molecular gas properties (i.e., the 'fuel' for star formation) are related (or not) to the presence of an AGN. These studies come in various forms, including: (1) investigating trends of star formation rates or molecular gas content with AGN luminosities (e.g., Stanley et al., 2015; Ramasawmy et al., 2019; Shangguan et al., 2020); and (2) establishing if AGN host galaxies have different distributions of star formation and/or molecular gas properties to non-AGN host galaxies (e.g., Scholtz et al., 2018; Koss et al., 2021; Valentino et al., 2021). One specific example of this type of analysis is shown in Figure 2.
The molecular gas fractions (derived from CO observations) are plotted as a function of specific star formation rate (i.e., star formation rate divided by stellar mass) for low redshift galaxies and AGN. The star-forming galaxies and AGN from Tacconi et al. (2018) are shown to occupy the same area of parameter space. Furthermore, quasars with known radio jets and powerful ionised outflows also have the same molecular gas and star formation properties as their less active galaxy counterparts (Jarvis et al., 2020).

Figure 2: Ratio of molecular gas to stellar mass versus specific star formation rate. Galaxies from Tacconi et al. (2018) with and without an identified AGN are represented by density contours (also green points) and by magenta squares, respectively. Type 2 \(z<0.2\) quasars with known outflows and/or jets are shown with black circles. Adapted from Jarvis et al. (2020).

With only a few exceptions (e.g., Circosta et al., 2021; Bischetti et al., 2021, which investigate molecular gas content at \(z\sim 2\)), studies of AGN across multiple epochs and using a variety of samples and approaches tend towards a consensus that luminous AGN typically live in gas-rich, star-forming galaxies. It can be tempting to conclude that this is in conflict with predictions that luminous AGN regulate star formation (as suggested by some observational studies); however, as we demonstrate below, this empirical result does not conflict with the predictions of cosmological simulations that include AGN feedback.

## 5 Discussion and conclusions

We have given a very brief overview of some of the observational work investigating the impact of luminous AGN on their host galaxies (in particular the molecular gas content and the star formation). On the one hand, there is overwhelming evidence that AGN are able to modify the distribution and properties of the ISM and CGM of their host galaxies. This is due to the ever-growing body of observations that AGN radiation, winds and/or jets are able to influence the molecular gas by injecting turbulence, driving it away, and/or exciting it (Sections 2 and 3). On the other hand, the _long-term_ and _global significance_ of these processes is less well-established, with the possibility that their impact is very localised, lasts only on short timescales, and/or is insignificant. Indeed, the observation that luminous AGN tend to live in gas-rich, star-forming galaxies has sometimes been presented as evidence against effective feedback (Section 4).

With this in mind, it is worth revisiting the cosmological simulations, which invoke AGN feedback, and motivated much of the observational work. Whilst these simulations typically lack the resolution to include a detailed physical prescription of AGN feedback, they are useful for comparing to the statistical studies of AGN and the molecular gas and star formation properties presented in Section 4. For example, Ward et al. (2022) investigated predictions from the cosmological simulations: (1) IllustrisTNG100 (Pillepich et al., 2018); (2) EAGLE (Schaye et al., 2015); and (3) SIMBA (Davé et al., 2019). These are broadly similar in scope, with \(\sim\)(100 Mpc)\(^{3}\) boxes and simulation outputs that enable predictions of AGN luminosities, stellar masses, star formation rates and molecular gas content as a function of cosmic time. Ward et al. (2022) used these to reproduce the type of experiments performed by observers, as discussed in Section 4.
One example of this analysis is presented in Figure 3, where the molecular gas fractions of the \(z=0\) galaxies from IllustrisTNG are shown as a function of stellar mass. The luminous AGN (selected on either luminosity or Eddington ratio) are found to be preferentially located in gas-rich galaxies. Broadly speaking, across all three simulations investigated, and at both \(z=2\) and \(z=0\), AGN are predicted to live in gas-rich, star-forming galaxies, in qualitative agreement with most observational work, although it is worth noting that there are distinct _quantitative_ differences in the predictions across the three simulations due to the different feedback models used (Ward et al., 2022).

Figure 3: Ratio of molecular gas to stellar mass versus stellar mass, for \(z=0\) galaxies in the IllustrisTNG100 simulation. Colouring represents the mean AGN bolometric luminosity within each bin. Contour lines represent number density, and the triangles show the median values in bins of stellar mass for galaxies classified as AGN (blue) and non-AGN (red), based on their luminosity. The dashed line is the observed main sequence from Tacconi et al. (2018). The histograms show the distributions of gas fractions for sources classified as AGN (blue) or non-AGN (red), based on their Eddington ratios. Adapted from Ward et al. (2022).

We conclude that we may not expect to have a 'smoking gun' observational signature that luminous AGN have a long-term, significant impact on molecular gas content or star formation. The reasons for this are many, with some discussion on this presented in e.g., Scholtz et al. (2018); Ward et al. (2022); Piotrowska et al. (2022). For example, an AGN may stop being visible _before_ it has had an appreciable impact on its host galaxy. Furthermore, the significant impact, which ultimately results in a 'quenched' galaxy, may not be attributed to a single luminous AGN episode, but instead to the cumulative effect of many AGN episodes. Indeed, according to Piotrowska et al. (2022), the strongest predictor of whether a galaxy has quenched its star formation (or not) is considered to be the black hole mass, rather than the AGN luminosity, in both cosmological simulations and observations. In this context, the black hole mass can be considered as an indirect tracer of the integrated power output of the AGN across the history of the galaxy.

In summary, the current body of evidence does not appear to be in conflict with AGN feedback models. However, work is still needed to test the different assumed models of AGN feedback. This can be done by comparing the quantitative predictions of the distributions of properties of galaxy populations from the different cosmological simulations (see e.g., Ward et al., 2022). Furthermore, the current and forthcoming generation of high-resolution individual galaxy simulations should make it possible to extract meaningful predictions of spatially-resolved outflow properties in different phases (e.g., mass outflow rates, kinetic powers), compare these to observations, and to assess the ultimate impact that they can have on galaxy evolution.

CMH acknowledges a United Kingdom Research and Innovation grant (MR/V022830/1). SRW acknowledges the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy (EXC-2094-390783311).
2305.03109
Understanding the Salt Effects on the Liquid-Liquid Phase Separation of Proteins
Protein aggregation via liquid-liquid phase separation (LLPS) is ubiquitous in nature and intimately connects to many human diseases. Although it is widely known that the addition of salt has crucial impacts on the LLPS of protein, full understanding of the salt effect remains an outstanding challenge. Here, we develop a molecular theory which systematically incorporates the self-consistent field theory for charged macromolecules into the solution thermodynamics. The electrostatic interaction, hydrophobicity, ion solvation and translational entropy are included in a unified framework. Our theory fully captures the long-standing puzzles of the non-monotonic salt concentration dependence and the specific ion effect. We find that proteins show salting-out at low salt concentrations due to ionic screening. The solubility follows the inverse Hofmeister series. In the high salt concentration regime, protein remains salting-out for small ions but turns to salting-in for larger ions, accompanied by the reversal of the Hofmeister series. We reveal that the solubility at high salt concentrations is determined by the competition between the solvation energy and translational entropy of ion. Furthermore, we derive an analytical criterion for determining the boundary between the salting-in and salting-out regimes. The theoretical prediction is in quantitative agreement with experimental results for various proteins and salt ions without any fitting parameters.
Chao Duan, Rui Wang
2023-05-04T18:51:54Z
http://arxiv.org/abs/2305.03109v2
# Understanding the Salt Effects on the Liquid-Liquid Phase Separation of Proteins

###### Abstract

Protein aggregation via liquid-liquid phase separation (LLPS) is ubiquitous in nature and intimately connects to many human diseases. Although it is widely known that the addition of salt has crucial impacts on the LLPS of protein, full understanding of the salt effect remains an outstanding challenge. Here, we develop a molecular theory which systematically incorporates the self-consistent field theory for charged macromolecules into the solution thermodynamics. The electrostatic interaction, hydrophobicity, ion solvation and translational entropy are included in a unified framework. Our theory fully captures the long-standing puzzles of the non-monotonic salt concentration dependence and the specific ion effect. We find that proteins show salting-out at low salt concentrations due to ionic screening. The solubility follows the inverse Hofmeister series. In the high salt concentration regime, protein remains salting-out for small ions but turns to salting-in for larger ions, accompanied by the reversal of the Hofmeister series. We reveal that the solubility at high salt concentrations is determined by the competition between the solvation energy and translational entropy of ion. Furthermore, we derive an analytical criterion for determining the boundary between the salting-in and salting-out regimes. The theoretical prediction is in quantitative agreement with experimental results for various proteins and salt ions without any fitting parameters.

## I Introduction

Protein aggregation is ubiquitous in living cells, through which plenty of biomolecular condensates can be assembled [1; 2]. These biomolecular condensates play a vital role in cellular organization and functions, such as the formation of nucleoli [3], heterochromatin and ribonucleoprotein granules [4; 5], as well as signal transduction within the cytoplasm [6; 7; 8]. In addition, the aggregation of various misfolded proteins is intimately linked to many neurodegenerative diseases, including Alzheimer's, Parkinson's, diabetes, and prion diseases [9; 10]. Evidence is mounting that protein aggregation proceeds via a liquid-liquid phase separation (LLPS), which is manifested as the formation of a dense phase often resembling liquid droplets and a coexisting dilute phase [11; 12; 13; 14]. Revealing the essential physical chemistry of the LLPS-driven aggregation will help delineate the functions of biomolecular condensates and provide useful guidance for the therapy of diseases [15; 16; 17].

In spite of increasing academic interest, understanding and regulating the LLPS of proteins remains a big challenge [18]. The salt effect on the LLPS of proteins is one of the longest-standing puzzles. It is well known that the ionic environment has critical impacts on the LLPS; besides, the addition of salt also provides an effective tool to modulate it [19]. However, this salt effect is very complicated: the LLPS of protein has a non-trivial dependence on both the salt concentration and the chemical identity of the ions (usually known as the specific ion effect) [20; 21; 22; 23; 24]. Zhang and Cremer measured the cloud point of positively-charged lysozyme solutions [26]. At low salt concentrations, they found that the solubility of lysozyme decreases as the salt concentration increases, i.e. protein salting-out. The variation of the solubility with the anion identity follows the inverse Hofmeister series.
In contrast, at high salt concentrations, lysozyme continues to show salting-out for some anions (e.g. Cl\({}^{-}\)), whereas other anions (e.g. Br\({}^{-}\) and I\({}^{-}\)) enhance the lysozyme solubility, i.e. protein salting-in. The solubility variation follows the direct Hofmeister series in the high salt concentration regime. Neither the non-monotonic salt concentration effect nor the specific ion effect can be explained, even qualitatively, by the standard mean-field Poisson-Boltzmann (PB) theory [25]. Similar salt-dependent behaviors have also been observed in other protein solutions [27; 28; 29; 30] and soft matter systems such as synthetic polymers [31; 32; 33] and colloidal dispersions [34; 35], implying the universality of the salt effects on LLPS.

Many theoretical and computational efforts have been made to explain these salt effects. Kastelic et al. assumed a phenomenological model for the interaction energy between proteins, where the well depths in the presence of different alkali-halide salts were fitted to experimental data [36]. They suggested that the salt effect on LLPS is majorly attributed to the ionic screening, but the salting-in behavior and the reversal of the Hofmeister series observed at high salt concentrations have not been captured. Zhang and Cremer developed a modified binding isotherm model [26]. The model parameters, representing the effectiveness and equilibrium constant for the association of a specific anion to the protein surface, were fitted to the measured cloud point. They found that the salt effect in the high salt concentration regime is correlated to the interfacial tension of protein surrounded by anions with different polarizability. Furthermore, using a modified PB theory to account for ion size and polarizability, Boström et al. suggested that the reversal of the Hofmeister series at high salt concentrations originates from the inversion of the effective surface charge of proteins [37]. However, to date there is no theory that can unify the description of the salt effects on the LLPS of proteins over the entire salt concentration regime. The underlying physical chemistry, particularly for the counterintuitive behaviors observed at high salt concentrations, is still unclear.

To uncover the salt effect on the LLPS of proteins, we develop a molecular theory which systematically includes the electrostatics, hydrophobic interaction, ion solvation and the translational entropy of protein in a unified framework. Compared to the existing theories, we have made the following two major improvements. First, we explicitly account for the highly localized density fluctuation of proteins in the dilute phase rather than assuming random mixing as invoked in the Flory-Huggins (F-H) theory [38; 39]. This enables the accurate treatment of the ionic screening effect on a charged protein aggregate. Second, we include the self-energy of ions as a result of electrostatic fluctuation, which captures the salt effects beyond the mean-field PB level [40; 41]. Our theory predicts that protein salting-out at low salt concentrations is attributed to the screening effect, whereas the protein solubility at high salt concentrations is determined by the competition between the solvation energy and translational entropy of ions. Furthermore, we derive an analytical criterion for determining the boundary between the salting-in and salting-out regimes for different proteins and ions.
The theoretical prediction is in quantitative agreement with experimental data reported in the literature without any fitting parameters.

## II Theory

The solubility of protein in a salt solution is built upon the equilibrium between a dilute phase and a protein-rich concentrated phase, as illustrated in Fig. 1A. The concentrated solution can be modeled by a homogeneous liquid-like condensate due to the negligible density fluctuation and surface contribution. However, the description of the dilute phase is nontrivial because of the large localized density fluctuation. An instantaneous picture of the dilute protein solution has localized high concentrations where the proteins are located and pure salt solutions elsewhere. This is an entirely different scenario from that envisioned in the random-mixing picture of the F-H theory used in existing work [15; 16; 17; 42; 43]. To account for this large localized density fluctuation in the dilute phase, we focus on the subvolume of the entire solution containing only one isolated protein or one multi-protein aggregate (see Fig. 1B). The density profile and free energy of the protein/aggregate is obtained by applying the self-consistent field theory (SCFT) in the subvolume. This information is then incorporated into the framework of dilute solution thermodynamics to reconstruct the solution behavior of the entire dilute phase.

Figure 1: (A) Schematic of the total system consisting of coexisting dilute phase (D) and concentrated phase (C). The dilute phase is an ensemble of protein aggregates with different aggregation numbers. (B) A subsystem containing one isolated aggregate in the presence of salt ions. (C) A representative phase diagram plotting the equilibrium volume fractions of the two coexisting phases (\(\phi_{D}\) and \(\phi_{C}\)) as a function of the bulk salt concentration \(c_{b}\). \(a_{+}=a_{-}=2.5\) Å, \(z_{+}=z_{-}=1\), \(\epsilon_{P}=30\), and \(\epsilon_{S}=80\).

### Self-Consistent Field Theory for an Isolated Protein/Aggregate

As shown in Fig. 1B, we consider a subvolume consisting of an isolated aggregate of \(m\) proteins and \(n_{S}\) solvent molecules in the presence of \(n_{\pm}\) mobile ions with valency \(z_{\pm}\). \(m=1\) specifies the case of an isolated protein. The subvolume is taken as a semicanonical ensemble: the number of proteins is fixed, whereas the solvent and mobile ions are connected with a bulk salt solution of ion concentration \(c_{\pm}^{b}\) that maintains the chemical potentials of the solvent \(\mu_{S}\) and ions \(\mu_{\pm}\) [39; 44]. The proteins considered here are assumed to be unfolded or intrinsically disordered, where the widely-adopted charged macromolecular model is invoked to describe these proteins [45; 46; 17]. This model is also general for synthetic polyelectrolytes and other biomacromolecules [47]. The charged macromolecule is assumed to be a Gaussian chain of \(N\) Kuhn segments with Kuhn length \(b\). The smeared charge model is adopted to describe the backbone charge distribution with the charge density \(\alpha\) [48]. For simplicity, the volumes of the chain segment and the solvent molecule are assumed to be the same, \(v_{0}\). The local hydrophobic interaction between protein and solvent is described by the Flory parameter \(\chi\).
The key results of the SCFT are the following set of equations for the protein density \(\rho_{P}(\mathbf{r})\), the conjugate fields \(\omega_{P}(\mathbf{r})\) and \(\omega_{S}(\mathbf{r})\), the electrostatic potential \(\psi(\mathbf{r})\) and the ion concentrations \(c_{\pm}(\mathbf{r})\) (see SI, Section 1 for the detailed derivation):

\[\omega_{P}(\mathbf{r})-\omega_{S}(\mathbf{r})=\chi[1-2\rho_{P}(\mathbf{r})]-\frac{v_{0}}{2}\frac{\partial\varepsilon(\mathbf{r})}{\partial\rho_{P}(\mathbf{r})}[\nabla\psi(\mathbf{r})]^{2}+\alpha\psi(\mathbf{r})+v_{0}\left[c_{+}(\mathbf{r})\frac{\partial u_{+}(\mathbf{r})}{\partial\rho_{P}(\mathbf{r})}+c_{-}(\mathbf{r})\frac{\partial u_{-}(\mathbf{r})}{\partial\rho_{P}(\mathbf{r})}\right] \tag{1a}\]

\[\rho_{P}(\mathbf{r})=\frac{m}{Q_{P}}\int_{0}^{N}\mathrm{d}s\,q(\mathbf{r},s)q(\mathbf{r},N-s) \tag{1b}\]

\[1-\rho_{P}(\mathbf{r})=\mathrm{e}^{\mu_{S}}\exp[-\omega_{S}(\mathbf{r})] \tag{1c}\]

\[-\nabla\cdot[\varepsilon(\mathbf{r})\nabla\psi(\mathbf{r})]=z_{+}c_{+}(\mathbf{r})-z_{-}c_{-}(\mathbf{r})+\frac{\alpha}{v_{0}}\rho_{P}(\mathbf{r}) \tag{1d}\]

\[c_{\pm}(\mathbf{r})=\lambda_{\pm}\exp[\mp z_{\pm}\psi(\mathbf{r})-u_{\pm}(\mathbf{r})] \tag{1e}\]

where \(\varepsilon(\mathbf{r})=kT\varepsilon_{0}\varepsilon_{r}(\mathbf{r})/e^{2}\) is the scaled permittivity, with \(\varepsilon_{0}\) the vacuum permittivity, \(e\) the elementary charge and \(\varepsilon_{r}(\mathbf{r})\) the local dielectric constant. \(\varepsilon_{r}(\mathbf{r})\) can be evaluated based on the local composition [49; 50]. Here a linear mixing rule is adopted, which leads to \(\varepsilon_{r}(\mathbf{r})=\varepsilon_{P}\rho_{P}(\mathbf{r})+\varepsilon_{S}(1-\rho_{P}(\mathbf{r}))\), with \(\varepsilon_{P}\) and \(\varepsilon_{S}\) the dielectric constants of the pure protein and solvent, respectively [49; 50; 51]. \(\lambda_{\pm}=\mathrm{e}^{\mu_{\pm}}/v_{\pm}\) is the fugacity of the ions, controlled by the bulk salt concentration. \(Q_{P}\) is the single-chain partition function given by \(Q_{P}=(1/v_{0})\int\mathrm{d}\mathbf{r}\,q(\mathbf{r},N)\), whereas \(q(\mathbf{r},s)\) is the chain propagator determined by the diffusion equation

\[\frac{\partial q(\mathbf{r},s)}{\partial s}=\frac{b^{2}}{6}\nabla^{2}q(\mathbf{r},s)-\omega_{P}(\mathbf{r})q(\mathbf{r},s) \tag{2}\]

\(u_{\pm}(\mathbf{r})\) in Eq. 1e is the self-energy of ions resulting from the fluctuation of the electrostatic field [40; 41]. If the nonuniversal contribution of the fluctuation at the length scale of the ion size is retained, \(u_{\pm}(\mathbf{r})\) reduces to the local Born energy:

\[u_{\pm}(\mathbf{r})=\frac{z_{\pm}^{2}e^{2}}{8\pi a_{\pm}\varepsilon(\mathbf{r})} \tag{3}\]

with \(a_{\pm}\) the Born radius of the ions. The Born solvation energy accounts for the electrostatic interaction between the ion and the local dielectric medium [49; 52]. It captures the fact that ions prefer to be distributed in the medium with the higher dielectric constant. For systems with spatially varying dielectric permittivity, \(u_{\pm}\) is not a constant, and cannot be absorbed into the redefinition of the chemical potential. It will thus affect both the ion distribution and the protein density profile, as indicated in Eqs. 1a and 1e. The non-local contributions of electrostatic fluctuation, such as ion correlation and image force, can be rigorously included into the self-energy through a Gaussian variational approach [40; 41]. We refer interested readers to the relevant literature for more details.
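To illustrate how the Born term enters the ion distribution, the following minimal numerical sketch evaluates \(u_{\pm}(\mathbf{r})\) of Eq. 3 under the linear dielectric mixing rule, together with the resulting Boltzmann-type depletion factor of Eq. 1e, using the same parameter values adopted below (\(\epsilon_{P}=30\), \(\epsilon_{S}=80\), \(a_{\pm}=2.5\) Å, \(z_{\pm}=1\), \(T=298\) K).

```python
import numpy as np

E = 1.602e-19             # elementary charge [C]
EPS0 = 8.854e-12          # vacuum permittivity [F/m]
KT = 1.381e-23 * 298.0    # thermal energy at 298 K [J]

def born_energy_kT(rho_p, z=1.0, a_ion=2.5e-10, eps_p=30.0, eps_s=80.0):
    """Local Born energy of Eq. 3 in kT units, for local protein density rho_p."""
    eps_r = eps_p * rho_p + eps_s * (1.0 - rho_p)   # linear mixing rule
    return (z * E) ** 2 / (8.0 * np.pi * EPS0 * eps_r * a_ion * KT)

rho = np.array([0.0, 0.5, 1.0])       # pure solvent, interface, pure protein
u = born_energy_kT(rho)
print(u)                               # ~[1.40, 2.04, 3.73] kT
print(np.exp(-(u - u[0])))             # ion depletion relative to bulk, cf. Eq. 1e
```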
The free energy of the subsystem is then

\[F_{m}=-m\ln Q_{P}+\ln(m!)-\mathrm{e}^{\mu_{S}}Q_{S}+\frac{1}{v_{0}}\int\mathrm{d}\mathbf{r}\left[\chi\rho_{P}(1-\rho_{P})-\omega_{P}\rho_{P}-\omega_{S}(1-\rho_{P})\right]+\int\mathrm{d}\mathbf{r}\left[\frac{\alpha}{v_{0}}\rho_{P}\psi-\frac{\varepsilon}{2}(\nabla\psi)^{2}-c_{+}-c_{-}+c_{+}^{b}+c_{-}^{b}\right] \tag{4}\]

### Phase Equilibrium

The protein solution in the dilute phase can be reconstructed by incorporating the density profile and free energy of the \(m\)-aggregate obtained from SCFT into the framework of dilute solution thermodynamics [39]. The free energy density of the entire dilute solution with volume \(V\), including the translational entropy of aggregates, can be written as

\[\frac{F_{D}}{V}=\sum_{m=1}^{\infty}\left\{C_{m}F_{m}+C_{m}[\ln(C_{m}v_{m})-1]\right\} \tag{5}\]

where \(C_{m}\) is the concentration of the \(m\)-aggregate. \(v_{m}\) is a reference volume, which for simplicity can be taken as the volume of the \(m\)-aggregate. \(C_{m}v_{m}\) thus becomes the corresponding volume fraction \(\phi_{m}\) of the \(m\)-aggregate. In Eq. 5, the interaction between different aggregates is ignored under the assumption of a sufficiently dilute solution. The equilibrium concentration of the \(m\)-aggregate can be obtained by minimization of the free energy density in Eq. 5 subject to fixed total protein concentration \(\sum_{m=1}^{\infty}mC_{m}\), which results in the following distribution:

\[\phi_{m}=\phi_{1}^{m}\exp(-\Delta F_{m}) \tag{6}\]

Here \(\Delta F_{m}=F_{m}-mF_{1}\) is the free energy of formation of the \(m\)-aggregate from \(m\) single isolated proteins.

The protein solution in the concentrated phase can be modeled as an infinitely large aggregate with uniform protein density. The free energy density is directly obtained by applying SCFT to a homogeneous system, and Eq. 4 becomes:

\[\frac{F_{C}}{V}=\frac{\phi_{P}}{N}\left[\ln\left(\frac{\phi_{P}}{N}\right)-1\right]+(1-\phi_{P})\left[\ln(1-\phi_{P})-1\right]+\chi\phi_{P}(1-\phi_{P})+\frac{\alpha}{v_{0}}\phi_{P}\psi-c_{+}-c_{-}+c_{+}^{b}+c_{-}^{b} \tag{7}\]

where \(\psi\) is the electrostatic potential difference between the concentrated phase and the dilute phase, usually known as the Donnan potential or Galvani potential [49; 41]. \(\psi\) is obtained by applying the charge neutrality constraint to the homogeneous concentrated phase. The equilibrium between the protein dilute phase and the protein concentrated phase is determined by the respective equality of the chemical potentials of the protein and the solvent in the two coexisting phases, which results in

\[F_{1}+\ln\phi_{1}=\ln\frac{\phi_{C}}{N}-1+(1-N)(1-\phi_{C})+\chi N(1-\phi_{C})^{2}+\mu_{P}^{elec} \tag{8a}\]

\[-\sum_{m=1}^{\infty}\frac{\phi_{m}}{mN}=\left(1-\frac{1}{N}\right)\phi_{C}+\ln(1-\phi_{C})+\chi N\phi_{C}^{2}+\mu_{S}^{elec} \tag{8b}\]

where \(\phi_{C}\) is the equilibrium volume fraction of protein in the concentrated phase and \(\phi_{m}\) is the equilibrium volume fraction of the \(m\)-aggregate in the dilute phase given by Eq. 6. The total volume fraction of protein in the dilute phase is thus \(\phi_{D}=\sum_{m=1}^{\infty}\phi_{m}\). It should be noted that the sum on the left-hand side of Eq. 8b is the dimensionless osmotic pressure in the dilute phase (in accordance with the van't Hoff law), as expected for an ideal solution [53].
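To make the dilute-phase construction concrete, the sketch below evaluates the aggregate-size distribution of Eq. 6 for a toy free energy of formation with bulk and surface terms, \(\Delta F_{m}=-\varepsilon(m-1)+\gamma(m^{2/3}-1)\), chosen so that \(\Delta F_{1}=0\) by construction. The values of \(\varepsilon\) and \(\gamma\) are hypothetical placeholders; in the actual calculation, \(\Delta F_{m}\) is supplied by the SCFT.

```python
import numpy as np

def aggregate_distribution(phi1, delta_f):
    """phi_m = phi_1^m * exp(-Delta F_m), Eq. 6, for m = 1..len(delta_f)."""
    m = np.arange(1, len(delta_f) + 1)
    return phi1 ** m * np.exp(-np.asarray(delta_f))

m = np.arange(1, 51)
eps, gamma = 2.0, 6.0                                    # hypothetical, in kT units
delta_f_toy = -eps * (m - 1) + gamma * (m ** (2.0 / 3.0) - 1.0)
phi_m = aggregate_distribution(phi1=1e-4, delta_f=delta_f_toy)

print(phi_m[:4])        # monomers dominate this (sub-saturated) dilute phase
print(phi_m.sum())      # total protein volume fraction phi_D = sum_m phi_m
```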
\(\mu_{P}^{elec}\) and \(\mu_{S}^{elec}\) are the electrostatic contributions to the chemical potentials of the protein and the solvent, respectively, which are given by

\[\mu_{P}^{elec}=\alpha N\psi-Nv_{0}[(c_{+}-c_{+}^{b})+(c_{-}-c_{-}^{b})]+Nv_{0}(u_{+}c_{+}+u_{-}c_{-})\left(\frac{\epsilon_{S}-\epsilon_{P}}{\epsilon}\right)(1-\phi_{C}) \tag{9a}\]

\[\mu_{S}^{elec}=-v_{0}[(c_{+}-c_{+}^{b})+(c_{-}-c_{-}^{b})]+v_{0}(u_{+}c_{+}+u_{-}c_{-})\left(\frac{\epsilon_{S}-\epsilon_{P}}{\epsilon}\right)\phi_{C} \tag{9b}\]

It is worth noting that the three terms on the right-hand side of Eq. 9a represent the contributions from the energy of a charged protein in the electrostatic field, the translational entropy of the ions, and the solvation energy of the salt ions, respectively. For each salt concentration \(c_{b}\) in the bulk salt solution (i.e. the reservoir), the equilibrium volume fractions in the coexisting dilute and concentrated phases, \(\phi_{D}\) and \(\phi_{C}\), are obtained by solving Eqs. 8a and 8b simultaneously, from which the phase diagram as illustrated in Fig. 1C can be obtained.

## III Results

In the current work, we focus on the salt concentration effect and the specific ion effect. The number of Kuhn segments in the protein is set as \(N=50\) with \(b=1.0\) nm. We use the simple system of a homogeneous chain with a uniform backbone charge distribution to illustrate the fundamental physical chemistry. The backbone charge density is \(\alpha=+0.05\), where positive \(\alpha\) is adopted to facilitate the comparison with the corresponding proteins studied in experiments [26; 27; 28; 30]. The volumes of the chain segment and the solvent molecule are assumed to be the same, \(v_{0}=1.0\) nm\(^{3}\). The temperature is set to be 298 K with the Flory parameter \(\chi=1.2\).

### Salt Effects on the Protein Solubility

The salt effects on the LLPS of proteins observed in experiments show a complicated dependence on both the salt concentration and the chemical identity of the ions. We theoretically investigate the protein solubility for different salt concentrations and various anion radii. Here, the solubility is represented by \(\phi_{D}\), the equilibrium volume fraction of the dilute phase on the coexistence curve (see Fig. 1C). Figure 2A shows that the solubility decreases as \(c_{b}\) increases in the low salt concentration regime (\(c_{b}<0.2\) M), indicating protein salting-out. At the same \(c_{b}\), the solubility decreases with the increase of the anion radius, consistent with the trend of the inverse Hofmeister series. In contrast, in the high salt concentration regime (\(c_{b}>0.2\) M), the protein remains salting-out for a small anion (\(a_{-}=2.0\) Å), but turns to salting-in for larger ions. The solubility increases with the increase of the anion radius, indicating the direct Hofmeister series. The dependence of LLPS on both the salt concentration and the specific ions predicted by our theory is in good agreement with the solubility measurements of lysozyme in Zhang and Cremer's experiments [26]. Particularly, they found salting-out behavior at high salt concentrations only for the small Cl\({}^{-}\), whereas all other larger anions show salting-in behavior, exactly as captured by Fig. 2A.

The salt effects on the solubility in the high salt concentration regime also depend on the properties of the protein. If a protein with a lower dielectric constant (\(\epsilon_{P}=10\)) is adopted, as shown in Fig. 2B, it exhibits salting-out in the entire salt concentration regime for all the anions with \(a_{-}\leq 3.5\) Å.
This is in stark contrast with the behavior predicted for proteins with high \(\epsilon_{P}\). It is interesting to note that the same trend has also been reported in experiments. Cho et al. measured the solubility of an elastin-like polypeptide, which has a lower dielectric constant than lysozyme [27]. All anions investigated in their work show salting-out at high salt concentrations. A similar all-salting-out behavior has been observed by Zhang et al. in the synthetic poly(N-isopropylacrylamide) (PNIPAM) system [31]. The dielectric constant of PNIPAM is less than 5 as reported in the literature [54]. These experimental results are in good agreement with our theoretical prediction. As elucidated in Eq. 8, the LLPS of protein is determined by the interplay between the hydrophobic attraction, the Coulomb repulsion, and the solvation energy and translational entropy of ions. The solubility is directly controlled by the effective two-body interaction between proteins: attractive contributions to the interaction favor condensation, whereas repulsive contributions prefer dissolution. The impacts of the aforementioned four contributions on the two-body interaction and their salt-concentration dependence are summarized in Table 1. The hydrophobicity of the protein backbone always leads to an effective attraction and is independent of \(c_{b}\); it can thus be neglected when considering salt effects. The Coulomb interaction between like-charged proteins is repulsive and decays exponentially with distance, with an inverse screening length \(\kappa\sim c_{b}^{1/2}\) as a result of ionic screening. Furthermore, the contribution of ion solvation is effectively attractive. Ions prefer to be dissolved in the medium with the higher dielectric constant, as indicated by the Born solvation model (Eq. 3). This selective partition leads to depletion of ions from the proteins and thus drives phase separation. Lastly, the translational entropy of ions favors a uniform distribution in the entire solution, which suppresses the aggregation of proteins and thus provides an effective repulsion. As illustrated in Eq. 9, the contributions of both the ionic solvation and the translational entropy depend linearly on \(c_{b}\). In the following two subsections, we provide a more detailed analysis of the salt effects in the low and high salt concentration regimes, respectively. ### Ionic Screening at Low Salt Concentrations In the low salt concentration regime, the Coulomb repulsion between proteins dominates over the contributions from ionic solvation and translational entropy. Thus, the key factor that determines the salt effects on LLPS is how the Coulomb repulsion is screened by salt ions. The screening effect gets stronger as \(c_{b}\) increases, which reduces the effective charge of the protein and thus weakens the two-body repulsion. Therefore, the solubility of the protein decreases as \(c_{b}\) increases, indicating a salting-out behavior (see Fig. 2). While the salting-out behavior is universal for all ions in the low salt concentration regime, its degree exhibits a specific ion effect because of the different efficacy of anions in screening the Coulomb repulsion. Based on the Born solvation model, ions preferentially distribute in the solvent region rather than the protein region, since \(\epsilon_{S}>\epsilon_{P}\) in most cases. This selective partition becomes more pronounced for smaller anions. Figure 3 shows the electrostatic double layer structure around a positively-charged protein. 
Anions with a smaller radius are repelled more from the protein center, resulting in a less screened Coulomb potential. Therefore, the protein solubility decreases with the increase of the anion radius, in agreement with the trend of the inverse Hofmeister series observed in experiments at low salt concentrations.

Figure 3: The salt effect on the electrostatic double layer structure around a positively-charged protein in the low salt concentration regime. (_A_) Anion concentration profile \(c_{-}(r)\) and (_B_) electrostatic potential \(\psi(r)\). \(a_{+}=2.5\)Å, \(z_{+}=z_{-}=1\), \(\epsilon_{P}=10\), \(\epsilon_{S}=80\), and \(c_{b}=20\)mM. (_C_) Schematics of the different screening effects for a small anion and a large anion.

\begin{table} \begin{tabular}{l c c} Contribution & Effective interaction & \(c_{b}\)-dependence \\ \hline hydrophobicity & attractive & \(\sim c_{b}^{0}\) \\ Coulomb interaction & repulsive & \(\sim e^{-\kappa r}/r\) (\(\kappa\sim c_{b}^{1/2}\)) \\ ion solvation & attractive & \(\sim c_{b}\) \\ entropy of ion & repulsive & \(\sim c_{b}\) \\ \end{tabular} \end{table} Table 1: The impacts of different contributions on the effective two-body interaction between proteins and their salt-concentration dependence

Zhang and Cremer suggested that the specific ion effect on LLPS in the low salt concentration regime originates mainly from the effectiveness of anions of different sizes in associating with the positively-charged protein [26]. Their explanation is consistent with the mechanism revealed by our results. ### Competition between Ion Solvation and Translational Entropy at High Salt Concentrations In the high salt concentration regime, the charges carried by the proteins are largely screened, and hence the Coulomb repulsion becomes less significant. The LLPS of the protein is mainly determined by the competition between the solvation and the translational entropy of ions, as illustrated in Fig. 4. The tendency of ions to be preferentially solvated by the medium with the higher dielectric constant provides a driving force for the separation of proteins from the solvent phase. This reduces the solubility, i.e., salting-out. On the contrary, the translational entropy of ions favors a uniform distribution in the entire system, which enhances the miscibility between protein and solvent, i.e., salting-in.

Figure 4: Schematics of the salt effects on the LLPS of proteins in the high salt concentration regime. (_A_) Ion solvation dominates for the case of small ions, which favors salting-out. (_B_) The translational entropy of ions dominates for the case of large ions, which favors salting-in.

Based on the electrostatic contributions to the chemical potential in Eq. 9, the competition between ion solvation and translational entropy can be quantified by: \[\Delta\mu_{P}^{elec}=\mu_{P}^{elec}(D)-\mu_{P}^{elec}(C)\approx(z_{+}+z_{-})Nv_{0}c_{b}\left[\frac{l_{B,0}}{2}\left(\frac{\epsilon_{S}-\epsilon_{P}}{\epsilon_{S}^{2}}\right)\frac{1}{\tilde{a}}-1\right] \tag{10}\] where \(\tilde{a}\) is the valency-weighted harmonic average radius of the cation and anion, given by \((z_{+}+z_{-})/\tilde{a}=z_{+}^{2}z_{-}/a_{+}+z_{-}^{2}z_{+}/a_{-}\), and \(l_{B,0}=e^{2}/(4\pi\epsilon_{0}kT)\) is the Bjerrum length in vacuum. The detailed derivation of Eq. 10 is provided in SI, Section 2. \(\Delta\mu_{P}^{elec}\) represents the driving force for a single protein to transfer from the concentrated phase (Phase C) to the dilute phase (Phase D). When \(\Delta\mu_{P}^{elec}>0\), ion solvation dominates and the protein prefers to stay in the concentrated phase rather than the dilute phase, which indicates salting-out. When \(\Delta\mu_{P}^{elec}<0\), translational entropy dominates, indicating salting-in. Equation 10 shows that the solvation effect becomes less pronounced as \(\tilde{a}\) increases. This explains our numerical results in Fig. 2 and the experimental observations that protein salting-in occurs for larger ions. It also explains the specific ion effect that the protein solubility increases with the anion radius, consistent with the trend of the direct Hofmeister series observed in the high salt concentration regime [26; 27; 28; 30; 31; 32; 33]. Furthermore, the solvation energy depends on the ion valency as well. From the expression for \(\tilde{a}\), ions with a higher valency can be equivalently interpreted as monovalent ions with a smaller effective radius. Therefore, multivalent ions promote salting-out. This explains the experimental findings in various protein and polymer solutions that SO\({}_{4}^{2-}\) shows a much stronger tendency of salting-out even than Cl\({}^{-}\), although \(a_{\text{SO}_{4}^{2-}}\) is larger than \(a_{\text{Cl}^{-}}\) [27; 30; 31]. As indicated by Eq. 10, the solubility at high salt concentrations also depends on the dielectric constant of the protein, \(\epsilon_{P}\). \(\Delta\mu_{P}^{elec}\) decreases with the increase of \(\epsilon_{P}\), favoring salting-in. This is consistent with the experimental observation that lysozyme, with its higher \(\epsilon_{P}\), has a stronger tendency of salting-in than the elastin-like polypeptide with lower \(\epsilon_{P}\). Baldwin measured the solubility of peptides and observed that salting-out becomes more pronounced as the number of hydrocarbon side groups increases [55]. More hydrocarbon side groups lead to a reduction of the dielectric constant of the peptide. Furthermore, Shimada et al. recently investigated the LLPS of ureido-derivatized polymers [33]. They found that the solubility behavior turns from salting-out to salting-in as more ureido groups are grafted onto the polymer. The ureido group is highly polar and is hence expected to increase the dielectric constant of the polymer [56; 57]. Their experimental results can be well captured by our theory. Our theory provides a simple analytical criterion for determining the solubility behavior, i.e., salting-in versus salting-out. Setting \(\Delta\mu_{P}^{elec}=0\) in Eq. 10, the boundary between the salting-in and salting-out regimes is given by the following universal line: \[\frac{l_{B,0}}{2}\left(\frac{\epsilon_{S}-\epsilon_{P}}{\epsilon_{S}^{2}}\right)=\tilde{a} \tag{11}\] 
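As a concrete illustration of this criterion, the short sketch below evaluates Eq. 10 numerically for a monovalent salt. It is our own illustration, not the authors' code: the parameter choices (\(\epsilon_{S}=80\), \(\epsilon_{P}=10\), \(a_{+}=2.5\) Å, \(T=298\) K) follow values quoted in the text, and all function names are ours.

```python
import numpy as np

# Constants in SI units; T = 298 K as in the calculations above
e = 1.602176634e-19          # elementary charge, C
eps0 = 8.8541878128e-12      # vacuum permittivity, F/m
kT = 1.380649e-23 * 298.0    # thermal energy, J
l_B0 = e**2 / (4.0 * np.pi * eps0 * kT)  # Bjerrum length in vacuum, ~56 nm

def a_tilde(a_plus, a_minus, z_plus=1, z_minus=1):
    """Valency-weighted harmonic-average ion radius entering Eq. 10 (radii in m)."""
    return (z_plus + z_minus) / (z_plus**2 * z_minus / a_plus
                                 + z_minus**2 * z_plus / a_minus)

def boundary_radius(eps_P, eps_S=80.0):
    """Critical radius from setting Delta mu = 0 in Eq. 10:
    salting-out for a_tilde below this value, salting-in above it."""
    return 0.5 * l_B0 * (eps_S - eps_P) / eps_S**2

a_star = boundary_radius(eps_P=10.0)   # low-dielectric protein, as in Fig. 2B
for a_minus in (2.0e-10, 3.5e-10):     # 2.0 and 3.5 angstrom anions
    at = a_tilde(a_plus=2.5e-10, a_minus=a_minus)
    print(f"a- = {a_minus*1e10:.1f} A: a_tilde = {at*1e10:.2f} A ->",
          "salting-out" if at < a_star else "salting-in")
```

For \(\epsilon_{P}=10\), both anions fall below the boundary radius (roughly 3.1 Å with these numbers), reproducing the all-salting-out behavior of Fig. 2\(B\); raising \(\epsilon_{P}\) shrinks the boundary radius and pushes the larger anions into the salting-in regime.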
### Relation between Charge Inversion and Solubility It has been suggested that charge inversion can explain the specific ion effect on protein solubility: the reversal of the Hofmeister series from the inverse sequence at low salt concentrations to the direct sequence at high salt concentrations [37]. Similar charge inversion can also be predicted by our theory. Figure 6\(A\) shows the electrostatic potential profiles \(\psi(r)\) of a single protein in the presence of various anions. As the anion radius increases, \(\psi(r)\) turns from its original positive value to negative, which indicates the occurrence of charge inversion for larger anions. This is in agreement with previous experimental results and theoretical predictions. The charge inversion can be explained by the induced Galvani potential and the local charge separation at the protein surface due to the different Born solvation energies of the cation and the anion [49]. The induced negative charge becomes stronger with the increase of \(a_{-}\) and can even overcompensate the original positive charge of the protein, as shown by the net charge density in the inset of Fig. 6\(A\). It is worth noting that charge inversion is usually observed in the presence of multivalent counterions as a result of strong ionic correlations. Here, it occurs in monovalent salt systems, driven by the unequal solvation energy between the cation and the anion. We further compare the charge inversion for a single protein and the collective solubility behavior of the entire protein solution to elucidate their relation. Figure 6\(B\) plots the electrostatic potential at the protein surface, \(\psi_{S}\), as a function of the salt concentration. Charge inversion can only be observed for the larger anion with \(a_{-}=3.5\)Å. By comparing the results of Fig. 2\(A\) and Fig. 6\(B\), it can be clearly seen that the turning point from salting-out to salting-in for \(a_{-}=3.5\)Å appears at \(c_{b}=0.16\)M, which is different from the charge inversion point \(c_{b}=0.55\)M. The inconsistency is more obvious for smaller ions (\(a_{-}=2.5\)Å and \(a_{-}=3.0\)Å), where the turning points of the solubility can be observed in Fig. 2\(A\) but charge inversion is absent within the given range of salt concentration. A similar inconsistency can also be found from the comparison between the electrophoretic mobility of a single protein and the solubility of protein solutions measured in experiments [63; 26]. For the same lysozyme protein, charge inversion occurs for all anions, whereas, on the contrary, the turning of the solubility from salting-out to salting-in has not been observed for Cl\({}^{-}\). Furthermore, charge inversion is also inconsistent with the reversal of the Hofmeister series. Figure 2\(A\) shows that the reversal of the Hofmeister series from the inverse sequence to the direct sequence occurs around \(c_{b}=0.25\)M, largely different from the charge inversion point shown in Fig. 6\(B\). Therefore, our theoretical results suggest that the counterintuitive behaviors of the turning solubility and the reversal of the Hofmeister series observed at high salt concentrations are not related to charge inversion. They are actually attributed to the competition between ion solvation and translational entropy, as elucidated in the above subsection. ## Conclusions and Discussion We develop a self-consistent theory to study salt effects on the LLPS of protein solutions by systematically incorporating electrostatic interactions, hydrophobicity, ion solvation and translational entropy into a unified framework. Our theory makes important improvements compared to previous mean-field work. 
Figure 6: The relation between charge inversion and solubility. (_A_) Electrostatic potential profiles \(\psi(r)\) of a single protein in the presence of various anions. \(a_{+}=2.5\)Å, \(z_{+}=z_{-}=1\), \(\epsilon_{P}=30\), and \(c_{b}=0.7\)M. The inset shows the profiles of the net charge density \(c_{\text{net}}(r)=z_{+}c_{+}(r)-z_{-}c_{-}(r)+\alpha\rho_{P}(r)/v_{0}\). (_B_) Electrostatic potentials at the protein surface \(\psi_{S}\) as a function of the salt concentration \(c_{b}\). Filled symbols in Fig. 6\(B\) locate the corresponding turning points from salting-out to salting-in shown in Fig. 2\(A\).

Both the highly localized density fluctuation of proteins in the dilute phase and the electrostatic fluctuation (manifested by the self-energy of ions) are explicitly accounted for. The long-standing puzzles of the non-monotonic salt concentration dependence and the specific ion effect are fully captured by our theory. We find that proteins show salting-out at low salt concentrations due to ionic screening. The solubility decreases with the increase of the anion radius, following the inverse Hofmeister series. On the other hand, in the high salt concentration regime, the protein remains salting-out for small ions but turns to salting-in for larger ions. The Hofmeister series is reversed to the direct sequence. We reveal that both the turning of the solubility from salting-out to salting-in and the reversal of the Hofmeister series are attributed to the competition between the solvation energy and the translational entropy of ions, and are not related to the charge inversion of a single protein. Furthermore, we derive an analytical criterion for determining the boundary between the salting-in and salting-out regimes. Without any fitting parameters, the theoretical prediction is in quantitative agreement with experimental results for various proteins and polymers in sodium solutions with a broad range of anions. Our theory reveals the essential physical chemistry of salt effects on LLPS using a simple charged macromolecular model, which can also be applied to other soft matter systems. The theory can be generalized to macromolecules with more complicated structures (e.g., chain architecture, heterogeneous composition and charge distribution, local rigidity, helicity, etc.) and interactions that better represent real proteins. Although the charged macromolecular model seems only applicable to unfolded or intrinsically disordered proteins, the salt concentration effect and the specific ion effect elucidated here are universal for both unfolded and folded proteins. This is because the description of a giant liquid-like condensate is not sensitive to the folding details of a single protein. Furthermore, our theory captures the salt effects on LLPS by considering only the contribution of the Born energy to the ion solvation, indicating its dominant role for simple ions like halogen anions. However, other contributions such as hydration, dispersion and polarization should also be taken into account for ions with more complex structures. These effects can be straightforwardly incorporated into the current theoretical framework. The existence and relative importance of these higher-order effects on LLPS can only be evaluated when the essential Born energy and translational entropy of ions are accurately treated, as in our work. 
The fundamental insight revealed here provides important guidance for modulating the LLPS of proteins via the addition of salt as an effective tool, which helps in understanding the functions of cellular organization and in rationally designing therapies for diseases. ## Materials and Methods We use spherical coordinates in the numerical calculations of the self-consistent field theory, based on the symmetry of the spherical aggregate. Both the protein density and the electrostatic potential are set to zero at the boundary of the spherical simulation box. The Crank-Nicolson method is used to solve the modified diffusion equation for the chain propagator (Eq. (2)) [64]. The chain contour is discretized into \(N_{\text{s}}=1000\) points. The grid lattices are set such that the lattice spacings are smaller than \(0.1b\). \(\mu_{\text{g}}\) is set to \(-1\), such that the free energy of the reservoir of pure salt solution outside of the subvolume is \(0\). The equilibrium structure and the free energy are obtained by solving Eqs. (1)-(3) iteratively until convergence. To accelerate the convergence, we use the following strategy to update the fields. Fields conjugate to the densities of protein and solvent molecules are updated by a simple mixing rule, i.e., \(\omega_{P,S}^{\text{new}}\leftarrow\lambda\omega_{P,S}^{\text{new}}+(1-\lambda)\omega_{P,S}^{old}\). The same rule is adopted for updating the electrostatic potential \(\psi\) and the Born energy \(u_{\pm}\). The field conjugate to the incompressibility condition is updated by \(\eta^{\text{new}}\leftarrow\eta^{old}+\kappa(\rho_{P}+\rho_{S}-1)\), where the second term on the r.h.s. is adopted to reinforce the incompressibility. \(\lambda\)=0.01 and \(\kappa\)=2.0 are chosen in our calculations. The relative errors for the free energy and the incompressibility condition are required to be below \(10^{-11}\) and \(10^{-7}\), respectively. ## Acknowledgment Acknowledgment is made to the donors of the American Chemical Society Petroleum Research Fund for partial support of this research. This research used the computational resources provided by the Kenneth S. Pitzer Center for Theoretical Chemistry.
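To make the field-update strategy in Materials and Methods above concrete, here is a minimal, schematic Python sketch of the fixed-point loop with simple mixing. It is our own illustration, not the authors' implementation: `compute_fields`, `free_energy`, and the field dictionary are placeholders standing in for the actual solution of Eqs. (1)-(3) on the radial grid.

```python
import numpy as np

LAM, KAPPA = 0.01, 2.0            # mixing parameter lambda and kappa from the text
TOL_F, TOL_INCOMP = 1e-11, 1e-7   # convergence thresholds quoted in the text

def scf_solve(fields, compute_fields, free_energy, max_iter=200000):
    """Iterate the SCF equations with simple mixing until both the free energy
    and the incompressibility residual converge. `fields` maps names to arrays
    on the radial grid; `compute_fields(fields)` recomputes densities and
    fields from the current guess (placeholder for Eqs. (1)-(3))."""
    F_old = np.inf
    for _ in range(max_iter):
        new = compute_fields(fields)
        # Simple mixing for the conjugate fields, the potential psi,
        # and the Born energies u_+/-
        for key in ("w_P", "w_S", "psi", "u_plus", "u_minus"):
            fields[key] = LAM * new[key] + (1.0 - LAM) * fields[key]
        # Reinforce incompressibility through the Lagrange field eta
        residual = new["rho_P"] + new["rho_S"] - 1.0
        fields["eta"] = fields["eta"] + KAPPA * residual
        F = free_energy(fields)
        if abs(F - F_old) < TOL_F and np.max(np.abs(residual)) < TOL_INCOMP:
            return fields, F
        F_old = F
    raise RuntimeError("SCF iteration did not converge")
```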
2306.13892
Differentially Private Decentralized Deep Learning with Consensus Algorithms
Cooperative decentralized deep learning relies on direct information exchange between communicating agents, each with access to a local dataset which should be kept private. The goal is for all agents to achieve consensus on model parameters after training. However, sharing parameters with untrustworthy neighboring agents could leak exploitable information about local datasets. To combat this, we introduce differentially private decentralized learning that secures each agent's local dataset during and after cooperative training. In our approach, we generalize Differentially Private Stochastic Gradient Descent (DP-SGD) -- a popular differentially private training method for centralized deep learning -- to practical subgradient- and ADMM-based decentralized learning methods. Our algorithms' differential privacy guarantee holds for arbitrary deep learning objective functions, and we analyze the convergence properties for strongly convex objective functions. We compare our algorithms against centrally trained models on standard classification tasks and evaluate the relationships between performance, privacy budget, graph connectivity, and degree of training data overlap among agents. We find that differentially private gradient tracking is resistant to performance degradation under sparse graphs and non-uniform data distributions. Furthermore, we show that it is possible to learn a model achieving high accuracies, within 3% of DP-SGD on MNIST under (1, 10^-5)-differential privacy and within 6% of DP-SGD on CIFAR-100 under (10, 10^-5)-differential privacy, without ever sharing raw data with other agents. Open source code can be found at: https://github.com/jbayrooti/dp-dec-learning.
Jasmine Bayrooti, Zhan Gao, Amanda Prorok
2023-06-24T07:46:00Z
http://arxiv.org/abs/2306.13892v1
# Differentially Private Decentralized Deep Learning with Consensus Algorithms ###### Abstract Cooperative decentralized deep learning relies on direct information exchange between communicating agents, each with access to a local dataset which should be kept private. The goal is for all agents to achieve consensus on model parameters after training. However, sharing parameters with untrustworthy neighboring agents could leak exploitable information about local datasets. To combat this, we introduce _differentially private_ decentralized learning that secures each agent's local dataset during and after cooperative training. In our approach, we generalize Differentially Private Stochastic Gradient Descent (DP-SGD) - a popular differentially private training method for centralized deep learning - to practical subgradient- and ADMM-based decentralized learning methods. Our algorithms' differential privacy guarantee holds for arbitrary deep learning objective functions, and we analyze the convergence properties for strongly convex objective functions. We compare our algorithms against centrally trained models on standard classification tasks and evaluate the relationships between performance, privacy budget, graph connectivity, and degree of training data overlap among agents. We find that differentially private gradient tracking is resistant to performance degradation under sparse graphs and non-uniform data distributions. Furthermore, we show that it is possible to learn a model achieving high accuracies, within 3% of DP-SGD on MNIST under \((1,10^{-5})\)-differential privacy and within 6% of DP-SGD on CIFAR-100 under \((10,10^{-5})\)-differential privacy, without ever sharing raw data with other agents. Open source code can be found at: [https://github.com/jbayrooti/dp-dec-learning](https://github.com/jbayrooti/dp-dec-learning). ## 1 Introduction Decentralized machine learning methods provide benefits over standard centralized approaches in aspects such as scalability and data privacy [23; 28]. Most research in this direction has focused on traditional federated learning, where a central server is used to orchestrate training among participating agents [4; 31; 8; 23]. However, these learning schemes can be limited by computational bottlenecks and communication overhead. Fully decentralized methods do not rely on central components during training and, hence, unlock potential for large-scale applications in privacy-sensitive domains, such as autonomous driving, analyzing healthcare and finance trends, and email filtering. Fully decentralized learning methods involve sharing information with local neighbors in a communication graph, each agent having access to a private dataset and a local model. Decentralized systems of agents can cooperate to exchange information and eventually reach consensus on system-wide optimal model parameters. Such works primarily utilize subgradient-based methods [40; 28; 7; 21], Alternating Direction Method of Multipliers (ADMM) based approaches [54], or a combination of both, which all typically involve direct parameter passing among agents. Even though raw data is not exchanged during training, a determined attacker may be able to extract parts of a local training dataset from parameter exchanges during model consensus. Centralized and federated learning models have been shown to be vulnerable to such white-box attacks [34; 6; 15; 52] and fully decentralized learning methods may be even more at risk due to the impracticality of vetting many communicating peers. 
In practice, agents can gather sensitive information in their local datasets after interacting with humans, exploring locations, or snapshotting surroundings. Therefore, it is crucial to protect local agents' datasets from outside observers or other agents in the system during fully decentralized deep learning. **Contributions.** To mitigate these privacy risks, we propose applying differential privacy, the established standard for protecting against attacks targeting individual training samples, to each agent's training routine. More specifically, we introduce fully decentralized, first-order, deep learning algorithms that guarantee differential privacy for each agent's local dataset on every step of cooperative training. Our algorithms offer robust protection against a knowledgeable adversary with full access to the training process and model parameters. This is attractive for applications of cooperative decentralized deep learning involving sensitive information where participating agents are not entirely trustworthy. Moreover, we show convergence for strongly convex objectives and demonstrate our algorithms' practicality on challenging image classification tasks, finding reasonable performance-privacy tradeoffs that have, thus far, not been achieved. In more detail, our main contributions are: * We present three new differentially private, decentralized learning algorithms to protect agents' local datasets from honest-but-curious agents participating in a learning system. Our algorithms generalize the popular DP-SGD algorithm [2] to the decentralized setting and build on both subgradient- and ADMM-based distributed optimization methods. Our methods' differential privacy guarantees apply to any deep learning objective function. * For each differentially private decentralized learning algorithm, we analyze the convergence properties and show convergence for strongly convex objective functions. * We evaluate the performance disparity between differentially private decentralized and centralized methods under multiple privacy budgets, using graphs of various connectivity, and for agents with disparate levels of training data overlap. We find that differentially private gradient tracking algorithms are invariant to communication graph connectivity and all algorithms also achieve consistent performance across graph connectivities above a threshold value. This insight is valuable as it allows us to relax the communication graph connectivity in bandwidth-limited situations without compromising performance. * We evaluate our algorithms' performance on two standard image classification tasks: MNIST and CIFAR-100. 1 Our findings indicate that achieving privacy protection in decentralized learning comes at a modest trade-off with performance. Our best differentially private algorithms reach near DP-SGD levels of accuracy, coming within 3% accuracy on MNIST classification under \((1,10^{-5})\)-differential privacy and 6% accuracy on CIFAR-100 classification under \((10,10^{-5})\)-differential privacy. Footnote 1: We choose these tasks because both datasets are publicly available and have been used by previous influential works as benchmarks for differentially private deep learning. ## 2 Related Work In this paper, we introduce decentralized differentially private algorithms that build on prior research in the fields of decentralized optimization and differential privacy. 
**Decentralized Optimization.** Decentralized optimization methods typically involve agents performing iterative computations using subgradients of the objective function, the ADMM framework, or a combination of both [42; 35; 9; 50; 54]. Subgradient-based distributed learning approaches are especially suited for non-smooth and non-convex optimization problems [57; 22; 35; 40; 28] and have been successfully applied in [21; 28; 40]. To ultimately reach consensus on model parameters, these algorithms interleave local gradient descent steps with averaging over values from neighboring agents. Much effort has been dedicated to analyzing the rate of convergence to consensus [41; 26; 21; 29; 43]. On the other hand, ADMM-based methods facilitate _explicit_ constrained optimization [46; 19; 10; 30], thus optimizing for both consensus and performance. While ADMM was originally designed for convex optimization, it can also be applied to non-convex settings [54; 39; 51]. DiNNO [54] is one such algorithm that has recently achieved performance on par with centrally trained methods (using all data). Similarly, DSGD [28] and DSGT [40] have become widely adopted methods. In summary, these three algorithms are uniquely complementary in nature and, hence, we use them as the foundations for our differentially private approach. **Differential Privacy.** Differential privacy applications involve perturbing messages or objective functions to mask the impact of individual samples on the output. Such techniques are commonly used in convex problem settings to secure control systems [11; 37], distributed risk minimization [16; 18; 56; 55], and more [17; 48]. We are interested in neural network optimization to solve complex tasks and these objectives are rarely convex. The popular DP-SGD algorithm [2] provides privacy guarantees (quantified by the moments accountant) while training deep learning models with a practical balance between privacy and utility. While [38] is another approach for differential privacy in deep learning, this paper primarily focuses on DP-SGD as it is more common and has inspired many follow-up works, largely in centralized and federated learning settings [12; 47; 3; 14]. While differentially private federated learning focuses on privacy during aggregation at the central server [3], fully decentralized differentially private learning ensures privacy at the individual agent level. A few approaches consider differential privacy for decentralized deep learning problems. For instance, [49; 45] present differentially private gradient tracking and gossip algorithms, although their methods protect agents' objective functions rather than agents' datasets. Additionally, the recent work [44] only bounds the privacy loss of a _single_ training iteration, evaluates on non-standard tasks for differential privacy, and uses an algorithm similar to DSGD, which we find has low performance relative to other algorithms. Naive composition leads to an overestimation of the privacy loss [55; 32], hence, it is crucial to analyze the privacy budget over _all_ training iterations. In contrast to [44], our work examines the impact of _cumulative_ privacy budgets, communication graph connectivity, and data distribution on decentralized differentially private learning for traditional tasks. Our paper is the first to develop and study high-accuracy, cooperative, differentially private learning that secures agents' local datasets for fully decentralized, general, deep learning tasks by adapting DP-SGD to a wider scope. 
## 3 Problem Statement Consider a cooperative deep learning problem involving \(N\) agents, each with access to a separate, private dataset and operating in an undirected, connected communication graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\). Agents \(i,j\in\mathcal{V}\) can exchange information if they are one-hop neighbors in the graph, i.e. \((i,j)\in\mathcal{E}\). Let \(\mathcal{N}_{i}=\{i\}\cup\{j\in\mathcal{V}\mid(i,j)\in\mathcal{E}\}\) denote the local neighborhood of agent \(i\) and note that agent \(i\) is included in the neighbor set to simplify notation in the following sections. We parameterize communication links with a symmetric, doubly stochastic mixing matrix \(W\in\mathbb{R}^{N\times N}\) with non-negative entries where \(w_{ij}=0\) if and only if nodes \(i\) and \(j\) are not communicating. Let \(\mathcal{D}_{i}\) be the subset of data that agent \(i\) can access and \(l(\cdot)\) be the objective function to be minimized. We aim to optimize a neural network parameterized by the weights \(\theta\in\mathbb{R}^{d}\) over the aggregate dataset \(\mathcal{D}=\cup_{i\in\mathcal{V}}\mathcal{D}_{i}\) where each agent \(i\) stores its own local estimate of the network weights \(\theta_{i}\). We desire the agents to reach consensus on optimal network parameters \(\theta^{*}\) after training. This distributed optimization problem can be formulated as: \[\theta^{*}=\operatorname*{arg\,min}_{\theta\in\mathbb{R}^{d}}\sum_{i=1}^{N}l(\theta;\mathcal{D}_{i})=\operatorname*{arg\,min}_{\{\theta_{i}\}}\sum_{i=1}^{N}l(\theta_{i};\mathcal{D}_{i})\quad\text{s.t.}\quad\theta_{1}=\theta_{2}=\cdots=\theta_{N} \tag{1}\] Our problem is cooperative in the sense that all agents share the same goal, i.e., the same objective function. However, some agents may be untrustworthy and seek to uncover sensitive information about other agents' private datasets during network optimization. We assume all agents are honest and communicate their true parameters. ## 4 Background We proceed by outlining differential privacy and key decentralized learning algorithms. ### Differential Privacy Differential privacy ensures that, if two datasets differ by only one individual's data, the output of an algorithm does not reveal whether that individual's data was used. This property provides a strong privacy guarantee while allowing useful insights to be extracted from the dataset. We consider datasets \(\mathcal{D}\in\mathcal{X}\times\mathcal{Y}\) where \(\mathcal{X}\) is the feature domain and \(\mathcal{Y}\) the label domain. Changes in a dataset are formalized by a symmetric, binary adjacency relation \(Adj(\cdot,\cdot)\). **Definition 4.1** (Adjacent Datasets [11]).: Two datasets \(\mathcal{D}=\{(x,y)_{i}\}_{i=1}^{n}\) and \(\mathcal{D}^{\prime}=\{(x^{\prime},y^{\prime})_{i}\}_{i=1}^{n}\) are adjacent if there exists an index \(j\) such that \(1\leq j\leq n\) where \((x,y)_{i}=(x^{\prime},y^{\prime})_{i}\) for all \(i\neq j\). This allows us to state the definition of differential privacy in a machine learning context. **Definition 4.2** (Differential Privacy [20]).: A mechanism \(\mathcal{M}:\mathcal{X}\times\mathcal{Y}\rightarrow\mathcal{R}\) is \((\epsilon,\delta)\)-differentially private if, for any adjacent datasets \(\mathcal{D},\mathcal{D}^{\prime}\), and every set of outputs \(\mathcal{O}\subseteq\mathcal{R}\), the following holds: \[\mathbb{P}[\mathcal{M}(\mathcal{D})\in\mathcal{O}]\leq e^{\epsilon}\mathbb{P}[\mathcal{M}(\mathcal{D}^{\prime})\in\mathcal{O}]+\delta. 
\tag{2}\] The privacy budget is quantified by \(\epsilon\), which essentially bounds the log-likelihood ratio of any particular output being obtained when running the algorithm on two adjacent datasets. The \(\delta\) term bounds the occurrence of outputs that violate the privacy limit. **DP-SGD**. Differentially Private Stochastic Gradient Descent (DP-SGD) [2] is a popular differentially private deep learning algorithm. The authors propose three modifications from the standard SGD procedure: independently picking samples in a mini-batch with uniform sampling probability \(q\), clipping the \(l_{2}\) norm of each per-sample gradient \(\nabla l(\theta_{i};\mathcal{D}_{i})\) to a maximal norm \(C\), and adding Gaussian noise \(\xi\) with variance proportional to \(C\) to gradients. Abadi et al. also introduce the moments accountant to select the noise variance \(\sigma\) so that the algorithm provably satisfies a cumulative \((\epsilon,\delta)\)-differential privacy guarantee [2]. In our experiments, we use the sampled Gaussian mechanism [33], which is closely related to the moments accountant, for the same purpose. To formalize the clipping process, we refer to the definition used in [12]. **Definition 4.3** (Clipping function).: Define \(\texttt{clip}_{C}:v\in\mathbb{R}^{d}\rightarrow\min\{1,\frac{C}{\|v\|_{2}} \}\cdot v\in\mathbb{R}^{d}\). ### Decentralized Optimization We develop differentially private decentralized algorithms on top of non-private, complementary distributed learning algorithms: DSGD, DSGT, and DiNNO. We select these base methods because DSGD is a popular baseline, DSGT has been shown to achieve strong performance, and DiNNO is a recently developed and effective ADMM-based decentralized deep learning method. Moreover, these algorithms exhibit core similarities with other distributed algorithms. Distributed Stochastic Gradient Descent (DSGD) [28] extends the SGD algorithm to the distributed setting with the following update for agent \(i\): \[\theta_{i}^{k+1}=\sum_{j\in\mathcal{N}_{i}}w_{ij}\theta_{j}^{k}-\eta^{k}\nabla l (\theta_{i}^{k};\mathcal{D}_{i}) \tag{3}\] where \(\eta^{k}\) is the learning rate at the \(k\)th iteration. DSGD achieves consensus across agents in the graph for most objectives and, for strongly convex objectives, converges to a neighborhood of the global optimum [21; 25]. DSGD has also been shown to be affected by non-IID data among agents [25]. Gradient tracking methods are known for their ability to mitigate the impact of non-IID data distributions [29; 49; 36] and a convergence rate that can match that of mini-batch SGD for strongly convex objectives [26]. As in [11], we select Distributed Stochastic Gradient Tracking (DSGT) [40] from the family of gradient tracking methods. The DSGT updates include an additional variable \(y_{i}^{k}\) to track the per-agent estimate of the joint loss gradient. \[\theta_{i}^{k+1}=\sum_{j\in\mathcal{N}_{i}}w_{ij}(\theta_{j}^{k}-\eta^{k}y_{j} ^{k}) \tag{4}\] \[y_{i}^{k+1}=\nabla l(\theta_{i}^{k+1};\mathcal{D}_{i})+\sum_{j\in\mathcal{N}_{ i}}w_{ij}y_{j}^{k}-\nabla l(\theta_{i}^{k};\mathcal{D}_{i}) \tag{5}\] Note that in DSGT, agents share gradient estimates \(y_{i}^{k}\) as well as model parameters \(\theta_{i}^{k}\) with neighbors. 
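To make these ingredients concrete, the following sketch (our own Python/NumPy illustration, not the authors' released code) implements the clipping function of Definition 4.3, the noisy mini-batch gradient used in DP-SGD, and one DSGD update from Eq. 3; all function and variable names are ours.

```python
import numpy as np

def clip_C(v, C):
    """Per-sample gradient clipping from Definition 4.3:
    rescale v so that ||v||_2 <= C."""
    norm = np.linalg.norm(v)
    return v if norm <= C else v * (C / norm)

def noisy_batch_gradient(per_sample_grads, C, sigma):
    """Average of clipped per-sample gradients plus Gaussian noise with
    standard deviation sigma*C/L, as in DP-SGD; sigma itself would be chosen
    by the moments accountant / sampled Gaussian mechanism."""
    L = len(per_sample_grads)
    g = sum(clip_C(g_i, C) for g_i in per_sample_grads) / L
    return g + (sigma * C / L) * np.random.standard_normal(g.shape)

def dsgd_step(neighbor_thetas, neighbor_weights, grad, lr):
    """One DSGD update (Eq. 3): mix the neighborhood parameters (the agent
    itself is included, with weight w_ii), then take a descent step."""
    mixed = sum(w * th for w, th in zip(neighbor_weights, neighbor_thetas))
    return mixed - lr * grad
```

Passing the output of `noisy_batch_gradient` as `grad` in `dsgd_step` is precisely the modification that the differentially private variants introduced below make to their base algorithms.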
Finally, we consider a consensus ADMM-based algorithm, DiNNO [11], which optimizes each agent's primal variable \(\theta_{i}^{k}\) and the dual variable \(y_{i}^{k}\) to enforce agreement between agents: \[y_{i}^{k+1}=y_{i}^{k}+\rho\sum_{j\in\mathcal{N}_{i}}(\theta_{i}^{k}-\theta_{j}^{k}) \tag{6}\] \[\theta_{i}^{k+1}=\text{argmin}_{\theta}l(\theta;\mathcal{D}_{i})+\theta^{T}y_{i}^{k+1}+\rho\sum_{j\in\mathcal{N}_{i}}\left\|\theta-\frac{\theta_{i}^{k}+\theta_{j}^{k}}{2}\right\|_{2}^{2} \tag{7}\] For strongly convex objectives, [11] shows that DiNNO converges to the unique global solution. ## 5 Differentially Private Decentralized Learning In these first-order distributed learning approaches, each agent \(i\) updates its parameters by incorporating information from its neighbors and taking a step in a direction related to the gradient \(\nabla l(\theta_{i};\mathcal{D}_{i})\) (or a mini-batch gradient) using the local dataset \(\mathcal{D}_{i}\). Agent \(i\) then broadcasts the updated values to its neighbors. However, sharing parameters directly with neighboring agents presents a privacy risk because local training samples can leave a discernible trace on the gradients and, transitively, the parameters [34]. To mitigate this privacy risk, we propose differentially private distributed algorithms: DP-DSGD, DP-DSGT, and DP-DiNNO, which incorporate key elements from the DP-SGD algorithm into the decentralized setting. We utilize the sampled Gaussian mechanism [33] to select the suitable standard deviation \(\sigma\) for adding Gaussian noise to gradients, ensuring cumulative \((\epsilon,\delta)\)-differential privacy for each agent. In this section, we describe each algorithm and state their convergence for strongly convex objective functions. ### DP-DSGD To design DP-DSGD to be differentially private under the adjacency relation 4.1, we utilize a variation of DP-SGD on each agent in the network that allows for inter-agent communication via a modified update step. For a single agent \(i\), this entails independently picking a lot \(\mathcal{L}_{i}\subseteq\mathcal{D}_{i}\) of \(L\) samples with probability \(q=L/|\mathcal{D}_{i}|\), clipping the \(l_{2}\) norm of each per-sample gradient, averaging the clipped gradients in the batch, adding Gaussian noise to the aggregate gradient, and taking a step using the perturbed gradient. In a departure from the original DP-SGD paper, which only considers single-agent learning, agent \(i\) takes the final step using its aggregated local estimate of the parameters \(\sum_{j\in\mathcal{N}_{i}}w_{ij}\theta_{j}\) (note that this includes \(\theta_{i}\)) and shares the noisy parameters with neighbors. Algorithm 1 outlines the training procedure. 
``` 1:Require:\(\mathcal{G}\), \(\mathcal{D}\), \(l(\cdot)\), \(\mathcal{W}\), \(\theta_{\text{initial}}\), \(L\), \(C\), \(\eta\), \(\sigma\), \(K\) 2:Each agent \(i\) does:\(\triangleright\) In parallel 3:\(\theta_{i}^{0}=\theta_{\text{initial}}\)\(\triangleright\) Initialize parameters 4:For \(k=1,2,\dots,K\) 5:Communicate: send \(\theta_{i}^{k}\) to neighbors \(\mathcal{N}_{i}\) 6:Each agent \(i\) does:\(\triangleright\) In parallel 7:Take a random lot of samples \(\mathcal{L}_{i}^{k}\) with sampling probability \(L/|\mathcal{D}_{i}|\) 8:\(\xi\sim N(0,I_{d})\)\(\triangleright\) Draw a Gaussian sample 9:\(\widehat{G}(\theta_{i}^{k};\mathcal{D}_{i})=\frac{1}{L}\sum_{d\in\mathcal{L}_{i}^{k}}\texttt{clip}_{C}\left(\nabla l(\theta_{i}^{k};d)\right)+\frac{\sigma C}{L}\xi\)\(\triangleright\) Clip and add noise 10:\(\theta_{i}^{k+1}=\sum_{j\in\mathcal{N}_{i}}w_{ij}\theta_{j}^{k}-\eta^{k}\widehat{G}(\theta_{i}^{k};\mathcal{D}_{i})\)\(\triangleright\) DP version of Equation 3 ``` **Algorithm 1** DP-DSGD ### DP-DSGT We introduce DP-DSGT, a differentially private gradient tracking approach based on DSGT. Similar to DP-DSGD, we integrate the key components of DP-SGD (uniform sampling, gradient clipping, and injecting calibrated Gaussian noise) during training on each iteration so that the algorithm is differentially private for all agents. Algorithm 2 describes the training procedure in detail. ### DP-DiNNO Finally, we present DP-DiNNO, a differentially private version of the DiNNO algorithm [54]. Unlike the subgradient algorithms, which iteratively calculate \(\nabla l(\theta_{i};\mathcal{D}_{i})\) and take a step in the opposite direction, consensus-ADMM based algorithms iteratively update primal variables after solving an optimization problem (7). This solution is difficult to characterize when \(l\) is non-convex. Existing differentially private ADMM-based methods typically assume the objective function \(l\) is convex and bound the update function's sensitivity to adjacent datasets using the closed form solution to the primal optimization problem [18; 55]. We make no assumptions about the convexity of \(l\) and instead solve the primal optimization problem iteratively, as proposed in [54], while injecting noise for each gradient to achieve differential privacy. Once again, we incorporate the core elements of DP-SGD to privatize the DiNNO algorithm. Note that we do not need to clip the entire gradient of \(l(\theta_{i};\mathcal{D}_{i})+\theta_{i}^{T}y_{i}^{k+1}+\rho\sum_{j\in\mathcal{N}_{i}}\left\|\theta_{i}-\frac{\theta_{i}^{k}+\theta_{j}^{k}}{2}\right\|_{2}^{2}\) since only \(\nabla l(\theta_{i};\mathcal{D}_{i})\) relies on agent \(i\)'s private dataset \(\mathcal{D}_{i}\). The method is described in detail in Algorithm 3. The learning rate \(\eta^{k}\) decays during training and is used by a first-order optimizer in line 14. We use the Adam optimizer in this paper. 
``` 1:Require:\(\mathcal{G}\), \(\mathcal{D}\), \(l(\cdot)\), \(\mathcal{W}\), \(\theta_{\mathrm{initial}}\), \(L\), \(C\), \(\eta\), \(\sigma\), \(K\) 2:Each agent \(i\) does:\(\triangleright\) In parallel 3:\(\theta_{i}^{0}=\theta_{\mathrm{initial}}\)\(\triangleright\) Initialize parameters 4:\(y_{i}^{0}=\tilde{G}^{0}(\theta_{i}^{0};\mathcal{D}_{i})\)\(\triangleright\) Initialize gradient tracking variable 5:For \(k=1,2,\ldots,K\) 6:Communicate: send \(\theta_{i}^{k}\) and \(y_{i}^{k}\) to neighbors \(\mathcal{N}_{i}\) 7:Each agent \(i\) does:\(\triangleright\) In parallel 8:\(\theta_{i}^{k+1}=\sum_{j\in\mathcal{N}_{i}}w_{ij}(\theta_{j}^{k}-\eta^{k}y_{j}^{k})\)\(\triangleright\) Update model parameters (4) 9:Take a random lot of samples \(\mathcal{L}_{i}^{k}\) with sampling probability \(L/|\mathcal{D}_{i}|\) 10:\(\xi\sim N(0,I_{d})\)\(\triangleright\) Draw a Gaussian sample 11:\(\tilde{G}^{k+1}(\theta_{i}^{k+1};\mathcal{D}_{i})=\frac{1}{L}\sum_{d\in\mathcal{L}_{i}^{k}}\texttt{clip}_{C}\left(\nabla l(\theta_{i}^{k+1};d)\right)+\frac{\sigma C}{L}\xi\)\(\triangleright\) Clip and add noise 12:\(y_{i}^{k+1}=\tilde{G}^{k+1}(\theta_{i}^{k+1};\mathcal{D}_{i})+\left(\sum_{j\in\mathcal{N}_{i}}w_{ij}y_{j}^{k}-\tilde{G}^{k}(\theta_{i}^{k};\mathcal{D}_{i})\right)\)\(\triangleright\) DP version of 5 ``` **Algorithm 2** DP-DSGT ``` 1:Require:\(\mathcal{G}\), \(\mathcal{D}\), \(l(\cdot)\), \(\theta_{\mathrm{initial}}\), \(L\), \(C\), \(\rho\), \(\eta\), \(\sigma\), \(K\), \(T\) 2:Each agent \(i\) does:\(\triangleright\) In parallel 3:\(y_{i}^{0}=0\)\(\triangleright\) Dual variable 4:\(\theta_{i}^{0}=\theta_{\mathrm{initial}}\)\(\triangleright\) Primal variable 5:For \(k=1,2,\ldots,K\) 6:Communicate: send \(\theta_{i}^{k}\) to neighbors \(\mathcal{N}_{i}\) 7:Each agent \(i\) does:\(\triangleright\) In parallel 8:\(y_{i}^{k+1}=y_{i}^{k}+\rho\sum_{j\in\mathcal{N}_{i}}(\theta_{i}^{k}-\theta_{j}^{k})\)\(\triangleright\) Update the dual 9:\(\psi^{0}=\theta_{i}^{k}\)\(\triangleright\) Warm start primal optimization 10:For \(t=1,2,\ldots,T\) 11:Take a random lot of samples \(\mathcal{L}_{i}^{k}\) with sampling probability \(L/|\mathcal{D}_{i}|\) 12:\(\xi\sim N(0,I_{d})\)\(\triangleright\) Draw a Gaussian sample 13:\(\tilde{g}(\psi^{t};\mathcal{D}_{i})=\frac{1}{L}\sum_{d\in\mathcal{L}_{i}^{k}}\texttt{clip}_{C}\left(\nabla l(\psi^{t};d)\right)+\frac{\sigma C}{L}\xi\)\(\triangleright\) Clip and add noise 14:\(\tilde{G}(\psi^{t})=\tilde{g}(\psi^{t};\mathcal{D}_{i})+\nabla\left((\psi^{t})^{T}y_{i}^{k+1}+\rho\sum_{j\in\mathcal{N}_{i}}\left\|\psi^{t}-\frac{\theta_{i}^{k}+\theta_{j}^{k}}{2}\right\|_{2}^{2}\right)\)\(\triangleright\) Aggregate 15:\(\psi^{t+1}=\psi^{t}+\tilde{G}(\psi^{t})\)\(\triangleright\) Update with an optimizer step 16:\(\theta_{i}^{k+1}=\psi^{T}\)\(\triangleright\) Update primal ``` **Algorithm 3** DP-DiNNO We show the convergence of differentially private consensus algorithms for strongly convex objectives. **Theorem 5.1** (Convergence Theorem).: _Let the objective function \(l\) be strongly convex and \(L\)-smooth, \(\theta\in\mathbb{R}^{Nd}\) be a vector concatenating local variables \(\{\theta_{i}\}_{i=1}^{N}\), \(M(\theta)\) be a function of \(\theta\) that is of interest for convergence, and \(\sigma\) be the standard deviation of Gaussian noise for differentially private gradient perturbations. If \(M(\theta)\) of the base decentralized algorithm is Lipschitz w.r.t. 
a constant \(K\), i.e., \(\|M(\theta)-M(\hat{\theta})\|_{2}\leq K\|\theta-\hat{\theta}\|_{2}\) for any \(\theta\) and \(\hat{\theta}\), and converges to the optimal solution \(M(\theta^{*})\) at a linear rate, \(M(\hat{\theta})\) of the differentially private decentralized algorithm also converges to the optimal solution \(M(\theta^{*})\) at a linear rate within an error neighborhood on the order of \(\mathcal{O}(\sigma)\)._ The convergence proof is provided in Appendix A. Theorem 5.1 states that the proposed differentially private algorithm does not affect the convergence of the base decentralized algorithm, and \(M(\cdot)\) can be any function of interest, i.e., \(M(\theta)=\sum_{i=1}^{N}l(\theta_{i},\mathcal{D}_{i})\) is the overall objective function or \(M(\theta)=\theta\) is the variable itself. Note that the base algorithms we consider, namely DSGD, DSGT, and DiNNO, converge linearly when the objective function is strongly convex [21; 26; 40; 54]. The key insight of our proof is that the noise terms are scaled by a diminishing learning rate \(\eta^{k}\) and a linear convergence rate, such that the cumulative noise is bounded on the order of the standard deviation \(\mathcal{O}(\sigma)\). ## 6 Experiments and Discussion In this section, we explore the relationships between accuracy, privacy budget, communication graph connectivity, and data distribution among agents based on class labels. **Setup.** We consider a system with \(N=10\) agents communicating over randomly selected connected graphs with connectivity measured by the normalized Fiedler value: the classical Fiedler value divided by the number of agents. Our baseline for comparison against centrally-trained differentially private algorithms is the widely-used DP-SGD algorithm. To ensure a fair comparison between decentralized algorithms, we utilize Ray's population-based tuning algorithm [1] and additional manual tuning to select appropriate learning rates for each method and privacy budget. Throughout all experiments, we use a fixed \(\delta=10^{-5}\), a gradient clipping threshold of \(C=1\) for DP-SGD training (as in [12]), and \(C=10\) for DP-DSGD, DP-DSGT, and DP-DiNNO (since these algorithms converge more slowly). See Appendix B and our code for more information about our experimental setup. **Graph Connectivity.** To assess the performance of our algorithms compared to DP-SGD, we experimentally investigate the critical choice of the underlying communication graph. Our experiments entail training DP-DSGD, DP-DSGT, and DP-DiNNO on cooperative MNIST digit classification [13] over multiple graphs of varying connectivity for numerous privacy budgets. MNIST is a natural choice of dataset because it is publicly available and has been used as a benchmark for differential privacy with deep learning [2]. Each agent maintains its own local model, a CNN with two fully connected layers, and is allocated all the training data corresponding to one random digit. The agents seek to reach consensus on model parameters that are optimal over the aggregated dataset, with evaluations on the held-out validation set (which includes all digits). We report the means and standard deviations from five training trials for each algorithm and multiple privacy budgets on randomly sampled graphs with connectivity across a range of normalized Fiedler values in Table 1. We observe that centrally-trained DP-SGD always upper bounds the decentralized algorithms' accuracy by a few percentage points. 
Nevertheless, the results demonstrate that differentially private decentralized learning can achieve strong performance on MNIST classification even for low privacy budgets (i.e., strong privacy guarantees), with the DP-DSGT accuracy within 3% of the DP-SGD accuracy under cumulative \((1,10^{-5})\)-differential privacy.

Figure 1: **All algorithms appear relatively invariant to connectivity up to a normalized Fiedler value of around 0.4.** DP-DSGD performance (left) is variable and decreases for sparse graphs. DP-DSGT (middle) performance is invariant to connectivity for all privacy budgets. DP-DiNNO (right) performance is only affected by graph connectivity for sparse graphs and low \(\epsilon\) values.

Furthermore, we isolate the effects of changing graph connectivity and privacy budgets on decentralized differentially private classification in Figure 1. These findings indicate that DP-DSGT is invariant to graph connectivity. This likely stems from DP-DSGT's exchange of per-agent global gradient estimates (in addition to model parameters), which can facilitate more effective tracking of the global gradient. However, this procedure requires exchanging twice the message size used in DP-DSGD and DP-DiNNO, which may be infeasible depending on the application. Figure 1 also demonstrates that DP-DSGD and DP-DiNNO exhibit graph invariance for normalized Fiedler values above 0.4 and declining performance below 0.4. The consistent performance of all algorithms across graph connectivities above a normalized Fiedler value of 0.4 is significant because it implies that we can relax the connectivity of the communication graph up to a specific threshold in bandwidth-limited scenarios without compromising performance. **Data Distribution.** Data heterogeneity between agents poses another challenge in decentralized learning [26]. In this set of experiments, we study how the data distribution among agents impacts differentially private learning. To quantify the notion of data distribution, we define the matrix \(A(t)\), where rows correspond to agents and columns correspond to classes in the dataset: \[a_{ij}(t)=\begin{cases}\frac{1-t}{N}&\text{if }i\neq j\bmod N\\ 1-\sum_{k\neq i}a_{kj}&\text{if }i=j\bmod N\end{cases} \tag{8}\] Data is distributed according to \(A(t)\) so that agent \(i\) has the fractional amount \(a_{ij}\) of data labeled with class \(j\) in its local dataset. 
Therefore \(t=0\) represents an equal distribution of each data class among all agents, while \(t=1\) signifies that each agent has complete and exclusive access to specific classes (and no direct information on other classes).

\begin{table} \begin{tabular}{l l l l l l} \hline \multicolumn{6}{c}{**Normalized Fiedler Value: 1 (Complete Graph)**} \\ \hline Method & Non-Private & \(\epsilon=100\) & \(\epsilon=10\) & \(\epsilon=1\) & \(\epsilon=0.5\) \\ \hline Central & 98.22 \(\pm\) 0.19 & 96.16 \(\pm\) 0.62 & 94.73 \(\pm\) 0.74 & 92.04 \(\pm\) 0.84 & 90.82 \(\pm\) 1.14 \\ DP-DSGD & 95.38 \(\pm\) 0.37 & 92.34 \(\pm\) 0.9 & 91.27 \(\pm\) 0.92 & 87.26 \(\pm\) 1.16 & 84.07 \(\pm\) 1.89 \\ DP-DSGT & **98.13** \(\pm\) 0.11 & **96.13** \(\pm\) 0.84 & **96.54** \(\pm\) 0.17 & 89.24 \(\pm\) 0.27 & 85.41 \(\pm\) 0.54 \\ DP-DiNNO & 97.81 \(\pm\) 0.2 & 95.41 \(\pm\) 0.36 & 93.21 \(\pm\) 0.24 & **89.66** \(\pm\) 0.51 & **86.22** \(\pm\) 0.58 \\ \hline \hline \multicolumn{6}{c}{**Normalized Fiedler Value: 0.7 \(\pm\) 0.05**} \\ \hline Method & Non-Private & \(\epsilon=100\) & \(\epsilon=10\) & \(\epsilon=1\) & \(\epsilon=0.5\) \\ \hline Central & 98.22 \(\pm\) 0.19 & 96.16 \(\pm\) 0.62 & 94.73 \(\pm\) 0.74 & 92.04 \(\pm\) 0.84 & 90.82 \(\pm\) 1.14 \\ DP-DSGD & 95.19 \(\pm\) 0.37 & 92.03 \(\pm\) 0.64 & 91.27 \(\pm\) 1.1 & 87.16 \(\pm\) 1.58 & 84.09 \(\pm\) 2.23 \\ DP-DSGT & 97.76 \(\pm\) 0.83 & **96.16** \(\pm\) 0.77 & **96.62** \(\pm\) 0.11 & **89.75** \(\pm\) 0.77 & 84.98 \(\pm\) 1.13 \\ DP-DiNNO & **97.94** \(\pm\) 0.21 & 95.51 \(\pm\) 0.43 & 93.41 \(\pm\) 0.37 & 89.13 \(\pm\) 0.34 & **85.7** \(\pm\) 0.25 \\ \hline \hline \multicolumn{6}{c}{**Normalized Fiedler Value: 0.39 \(\pm\) 0.05**} \\ \hline Method & Non-Private & \(\epsilon=100\) & \(\epsilon=10\) & \(\epsilon=1\) & \(\epsilon=0.5\) \\ \hline Central & 98.22 \(\pm\) 0.19 & 96.16 \(\pm\) 0.62 & 94.73 \(\pm\) 0.74 & 92.04 \(\pm\) 0.84 & 90.82 \(\pm\) 1.14 \\ DP-DSGD & 94.77 \(\pm\) 0.87 & 91.56 \(\pm\) 0.68 & 90.61 \(\pm\) 0.93 & 87.68 \(\pm\) 0.74 & 85.22 \(\pm\) 1.21 \\ DP-DSGT & 97.36 \(\pm\) 1.43 & **96.28** \(\pm\) 0.7 & **96.42** \(\pm\) 0.21 & **89.84** \(\pm\) 0.39 & 84.79 \(\pm\) 1.07 \\ DP-DiNNO & **97.91** \(\pm\) 0.25 & 95.6 \(\pm\) 0.35 & 93.25 \(\pm\) 0.34 & 89.02 \(\pm\) 0.35 & **85.91** \(\pm\) 0.46 \\ \hline \hline \multicolumn{6}{c}{**Normalized Fiedler Value: 0.06 \(\pm\) 0.05**} \\ \hline Method & Non-Private & \(\epsilon=100\) & \(\epsilon=10\) & \(\epsilon=1\) & \(\epsilon=0.5\) \\ \hline Central & 98.22 \(\pm\) 0.19 & 96.16 \(\pm\) 0.62 & 94.73 \(\pm\) 0.74 & 92.04 \(\pm\) 0.84 & 90.82 \(\pm\) 1.14 \\ DP-DSGD & 90.55 \(\pm\) 2.06 & 88.36 \(\pm\) 3.75 & 84.12 \(\pm\) 2.49 & 82.49 \(\pm\) 1.23 & 80.57 \(\pm\) 1.47 \\ DP-DSGT & **97.73** \(\pm\) 0.48 & **95.97** \(\pm\) 1.67 & **96.26** \(\pm\) 0.17 & **89.16** \(\pm\) 0.82 & **84.53** \(\pm\) 1.29 \\ DP-DiNNO & **97.73** \(\pm\) 0.28 & 95.03 \(\pm\) 0.79 & 91.99 \(\pm\) 0.67 & 85.68 \(\pm\) 1.13 & 78.44 \(\pm\) 2 \\ \hline \hline \end{tabular} \end{table} Table 1: Accuracy comparison over algorithms, privacy budgets, and graph connectivities on MNIST. 
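For reference, a small sketch (our own, assuming the \(N=10\) agents and ten MNIST classes used in these experiments) of the allocation matrix defined in Eq. 8:

```python
import numpy as np

def allocation_matrix(t, N=10, n_classes=10):
    """Data-allocation matrix A(t) from Eq. 8: entry a_ij is the fraction of
    class-j data held by agent i. t=0 gives a uniform split; t=1 gives each
    agent exclusive ownership of its classes."""
    A = np.full((N, n_classes), (1.0 - t) / N)
    for j in range(n_classes):
        i = j % N  # the "owner" agent of class j
        A[i, j] = 1.0 - (A[:, j].sum() - A[i, j])  # 1 minus other agents' shares
    return A

A = allocation_matrix(t=0.75)
assert np.allclose(A.sum(axis=0), 1.0)  # each class is fully distributed
```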
We specifically investigate the impact of varying \(t\) on MNIST classification using privacy budgets \(\epsilon=0.5\) and \(\epsilon=10\) and a communication graph of normalized Fiedler value \(0.06\) (with means and standard deviations reported over five trials). As shown in Table 2, we find that DP-DSGT maintains consistent performance regardless of the data distribution scheme, which agrees with results for non-private DSGT [29; 49; 36], making this algorithm a good choice for applications where agents own non-overlapping data classes. On the other hand, DP-DSGD and DP-DiNNO benefit more from firsthand experience with each class as they drop off in performance by 8% and 10% respectively as \(t\) goes from \(0\) to \(1\) when \(\epsilon=0.5\). Still, DP-DiNNO is more resilient than DP-DSGD to changes in data allocation for higher privacy budgets as shown for \(\epsilon=10\). **Further Experiments.** We show that our differentially private algorithms scale to a more complex problem (i.e., CIFAR-100 classification [27] with a deeper model), while maintaining strong performance, see Appendix C. Notably, DP-DSGT achieves accuracy within 6% of DP-SGD under \((10,10^{-5})\)-differential privacy after 80 training epochs. Finally, we sanity check our approaches with membership inference attacks as described in Appendix D. ## 7 Final Remarks **Limitations and Future Work.** Throughout experimentation, we found that our algorithms required tuned learning rates for different tasks and privacy budgets. We made a best-effort attempt to achieve comparable algorithm evaluations using Ray's population-based tuning and additional hand-tuning; we acknowledge that further extensive tuning (given adequate computing resources) might lead to different hyperparameter choices. Throughout this work, we assumed agents could be honest-but-curious, though not deceptive, as this would invalidate the consensus. Future work could consider non-cooperative and adversarial threat models. We also focused on classification tasks since these are the most standard differential privacy benchmarks. It would be interesting to apply our algorithms to other problems beyond classification, such as cooperative representation learning (as done in [54] within a non-private setting). Finally, for all use cases, it would be important to understand and mitigate our algorithms' fairness loss when applied to sensitive use cases, particularly considering that differential privacy has been shown to have a disparate impact on model accuracy [5]. **Conclusion.** We introduced differentially private consensus-based learning algorithms that achieve strong performance on deep learning tasks, regardless of convexity, while protecting agents' local datasets from untrustworthy agents. Our work showcases the feasibility of achieving differential privacy in fully decentralized settings, even with relaxed communication topologies, low privacy budgets (i.e., strong privacy guarantees), and non-uniformly distributed data. Differential privacy enables the responsible and ethical use of sensitive data, balancing the societal benefits of data analysis with the protection of individual privacy. Expanding the scope of differential privacy in deep learning to encompass decentralized learning algorithms holds significant importance, and we anticipate that our work will lay the foundation for future advancements in this area. 
\begin{table}
\begin{tabular}{l l l l l l}
\hline \hline
\multicolumn{6}{c}{\(\boldsymbol{\varepsilon=0.5}\)} \\
\hline
Method & \(t=0\) & \(t=0.25\) & \(t=0.5\) & \(t=0.75\) & \(t=1\) \\
\hline
DP-DSGD & 88.9 \(\pm\) 0.99 & **88.3** \(\pm\) 0.31 & **87.65** \(\pm\) 0.28 & **86.06** \(\pm\) 0.24 & 80.57 \(\pm\) 1.47 \\
DP-DSGT & 85.78 \(\pm\) 0.63 & 86 \(\pm\) 0.96 & 85.79 \(\pm\) 0.96 & 85.3 \(\pm\) 0.9 & **84.53** \(\pm\) 1.29 \\
DP-DiNNO & **86.76** \(\pm\) 0.96 & 86.19 \(\pm\) 0.79 & 85.28 \(\pm\) 0.39 & 83.55 \(\pm\) 0.71 & 78.44 \(\pm\) 2 \\
\hline \hline
\multicolumn{6}{c}{\(\boldsymbol{\varepsilon=10}\)} \\
\hline
Method & \(t=0\) & \(t=0.25\) & \(t=0.5\) & \(t=0.75\) & \(t=1\) \\
\hline
DP-DSGD & 94.08 \(\pm\) 0.41 & 93.89 \(\pm\) 0.35 & 93.63 \(\pm\) 0.31 & 92.56 \(\pm\) 0.24 & 84.92 \(\pm\) 2.26 \\
DP-DSGT & **95.97** \(\pm\) 0.35 & **96.04** \(\pm\) 0.33 & **95.72** \(\pm\) 0.34 & **96.10** \(\pm\) 0.41 & **96.06** \(\pm\) 0.24 \\
DP-DiNNO & 93.42 \(\pm\) 0.53 & 93.34 \(\pm\) 0.49 & 93.28 \(\pm\) 0.48 & 93.03 \(\pm\) 0.33 & 92 \(\pm\) 0.54 \\
\hline \hline
\end{tabular}
\end{table} Table 2: Performance comparison over algorithms and data splits on MNIST.

## Acknowledgements

J. Bayrooti is supported by a DeepMind scholarship. Z. Gao and A. Prorok are supported in part by European Research Council (ERC) Project 949940 (gAIa). We also thank Carl Henrik Ek for insightful conversations and useful feedback on early drafts of this work. Additionally, we thank George Pappas for discussing differentially private machine learning with us and Alex Sablayrolles for helpful information on using the Opacus library.
2308.04842
Merging in a Coupled Driving Simulator: How do drivers resolve conflicts?
Traffic interactions between merging and highway vehicles are a major topic of research, yielding many empirical studies and models of driver behaviour. Most of these studies on merging use naturalistic data. Although this provides insight into human gap acceptance and traffic flow effects, it obscures the operational inputs of interacting drivers. Besides that, researchers have no control over the vehicle kinematics (i.e., positions and velocities) at the start of the interactions. Therefore the relationship between initial kinematics and the outcome of the interaction is difficult to investigate. To address these gaps, we conducted an experiment in a coupled driving simulator with a simplified, top-down view, merging scenario with two vehicles. We found that kinematics can explain the outcome (i.e., which driver merges first) and the duration of the merging conflict. Furthermore, our results show that drivers use key decision moments combined with constant acceleration inputs (intermittent piecewise-constant control) during merging. This indicates that they do not continuously optimize their expected utility. Therefore, these results advocate the development of interaction models based on intermittent piecewise-constant control. We hope our work can contribute to this development and to the fundamental knowledge of interactive driver behaviour.
Olger Siebinga, Arkady Zgonnikov, David A. Abbink
2023-08-09T10:04:48Z
http://arxiv.org/abs/2308.04842v1
# Merging in a Coupled Driving Simulator: How do drivers resolve conflicts?

###### Abstract

Traffic interactions between merging and highway vehicles are a major topic of research, yielding many empirical studies and models of driver behaviour. Most of these studies on merging use naturalistic data. Although this provides insight into human gap acceptance and traffic flow effects, it obscures the operational inputs of interacting drivers. Besides that, researchers have no control over the vehicle kinematics (i.e., positions and velocities) at the start of the interactions. Therefore, the relationship between initial kinematics and the outcome of the interaction is difficult to investigate. To address these gaps, we conducted an experiment in a coupled driving simulator with a simplified, top-down view, merging scenario with two vehicles. We found that kinematics can explain the outcome (i.e., which driver merges first) and the duration of the merging conflict. Furthermore, our results show that drivers use key decision moments combined with constant acceleration inputs (intermittent piecewise-constant control) during merging. This indicates that they do not continuously optimize their expected utility. Therefore, these results advocate the development of interaction models based on intermittent piecewise-constant control. We hope our work can contribute to this development and to the fundamental knowledge of interactive driver behaviour.

## 1 Introduction

Interactions between vehicles, such as in highway merging, play a major role in everyday traffic. Therefore, driving behaviour in these interactions is an essential aspect of many transportation technologies. Empirical data and microscopic traffic models of human driving behaviour are thus essential tools for transportation engineers. These models and data are used in the design and safety assessment of highway on-ramps [1, 2] and urban intersections [3]. Microscopic traffic models can be used to evaluate traffic management systems [4]. Finally, autonomous vehicle designers are interested in these interactions to develop socially acceptable and human-like autonomous behaviour [5, 6]. Particularly for the last use case, a good understanding of the individual negotiations and the continuous reciprocal actions of the drivers during interactions is essential. Many recent studies have investigated interactive merging behaviour by modelling this behaviour or by conducting empirical investigations. Most of these studies use naturalistic data, i.e., data recorded in real-world scenarios. For example, Daamen et al. [7] and Marczak et al. [8] performed empirical analyses on traffic data which they recorded with helicopters. Wang et al. [9] and Srinivasan et al. [10] used existing open datasets to evaluate driver behaviour on merge ramps. Others have modelled interactive driver behaviour using naturalistic data to gain insights, e.g., using game theory [11, 12, 13], acceleration models comparable to car-following models [14], or machine-learned models [15, 10]. The usage of naturalistic data has the advantage that real-world behaviour can be studied. However, this approach has two main drawbacks. First, naturalistic data is recorded with cameras on helicopters, quad-copters, or high buildings. Therefore, only sequential positions are recorded. Velocities and accelerations are reconstructed from this position data. This makes it challenging to directly investigate the drivers' operational behaviour and control inputs.
Second, kinematic differences between conditions can be observed, but not controlled. This makes it difficult to investigate the relationship between the initial kinematics of the vehicles and the outcome of the merging conflict (e.g., who merges first and who yields). To gain a deeper understanding of individual reciprocal interactions, controlled experiments are needed. However, only a very limited number of studies in a controlled environment (i.e., in a driving simulator) targeted interactions during merging (i.e., excluding studies of autonomous control strategies, gap acceptance, or traffic flow). Stoll et al. investigated human decision-making in merging scenarios based on videos of a controlled simulation [16]. Participants had to select their preferred reaction (e.g., accelerate or decelerate) after watching videos of vehicles they were "interacting with". Shimojo et al. used a driving simulator to investigate how the merging behaviour of drivers is affected by their perception of other drivers [17]. They used predetermined controls for one of the vehicles in the interaction, to influence this perception in a controlled way. In both experiments, the behaviour of one of the drivers was predetermined. Thus, there was no interaction or dynamic negotiation between two human drivers. We conclude that the existing literature misses studies that investigate the reciprocal merging interactions between at least two human drivers in a controlled environment.

Figure 1: The simplified merging scenario used in the experiment. Two vehicles approach a pre-defined merge point at which their lanes merge into one. The track consists of three sections of equal length (50 \(m\), total track length 150 \(m\)). The vehicle dimensions are 4.5 \(m\) x 1.8 \(m\). In the tunnel, participants could observe both vehicles, but not control their vehicles. During the approach, the participants could control the acceleration of their vehicles to resolve the merging conflict. During the car-following section, the vehicles follow each other in the same lane.

Figure 2: The experimental setup as seen from a participant’s view. The other participant in the pair used an identical setup. The participants could not see each other.

To address this gap, we conduct an experiment in a top-down view, coupled driving simulator in which we investigate reciprocal merging interactions between two human drivers. We investigate the operational behaviour of the drivers in terms of inputs (acceleration and velocity profiles). Furthermore, we examine the influence of different initial kinematics (both position and velocity) on the outcome of the interaction, both on a high level in terms of which driver merges first and in more detail through the metric Conflict Resolution Time (CRT) [18]. We hope this experiment advances the fundamental knowledge about vehicle-vehicle interactions in traffic and contributes to the development of interaction-aware intelligent transportation systems.

## 2 Methods

We conducted an experiment in a coupled, top-down view driving simulator with 9 pairs of participants (6 female, 12 male, mean age: 25, std: 2.6). The details of this experiment (including Figures 1 and 2), and the analysis tools we developed to gain insight into the merging behaviour, have been previously published in [18]. This experiment was approved by TU Delft's Human Research Ethics Committee (HREC). All participants gave their consent before participating in the experiment.
The experiment regarded a symmetric simplified merging scenario (Figure 1) in which participants could control the acceleration of their vehicle using the gas and brake pedal of a steering-wheel game controller (Logitech Driving Force GT). The headings of the vehicles were always equal to the heading of the road, so no steering was involved. Participants could see the simulation on a computer screen (Figure 2). However, they could not see the other participant, who was seated in the same room behind a screen. To prevent auditory communication, participants wore noise-cancelling headsets (Sony WH-1000XM3) with ambient music. All gathered data was published in the 4TU data repository [19]. The software needed to reproduce the experiment can be found on GitHub1. Interactive plots of all our results can be found in the online supplementary materials2.

Footnote 1: [https://github.com/tud-hri/simple-merging-experiment](https://github.com/tud-hri/simple-merging-experiment)

Footnote 2: [https://tud-hri.github.io/simple-merging-experiment](https://tud-hri.github.io/simple-merging-experiment)

To investigate the effects of the initial vehicle kinematics on the outcome of the merging conflict, we varied the initial positions and initial velocities of the vehicles. Participants were instructed to maintain their initial velocity yet prevent a collision. To ensure a merging conflict, all conditions were chosen such that if both drivers would maintain their initial velocity, they would collide. Furthermore, participants were instructed to remain seated, use one foot on the gas or brake pedal, keep both hands on the steering wheel, and not to communicate by making sounds or noise. Finally, participants were told that this is a scientific experiment (not a game or a race) and that no vehicle had the right of way. The participants received visual feedback on their computer screens. Their visuals were randomly mirrored such that they appeared to approach the merge point from the left or the right side. In the experimenter's view, and in all results discussed here, we refer to the same participant in a pair as the left or right driver. If participants deviated from their initial velocity, their steering wheel provided vibration feedback, increasing with the deviation and with a dead band around the initial velocity. If the vehicles collided, the participants got a time penalty of 20 seconds. This was longer than the duration of a single trial, which took approximately 16 seconds. The vehicles started in a tunnel where participants could observe the initial velocities of both vehicles, but could not yet control their own vehicle. Once both vehicles had exited the tunnel, both participants gained control. This marked an unambiguous moment when the interaction started. The vehicles' initial positions and velocities were varied to create 11 experimental conditions. We used the projected headway at the merge point as the underlying metric to design the conditions and determine the initial positions. The projected headway is the headway (distance from front bumper to front bumper) at the merge point if both drivers would maintain their initial velocity. We chose this metric because it does not depend on track dimensions or a snapshot of the vehicle state at an arbitrary point along the track (e.g., at the tunnel exit).
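For concreteness, a minimal sketch of this metric is given below. It assumes constant velocities and measures the bumper gap at the instant the right vehicle reaches the merge point; the authors' exact sign and reference conventions may differ.

```python
def projected_headway(d_left, v_left, d_right, v_right):
    """Projected headway at the merge point under constant velocities.

    d_* are the remaining distances of each front bumper to the merge point
    and v_* the (strictly positive) velocities. Positive values mean the
    left vehicle is projected to be ahead at the merge point.
    """
    t_right = d_right / v_right       # right vehicle's projected arrival time
    # Left bumper position relative to the merge point at that instant:
    return v_left * t_right - d_left
```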
To visualise the differences between conditions, we plotted them in a 2D projected-headway vs. relative-velocity plane (Figure 3). This figure shows the conflict space. If the projected headway is larger than the vehicle length, there is no conflict. These areas are shown in grey on the left and right side of Figure 3. The figure also shows in which areas we expected the right or the left driver to have an advantage. This expectation was based on a (shorter) pilot experiment with the same experimental setup but different kinematic conditions. We used this expectation to design and spread the conditions evenly over the conflict space. The diagonal darker area represents the area in which the (kinematic) advantage changes from the left to the right driver. We decided not to investigate this area but to (first) focus on driver behaviour in cases where the outcome is more distinct. Our aim here is to gain insight into the interactions and negotiations between the two drivers in these situations. However, we did include a baseline condition where neither driver has a position or velocity advantage.

Figure 3: The experimental conditions in their two-dimensional space. The x-axis shows the projected headway at the merge point if both drivers would keep their initial velocity. If the headway is larger than the vehicle length (4.5 \(m\)) there is no projected collision; this is indicated by the grey areas on the left and right side. The y-axis shows the initial velocity differences. Positive values mean that the left vehicle is (projected to be) ahead or moving faster. The diagonal darker area divides the space into areas where the left or right driver has the advantage to pass the merge point first. This line was estimated based on pilot experiments. Note that this does not simply divide the plane into areas where one driver has the velocity or projected headway advantage.

With these conditions, we aim to obtain a quantitative description of the most likely outcome (who merges first) based on the initial kinematics. We used the Python package Pymer4 [20] for all statistical models in this work. We named the conditions based on the two dimensions that define them: the projected headway in meters, and the velocity difference in decimeters per second. Positive numbers indicate that the left driver has an advantage. For example, in condition **-2_8**, the right driver has a projected headway advantage of \(2\ m\), but the left driver drives \(0.8\ \frac{m}{s}\) faster. For more visual examples of conditions and their names, see Figure 4. In our experiment, every condition was repeated 10 times in a random order for every pair of participants.

We used the Conflict Resolution Time (CRT) [18] to analyse the conflict resolution behaviour of the pairs of participants. The CRT denotes the time from the start of the interaction until the first moment at which the vehicles are no longer on a collision course (assuming constant velocity). To calculate the CRT, we post-process the data and determine for every time step if a collision would occur on the remaining track if both vehicles would continue their velocity. The time between the tunnel exit and the first moment where no collision would occur is the CRT. Thus, CRT is a measure of the amount of time needed to resolve the conflict and therefore can be used as a measure of the difficulty of the merging conflict.
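A sketch of how the CRT could be computed from logged trajectories is shown below. It uses a simplified collision-course test (projected bumper gap at the merge point smaller than one vehicle length); the authors' post-processing checks the entire remaining track and may be more involved.

```python
import numpy as np

def on_collision_course(d_l, v_l, d_r, v_r, veh_len=4.5):
    """Simplified test: if both vehicles keep their current (positive)
    velocity, is the bumper gap at the merge point below one vehicle length?"""
    t_l, t_r = d_l / v_l, d_r / v_r      # projected arrival times at the merge point
    v_first = v_l if t_l < t_r else v_r  # speed of the vehicle that merges first
    gap = v_first * abs(t_l - t_r)       # bumper gap when the later vehicle merges
    return gap < veh_len

def conflict_resolution_time(t, d_l, v_l, d_r, v_r, veh_len=4.5):
    """Time from gaining control (t[0]) until the pair is first no longer on
    a collision course under a constant-velocity projection."""
    for k in range(len(t)):
        if not on_collision_course(d_l[k], v_l[k], d_r[k], v_r[k], veh_len):
            return t[k] - t[0]
    return np.nan  # never resolved, e.g. the trial ended in a collision
```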
## 3 Results

We structure our investigation of driver conflict resolution behaviour into two parts. First, we present the analysis of the joint behaviour of two drivers, to analyse the outcome of the conflict (who gives way) and how quickly each pair of drivers resolved the merging conflict. Metrics that capture the joint behaviour for each pair under different conditions include the percentage of who merged first, as well as the Conflict Resolution Time (CRT). Second, we investigate the contributions of each individual driver in a pair to resolve the conflict. This includes the actions the individual drivers took in terms of accelerations and the resulting velocity profiles.

\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|}
\hline
Condition & -4\_-8 & -4\_0 & -4\_8 & -2\_8 & 0\_-8 & 0\_0 & 0\_8 & 2\_-8 & 4\_-8 & 4\_0 & 4\_8 \\
\hline
Number of Collisions & 3 & 1 & 3 & 2 & 4 & 5 & 3 & 2 & 1 & 2 & 2 \\
\hline
\end{tabular}
\end{table} Table 1: The number of observed collisions per condition. The total number of trials per condition was 90. Most collisions occurred between 4 and 6 seconds after the vehicles exited the tunnels.

Figure 4: Three visualisations of experimental conditions. The figures show the relative positions of the vehicles and the start point, tunnel exit, and merge point. These merge point positions would occur if both vehicles would maintain their initial velocity. In most conditions, the slower vehicle has a position advantage at the tunnel exit. The exceptions are conditions \(4\_8\) and \(-4\_-8\), where the vehicles exit the tunnel at the same time.

### Joint Behaviour

#### 3.1.1 Who Merged First?

The high-level outcome of a merging conflict can be summarised by which driver reached the merge point first, except for the trials where the vehicles collided. However, collisions were rare across all conditions (Table 1). We plot the proportion of left and right vehicles that went first as a function of initial conditions in Figure 5. In the "neutral" 0\_0 condition this proportion is almost evenly distributed. For the other 10 conditions with kinematic differences between the drivers, 5 conditions show a consistent outcome over all pairs and trials. This indicates that the outcome in these conditions is entirely defined by kinematics, with no variation between participant pairs. In one other condition (2\_-8), only a single trial deviated from the outcome norm. Four conditions (-4\_-8, 4\_8, 0\_-8, and 0\_8) show a large majority of the outcomes where a particular driver merges first and a minority of the other driver merging first.

Figure 5: An overview of the high-level outcome per condition: which driver went first? Every condition was repeated 10 times for all 9 participant pairs. Therefore, the total number of trials per condition is 90. The markers show the measured data as the percentage of the left driver merging first, with the vertical lines representing the 95% binomial proportion confidence intervals. Collisions were omitted from these results (see Table 1). The lines and shaded areas represent the (population) predictions of the mixed-effects logistic regression model (Table 2) with the 95% confidence interval.

To investigate the relationship between the initial conditions (i.e. the kinematics at the start of each scenario) and the outcome (which driver merges first), we fitted a mixed-effects logistic regression model to the data. The model parameters are shown in Table 2, and the model outcome is visualised in Figures 5 and 6. These results show that increasing the projected headway advantage increases the chances of a driver merging first (\(z=14.4\), \(p<10^{-46}\)). The relative velocity on the other hand has a negative effect on the probability the driver merges first (\(z=-10.6\), \(p<10^{-25}\)). This means that, for equal projected headways, a driver with a higher initial velocity tends to merge behind the driver with a lower initial velocity. The explanation for this is that drivers with a higher velocity exit the tunnel later than the slower vehicle in most conditions (Figure 4). An important side-note to these effects is that we found these in a symmetric scenario with no right of way for either of the drivers.
The population level intercept had a negative estimated value that is not significant (\(z=-1.5\), \(p=0.13\)). This could be explained by the fact that the intercept captures a bias in the data towards the left or the right driver. This effect is clearest in the neutral condition (0\_0), where we found that the right driver merged first in a small majority of the cases. Table 3 shows the estimated intercept values for the individual participant pairs. We expect that with more participants, the bias on the population level will disappear and the intercept value will approach 0. To visualise at which locations in the conflict space the left or right driver is more likely to merge first, we have created a top-down view heat map of the regression model. This heat map is shown in Figure 7 and closely resembles Figure 3.

\begin{table}
\begin{tabular}{l|c|c|c|c|c|c}
 & & & & & \multicolumn{2}{c}{Confidence interval} \\
 & Estimate & SE & Z & P-value & 0.025 & 0.975 \\
\hline
Intercept & -0.32 & 0.212 & -1.50 & \(1.326\times 10^{-1}\) & -0.73 & 0.10 \\
Projected headway & 1.15 & 0.080 & 14.4 & \(6.966\times 10^{-47}\) & 0.99 & 1.31 \\
Relative velocity & -3.41 & 0.321 & -10.6 & \(2.858\times 10^{-26}\) & -4.04 & -2.78 \\
\end{tabular}
\end{table} Table 2: Mixed-effects logistic regression model describing the effect of projected headway and relative velocity on which driver reached the merge point first. Collisions were excluded; the left vehicle going first was labelled as 1, right first as 0. The model includes a random intercept for participant pairs to account for between-pair differences.

\begin{table}
\begin{tabular}{|l|c|c|c|c|c|c|c|c|c|}
\hline
Participant Pair & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 \\
\hline
Intercept & \(-0.54\) & \(-0.42\) & \(-1.17\) & \(0.06\) & \(-0.13\) & \(-0.51\) & \(0.16\) & \(-0.22\) & \(-0.13\) \\
\hline
\end{tabular}
\end{table} Table 3: Estimates of the random intercept values per pair for the mixed-effects logistic regression model (Table 2).

Figure 6: A 3-dimensional visualisation of a (population) prediction of the logistic regression model on the data. All three subplots show the same data from different angles. The model predictions are shown as the black surface and the background projections. The coloured bars show the data from the experiment. The \(x\) and \(y\)-axis represent the condition kinematics. The \(z\)-axis shows the percentage of trials where the left driver merged first. Collisions were excluded from this data (see Table 1). An interactive version of this plot can be found in the online supplementary materials.
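As an illustration, a model of this structure can be fitted in a few lines with Pymer4; the column names below are hypothetical placeholders for the trial-level data, not names from the published dataset.

```python
import pandas as pd
from pymer4.models import Lmer

# Hypothetical trial-level data: one row per non-collision trial, with
# 'left_first' equal to 1 if the left driver reached the merge point first.
trials = pd.read_csv("trials.csv")  # columns: pair, headway, rel_velocity, left_first

# Mixed-effects logistic regression with a random intercept per participant
# pair, mirroring the model reported in Table 2.
model = Lmer("left_first ~ headway + rel_velocity + (1 | pair)",
             data=trials, family="binomial")
print(model.fit())  # prints fixed-effect estimates, SEs, z- and p-values
```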
#### 3.1.2 Conflict Resolution Time

Besides how the conflict was resolved (which driver merged first), we investigated how quickly the conflict was resolved by examining the Conflict Resolution Time (CRT). This is a measure of the time it took the drivers to resolve the conflict and therefore reflects the difficulty of the conflict in a specific trial. Figure 8 shows the CRT distributions we found for all experimental conditions. The median CRT is highest for the neutral condition 0\_0. In this condition, no driver has a headway or velocity advantage. Drivers have to negotiate a solution without a "most-likely" candidate solution. The lowest median CRT was found for the conditions where one driver only had a projected headway difference but the velocity was the same for both drivers. The conditions with velocity differences but no projected headway difference had high median CRTs. Thus, conflicts where one driver has a pure projected headway advantage are easier to resolve than conflicts where one driver has a pure velocity advantage. Besides these high-level observations, Figure 8 reveals no clear relationship between the initial kinematics and the CRT of the merging conflicts.

We expected that the high-level outcome of the conflict (who merged first) might partly explain the CRT of that trial. More concretely, we expected trials where the driver with the kinematic advantage went first to be resolved more quickly than trials where the driver with a disadvantage went first. To investigate this, we analysed CRT as a function of the kinematic advantage from the perspective of the first merging driver (Figure 9, Table 4). The projected headway and velocity differences in this figure are positive if the first merging driver had the advantage. We found that trials with a larger headway advantage for the driver that merged first had a lower CRT (\(t=-15.3\), \(p<10^{-46}\)). Trials with a velocity advantage for the first merging driver had a higher CRT (\(t=5.02\), \(p<10^{-6}\)). Moreover, we found that the association between the CRT and the projected headway advantage was stronger for a larger velocity advantage (\(t=-6.09\), \(p<10^{-8}\)). One important side note is that drivers with a higher initial velocity have a headway disadvantage in the approach section, i.e., they are approaching the merge point behind the other driver.

Figure 7: A heat map of a logistic regression model prediction for the driver that will merge first. The conditions where data was gathered are marked with black squares.

### Individual behaviour

To gain insight into the operational behaviour of the drivers, we investigated the aggregated velocity traces of all drivers (Figure 10). We chose to show the velocity traces for the neutral condition (0\_0) here because this condition has the widest variety of solutions (in terms of who merges first). Because of this spread, this velocity plot is easier to read than the same plot for other conditions. However, the key aspects identified in this plot are representative of the other conditions (for the raw data, including plots, see [19]; interactive versions of these plots are available in the supplementary material). One striking characteristic of the velocity traces in Figure 10 is the triangular pattern that can be observed in many traces. Such triangular-shaped velocity patterns indicate two things. First, they show that drivers use blocks of constant acceleration (step inputs on gas/brake) to control their vehicle during an interaction. Second, in between these step inputs, or straight lines in the velocity trace, the input changes rapidly, causing a sharp angle in the velocity trace. This indicates that drivers select an input level and stick to that until something triggers a new decision resulting in a new input level. We refer to this combination as _intermittent piecewise-constant control_, where intermittent refers to the observed decision moments, and piecewise-constant to the constant acceleration levels in between. With this intermittent piecewise-constant control, drivers use key decision moments at which they determine a plan. After this decision, they stick with this plan until something triggers a new decision. Therefore, Figure 10 provides evidence that drivers do not continuously optimise their acceleration input while interacting in traffic. Thus, the assumption of continuous utility maximisation that is made in many models of driver behaviour (e.g. [6, 21, 22, 23, 24]) does not hold for these interactions.
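To make the notion of a decision moment operational, one could, for instance, flag samples where the acceleration input jumps. A crude sketch is given below; the threshold is an illustrative choice, not a value from the paper.

```python
import numpy as np

def decision_moments(t, accel, jump_threshold=0.5):
    """Crude detector for the key decision moments suggested by Figure 10:
    time points where the (piecewise-constant) acceleration input jumps by
    more than `jump_threshold` m/s^2 between consecutive samples."""
    jumps = np.abs(np.diff(accel)) > jump_threshold
    return t[1:][jumps]
```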
Figure 8: Distribution of the Conflict Resolution Time (CRT) for all conditions. The CRT is the time from the moment at which the drivers gain control until the first moment when they are no longer on a collision course (assuming constant velocities). The coloured horizontal bars indicate the average time at which the first vehicle reached the merge point in that condition. A figure that shows the same CRT distributions placed in the two-dimensional conflict space at the locations of the corresponding conditions is available in the online supplementary material.

Another aspect shown in Figure 10 is that in many cases, the drivers immediately accelerate or decelerate at the moment they gain control. This indicates that, even in this purely symmetrical condition, drivers exit the tunnel with an intended solution in mind (i.e., they plan to go first or yield). To further investigate if drivers start the interaction with a mutual solution in mind, and if this solution is also reached, we plotted the outcome of the merging conflict versus the initial drivers' actions in Figure 11. Figure 11 shows that in the majority of the interactions that do not end in a collision, the drivers initially cooperate. In most interactions that end in the left vehicle reaching the merge point first, the left driver's initial input was to accelerate and the right driver's initial input was to decelerate. This indicates two things. First, it shows that drivers form compatible ideas about who will merge first before they even start interacting (in that trial), i.e., drivers use a shared mental model [25]. Second, even though there are cases where the conflict is resolved by only one of the drivers (i.e., where the other driver's input is 0), in most cases, both drivers initially act simultaneously to prevent a collision.

\begin{table}
\begin{tabular}{l|c|c|c|c|c|c}
 & Estimate & SE & T-stat & P-value & 0.025 & 0.975 \\
\hline
Intercept & 1.61 & 0.107 & 15.2 & \(4.97\times 10^{-10}\) & 1.41 & 1.83 \\
Projected headway & -0.25 & 0.016 & -15.3 & \(2.16\times 10^{-47}\) & -0.28 & -0.22 \\
Relative velocity & 0.40 & 0.080 & 5.02 & \(6.07\times 10^{-7}\) & 0.25 & 0.56 \\
Relative velocity : projected headway & -0.14 & 0.023 & -6.09 & \(1.68\times 10^{-9}\) & -0.18 & -0.09 \\
\end{tabular}
\end{table} Table 4: Mixed-effects linear regression model analysing the Conflict Resolution Time (CRT) as a function of the kinematic conditions. Positive headways and relative velocities indicate an advantage for the driver who merged first. Collisions were excluded.
Figure 11: The outcome of the merging conflict plotted versus the initial acceleration input at tunnel exit for the left (x-axis) and right (y-axis) drivers for all conditions.

Figure 10: Velocity traces of the left and right drivers for all trials in the neutral condition 0\_0, from the tunnel exit up until the merge point. The trials of a representative pair are highlighted to provide more insight into individual traces. The markers at the end of the trials indicate the final outcome of the trial. These plots show that drivers use triangular velocity patterns while interacting. These triangular patterns indicate that drivers use blocks of constant acceleration input with key decision moments in between. Interactive versions of these plots for all conditions are available in the online supplementary material.

## 4 Discussion

In this paper, we investigated the conflict-resolving behaviour of pairs of drivers in a simplified merging scenario. Our four most important findings are: 1) both the relative velocity and projected headway have a significant effect on which driver merges first; 2) the time it takes drivers to resolve the conflict (CRT) can be explained by the kinematics from the perspective of the driver that merges first; 3) drivers used a shared mental model about which driver merges first based on observations before the start of the interaction; and 4) drivers use intermittent piecewise-constant control to resolve the conflict, suggesting they do not constantly optimise some utility function. Rather, the observed control behaviour is in line with satisficing (see [26]): in our experiment, drivers seem to search for a plan that is good enough and stick to that plan until it no longer suffices. At this key decision moment, they re-plan to find a new input that is good enough, and act accordingly.

### Relation to the Existing Literature

Our study indicated for the first time that both the relative velocity and the projected headway significantly influence which driver merges first. A velocity advantage decreases the probability of a vehicle merging first while a projected headway advantage increases that probability. Earlier studies mostly used naturalistic data, where these kinematics cannot be controlled (e.g., [7, 8, 9, 27]), or reduced the analysis of kinematics to one dimension by studying time to arrival (e.g., [10]). The finding that humans do not constantly optimise their behaviour corresponds to previous findings in simple economic games [28], velocity choice for isolated drivers [29], and high-level skill switching (between manual braking and using cruise control) during driving [30]. The key decision moments with constant inputs in between have previously been observed in individual truck driver behaviour [31]. However, our results are the first to show that these operational aspects of human driving are also present in merging interactions in a controlled experiment. Previous empirical studies on merging behaviour used naturalistic data [7, 8, 9, 10, 27], in which these operational aspects are not included. Most of these studies focus on evaluating gap acceptance behaviour and were inspired by an interest in the effects of merging behaviour on traffic flow [7, 8, 27]. Among the existing studies of naturalistic merging conflicts, two in particular had a goal similar to ours: to understand the dynamics of drivers' conflict-resolving behaviour. Wang et al. [9] studied social interactions on congested highways in the INTERACTION dataset [32].
They divided the merges based on the social preference of the drivers of through-lane vehicles. Drivers that overtook a merging car before the merge were labelled "rude", while drivers that slowed down to let the merging vehicle in were labelled "courteous". However, our results indicate that the outcome (who goes first) for most scenarios we tested depends strongly on the kinematic vehicle states _at the start_ of the interaction, not on individual differences between drivers. Because the kinematics are not controlled for in naturalistic data, they form a substantial confounding factor, which merits caution when attributing driving style as a key factor for merging outcomes. Srinivasan et al. [10] used naturalistic data to evaluate a machine-learned model of human merging behaviour. They concluded that this machine-learned model can successfully predict the trajectories shown by drivers in scenarios where one of the vehicles has a large kinematic advantage. Compared to our work, they reduced the kinematic differences to a single dimension: time-to-arrival. A \(0.0\ s\) time-to-arrival difference corresponds to a \(0.0\ m\) projected headway in our work, but other time-to-arrival differences can be obtained with multiple combinations of projected headway and relative velocity. Our results show that these both have a significant impact on the outcome of the conflict in terms of the driver that merges first (Table 2) and on the CRT (Table 4). An important difference between our work and [10] is that we only regarded situations where the drivers are on a collision course from the start of the interaction, while [10] regards large(r) kinematic differences. Nevertheless, we advocate using both relative velocity and projected headway for the kinematic analysis, because they have different effects on the outcome of the interaction. Besides that, we expect no major implications for machine-learned models of human behaviour based on our results.

### Implications

When regarding approaches that are not purely data-driven, however, our results could have major implications for models and control strategies. Many driver models make the assumption that humans behave as rational utility maximisers (e.g., [12, 21, 24, 33]). Because these models make this assumption, many control strategies for autonomous vehicles in mixed traffic were proposed that make the same assumption (e.g., [5, 6, 34, 35, 36, 37]). Roughly, two kinds of rational utility maximisation are used in driver models. First, there are models that regard merging as a single high-level decision about who merges first, such as the model Kita proposed as early as 1999 [33]. Second, there are models that assume drivers continuously optimise some reward function to determine their current input (Naumann et al. showed many examples of reward functions used for this approach in 2020 [24]). Our results have major implications for both assumptions. For models that regard merging as a single decision, our exploration of different kinematic conditions provides valuable insights into driver behaviour. Our results confirm that the vehicles' kinematics at the start of the interaction have a major impact on which driver merges first. This is in line with the model proposed by Kita [33]. However, our results also show that the individual differences in outcomes between pairs of drivers are restricted to a limited range of kinematic scenarios. In most scenarios, the same driver merges first for all driver pairs.
This would indicate that modelling the decision of who merges first based on individual preferences (differences in reward function) is only valuable for a limited set of conditions where the kinematic differences are small. For models that assume continuous optimisation, our results have more far-reaching implications. The aggregated velocity plot (Figure 10) shows that drivers do not continuously optimise, but replan at specific decision moments. This indicates that the assumption that drivers **continuously** optimise, approximately optimise (up to a threshold), or noisily optimise their inputs is not consistent with driver behaviour. Instead, drivers seem to be triggered to change their behaviour at a certain point (at which they might partially optimise to find a new plan). Besides the key decision moments, Figure 10 also shows piecewise-linear velocity patterns. This indicates that the assumption that drivers aim to minimise a squared difference between their current and desired velocity (as used in many models, e.g., [24, 38]) is also inconsistent with driver behaviour, because that would lead to non-linear velocity profiles. In general, our findings imply that the mathematical convenience related to the main assumptions in game-theoretic models comes at a serious cost to their descriptive power. Thus, although game-theoretic approaches can be very valuable to determine optimal control decisions between rational agents (e.g., in vehicle-to-vehicle communication approaches [39, 40, 41]), we advise caution in applying them to predicting driver behaviour (either in driver models or in AV control).

### Recommendations, Limitations, and Future Work

Therefore, we interpret our results as an encouragement to develop new types of traffic interaction models that do allow for intermittent piecewise-constant control in operational behaviour. Siebinga et al. previously proposed a model framework that could describe intermittent control in traffic interactions [42]. But there are other existing lines of research that also hold potential for application to interactive scenarios, such as evidence accumulation models (e.g., [43, 44, 45]). Besides the intermittent control, new interaction models should use piecewise-constant acceleration as control inputs. Furthermore, they should be able to describe the most likely outcomes for different initial kinematics, independent of individual driver differences (Figure 5). Although our work might provide inspiration for the development of novel interaction models, it also has some limitations. The main limitation is the simplification of the merging scenario. We started with a simplified symmetric merging scenario to investigate operational behaviour independent of factors such as a right-of-way. In real-world merging, this does play an important role in the interaction. Furthermore, we reduced the control inputs to acceleration and deceleration only (no steering). This was done to simplify the analysis of the experiment. This design choice decreased the difficulty of the task, which could have increased performance. Finally, we used a top-down-view simulation for simplicity. This makes it easier for participants to estimate velocities, which could have decreased the variability in the outcomes. So although this experiment provided valuable insights, more work is needed to validate these results in a more realistic scenario.
This validation should be done in a coupled driving simulator where the experimenter has full control over the initial kinematics to make a useful comparison. Besides the simplification of the scenario, another limitation is that participants in this experiment knowingly executed 110 merging manoeuvres against the same opponent. This could have influenced the outcome because the participants could have learned the other driver's behaviour. An experiment with more than two drivers and an experimental setup with random pairing could account for this.

## 5 Conclusion

In this paper, we investigated how drivers resolved merging conflicts in a coupled, top-down view driving simulator. We used a simplified merging scenario that only includes longitudinal control. We investigated driver behaviour under initial conditions with varying relative velocities and projected headways. We used mixed-effects regression models, the concept of Conflict Resolution Time (CRT), and aggregated velocity plots to gain insight into driver behaviour. For the experimental conditions studied, we conclude:

* Drivers used intermittent control (modifying acceleration only at key decision moments) to resolve merging conflicts. This suggests that drivers do not behave as continuous rational utility maximisers in merging interactions.
* Drivers use piecewise-constant acceleration control (blocks of constant acceleration), resulting in triangular velocity patterns, to control their vehicle.
* Relative velocity and projected headway are good predictors of which driver is most likely to merge first. They have different effects and are thus both needed for a reliable prediction (instead of reducing the kinematics to a single time-to-arrival value).
* We used a metric to describe the amount of time the drivers need to resolve a merging conflict (CRT). We found CRT is associated with the outcome of the interaction combined with the initial kinematic differences (projected headway and relative velocity).
* Conditions where one driver has a pure projected headway advantage are resolved faster than conditions with a pure velocity advantage.
* Drivers used shared mental models and observations before the start of the interaction to determine which driver will merge first.

## Acknowledgements

We thank Alexis Derumigny for the advice on statistical modelling. This research was funded by Nissan Motor Company, Ltd. and by the RVO grant TKI2012P01.
2307.05301
Signal-background separation and energy reconstruction of gamma rays using pattern spectra and convolutional neural networks for the Small-Sized Telescopes of the Cherenkov Telescope Array
Imaging Atmospheric Cherenkov Telescopes (IACTs) detect very-high-energy gamma rays from ground level by capturing the Cherenkov light of the induced particle showers. Convolutional neural networks (CNNs) can be trained on IACT camera images of such events to differentiate the signal from the background and to reconstruct the energy of the initial gamma ray. Pattern spectra provide a 2-dimensional histogram of the sizes and shapes of features comprising an image and they can be used as an input for a CNN to significantly reduce the computational power required to train it. In this work, we generate pattern spectra from simulated gamma-ray and proton images to train a CNN for signal-background separation and energy reconstruction for the Small-Sized Telescopes (SSTs) of the Cherenkov Telescope Array (CTA). A comparison of our results with a CNN directly trained on CTA images shows that the pattern spectra-based analysis is about a factor of three less computationally expensive but not able to compete with the performance of a CTA image-based analysis. Thus, we conclude that the CTA images must be comprised of additional information not represented by the pattern spectra.
J. Aschersleben, T. T. H. Arnesen, R. F. Peletier, M. Vecchi, C. Vlasakidis, M. H. F. Wilkinson
2023-07-11T14:45:52Z
http://arxiv.org/abs/2307.05301v2
# Signal-background separation and energy reconstruction of gamma rays using pattern spectra and convolutional neural networks for the Small-Sized Telescopes of the Cherenkov Telescope Array

###### Abstract

Imaging Atmospheric Cherenkov Telescopes (IACTs) detect very-high-energy gamma rays from ground level by capturing the Cherenkov light of the induced particle showers. Convolutional neural networks (CNNs) can be trained on IACT camera images of such events to differentiate the signal from the background and to reconstruct the energy of the initial gamma ray. Pattern spectra provide a 2-dimensional histogram of the sizes and shapes of features comprising an image and they can be used as an input for a CNN to significantly reduce the computational power required to train it. In this work, we generate pattern spectra from simulated gamma-ray and proton images to train a CNN for signal-background separation and energy reconstruction for the Small-Sized Telescopes (SSTs) of the Cherenkov Telescope Array (CTA). A comparison of our results with a CNN directly trained on CTA images shows that the pattern spectra-based analysis is about a factor of three less computationally expensive but not able to compete with the performance of the CTA image-based analysis. Thus, we conclude that the CTA images must be comprised of additional information not represented by the pattern spectra.

keywords: CTA, gamma rays, Imaging Atmospheric Cherenkov Telescopes, atmospheric shower reconstruction, machine learning

## 1 Introduction

When a gamma ray reaches the Earth's atmosphere, it induces a cascade of secondary particles which are known as air showers. The secondary particles can reach velocities higher than the speed of light in air, inducing a flash of _Cherenkov light_ [1]. The Cherenkov light can be captured by _Imaging Air Cherenkov Telescopes_ (IACTs) from the ground to reconstruct specific properties of the initial particle, such as its type, energy and direction (see [2; 3; 4] for an overview of ground-based gamma-ray astronomy). The _Cherenkov Telescope Array_ (CTA) [5] is the next-generation ground-based observatory for gamma-ray astronomy at very-high energies, offering 5-10 times better flux sensitivity than current-generation gamma-ray telescopes [6], such as H.E.S.S. [7], MAGIC [8] and VERITAS [9]. It will cover a wide energy range between \(20\,\mathrm{GeV}\) and \(300\,\mathrm{TeV}\), benefiting from three different telescope types: _Large-Sized Telescopes_ (LSTs), _Medium-Sized Telescopes_ (MSTs) and _Small-Sized Telescopes_ (SSTs). The CTA Observatory will be distributed over two arrays in the northern hemisphere in La Palma (Spain) and the southern hemisphere near Paranal (Chile). CTA will outperform the energy and angular resolution of current instruments, providing an energy resolution of \(\sim 5\,\%\) around \(1\,\mathrm{TeV}\) and an angular resolution of \(1\,^{\prime}\) at its upper energy range. With its short timescale capabilities and large field of view of \(4.5\,^{\circ}-8.5\,^{\circ}\), it will enable the observation of a wide range of astronomical sources, including transient, high-variability or extended gamma-ray sources. Several analysis methods for IACT data have been developed to classify the initial particle and reconstruct its energy and direction. _Hillas parameters_ [10] are one of the first reconstruction techniques, proposed by A. M. Hillas in 1985.
They describe features of the Cherenkov emission within the camera images and are widely used as input to machine learning algorithms like _Random Forest_[11] or _Boosted Decision Trees_[12; 13; 14] to perform full event reconstruction of gamma rays. Another approach is the _ImPACT_ algorithm [15], which performs event reconstruction using expected image templates generated from Monte Carlo simulations. Other methods such as _model analysis_[16] and _3D model analysis_[17], which are based on a semi-analytical shower model and a Gaussian photosphere shower model respectively, managed to be more sensitive to certain properties of the shower [18]. Recently, _convolutional neural networks_ (CNNs) [19; 20; 21] have been proposed and applied to IACT data [22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34]. CNNs are machine learning algorithms that are specialised for image data and are currently one of the most successful tools for image classification and regression tasks [35]. They rely on _convolutional layers_ which consist of image filters that are able to extract relevant features within an image. Among many others, models such as _AlexNet_[36], _GoogLeNet_[37] and _ResNet_[38] established many new techniques, such as the _Rectified Linear Unit_ (ReLU) [39] activation function and deeper architectures, which set the milestones for many upcoming architectures. _ResNet_ won the _ImageNet Large Scale Visual Recognition Challenge_ (ILSVRC) in 2015 by introducing _shortcut connections_ into the architecture and achieving a _top-5 classification error_ of only 3.6 % [40]. CNNs that contain these shortcut connections often achieve higher performances and are referred to as _residual neural networks_ (ResNets). The first event classifications with a CNN trained on IACT images have been presented in [22] and [23], which have demonstrated the signal-background separation capabilities of CNNs. Later work has shown the energy and direction reconstruction capabilities of gamma rays with CNNs [24; 25; 26; 27], their ability to run in stereo telescope mode [28; 29; 30] and to be applied to real data [31; 32; 33]. However, one of the main drawbacks of this method is that the training of CNNs is computationally very expensive [41]. It typically requires access to computing clusters with powerful graphics processing units (GPUs) and large amounts of random-access memory (RAM). The larger the dimension of the input image, the larger the computational power and time needed for the CNN training. A significant reduction of the dimension of the input image without any performance losses would therefore result in substantial savings in hardware and human resources, increase the efficiency of related scientific works and lower the environmental impact of CNNs [42]. An approach to this problem is _pattern spectra_[43], which are commonly used tools for image classification [44; 45; 46] and can significantly reduce the computational power needed to train CNNs. They provide a 2-dimensional distribution of sizes and shapes of features within an image and can be constructed using a technique known as granulometries [47]. The features within the image are extracted with connected operators [48], which merge regions within an image with the same grey scale value. Compared to other feature extraction techniques, this approach has the advantage of not introducing any distortions into the image. 
In this work, we generate pattern spectra from simulated CTA images to apply them on a ResNet for signal-background separation and energy reconstruction of gamma rays. The application of a ResNet on pattern spectra takes advantage of their 2D nature by selecting relevant combinations of features within the CTA images. Our pattern spectra algorithm is based on the work presented in [44], which provides two main advantages compared to other existing pattern spectra algorithms: (i) the computing time for creating the pattern spectra is independent of their dimensions and (ii) it is significantly less sensitive to noise. These properties merit the investigation of pattern spectra-based analysis for IACTs. Direction reconstruction of gamma rays is not considered here since pattern spectra are rotation invariant, meaning that the same CTA image rotated by an arbitrary angle would result in the same pattern spectrum. By generating pattern spectra from simulated CTA images, we aim to obtain a competitive algorithm that is significantly faster and less computationally intensive while keeping comparable performance to a CNN trained on CTA images in terms of signal-background separation and energy reconstruction of gamma rays. The structure of this article is as follows: In Section 2, the CTA dataset used in this analysis is described. Section 3 is devoted to our analysis methods including the pattern spectra algorithm, the ResNet architecture and the performance evaluation methods for our algorithms. The results are shown in Section 4 and discussed in detail in Section 5. Finally, we state our conclusions in Section 6. The source code of this project is publicly available at [49].

## 2 Dataset

The dataset consists of simulated gamma-ray and proton events detected by the southern CTA array (Prod5_DL1 (ctapipe v0.10.5 [51]), zenith angle of 20 \({}^{\circ}\), North pointing [52; 53]). Due to the hexagonal pixels integrated in the LST and MST cameras, which cannot be processed by the current version of the pattern spectra algorithm, only the 37 SSTs with rectangular pixels are considered in this analysis. The SST images containing the charge information, i.e. the integrated photodetector pulse, will be referred to as _CTA images_ in the following. CTA images generated by gamma rays with an energy between 500 GeV and 100 TeV and protons with an energy between 1.5 TeV and 100 TeV have been considered for this study to match the operating energy range of the SSTs. For the energy reconstruction, \(\sim 3\cdot 10^{6}\) gamma-ray events generated with a \(0.4\,^{\circ}\) offset from the telescope pointing position, referred to as _pointlike gamma rays_ in the following, are used. For the signal-background separation, \(\sim 2\cdot 10^{6}\) _diffuse gamma rays_ and \(\sim 2\cdot 10^{6}\) _diffuse protons_ are used, whereas the term _diffuse_ describes events generated in a view cone of \(10\,^{\circ}\). The pointlike and diffuse events are considered in the analysis to represent real observation conditions. When observing a source, background events reach the telescopes not only from the direction of the source but potentially from a much larger view cone. However, using pointlike gamma rays and diffuse proton events for signal-background separation would introduce a bias in the learning process of the CNN. Therefore, we consider diffuse events for the signal-background separation and pointlike events for the energy reconstruction task.
In particular for high energies, the dataset often includes single events that were captured by multiple SSTs. This results in several CTA images for a single event. Since the construction and training of a CNN that is able to handle a varying number of input images is very challenging, we constructed a single CTA image for each event as a first step towards the implementation of pattern spectra for the analysis of CTA images. In order to obtain a single CTA image per event, all CTA images of the same event are combined into a single image by adding up the individual pixel values of each image. We are aware that this is reducing the performance of the array, but we adopt this strategy to simplify our proof of concept work. However, we do not promote the idea of image stacking for CNN analyses with CTA data when trying to maximise the performance of the CNN.

## 3 Analysis

### Pattern spectra

The algorithm used to extract pattern spectra from the CTA images is based on the work presented in [44] and will be briefly summarised in the following. Let \(f\) be a grey-scale image with grey levels \(h\). Consider an image domain \(E\subseteq\mathbb{R}^{2}\) and let the set \(X\subseteq E\) denote a binary image with domain \(E\). The _grain_ of a binary image \(X\) is defined as a connected component of \(X\). The _peak components_ \(P_{h}^{k}(f)\) of an image \(f\) are defined as the \(k\)th grain of the threshold set \(T_{h}(f)\), which is defined as

\[T_{h}(f)=\{x\in E\,|\,f(x)\geq h\}. \tag{1}\]

For each image \(f\), a _Max-tree_ is computed according to the algorithm described in [55]. The Max-tree is composed of _nodes_ \(N_{h}^{k}(f)\), which consist of the subset of the peak components \(P_{h}^{k}(f)\). Figure 1 (a) shows an example of a 2D grey-scale image, (b) the corresponding peak components \(P_{h}^{k}(f)\) and (c) its Max-tree with nodes \(N_{h}^{k}(f)\).

Figure 1: Visual representation of the pattern spectra algorithm (adapted from [44; 50])

The pattern spectra are based on the size and shape attributes of the peak components \(P_{h}^{k}(f)\). The size attribute corresponds to the area \(A(P_{h}^{k}(f))\), which is computed as the number of pixels belonging to the detected feature. The shape attribute corresponds to \(I/A^{2}\), with the moment of inertia \(I\) describing the sum of squared distances to the centre of gravity of the feature. The size and shape attributes are binned into \(N=20\) size classes \(r\) and shape classes \(s\), which results in a good compromise between the performance of the pattern spectra and the computational power needed to train the ResNet. The 2D pattern spectrum is computed from the Max-tree as follows [44]:

1. Construct a 2D array \(\Phi[r,s]\) of size \(N\times N=20\times 20\).
2. Set all elements of \(\Phi[r,s]\) to zero.
3. For each node \(N_{h}^{k}(f)\) of the Max-tree, compute the size class \(r\) from the area \(A(P_{h}^{k}(f))\), the shape class \(s\) from \(I(P_{h}^{k}(f))/A(P_{h}^{k}(f))^{2}\) and the grey-level difference \(\delta_{h}\) between the current node and its parent.
4. Add the product of \(\delta_{h}\) and \(A(P_{h}^{k}(f))\) to \(\Phi[r,s]\).
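To make these four steps concrete, the following is a minimal brute-force sketch that computes an area/shape pattern spectrum for an integer-valued grey-scale image. Instead of the efficient Max-tree algorithm of [44], it thresholds the image at every grey level and labels connected components with SciPy, which accumulates the same \(\delta_{h}\cdot A\) contributions; the bin boundaries for \(A\) and \(I/A^{2}\) are illustrative choices, not the binning used in the paper.

```python
import numpy as np
from scipy import ndimage

def pattern_spectrum(img, n_bins=20, a_max=None, s_max=10.0):
    """Brute-force 2D (size, shape) pattern spectrum of an integer image.

    Each connected component of every threshold set T_h contributes its
    area A once per grey level it persists, which is equivalent to the
    Max-tree accumulation of delta_h * A described in the text.
    """
    img = np.asarray(img, dtype=int)
    a_max = a_max or img.size                 # upper limit for the area bins
    spec = np.zeros((n_bins, n_bins))
    for h in range(1, img.max() + 1):
        labels, n = ndimage.label(img >= h)   # grains of the threshold set T_h
        for k in range(1, n + 1):
            ys, xs = np.nonzero(labels == k)
            area = ys.size
            # Moment of inertia: squared distances to the centre of gravity.
            inertia = ((ys - ys.mean()) ** 2 + (xs - xs.mean()) ** 2).sum()
            shape = inertia / area ** 2
            # Logarithmic size bins and linear shape bins (a guess).
            r = min(int(np.log(area + 1) / np.log(a_max) * n_bins), n_bins - 1)
            s = min(int(shape / s_max * n_bins), n_bins - 1)
            spec[r, s] += area                # one grey level => delta_h = 1
    return spec
```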
An example of a pattern spectrum extracted from a CTA image is shown in Figure 2. The image in the top-left shows a CTA image of a \(1.9\,\mathrm{TeV}\) gamma-ray event that was captured by eight SSTs. The bright features in the centre of the image correspond to the Cherenkov emission induced by the particle shower. Due to the different locations of the SSTs, the Cherenkov light is captured with different intensities and at different positions on the SST cameras. The pattern spectrum generated from the CTA image is shown in the bottom-left. Each pattern spectrum pixel represents a set of detected features.

Figure 2: Top-left: CTA image of a \(1.9\,\mathrm{TeV}\) gamma-ray event captured by eight SSTs. Bottom-left: pattern spectrum extracted from the CTA image. Middle-top: CTA image with a set of detected features highlighted in red. Middle-bottom: pattern spectrum with the pixel corresponding to the detected features (small \(A\) and \(I/A^{2}\)) highlighted in red. Right-top: CTA image with a different set of detected (sub-)features highlighted in red (orange). Right-bottom: pattern spectrum with pixels corresponding to the detected features (intermediate \(A\) and \(I/A^{2}\)) highlighted in red.

An example of the detected features is shown in the middle of Figure 2. The image on top shows a set of detected features within the CTA image highlighted in red. The image at the bottom shows the pattern spectrum with the red pixel representing these features. This specific example shows features with a small \(A\) and small \(I/A^{2}\), referring to features with a small size and a circular-like shape. They correspond to individual pixels in the CTA image and represent mostly noise. Another example is shown in the top-right and bottom-right of Figure 2. Compared to the previous example, the red-marked pattern spectrum pixels correspond to larger \(A\) and \(I/A^{2}\) values. Thus, the highlighted objects (red/orange) in the CTA image correspond to features with a larger size and a more elliptical-like shape. The detected features in this example are of particular interest since they represent the Cherenkov photons induced by the particle shower, which contain information about the type and energy of the initial particle.

### Residual neural network architecture

For the signal-background separation and energy reconstruction of gamma-ray events, two individual but almost identical ResNet architectures are constructed and trained with either CTA images or pattern spectra. The architectures of our ResNets are identical to the ResNets presented in [54] and are based on the work presented in [30; 38; 56]. The ResNet is illustrated in Figure 3. Due to the rather shallow architecture compared to the ResNet presented in [38], we refer to our architectures as _thin residual neural networks_ (TRNs) in the following. They are constructed using _Tensorflow 2.3.1_ [57] and _Keras 2.4.3_ [58] and consist of 13 convolutional layers with _Rectified Linear Unit_ (ReLU) [39] activation function, a _global average pooling layer_ and two _fully connected (dense) layers_ with 64 and 32 neurons, respectively. The output layer consists of a single neuron for the energy reconstruction and of two neurons with _softmax_ [59] activation function for the signal-background separation. _Shortcut connections_ [38] at every third convolutional layer were implemented in order to improve the stability and performance of the algorithm. The solid arrows in Figure 3 represent linear shortcut connections, in which the input of a _building block_ \(x\) is added to the output of the last layer of the building block \(F(x)\). If the input and output of a building block have different dimensions, the input \(x\) is put into another convolutional layer with the same number of filters as the last layer of the building block. The output of this residual operation \(G(x)\) is added to the output of the last layer of the building block \(F(x)\). A filter size of \(1\times 1\) is used for all shortcut connections with a convolutional operation. In total, the two TRNs have about 150,000 trainable parameters.

Figure 3: Top: Architecture of the thin residual neural network (TRN) [54]. For each convolutional layer, the filter size and number of filters are specified. Bottom: Building block with a linear shortcut connection (left) and non-linear shortcut connection (right) (adapted from [38]).
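To make the structure concrete, a minimal Keras sketch of the TRN follows. The filter counts, kernel sizes and dense-layer activations are placeholders (the actual values are specified in Figure 3), so this illustrates the layout rather than reproducing the exact network.

```python
import tensorflow as tf
from tensorflow.keras import layers

def building_block(x, filters, kernel_size=3):
    """Three conv layers with a shortcut connection added after the third.
    If the channel counts differ, the input is projected with a 1x1 conv
    (the non-linear shortcut); otherwise the shortcut is linear."""
    f = x
    for _ in range(3):
        f = layers.Conv2D(filters, kernel_size, padding="same",
                          activation="relu")(f)
    if x.shape[-1] != filters:
        x = layers.Conv2D(filters, 1, padding="same")(x)  # 1x1 projection
    return layers.add([x, f])

def build_trn(input_shape, classification=True):
    inputs = tf.keras.Input(shape=input_shape)
    x = layers.Conv2D(16, 3, padding="same", activation="relu")(inputs)  # layer 1
    for filters in (16, 32, 32, 64):       # 4 blocks x 3 conv layers = 12 more
        x = building_block(x, filters)
    x = layers.GlobalAveragePooling2D()(x)
    x = layers.Dense(64, activation="relu")(x)
    x = layers.Dense(32, activation="relu")(x)
    if classification:  # signal-background separation: two softmax neurons
        outputs = layers.Dense(2, activation="softmax")(x)
    else:               # energy reconstruction: a single linear neuron
        outputs = layers.Dense(1)(x)
    return tf.keras.Model(inputs, outputs)
```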
### Experiments

The TRNs described in the previous section are trained and evaluated 10 times each on the datasets for both signal-background separation and energy reconstruction to perform a statistical analysis of the training process. Similar to the work presented in [30], a multiplicity cut of four or more triggered telescopes is applied for both the gamma-ray and proton events. The dataset is split into 90 % training data, from which 10 % is used as validation data, and 10 % test data. The weights of the TRN are initialized using the Glorot Uniform Initializer [60], and the training, validation and test data are randomized for each run. The _adaptive moment_ (ADAM) optimizer [61] with a learning rate of 0.001 and a batch size of 32 is used for the TRN training. The training is stopped if there is no improvement on the validation dataset for over 20 epochs, and the model with the lowest validation loss is saved. The _categorical cross entropy_ and _mean squared error_ [62] are applied as loss functions for the signal-background separation and energy reconstruction, respectively. The results shown in Section 4 are obtained by evaluating the performance of each TRN on the test data.

#### 3.3.1 Signal-background separation

Each event is labelled by its _gammaness_ \(\Gamma\), where \(\Gamma=1\) corresponds to a gamma-ray (photon) and \(\Gamma=0\) corresponds to a proton. The output of the TRN is a \(\Gamma\)-value between 0 and 1, which describes a pseudo-probability of the event being a photon according to the TRN. For a fixed \(\Gamma\)-threshold \(\alpha_{\Gamma}\), the _photon efficiency_ \(\eta_{\gamma}\) is defined as \(\eta_{\gamma}=TP/P\), where \(TP\) is the number of _true positives_, i.e. photon events with \(\Gamma\geq\alpha_{\Gamma}\) (correctly classified photons), and \(P\) is the total number of positives (photons) that pass the selection criteria described in Section 2. Similarly, the _proton efficiency_ \(\eta_{p}\) is defined as \(\eta_{p}=FP/N\), where \(FP\) is the number of _false positives_, i.e. proton events with \(\Gamma\geq\alpha_{\Gamma}\) (misclassified protons), and \(N\) is the total number of negatives (protons) that pass the selection criteria. A good classifier results in a high photon efficiency \(\eta_{\gamma}\) and a low proton efficiency \(\eta_{p}\) for a given \(\Gamma\)-threshold. In order to evaluate the performance of our TRNs, the efficiencies as a function of the \(\Gamma\)-threshold and the _effective area_ \(A_{\rm eff}\) as a function of the _true energy_ \(E_{\rm true}\) are calculated. The effective area is determined by \(A_{\rm eff}=\tilde{\eta}_{\gamma}\cdot A_{\rm geom}\), where \(A_{\rm geom}\) is the geometrical area of the instrument, i.e.
\(A_{\rm geom}=\pi r_{\rm max}^{2}\) with \(r_{\rm max}\) being the maximum simulated impact radius, and \(\tilde{\eta}_{\gamma}=TP/\tilde{P}\) with \(\tilde{P}\) being the total number of simulated photons, including the events that did not pass the selection criteria in Section 2. Similarly, we define \(\tilde{\eta}_{p}=FP/\tilde{N}\) with \(\tilde{N}\) being the total number of simulated protons. The energy range is split into seven logarithmic bins, where each event is assigned to an energy bin based on its true energy \(E_{\rm true}\). The effective area is then calculated for each energy bin by increasing the \(\Gamma\)-threshold until \(\tilde{\eta}_{p}=10^{-3}\) is reached and extracting the corresponding \(\tilde{\eta}_{\gamma}\). The value \(\tilde{\eta}_{p}=10^{-3}\) is motivated by the photon flux of the Crab Nebula being about three orders of magnitude lower than the isotropic flux of cosmic rays (CRs) within an angle of 1 deg around the direction of the source: \(\Phi_{\gamma}^{\rm Crab}\approx 10^{-3}\cdot\Phi_{\rm CR}\) [2].

Figure 4: Example of the gammaness distributions obtained from a single TRN trained with CTA images (left) and pattern spectra (right).

Figure 5: Mean photon efficiency \(\eta_{\gamma}\) and proton efficiency \(\eta_{p}\) as a function of the \(\Gamma\)-threshold \(\alpha_{\Gamma}\) obtained from 10 independent TRNs.

Furthermore, the _receiver operating characteristic_ (ROC) curve [63] is determined. The ROC curve describes the photon efficiency \(\eta_{\gamma}\) versus the proton efficiency \(\eta_{p}\). The _area under the ROC curve_ (AUC) is calculated and used as a measure of the performance of each TRN. For part of our calculations, we make use of _pyirf_ v0.7.0 [64], which is a python library for the generation of Instrument Response Functions (IRFs) and sensitivities for CTA. From the 10 TRNs, the mean efficiencies, effective area, ROC curve and AUC value are calculated for both the CTA images and pattern spectra-based analyses.

#### 3.3.2 Energy reconstruction

The gamma-ray events are labelled by their true energy \(E_{\rm true}\), which the TRN learns to predict based on the training input. The performance of the TRN on the test data is evaluated by comparing the reconstructed energy \(E_{\rm rec}\) of the TRN with the true energy \(E_{\rm true}\) of the initial gamma ray. To this end, the _relative energy error_ \(\Delta E/E_{\rm true}=(E_{\rm rec}-E_{\rm true})/E_{\rm true}\) is calculated for each event. The whole energy range between \(500\,\rm GeV\) and \(100\,\rm TeV\) is split into seven logarithmic bins and each event is assigned to an energy bin based on its true energy \(E_{\rm true}\). For each of these energy bins, the distribution of the relative energy error \(\Delta E/E_{\rm true}\) is determined and its median calculated. The median of \(\Delta E/E_{\rm true}\) is referred to as the _energy bias_ in the following. Small (large) absolute energy biases indicate high (low) accuracies. The distributions of the relative energy error \(\Delta E/E_{\rm true}\) are then bias-corrected by subtracting the median, i.e. \((\Delta E/E_{\rm true})_{\rm corr}=\Delta E/E_{\rm true}-{\rm median}(\Delta E/E_{\rm true})\). The _energy resolution_ is defined as the 68th percentile of the distribution of \(|(\Delta E/E_{\rm true})_{\rm corr}|\). From the 10 TRNs, the mean energy bias and energy resolution with their standard deviation are calculated for each energy bin for both the CTA images and pattern spectra-based analyses.
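A minimal Python sketch of these two quantities, assuming arrays of true and reconstructed energies in TeV (the function and variable names are ours, not those of the analysis code in [49]):

```python
import numpy as np

def energy_bias_resolution(e_true, e_rec, n_bins=7, e_min=0.5, e_max=100.0):
    """Per-bin energy bias (median of dE/E_true) and energy resolution
    (68th percentile of the bias-corrected |dE/E_true|), using seven
    logarithmic bins between 500 GeV and 100 TeV."""
    rel_err = (e_rec - e_true) / e_true
    edges = np.logspace(np.log10(e_min), np.log10(e_max), n_bins + 1)
    bias = np.full(n_bins, np.nan)
    resolution = np.full(n_bins, np.nan)
    which = np.digitize(e_true, edges) - 1   # bin index per event
    for i in range(n_bins):
        err = rel_err[which == i]
        if err.size == 0:
            continue
        bias[i] = np.median(err)                            # energy bias
        resolution[i] = np.percentile(np.abs(err - bias[i]), 68)
    return bias, resolution
```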
## 4 Results

### Signal-background separation

Two examples of the gammaness distributions obtained from a single TRN trained with the CTA images and pattern spectra are shown in Figure 4. Figure 4 (left) shows a distinct separation between photon and proton events for the TRN trained with CTA images. The majority of photon events are classified with \(\Gamma=1\) and the majority of proton events with \(\Gamma=0\). The number of proton (photon) events continuously decreases for larger (smaller) \(\Gamma\)-values, which indicates a good separation capability of the TRN. Figure 4 (right) shows the performance of the TRN trained with the pattern spectra, which results in a lower signal-background separation capability compared to the TRN trained with CTA images. Once again, the majority of photon events are classified with \(\Gamma=1\) and the majority of proton events with \(\Gamma=0\). However, the distributions decrease less rapidly compared to the CTA images-based analysis.

The mean photon efficiency \(\eta_{\gamma}\) and proton efficiency \(\eta_{p}\) as a function of the \(\Gamma\)-threshold \(\alpha_{\Gamma}\) are shown in Figure 5. The shaded regions in this figure and the upcoming ones depict the standard deviation across the 10 TRNs. Both the photon efficiency and proton efficiency decrease steadily for an increasing \(\alpha_{\Gamma}\)-value. Up to \(\alpha_{\Gamma}\sim 0.1\), the pattern spectra-based analysis results in a very similar photon efficiency but in a much higher proton efficiency in comparison to the CTA images-based analysis. The proton efficiency of the pattern spectra approaches a similar value compared to the CTA images at \(\alpha_{\Gamma}\sim 0.9\), at which, however, the CTA images outperform the pattern spectra in the photon efficiency. Therefore, the CTA images result overall in better photon and proton efficiencies independent of the \(\Gamma\)-threshold \(\alpha_{\Gamma}\).

Figure 6: Left: mean effective area \(A_{\rm eff}\) as a function of the true energy \(E_{\rm true}\) obtained from 10 independent TRNs. Right: mean ROC curve and mean AUC value obtained from 10 independent TRNs. The solid black line corresponds to a ROC curve expected from a random classifier. The performances stated here do not represent the expected performance by the CTA Observatory at the end of its construction phase.

Figure 6 (left) shows the mean effective area \(A_{\rm eff}\) as a function of the true energy \(E_{\rm true}\). The CTA images result in a higher effective area than the pattern spectra for all energies. The difference between the two analyses increases with increasing energy. The CTA images result in a maximum effective area of \(\sim 12.8\times 10^{5}\,\rm m^{2}\) at \(\sim 80\,\rm TeV\), whereas the pattern spectra result in a maximum effective area of \(\sim 7.0\times 10^{5}\,\rm m^{2}\) at \(\sim 80\,\rm TeV\), which corresponds to a factor of 1.8 between the two analyses. The mean ROC curve and corresponding AUC value are shown in Figure 6 (right). As expected from the gammaness distributions discussed above, the ROC curve obtained from the CTA images is significantly steeper than the ROC curve obtained from the pattern spectra. The mean AUC value of 0.987 for the CTA images is accordingly larger, by a factor of 1.06, than the value of 0.929 obtained from the pattern spectra. Therefore, the TRN trained with CTA images shows a higher signal-background separation capability than the pattern spectra-based analysis.
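For reference, the per-bin effective-area procedure described in Section 3.3.1 can be sketched as follows (illustrative Python; the names are ours):

```python
import numpy as np

def effective_area_per_bin(scores_gamma, scores_proton, n_sim_gamma,
                           n_sim_proton, r_max):
    """Raise the gammaness threshold until the proton efficiency with
    respect to all simulated protons drops to 1e-3, then return
    A_eff = eta_gamma_tilde * pi * r_max^2. Scores are the TRN gammaness
    values of the events passing the selection; n_sim_* count all
    simulated events, including those failing the selection."""
    for alpha in np.linspace(0.0, 1.0, 1001):
        eta_p = np.sum(scores_proton >= alpha) / n_sim_proton   # FP / N~
        if eta_p <= 1e-3:
            eta_g = np.sum(scores_gamma >= alpha) / n_sim_gamma  # TP / P~
            return eta_g * np.pi * r_max**2
    return 0.0  # threshold of 1 still leaves too many protons
```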
### Energy reconstruction

Figure 7 shows two examples of the energy migration matrices, i.e. the 2D histogram of \(E_{\rm rec}\) against \(E_{\rm true}\), obtained from a single TRN trained with the CTA images and pattern spectra. Most of the events are distributed around the \(E_{\rm rec}=E_{\rm true}\) line for both the CTA images and pattern spectra-based analysis. However, the distribution obtained from the pattern spectra is more spread out compared to the CTA images-based analysis.

Figure 7: Example of the energy migration matrix obtained from a single TRN trained with CTA images (left) and pattern spectra (right).

The mean energy accuracy obtained from 10 independent TRNs is shown in Figure 8 (left). The energy biases obtained from the CTA images-based analysis are closely distributed around 0, with the largest energy bias of \(\sim 5\,\%\) at the lowest energy bin. The energy biases obtained from the pattern spectra-based analysis reach up to \(\sim 20\,\%\), with the largest energy biases at the lowest and highest energy bin. The absolute value of the energy bias obtained from the pattern spectra-based analysis is larger than the values obtained from the CTA images for all energies.

Figure 8: Mean energy accuracy (left) and resolution (right) obtained from 10 independent TRNs. The dashed grey line represents the CTA energy resolution requirement for the southern CTA array [65]. The performances stated here do not represent the expected performance by the CTA Observatory at the end of its construction phase.

The mean energy resolution obtained from 10 independent TRNs is shown in Figure 8 (right). The CTA images-based analysis ranges from 0.08 to 0.12 with a minimum at \(\sim 7.5\,\mathrm{TeV}\). While we simplified our analysis by stacking CTA images for each event, the energy resolution still meets the CTA requirements [65] for all energy bins, except for the lowest energy bin. The pattern spectra result in an energy resolution between 0.22 and 0.25 with a minimum at the highest energy bin and do not meet the CTA requirements. Thus, the CTA images-based analysis outperforms the pattern spectra for all energies, with a maximum factor of 2.9 at \(\sim 7.5\,\mathrm{TeV}\) between the two curves.

## 5 Discussion

A comparison of the computational performance of the analyses is shown in Figure 9. The TRN training with pattern spectra is about a factor of 2.5 faster and requires a factor of 2.5 less RAM compared to the TRN training with CTA images. The pattern spectra are capable of detecting and classifying relevant features in the CTA images, which is illustrated by the gammaness distributions shown in Figure 4 (right) and the energy migration matrix shown in Figure 7 (right). However, the pattern spectra-based analysis is outperformed by the CTA images with respect to its signal-background separation and energy reconstruction capabilities. For a given \(\Gamma\)-threshold \(\alpha_{\Gamma}\), the pattern spectra result in a poorer photon and proton efficiency compared to the CTA images (see Figure 5), which is a main drawback of the analysis since both efficiencies are important quantities for the analysis of real gamma-ray data. Moreover, we infer from the effective area versus energy plot shown in Figure 6 (left) that the signal-background separation capabilities of the pattern spectra-based analysis are below the capabilities of the CTA images-based analysis independent of the energy of the initial particle.
The AUC value obtained from the CTA images is a factor of 1.06 larger than the pattern spectra AUC value and illustrates once again the overall lower signal-background separation capabilities of the pattern spectra-based analysis. The CTA images result in a better energy resolution and a lower energy bias for all energies compared to the pattern spectra. Although our choice of attributes, i.e. the size and shape attribute, is well-motivated, these two attributes do not seem to be sufficient to fully describe all relevant features within the CTA images. Potentially, the pattern spectra might not be able to detect, e.g., the electromagnetic substructure in proton showers. Other feature attributes, e.g. the _perimeter_, _sum of grey levels_ and _compactness_ (_perimeter_/\(A^{2}\)), were tested for both signal-background separation and energy reconstruction but did not result in a significantly better performance. Furthermore, we applied pattern spectra to other algorithms including classification and regression trees (CART) [66], Learning Vector Quantization (LVQ) and Generalized Matrix Learning Vector Quantization (GMLVQ) [67]. None of these algorithms achieved a better performance than the TRN. We, therefore, conclude that the TRN relies on features within the CTA images that are not detected by the pattern spectra algorithm. The performances stated in this work do not represent the expected performance by the CTA Observatory at the end of its construction phase.

## 6 Conclusions

For the first time, signal-background separation and energy reconstruction of gamma rays were performed under the application of pattern spectra. We have shown that the pattern spectra algorithm has the capability to detect and classify relevant features in IACT images. The detected features are capable of differentiating between gamma-ray and proton events and of reconstructing the energy of gamma-ray events.

Figure 9: Mean time (left) and RAM (right) required to train the TRN for signal-background separation and energy reconstruction obtained from 10 independent TRNs for each analysis. The training was performed on a _Nvidia A100 GPU_.

The training of the TRN with pattern spectra requires a factor of 2.5 less RAM and is about a factor of 2.5 faster than the TRN trained with CTA images, which agrees with our expectation due to the smaller size of the pattern spectra compared to the CTA images. The reduction in computational power was one of the main motivations to test the performance of pattern spectra on IACT data. However, the pattern spectra-based analysis is not competitive with the CTA images-based analysis in signal-background separation and energy reconstruction. The AUC value, which is a measure of the signal-background separation capability of an algorithm, obtained from the CTA images is a factor of 1.06 larger than the value obtained from the pattern spectra. The CTA images result in a better energy accuracy and energy resolution for all energies, with a maximum factor of 2.9 at \(\sim 7.5\,\mathrm{TeV}\) in energy resolution compared to the pattern spectra. We, therefore, conclude that the relevant features within the CTA images are not sufficiently detected or described by our choice of size and shape attributes. Other sets of attributes were tested but resulted in no major improvements. Thus, the TRN trained on CTA images must rely on additional features not captured by the pattern spectra. In other applications, especially when the input images are larger or vary in size, the results may be different.
## Acknowledgements

This work was conducted in the context of the CTA Consortium and CTA Observatory. We gratefully acknowledge financial support from the agencies and organizations listed in the consortium acknowledgements at [http://www.cta-observatory.org/consortium](http://www.cta-observatory.org/consortium). We would like to thank the Center for Information Technology of the University of Groningen for their support and for providing access to the Peregrine high-performance computing cluster.
2304.13033
SmartChoices: Augmenting Software with Learned Implementations
In many software systems, heuristics are used to make decisions - such as cache eviction, task scheduling, and information presentation - that have a significant impact on overall system behavior. While machine learning may outperform these heuristics, replacing existing heuristics in a production system safely and reliably can be prohibitively costly. We present SmartChoices, a novel approach that reduces the cost to deploy production-ready ML solutions for contextual bandits problems. SmartChoices' interface cleanly separates problem formulation from implementation details: engineers describe their use case by defining datatypes for the context, arms, and feedback that are passed to SmartChoices APIs, while SmartChoices manages encoding & logging data and training, evaluating & deploying policies. Our implementation codifies best practices, is efficient enough for use in low-level applications, and provides valuable production features off the shelf via a shared library. Overall, SmartChoices enables non-experts to rapidly deploy production-ready ML solutions by eliminating many sources of technical debt common to ML systems. Engineers have independently used SmartChoices to improve a wide range of software including caches, batch processing workloads, and UI layouts, resulting in better latency, throughput, and click-through rates.
Daniel Golovin, Gabor Bartok, Eric Chen, Emily Donahue, Tzu-Kuo Huang, Efi Kokiopoulou, Ruoyan Qin, Nikhil Sarda, Justin Sybrandt, Vincent Tjeng
2023-04-12T21:55:35Z
http://arxiv.org/abs/2304.13033v3
# SmartChoices: Augmenting Software with Learned Implementations

###### Abstract

We are living in a golden age of machine learning. Powerful models are being trained to perform many tasks far better than is possible using traditional software engineering approaches alone. However, developing and deploying those models in existing software systems remains difficult. In this paper we present SmartChoices, a novel approach to incorporating machine learning into mature software stacks easily, safely, and effectively. We explain the overall design philosophy and present case studies using SmartChoices within large scale industrial systems.

## 1 Introduction

Modern deep learning models power an increasing range of products and services, such as search, recommendation and discovery systems, and online advertising (among many others). However, training and deploying these models in production systems is fraught with new failure modes and opportunities to accrue distinct forms of technical debt (Sculley et al., 2015). Two major issues identified by those authors are that we cannot crisply define desired program behavior in cases where machine learning (ML) is necessary (which erodes abstraction boundaries), and that most practitioners lack sophisticated tooling to track data provenance and data dependencies the way we do with source and object code. In this paper, we re-envision the workflows to deploy machine learning in large scale systems, with an eye towards significantly reducing engineering effort and scope for errors. The result, _SmartChoices_, treats machine learning models as _learned implementations_ within an application, which benefit from existing tooling for managing software. Ultimately, we aspire to make improving systems with ML nearly as easy as using a typical software library, and treat trained decision policies as code. Hence SmartChoices' design represents a fertile middle ground between grand ambitions to pervasively replace traditional software with deep learning, and the hard-won lessons of veteran engineers on how to build and run reliable production systems - and particularly systems that are critically dependent on machine learning models. It also means we diverge considerably from the design philosophy of ML platforms such as TFX (Baylor et al., 2017), Kubeflow2, and others, which provide facilities to set up arbitrary ML pipelines that are not inherently tied to application behavior, are not tightly integrated into the client software, and require separate development workflows for pipeline management above and beyond standard software engineering workflows.

Footnote 2: [https://github.com/kubeflow/kubeflow](https://github.com/kubeflow/kubeflow)

## 2 Scope and Capabilities

We designed SmartChoices to address the following class of problems: A system is faced with a sequence of decisions, such that at time \(t\) it is provided a _context_ \(x_{t}\in\mathcal{X}\) as input, and a set of permissible outputs \(A_{t}\) (known as _arms_ in the bandit literature) which is a subset of the universe of arms \(\mathcal{A}\). It then must choose an arm \(a_{t}\in A_{t}\) and receives feedback \(y_{t}\in\mathbb{R}^{k}\) indicating the quality of the arm with respect to \(k\geq 1\) metrics. The goal is to provide an implementation \(\pi:\mathcal{X}\times 2^{\mathcal{A}}\rightarrow\mathcal{A}\) that optimizes the metrics, which we call a _policy_. Throughout, we will refer to metrics we wish to maximize as _rewards_ and those we wish to minimize as _costs_.
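To make the setup concrete, here is a minimal, hypothetical Python rendering of this decision loop (the production API is C++; see Fig. 1). All names are illustrative and not part of SmartChoices.

```python
from typing import Any, Callable, Sequence

def run_decision_loop(policy: Callable[[Any, Sequence[Any]], Any],
                      get_context: Callable[[], Any],
                      get_arms: Callable[[Any], Sequence[Any]],
                      act: Callable[[Any, Any], Sequence[float]],
                      steps: int):
    """At each step t: observe context x_t and permissible arm set A_t,
    pick arm a_t = policy(x_t, A_t), then record feedback y_t in R^k."""
    data = []
    for _ in range(steps):
        x_t = get_context()
        arms_t = get_arms(x_t)        # A_t, a subset of the arm universe
        a_t = policy(x_t, arms_t)
        y_t = act(x_t, a_t)           # k metrics (rewards and/or costs)
        data.append((x_t, a_t, y_t))  # logged for training the next policy
    return data
```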
In most cases, this implementation will be a parameterized function trained on available data \(\{(x_{t},a_{t},y_{t}):t\geq 1\}\). If \(k=1\), optimization consists simply of maximizing a reward or minimizing a cost. For \(k\geq 2\), we consider two types of optimization tasks: metric-constrained optimization (e.g., maximize reward without increasing cost), and efficiently discovering the Pareto frontier of tradeoffs and then targeting a point along it selected by the system owner. This problem class is broad and slightly underspecified. As such, we will illustrate the details in various special cases below and with case studies of real deployments in SS6.

### Contextual Bandits

The most basic formulation for contextual bandits involves a fixed small universe of arms \(\mathcal{A}\) (e.g., categories encoded as enum values) and a context domain \(\mathcal{X}=\mathbb{R}^{d}\) for some \(d\). SmartChoices supports several important modeling features beyond this basic formulation.

**Mixed-type inputs.** SmartChoices supports mixed-type inputs (including numeric, enumeration, and string inputs) via automatically-generated embedding layers in our critic-model based implementation. _Case Studies:_ _(i)_ deciding how much compute capacity (specifically, an integral thread pool size) to devote per task for a service with diverse task sizes (SS6.3) and _(ii)_ deciding whether to partition tasks for scheduling on multiple machines (SS6.4).

**Time-varying arm sets.** When \(A_{t}\subset\mathcal{A}\), we enforce that \(a_{t}\in A_{t}\) via selection masks in the policy. _Case Study:_ Selecting from a curated list of experiences (SS6.6).

**Arm features.** In many applications, the range of arms \(A_{t}\) is complex and may completely change over time (e.g., recommending today's top news stories). In such cases, it is critical to be able to generalize across arms. SmartChoices supports associating each arm with features (henceforth _arm-features_) to allow for generalization to unseen arms. For example, suppose there is an available embedding \(\Phi\) of objects (e.g., sentences or images) into \(\mathbb{R}^{d}\) for some \(d\), and a retrieval process for selecting a reasonably sized set of candidate objects \(C(x)\) for a context \(x\). Then SmartChoices may select arms using the resulting embedding as features, by effectively choosing among \(A_{t}=\{\Phi(c):c\in C(x_{t})\}\) and then returning the object associated with the selected embedding vector. _Case Studies:_ _(i)_ improving the efficiency of a large content delivery network via a self-adapting cache (SS6.1), _(ii)_ accelerating ML workloads via smarter compiler optimization (SS6.2).

### Ranking

Ranking involves ordering some provided items and returning a list of the best \(\ell\) of them. In this regard it involves a combinatorially large arm space (with \(n!/(n-\ell)!\) possible arms given \(n\) items). It also involves a different feedback setting than contextual bandits, with feedback for particular positions (and hence items) in the list. SmartChoices supports ranking via the use of a critic model \(m_{\theta}:\mathcal{X}\times\mathcal{A}\rightarrow\mathbb{R}\) predicting the reward for each item. Users provide feedback via a scalar score for each list entry (henceforth _score vector feedback_); this enables us to support the popular _cascade click feedback_ model, where we assume the user sequentially observes the items and interacts with the first one they find relevant.
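As an illustration of masked, critic-based selection and greedy ranking, consider the following Python sketch; `critic`, `choose_greedy`, and `rank_greedy` are hypothetical names rather than the SmartChoices API.

```python
import numpy as np

def choose_greedy(critic, context, arm_features, mask):
    """Score every candidate arm's feature vector with a critic model
    m_theta(x, a), mask out arms not in A_t, and pick the best arm.
    `critic` is any callable mapping (context, arm_features) -> scores."""
    scores = critic(context, arm_features)    # predicted reward per arm
    scores = np.where(mask, scores, -np.inf)  # enforce a_t in A_t
    return int(np.argmax(scores))

def rank_greedy(critic, context, arm_features, length):
    """Greedy ranking: indices of the `length` highest-scoring items,
    in descending order of predicted reward."""
    scores = critic(context, arm_features)
    return list(np.argsort(scores)[::-1][:length])
```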
In addition to a simple greedy approach that ranks items in descending order of predicted reward, SmartChoices allows for smart exploration via weighted sampling of items. Specifically, SmartChoices picks a list of \(\ell\) items by sampling \(\ell\) times from the Plackett-Luce distribution (Grover et al., 2019) seeded by the predicted reward. SmartChoices also supports diversity by penalizing items similar to the ones that have been already selected. _Case Studies:_ Ranking is used pervasively for recommendations. We discuss recommending actions for business owners to take to optimize their profiles (SS6.7). ### Multimetric optimization In most applications, there are several metrics of interest, with inevitable tradeoffs. #### 2.3.1 Metric constrained optimization A common response is to optimize with respect to one metric while constraining our decisions with respect to another. Examples include minimizing latency while maintaining a specified throughput target (latency vs. throughput), or maximizing the quality of results while keeping the average cost below a target (quality vs cost). SmartChoices enables metric constrained optimization to be done at the policy level. More specifically, we presume a distribution of decision problem instances \((X,A)\) and an associated (possibly stochastic) scalar reward and cost(s) - all of which may be initially unknown - and a known vector of budgets \(C\). SmartChoices can then search for implementations \[\begin{array}{rl}\pi^{*}:=&\operatorname*{arg\,max}_{\pi}\mathbb{E}\left[ \operatorname*{reward}(\pi(X,A))\right]\quad\text{s.t.}\\ &\mathbb{E}\left[\operatorname*{cost}(\pi(X,A))\right]\leq C\end{array}\] Without contexts (i.e., \(\mathcal{X}=\emptyset\)), this is known as Bayesian optimization with unknown constraints (Gelbart et al., 2014). With contexts, this problem is very closely related to contextual bandits with knapsack constraints, for which there are known results for the stochastic (Agrawal et al., 2016) and adversarial settings (Sun et al., 2017). In contrast to the prior work, we are interested in per-instance average budgets (e.g., serving an infinite stream of online queries at bounded average latency) rather than cumulative budgets that eventually run out (e.g., dynamically pricing a limited supply of goods for maximum revenue). An important consideration is that these constraints (specifically, the costs) must be learned. As such, the algorithm must be allowed to violate the constraints while learning. While there are ways to mitigate this (e.g., see SS4.2), SmartChoices is not designed or intended for circumstances where individual decisions are high-stakes (e.g., selecting medical treatments). _Case Study:_ We apply metric constrained optimization to choose when to update an ads cache, trading off engagement metrics with compute cost (SS6.5). #### 2.3.2 Pareto Frontier Search For applications with soft constraints, engineers may prefer a more exploratory approach than that of SS2.3.1. In particular, we offer the ability to identify the set of possible trade-offs via identifying the _Pareto frontier_, defined as follows. For a vector \(y\in\mathbb{R}^{k}\), call it _achievable_ if there is an implementation \(\pi\) with expected metrics \(y\) under the distribution of inputs and rewards. That is, \(\exists\pi\cdot\mathbb{E}\left[Y|\pi(X,A)\right]=y\). 
For maximization metrics (where higher values are desirable), the Pareto frontier is the set of achievable metric vectors \(y\in\mathbb{R}^{k}\) such that no other achievable metric vector \(y^{\prime}\) exceeds it in all metrics, i.e., satisfies \(y^{\prime}_{i}>y_{i}\) for all \(i\). To address this, SmartChoices supports _scalarizing_ predicted metrics, i.e., combining them in a single scalar reward via a known _scalarization_ function. Parameterized scalarization functions are supported via inference-time parameters, allowing the scalarization used to vary for each choice made, which enables efficient exploration of the Pareto frontier. Linear scalarizations (i.e., linear combinations of metrics) are simple and enable the discovery of Pareto frontiers when the achievable metrics form a convex set. For generally shaped Pareto frontiers, we use the hypervolume scalarizations (Zhang & Golovin, 2020), which can discover arbitrarily shaped frontiers. Using scalarizations decoupled from the metric predictions has several advantages. It allows us to largely reduce the multiobjective optimization to the single objective case. This in turn enables us to reason about multiobjective optimization using theorems and algorithmic ideas developed for the single objective case. It also allows us to simplify our infrastructure. Finally, engineers can easily and rapidly focus on particular parts of the Pareto frontier, e.g., by adaptively selecting sets of scalarizations to experiment with in production and progressively narrowing in on regions of interest. After investigating the Pareto frontier, engineers may elect to fix a tradeoff (i.e., scalarization parameters) corresponding to their preferred Pareto-optimal point and use it in all future policies. Alternatively, they may find it more natural to express desired system behavior in terms of metric constraints (SS2.3.1). These options may appear equivalent at first glance, but they respond differently to changing input and reward distributions in important ways. ## 3 Design Overview SmartChoices has two major deployment settings: service and in-process. In the service setting, a central service collects data, trains models, and periodically transmits new policies back to the client. In contrast, in the in-process setting, data is collected and models for the policy are trained on the client machine. Both deployment settings provide very low latency, safety and ease of use. We discuss key design decisions that support these properties, with more detail provided for the service setting. Note that the same simple SmartChoices API (shown in Fig. 1) is used in both settings. ### Service SmartChoices The overall infrastructure for service SmartChoices is shown in Fig. 2. _SmartChoices uses local policies, enabling client-side inference._ Model graphs and weights for policies are loaded from the service by the client application and a local model is instantiated by the XLA just-in-time compiler (Leary & Wang, 2017). Client code does not have to wait for compilation to complete before calling Choose: the SmartChoice object will safely fall back to the default arm (see SS4.2) until a policy is ready. However, it may block on policy readiness if preferred. Local policies have several advantages. First, they enable very low decision latency; for the applications in SS6, typical median latency is \(O(10)\)\(\mu s\) and can be as low as \(2\)\(\mu s\). Requests to the central service are reduced. 
Figure 1: A simplified example of the SmartChoices API in C++.

Figure 2: SmartChoices service infrastructure. Policies are trained in a central service and sent to client applications. Inference is local to the client and implemented using XLA.

Unit tests and production client code can use the same code paths without complex mock objects. Finally, inference is robust to network issues, since the main application thread performs inference without communication with the SmartChoices service. Using local policies does limit us to models that can fit in RAM on a single machine. However, in practice, this is sufficient to outperform existing heuristics for a wide variety of applications (see SS6).

_Communication with the service is asynchronous and handled by separate background threads._ Client applications need to communicate with the SmartChoices service to _(i)_ send logged data to train and evaluate new policies and _(ii)_ receive policy updates. This communication is handled by a background thread initialized when the SmartChoice object is instantiated. Logged data generated by Choose and GiveFeedback calls are buffered in a shared queue by the foreground thread, and then sent in batches by the background thread. Policy updates are obtained by polling the service. As with the use of local policies, asynchronous communication via background threads ensures that Choose and GiveFeedback calls are low latency and do not block on network issues.

_Training and evaluation for all clients uses the same service code._ Logged data is sent to the service in a standard log format and contains all information required to train _and_ evaluate new policies. These logs are thin wrappers around the Protocol Buffers defining the input, output and feedback types (see Fig. 3). For training, Choose logs include \(x_{t},A_{t},a_{t}\), and metadata about the policy \(\pi\), and GiveFeedback logs include the metric values \(y_{t}\) and the ID of the corresponding Choose call. For evaluation, Choose logs also include the default arm and arm selection policies. This enables code to be shared, reducing the risk of errors or behavioral differences due to divergent code.

_Common steps in the ML pipeline are automated._ Logged data is collected in a centralized database, and the service periodically collects summary statistics on the data. If new data is available, the service automatically begins creating new policies. This process involves feature normalization, training (either incremental or from scratch), model evaluation, computing constraints based on desired system-level behavior for metric-constrained optimization (SS6.5), and automated analysis and validation checks (SS5.3.1) prior to policy rollout. In addition, hyperparameter tuning is available on demand. By integrating with an industrial scale black box optimization platform, we perform hundreds of training trials with a single command, trying different architectural and training hyperparameter configurations in order to identify the parameters that minimize loss on the evaluation holdout dataset.

_Engineers customize SmartChoices via a single configuration file._ Almost all of the steps in our ML pipeline can be customized; for example, engineers can configure how features should be normalized or what validation checks should block a policy from being rolled out to client binaries.
Enabling engineers to customize all of this via a single configuration file not only makes using SmartChoices easier but increases the likelihood that unintended changes are caught in code review.

_Feedback is flexible._ The feedback_handle object required to GiveFeedback (see Fig. 1) can be restored from an ID, allowing feedback to be provided days later in a separate process. This is particularly useful for engineers who are only able to measure the quality of the decision after some delay. In addition, the Choose and GiveFeedback calls can be tied together by an ID provided by client code; this often significantly reduces the amount of infrastructure engineers need to add to use SmartChoices (e.g., storing a mapping from a SmartChoices ID to their ID).

### In-Process SmartChoices

Some client applications do not want to depend on a hosted logging and training service. Reasons include strict constraints around privacy or data sovereignty, a need to adapt rapidly to recent data, or a need for even lower latency inference (\(<1\;\mu s\)). SmartChoices supports such applications by either directly linking trained policies into the client binary or training locally (i.e., within the same binary making decisions) in background threads. No data leaves the client binary in either case. Local training enables very rapid adaptation to data: the default latency from when feedback is provided to when the data is trained on is \(200\,\mathrm{ms}\). This is configurable, with resultant trade-offs in CPU usage and speed of adaptation. Local training shares some key design decisions with Service SmartChoices (SS3.1). In particular, local training for different SmartChoices applications uses the same code paths, and engineers also customize local training via a single configuration file.

## 4 Safety

### Models as Code

To date, people express most computations in traditional software (as opposed to ML models). This has huge advantages for building, testing, maintaining, and modifying complex systems. However, some desired behaviors - say, recommending good items to an end-user - do not admit a concise description in code but must instead be learned from data. In practice, this nearly always means defining a parametric function class \(m_{\theta}\) and then searching for parameters \(\theta\in\Theta\) to optimize some objective(s). This _training_ process bears no resemblance to traditional software engineering via human brains and keystrokes, and is usually decoupled from the typical software release cycle. Still, models are code. In theory they can implement any function (Zhou, 2020), and in practice we see continuous improvement in capabilities as the field progresses. Over time, engineering best practices have tended to treat ML models more and more like code, e.g., tracking data and experiments and versioning models. SmartChoices builds on this by allowing engineers to rely on their existing integration testing, canary, release, and rollback processes. For example, engineers can choose to link trained models directly into application binaries; in this case, undoing a problematic policy rollout is as simple as rolling back the binary version. Alternatively, engineers can specify environment-dependent policy tags (SS5.3.3). This allows policy updates to be first deployed in a staging environment before being used for all production traffic.

### Specifying a Default Action

We require SmartChoices users to provide a _default action_ when calling into SmartChoices. This has several advantages.
First and foremost, it provides a safe fallback in case of any error. Even with rigorous software engineering practice, bugs will arise: via logical errors, numerical instabilities, or low level compiler bugs. Conducting model inference in-memory on CPU allows us to detect failures (e.g., malformed inputs, infinite predicted rewards, or errors in the XLA compiler) without overhead. If a failure is detected, we fall back to outputting the default action. Secondly, providing the default allows us to automatically set up a long-running "holdback" experiment, i.e., we choose the default uniformly at random some fraction of the time in order to A/B test our learned implementation against the default. Even without an explicit holdback, we can use the identity of the default action to estimate the metrics for a policy that always chooses the default action (henceforth the "default policy") via counterfactual policy evaluation or CPE (Bottou et al., 2013). Finally, having the default allows us to implement imitation learning against the default policy and regularize to it, penalizing deviations from it. This allows us to bootstrap from a baseline policy that achieves the current system performance.

### Fairness

As computational systems take on increasing influence in society, it has become increasingly important to understand the implications and, ideally, design systems that encourage healthy outcomes for businesses and society at large. This presents a vast research frontier (see e.g., Chouldechova and Roth (2020)) that is currently actively being explored, even at the foundational level of appropriate formal definitions of fairness3. Still, whatever best practices around ML fairness emerge over time, treating ML models as code and fundamentally tying models to the decisions they result in has the advantage of providing a centralized surface to design, implement, and monitor compliance with the desired behavioral constraints in production.

Footnote 3: Some definitions are incompatible, with known impossibility results revealing fundamental tradeoffs.

### Testing and Production Readiness

As noted, ML can create novel types of technical debt and production risks. By design, SmartChoices mitigates many of these. For example, Breck et al. (2017) suggest a rubric for scoring ML productionization readiness. As shown in Table 1, SmartChoices integrations automatically meet 19 of the 28 specified criteria, and manually meet 5 more (and could be extended to automatically meet them); the remainder are concerned with feature generation, which remains the responsibility of engineers using SmartChoices. As for the tests marked "manual" in the table, most have supporting analyses automatically performed and displayed on the SmartChoices frontend (SS5.3.2). Field sensitivity charts show how important each feature was to any specific SmartChoices model's performance, supporting the identification of non-beneficial features for later removal and suggesting whether simpler models may be better. The results of hyperparameter tuning on architectural parameters can surface whether simpler models perform better. Automatic CPE against logged data reveals the impact of model staleness. Another interesting case is _training / serving skew_, in which feature semantics change between training time and inference time. The SmartChoices workflow discourages a common source of skew, namely the use of different code paths in training and inference4.
Any skew introduced is due to code or configuration changes in the client system, which can and should be tracked and audited via standard production engineering principles. Ultimately, however, detecting semantic changes in features requires human oversight, and SmartChoices facilitates that by tracking feature distributions and surfacing them to engineers on the frontend.

Footnote 4: It takes specific extra development work to enable such skew.

## 5 Integrating With SmartChoices

### Problem and Type Specification

Users of SmartChoices begin by expressing their problem in terms of types. Using Protocol Buffers (Google, 2008), users define (input) _Context_, (output) _Arm_, and _Feedback_ types. (See examples in Fig. 3.) These types, similar to structs in C, encapsulate a collection of multiple datatypes. Each data element inside a Protocol Buffer is described as a field with a type and a name. Users indicate modeling requirements for a field - such as the size of a categorical feature, or whether a reward should be minimized or maximized - via field annotations. Protocol Buffers have several advantages: they support reflection and easily adding and removing fields, have cross-language support, support compressed serialization, and are widely used. These benefits enabled us to create generic components for converting Protocol Buffers into encoding tensors suitable for ML models and logging training data. This dramatically cuts down on "glue code" and "pipeline jungles." Additionally, because Protocol Buffer annotations allow SmartChoices users to configure parameters in-line with their field datatypes, using Protocol Buffers reduces a major source of "config debt."

### Instrumentation in User Code

To instrument their code, SmartChoices users begin by _(i)_ adding a build rule (Bazel, 2023) parameterized by their configuration file as a target dependency and _(ii)_ including the SmartChoices library in their code. As shown in Fig. 1, users then create a SmartChoices instance at the location in code where they want to use a learned implementation. Next, users call Choose with a context proto as well as a set of candidate arms and a default arm. Choose selects one of the arms based on the policy, and handles logging the context, arms, and choice for training. User code performs some action as a result of the choice, and measures rewards and penalties which are recorded in a feedback proto passed to the GiveFeedback method.

### Service SmartChoices: Additional Tools

Users of the SmartChoices service deployment have access to tools that eliminate the need for custom code to analyze, monitor, and manage machine learning pipelines.

#### 5.3.1 Analysis and Validation

An automated analysis is run using a held-out dataset for every newly-trained policy. Available analysis includes: _(i)_ the distribution of chosen arms; _(ii)_ per-metric distributions of critic model predictions; _(iii)_ per-metric estimates of feature importance; _(iv)_ per-metric CPEs comparing the newly-trained policy to three baselines (the default policy, a random policy, and the current "live" policy); _(v)_ (for problems with binary feedback) the area under the receiver operating characteristic curve (henceforth, "ROC AUC") and the precision-recall curve; _(vi)_ (for Pareto Frontier Search (SS2.3.2)) an estimate of the Pareto-optimal tradeoffs achievable by the trained policy.
A policy is _validated_ if analysis demonstrates that it has "good" behavior; users can configure what this means by modifying their configuration file. By default, a policy is validated if we have at least 95% confidence that it outperforms each of the three baselines, with uncertainty estimated via Poisson bootstraps (Chamandy et al., 2012) of the held-out dataset. Additional validation checks include conditions on just the newly-trained policy (e.g. a minimum ROC AUC) or conditions comparing the newly-trained policy to the current "live" policy (e.g. upper bounds on the statistical distance between the distributions of chosen arms or critic model predictions).

Table 1: Measuring SmartChoices against the ML Test Score Rubric of Breck et al. (2017). Items marked ** could be made automatic in a straightforward manner.

| **Data Tests** | |
| --- | --- |
| Feature expectations are captured in a schema. | Automatic |
| All features are beneficial. | Manual** |
| No feature's cost is too much. | User responsibility |
| Features adhere to meta-level requirements. | Yes (for select requirements) |
| The data pipeline has appropriate privacy controls. | Automatic |
| New features can be added quickly. | User responsibility |
| All input feature code is tested. | |
| **Model Tests** | |
| Model specs are reviewed and submitted. | Automatic |
| Offline and online metrics correlate. | Automatically measured & surfaced |
| All hyperparameters have been tuned. | Automatic |
| The impact of model staleness is known. | Manual** |
| A simpler model is not better. | Manual** |
| Model quality is sufficient on important data slices. | Automatic (if configured) |
| The model is tested for considerations of inclusion. | Manual** |
| **Infrastructure Tests** | |
| Training is reproducible. | Automatic |
| Model specs are unit tested. | Automatic |
| The ML pipeline is integration tested. | Automatic (with proper integration) |
| Model quality is validated before serving. | Automatic |
| The model is debuggable. | Yes |
| Models are canaried before serving. | Automatic (if configured) |
| Serving models can be rolled back. | Yes |
| **Monitoring Tests** | |
| Dependency changes result in notification. | User responsibility |
| Data invariants hold for inputs. | Yes (for selected invariants) |
| Training and serving are not skewed. | User responsibility (mitigated per SS4.4) |
| Models are not too stale. | Automatic (if configured) |
| Models are numerically stable. | Automatically guarded against |
| Computing performance has not regressed. | Manual** |
| Prediction quality has not regressed. | Automatic |

Figure 3: An example context, arm, and feedback Protocol Buffer defining an input, output and feedback type. Syntax simplified for brevity.

#### 5.3.2 Monitoring Behavior and Performance

An automatically generated web frontend shows: _(i)_ training progress; _(ii)_ the logged distribution of each context, arm, and feedback field over time; _(iii)_ the logged reward over time for the trained policy, a random policy, and the "default" policy; _(iv)_ the results of automated analysis (SS5.3.1) for any policy.
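Returning to the default validation check described in SS5.3.1, a simplified sketch of the Poisson-bootstrap comparison follows. It assumes per-example (counterfactual) reward estimates for the new policy and one baseline are already available and glosses over the details of the CPE estimators.

```python
import numpy as np

def validated(rewards_new, rewards_base, n_boot=1000, confidence=0.95, seed=0):
    """Return True if the new policy beats the baseline in at least
    `confidence` of Poisson-bootstrap replicas of the held-out dataset."""
    rng = np.random.default_rng(seed)
    diff = np.asarray(rewards_new) - np.asarray(rewards_base)
    wins = trials = 0
    for _ in range(n_boot):
        w = rng.poisson(1.0, size=diff.size)  # Poisson(1) resampling weights
        if w.sum() == 0:
            continue                          # degenerate replica, skip
        trials += 1
        if np.average(diff, weights=w) > 0:   # replica-mean reward difference
            wins += 1
    return trials > 0 and wins / trials >= confidence
```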
#### 5.3.3 Managing Policy Rollouts

Policy rollouts in SmartChoices are managed through policy tags. Each "tag" is a human-readable string referencing a single policy, with the referenced policy updated over time. Two tags are available by default. The "latest" tag always references the most recently trained policy, while the "live" tag references the most recent trained policy that was validated. Users typically use the "live" policy, but custom rollout (and rollback) strategies can be implemented via custom tags. Client binaries periodically poll the service for updated policies and tag references, and seamlessly switch over to the new policies. Policy tags also simplify exploring tradeoffs when conducting multimetric optimization (SS2.3). Tags are created corresponding to a range of behavior (e.g., ranging from "treat the first metric as \(5\) times as important" to "treat the second metric as \(5\) times as important"), and policies for each tag are generated using a holdback evaluation dataset. Users can directly reference these tags in code.

## 6 Case Studies

SmartChoices has been successfully applied in a diverse range of problem domains, ranging from low-level optimizations (e.g., SS6.1) to user-facing applications (e.g., SS6.6). We select a representative sample that motivates each of the capabilities introduced in SS2 and demonstrates how SmartChoices' design (SS3) enables engineers to successfully apply ML in their systems. Clients using existing capabilities (e.g., SS6.4) have integrated SmartChoices with \(O(days)\) of engineering time. Additional engineering work can be required to make features available at Choose time (see Fig. 1) or to measure the quality of the choice.

### Learned Cache Eviction

SmartChoices reduced the fraction of user-requested bytes missed in a Content Delivery Network (CDN) cache for a large-scale video service by improving its eviction policy. This reduced latency for end users, improved several quality of experience (QoE) metrics and lowered the cost of content distribution. The learned eviction policy first uses a heuristic to select \(4\) promising candidates, and then uses SmartChoices to run a \(2\)-stage tournament selection to pick the item to evict. Many heuristics are suitable, but in practice we found selecting the \(4\) least recently used (LRU) items works well. Feedback is binary: 1 if the next access time for the SmartChoices-selected entry was the latest among the candidates it was compared with, and 0 otherwise. Two SmartChoices features were key to a successful launch: its low latency (SS3) enabled timely _individual_ cache eviction decisions, and efficient in-process model training (SS3.2) eliminated the need to transfer training data between edge servers and data centers while imposing only minimal compute overhead. In addition, using SmartChoices on top of a decent heuristic (LRU) provided additional production safety, guaranteeing acceptable performance during initial training and later adaptation to new usage patterns. Using SmartChoices decreased the portion of user-requested bytes missed by 9.1% at peak traffic, representing a significant improvement over highly-tuned code.

### Optimizing Compilation

SmartChoices achieved performance gains in a compiler by enabling compiler parameters to be dynamically adjusted, e.g., in response to changes to other components. ML workloads consume enormous amounts of compute in large industrial settings.
### Optimizing Compilation

SmartChoices achieved performance gains in a compiler by enabling compiler parameters to be dynamically adjusted, e.g., in response to changes to other components. ML workloads consume enormous amounts of compute in large industrial settings. Specialized accelerators such as Tensor Processing Units (TPUs) are increasingly used for model training and inference. The XLA compiler (Leary & Wang, 2017) generates code that can run on these accelerators; optimizing the compiler can improve latency, throughput and cost.

_Tile size selection_ (Rivera & Tseng, 1999) is one of the most performance-critical optimizations in the XLA TPU compiler, affecting how tensors are moved between distinct levels of the memory hierarchy. The goal is to select an optimal tile size for a high-level operation (HLO) such that its input and output tensors fit in the scratchpad memory, while minimizing its execution time. Concretely, we must choose from a finite set of arms \(A_{t}\subset\mathbb{R}_{\geq 0}^{d}\) for operation \(t\), where \(d\) is an upper bound on the sum of tensor ranks of all inputs to an operation we wish to consider for optimization.

A naive approach is exhaustive search over all tile sizes. However, since each HLO has a large number of valid tile sizes, and the optimal tile size frequently changes due to active development of the XLA compiler, exhaustive search is intractable. Existing search-based techniques such as TVM (Chen et al., 2018) cannot be deployed in the XLA compiler since they assume that optimization decisions can be made independently from the rest of the graph and require optimizations to be applied at the same stage in the compilation flow (Phothilimthana et al., 2021). Instead, using arm features (§2.1), SmartChoices is used to filter out \(99\%\) of candidates, with an exhaustive search applied to the remaining \(1\%\).

We begin by pretraining a learned cost model (Phothilimthana et al., 2019) that predicts the TPU runtime of an HLO using an historic dataset of actual runtimes. To keep up with changes to the compiler, we fine-tune the model periodically, freezing the graph-embedding network (Fig. 4) and retraining only the feed-forward head. Continuous training (§3.1) for the feed-forward head allows us to stay up to date while avoiding the full cost of end-to-end training. Searching over the candidates selected by SmartChoices achieves \(90-95\%\) of the speedup achieved by full exhaustive search, scales to optimize all relevant HLOs (vs. \(5\%\) coverage for the prior exhaustive search), and is about \(29\) times faster overall.

Figure 4: Kernel embedding consumed as input by SmartChoices.

### Optimizing Thread Counts

SmartChoices reduced tail latency for end-user queries on a flight booking search service by optimizing thread count. When processing an end-user query, the service first determines all relevant sequences of flights, or "itineraries". The service then retrieves fares for all subsequences of each itinerary. As itineraries can largely be processed independently, this work can be parallelized across multiple threads. A fixed thread count of four resulted in reasonable system-wide performance. However, experimental data demonstrated that complex end-user queries benefited significantly from more threads. For each query, SmartChoices dynamically selected the thread count based on context including the number of flight sub-sequences and the source and destination regions. We then measured latency and CPU usage for that query, providing both as feedback. The resulting contextual bandit policy selects the thread count that minimizes a weighted linear combination of both measures.
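As an illustration, the two feedback signals can be folded into a single scalar reward; the weights below are illustrative assumptions, not production values.

```python
def thread_count_reward(latency_s, cpu_s, latency_weight=1.0, cpu_weight=0.2):
    """Feedback for a thread-count decision: the policy should minimize a
    weighted linear combination of query latency and CPU usage, so we
    return the negated cost as the reward to be maximized."""
    return -(latency_weight * latency_s + cpu_weight * cpu_s)
```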
SmartChoices' support for mixed-type context (§2.1) via automatic conversion of Protocol Buffers into encoding tensors (§5.1) enabled rapid experimentation with different context features. At launch, average latency was reduced by 25% and P99 latency by 16%, without a significant increase in CPU cost.

### Optimized Work Partitioning

SmartChoices improved data availability and freshness for a service that monitors machine learning workloads via dynamic work rebalancing (Kirpichov & Denielou, 2016) between monitoring tasks. Each monitoring task summarizes telemetry (e.g., RAM and accelerator usage) for a list of workloads. Tasks that are too large will take a long time to complete, resulting in stale or missing data. In contrast, tasks that are too small incur unnecessary overhead, and can result in outages if the _monitoring_ workload exceeds its allocated capacity. When a monitoring task begins running, it can either shard (scheduling two smaller tasks covering the relevant workloads) or execute. SmartChoices optimizes this decision. The reward for shard is always zero, while the reward for execute is the difference \(t_{d}-t_{e}\) between the target maximum data staleness \(t_{d}\) and the actual execution time \(t_{e}\). This approach encourages sharding only tasks for which the expected execution time is greater than the deadline \(t_{d}\). The reward formulation allows feedback to be provided to the model immediately for continuous re-training.

Context for each decision includes information about the data source being queried, the number of entities contained in the shard, and the time and day of the week. As in §6.3, SmartChoices' support for mixed-type context enabled rapid experimentation with context features. In addition, automated training and validation (§3.1) enables policies adapting to changes in query distributions (e.g., from a sudden increase in long-running queries) to be deployed daily. Using SmartChoices significantly reduced alerts on missing or stale data, translating directly into time savings for the team responsible for maintaining the service. In addition, 53% fewer tasks hit the execution deadline.

### Optimizing Refresh Rates

SmartChoices optimized resource usage on ad refreshes for an ads service supporting a widely used product. Ads viewed by users are stored locally within an app on their mobile devices, and these ads are periodically refreshed via the process shown in Fig. 5 so that users can view new ads. The computational resources used to run the ad auction to fulfil the refresh request are considered to be wasted if the refreshed ad is not viewed. The Ads team used SmartChoices to throttle those ad requests from the app that are less likely to result in an ad view. SmartChoices uses constrained optimization (§2.3.1) to do so. Feedback for training is binary: 1 if the ad is viewed and 0 otherwise. The SmartChoices policy involves a critic model predicting the probability that refreshing the ads will result in an ad view (pView); requests with pView below a threshold \(q\) are throttled. \(q\) controls the tradeoff between resource usage and the number of ad views (and downstream metrics such as clicks). Increasing \(q\) saves more resources at the potential cost of a reduction in views. In practice, we found that different tradeoffs are appropriate for different device types; as such, we use a single critic model to predict pView but specify \(q_{0},q_{1},\ldots\) per device type.
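The throttling rule and the quantile-based threshold selection described next can be sketched as follows; the function names and data layout are our own illustration, not the SmartChoices API.

```python
import numpy as np

def thresholds_from_holdout(pviews_by_device, throttle_frac):
    """Choose per-device thresholds q_0, q_1, ... so that roughly
    `throttle_frac` of traffic falls below them, using the distribution
    of the critic's pView predictions on a holdout dataset."""
    return {dev: float(np.quantile(p, throttle_frac))
            for dev, p in pviews_by_device.items()}

def should_throttle(pview, device_type, thresholds):
    # Throttle the refresh request if an ad view is unlikely.
    return pview < thresholds[device_type]
```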
Instead of selecting a fixed threshold, the Ads team wanted to ensure that SmartChoices throttled approximately the same _fraction_ of traffic over time. We eliminated the need for manual tuning when training a new model by automatically computing the appropriate pView threshold based on the _distribution_ of that model's predictions on a holdout dataset. To guarantee that policy updates do not drastically impact downstream systems, we compare throttling decisions of the "live" policy with new policies during validation (§5.3.1). New policies are validated for use only if the change in throttling decisions is minimal for all device types.

SmartChoices was deployed in three stages. Overall, ad requests to ad servers were reduced by \(5.8\%\) while views increased by \(6.3\%\).

1. Phase 1 reduced ad requests to ad servers by \(12\%\), with no change in views and downstream metrics.
2. Phase 2 removed a heuristic filter on inactive users. This increased views by \(1.4\%\) with only a \(4\%\) increase in ad requests.
3. Phase 3 doubled the frequency of ad requests from the client app while adjusting the thresholds on pView. This increased views by \(4.8\%\) with only a \(2.2\%\) increase in ad requests.

Figure 5: Use of SmartChoices in the ads service. The app continues to send ad requests as before. Throttling requests in the ad server allows us to skip subsequent steps, including the computationally-costly auction.

### Optimizing User Experience

SmartChoices improved user-engagement metrics across a variety of User Experience (UX) optimization applications by identifying the best of a set of human-designed candidate options. Example applications include:

* Selecting the best notification string to inform a user that their storage space is running low or has been exhausted.
* Selecting the best notification string to inform a user about new personalized curated content for them.
* Selecting the best string during mobile device onboarding, to guide the user to enable features of interest.

For these applications, using SmartChoices yielded a \(2\%\) to \(10\%\) improvement in user-engagement metrics. Prior to SmartChoices, the typical solution for such UX optimization problems was A/B testing. This approach has several disadvantages:

* **Poor scalability**: Setting up and running each A/B experiment is a manual process, typically requiring significant engineering effort. As such, the number of such live experiments teams run is usually much smaller than the number of potential applications they want to optimize.
* **Limited personalization and contextualization**: The candidate that performs best overall may not be the best for a specific sub-population of users or in specific scenarios (for example, if the same user is using a different device). Optimizing for each contextual feature potentially requires a separate A/B experiment.
* **Non-adaptive**: After conducting one A/B experiment, the teams usually do not know whether the best-performing candidate has changed unless they conduct another A/B experiment.

SmartChoices addresses all of these issues. Continuous model re-training, validation and deployment (§3.1) enables SmartChoices to adapt to external changes automatically. With contextual bandits at its core (§2.1), SmartChoices takes advantage of contextual and/or user features to optimize for specific users. Simple APIs (see Fig. 1) abstract away cumbersome machine learning workflows, enabling SmartChoices to be easily integrated with multiple applications.
Indeed, these advantages over traditional A/B testing were the primary reason the teams adopted SmartChoices. Although initial adoption usually entails some upfront effort, teams are usually able to easily expand SmartChoices to related applications after the initial integration.

### Ranking Recommendation Cards

SmartChoices increased the rate at which business owners completed tasks by optimizing the order in which recommendation cards are displayed. Each card prompts business owners (here, "users") to complete a different action (e.g., upload phone numbers, respond to customer reviews). Our goal is to show the most relevant cards to users. Ordering affects visibility; only the first three items are visible without scrolling. Prior to using SmartChoices, cards were simply ordered by their global click-through rates (CTR). SmartChoices was deployed in two phases:

1. Phase 1 used click-based feedback, with a reward of 1 when end-users click on a card and 0 otherwise. We used this feedback since we expected optimizing CTR _contextually_ to improve overall CTRs. Other important metrics also saw significant improvements: \(\mathbf{8.7}\%\) more users interacted with cards, and the count of 28-day active users increased \(\mathbf{27}\%\).
2. Phase 2 used task completion feedback, with a reward only when end-users completed the task for a card. This problem framing allowed us to focus on the most important tasks by appropriately scaling rewards. For example, \(\mathbf{0.8}\%\) more users updated their business profile, while \(\mathbf{2.2}\%\) more users visited a summary page for their profile. This represented a significant increase on a large user base. While CTR decreased, we observed no change to the overall task completion rate, indicating a reduction in low-task-completion-intent clicks.

Once again, automated training and validation (§3.1) enables policies to adapt to seasonal changes automatically; for example, the "Holiday Hour Edits" card is automatically ranked higher before local holidays.

## 7 Related Work

Meta's Looper, an end-to-end ML platform for product decisions, shares several goals with SmartChoices; unlike SmartChoices, however, it requires users to implement fallback logic in case of failure, in contrast to our default action. Finally, Looper's median inference latency is reported at \(2\,\mathrm{ms}\), and feature extraction latency at \(45\,\mathrm{ms}\) (about \(3\) orders of magnitude slower than SmartChoices), precluding its use in many systems applications.

Footnote 1: [https://www.tensorflow.org/](https://www.tensorflow.org/)

Microsoft's Decision Service (Agarwal et al., 2016) provides a service for contextual bandits, and shares many design goals with SmartChoices. As a cloud service, it has hosted logging, training, and inference; however, it also supports loading models into the client for faster local decision making. SmartChoices has considerably lower decision latency, making it suitable for additional low-level system optimizations (as low as \(O(1)\)\(\mu s\) vs. a reported average latency of \(0.2\,\mathrm{ms}\) in (Agarwal et al., 2016)), as well as capabilities beyond standard contextual bandits (cf. §2).

Natarajan et al. (2020) investigate _programming by rewards_ (PBR), whereby system performance can be used to aid the programming process: either by filling in values from a user-provided template (i.e., programs with missing constants), or by generating programs within a limited class representable by fixed-depth decision trees.
Like SmartChoices, PBR searches for a reward-maximizing implementation; however, it is restricted to learning functions of type \(\mathbb{R}^{m}\rightarrow\mathbb{R}^{n}\), owing to the type of training used. It has a different deployment model in which learned implementations are translated directly into source code that is checked in. This has advantages in terms of interpretability, speed, and avoiding additional dependencies. However, only a restricted class of implementations is considered, and there are no affordances for adapting implementations to changing environments over time.

Two classes of problems similar to contextual bandits are Bayesian optimization (BO) and reinforcement learning (RL). BO focuses on the low-data regime where gathering feedback is expensive; in contrast, we focus on settings with more abundant data. In the RL setting, the arm selected at time \(t\) affects the next context \(x_{t+1}\). Since a large class of practical optimizations can be framed as a contextual bandit problem, we have not yet needed to support RL.

Contextual bandit models have been used across industry to solve a wide range of problems. For example, contextual bandits can replace the typical A/B testing process for deciding on UI changes. Instead of manually comparing the relative performance of two fixed UIs, contextual bandits can learn to personalize UI elements to maximize each user's experience. Prominent examples include deciding the relative position of news stories (Li et al., 2010), selecting thumbnail artwork for video content (Amat et al., 2018), and ordering products on a "carousel" (Ermis et al., 2020). Industrial applications also use contextual bandit models in the backend to solve problems in dynamic or ambiguous environments. These use cases include disambiguating ambiguous verbal requests of smart speakers (Moerchen et al., 2020), personalizing the recommendations of products (Sawant et al., 2018), and determining user "intent" when interacting with support chat bots (Sajeev et al., 2021). The wide range of applications speaks to the huge potential impact of an approach like SmartChoices that accelerates improving systems with ML.

## 8 Acknowledgements

This work benefited from the wisdom of many colleagues engaged in the development of machine learning in mature production systems. We especially wish to thank Jeff Dean, Sanjay Ghemawat, Jay Yagnik, Andrew Bunner, George Baggott, Jesse Berent, Ben Solnik, Alex Grubb, Weikang Zhou, Eugene Kirpichov, Arkady Epshteyn, Ketan Mandke, Wei Huang, and Eugene Brevdo for thoughtful input into the design and goals of the SmartChoices project. Lastly, we would like to thank our many colleagues who integrated SmartChoices into their projects and championed this effort within their respective teams.
2306.05028
Condorcet Markets
The paper studies information markets concerning single events from an epistemic social choice perspective. Within the classical Condorcet error model for collective binary decisions, we establish equivalence results between elections and markets, showing that the alternative that would be selected by weighted majority voting (under specific weighting schemes) corresponds to the alternative with highest price in the equilibrium of the market (under specific assumptions on the market type). This makes it possible in principle to implement specific weighted majority elections, which are known to have superior truth-tracking performance, by means of information markets without needing to elicit voters' competences.
Stéphane Airiau, Nicholas Kees Dupuis, Davide Grossi
2023-06-08T08:29:00Z
http://arxiv.org/abs/2306.05028v3
# Condorcet Markets

###### Abstract

The paper studies information markets about single events from an epistemic social choice perspective. Within the classical Condorcet error model for collective binary decisions, we establish equivalence results between elections and markets, showing that the alternative that would be selected by weighted majority voting (under specific weighting schemes) corresponds to the alternative with highest price in the equilibrium of the market (under specific assumptions on the market type). This makes it possible to implement specific weighted majority elections, which are known to have superior truth-tracking performance, through information markets and, crucially, without needing to elicit voters' competences.

## 1 Introduction

Information markets (also known as prediction markets) [1, 13, 5, 14] are markets of all-or-nothing contracts (so-called Arrow securities) that pay one unit of currency if a designated event occurs and nothing otherwise. Under the view, inspired by [12], that markets are good aggregators of the information dispersed among traders, proponents of information markets have argued that equilibrium prices are accurate estimates of the probability of the designated event. Much research, both theoretical and empirical, has probed this interpretation of prices in information markets, finding that equilibrium prices successfully track the traders' average belief about the event, under several models of traders' utilities [16, 18].

In this paper we address a closely related, but different question: _if we are to take a decision based on the information we extract from the equilibrium price, how accurate would such a decision be?_ In other words, rather than relating equilibrium prices to belief aggregation, we relate them directly to the quality of the decision they would support. We frame the above question within the standard binary choice framework of epistemic social choice, stemming from the Condorcet jury theorem tradition [6, 11, 19] and the maximum-likelihood estimation approach to voting [11, 7, 17, 10].

Contribution. To the best of our knowledge, the above is a novel perspective on information markets. In particular, counter to the common assumption that traders' beliefs are subjective, we study information markets when traders' beliefs are obtained by Bayesian update from a private independent signal with known (to the trader) accuracy, just like in the classic jury theorems setting. In other words, we study 'jurors' as if they were 'traders' who, instead of relaying their vote to a central mechanism, trade in an information market. In taking this perspective, we ask the above question by comparing the decisions that would be taken based on the equilibrium price of an information market with the decisions that would be taken by specific weighted majority elections, whose truth-tracking behavior is already well-understood [11]. Specifically, we aim at identifying correspondences between classes of markets and of weighted majority elections which are equivalent from a decision-making point of view. That is, they are such that the weighted majority winner when agents vote according to the event they believe more likely always coincides with the event whose price is highest in equilibrium when agents trade in Arrow securities according to their beliefs. Figure 1 depicts this relationship via a commutative diagram.

Figure 1: Elections and information markets commute.
Such results open up the possibility of implementing weighted majority voting with proven truth-tracking performance without needing to know jurors' competences, which may be hard to truthfully elicit or estimate [3].

Paper outline. Section 2 introduces the standard binary truth-tracking framework and presents our model of information markets. Section 3 presents results on equilibrium prices in two of the three types of markets we consider (Naive and Kelly markets) and Section 4 proves 'Figure 1-type' results for those markets. Section 5 then shows how such results can be lifted even to the case of majority voting where jurors are weighted perfectly according to their competence. Section 6 outlines future research directions. Two examples illustrating our framework and analysis are provided in Appendix A. All proofs not directly included in the main text are available in Appendices B and C.

## 2 Preliminaries

### 2.1 Collective truth-tracking

We are concerned with a finite set of agents \(N=\{1,\ldots,n\}\) who have to decide collectively on the correct state of the world \(x\in\{A,B\}\). A prior probability \(P(x=A)=\pi=0.5\) that the correct state is \(A\) is given. Each agent \(i\) observes a private independent signal \(y_{i}\in\{A,B\}\) that has quality \(q_{i}\in(0.5,1)\), that is, \(q_{i}=P(y_{i}=A\mid x=A)=P(y_{i}=B\mid x=B)\). Each \(q_{i}\) represents the competence or accuracy of \(i\). We call each vector \(\mathbf{q}=(q_{1},\ldots,q_{n})\) of individual accuracies an _accuracy_ or _competence profile_ of the group. Having observed her private signal, each agent then forms a posterior belief \(b_{i}=P(x=A\mid y_{i})\) about state \(x=A\) by Bayes rule. Observe that, by the conditions we impose on competences and prior, if \(y_{i}=A\) then, by Bayes rule, \(b_{i}=q_{i}>0.5\), and \(b_{i}=1-q_{i}<0.5\) otherwise. This gives us, for all \(i\in N\):

\[b_{i}=\mathbb{1}\left(y_{i}=A\right)\cdot(2q_{i}-1)+(1-q_{i}) \tag{1}\]

where \(\mathbb{1}\) denotes the indicator function. Individual beliefs are then collected in a _belief profile_ \(\mathbf{b}=(b_{1},\ldots,b_{n})\in[0,1]^{n}\). Given an accuracy profile \(\mathbf{q}\), the set of possible belief profiles is denoted \(\mathcal{B}_{\mathbf{q}}=\{\mathbf{b}\in[0,1]^{n}\mid P(\mathbf{b}\mid\mathbf{q})>0\}\). Observe that the size of this set equals \(2^{n}\): the number of all signal realizations.

Based on a profile \(\mathbf{b}\) of individual beliefs, the group then takes a decision by mapping the profile to \(A\) or to \(B\). In this process of aggregation, agents may have different weights. These weights are collected in a _weight profile_ \(\mathbf{w}=(w_{1},\ldots,w_{n})\in[0,1]^{n}\). We refer to \(\mathbf{1}=(1,\ldots,1)\) as the _egalitarian weight profile_ in which all agents have equal weight. So, assuming a given weight profile \(\mathbf{w}\), we call _aggregator_ any function

\[\mathcal{A}^{\mathbf{w}}:[0,1]^{n}\to 2^{\{1,0\}}\backslash\emptyset \tag{2}\]

mapping belief profiles to alternatives, where \(\{1\}\) denotes \(A\); \(\{0\}\) denotes \(B\); and \(\{1,0\}\) denotes a tie.

Types of aggregators. We will study two classes of mechanisms to implement aggregators. In the first class, agents cast binary ballots based on their beliefs and these ballots are submitted to a voting mechanism. The winning alternative is the outcome of the aggregation process. In the second class, agents trade in special types of securities, based on their beliefs.
The equilibrium price of this securities market is then used as a proxy for the group's belief in the probability of state \(A\). In this case, the alternative favored by this collective belief is the outcome of the aggregation process.

Let us make the above notions more precise. First of all, a belief \(b\in[0,1]\) is translated into binary opinions, or _votes_, for \(A\) or \(B\) via the following binarization function:

\[\widehat{b}=\begin{cases}\{1\}&\text{ if }b>0.5\\ \{0\}&\text{ if }b<0.5\\ \{0,1\}&\text{ otherwise}\end{cases} \tag{3}\]

That is, agents are assumed to vote in accordance with their posterior belief (this is sometimes referred to as sincere voting [2]). A binarized belief profile \(\widehat{\mathbf{b}}=(\widehat{b}_{1},\ldots,\widehat{b}_{n})\) is therefore a binary vector, and we will also refer to such vectors as _voting profiles_ and denote them by \(\mathbf{v}=(v_{1},\ldots,v_{n})\).1

Footnote 1: As individual beliefs cannot equal \(0.5\), the binarization function always outputs a singleton \(\{0\}\) or \(\{1\}\) on individual beliefs. This will not be the case, however, for collective beliefs, which may be undecided (\(\{0,1\}\)).

Given a weight profile \(\mathbf{w}\), a (belief) merger is a function \(F^{\mathbf{w}}:[0,1]^{n}\to[0,1]\) taking as input a belief profile and outputting a group belief. A choice function is a function \(f^{\mathbf{w}}:\{1,0\}^{n}\to 2^{\{0,1\}}\backslash\emptyset\) taking as input a voting profile and outputting a possibly tied choice between \(1\), i.e., \(A\), and \(0\), i.e., \(B\). We will study aggregators of the type \(f^{\mathbf{w}}\circ\widehat{\phantom{\circ}}\) (voting) and \(\widehat{\phantom{\circ}}\circ F^{\mathbf{w}}\) (trading). A voting mechanism is a choice function \(f^{\mathbf{w}}\) which, applied to a binarized belief profile \(\widehat{\mathbf{b}}\), yields a collective choice \(f^{\mathbf{w}}(\widehat{\mathbf{b}})\) (under the weight profile \(\mathbf{w}\)). A market mechanism is a belief aggregation function \(F^{\mathbf{w}}\) that, once applied to a belief profile \(\mathbf{b}\), yields a collective belief \(F^{\mathbf{w}}(\mathbf{b})\) whose binarization \(\widehat{F^{\mathbf{w}}(\mathbf{b})}\) yields a collective choice (under the weight profile \(\mathbf{w}\)).

Group accuracy. We will study aggregators from a truth-tracking perspective. The accuracy \(Q(\mathcal{A}^{\mathbf{w}},\mathbf{q})\) of an aggregator \(\mathcal{A}^{\mathbf{w}}\) under the accuracy profile \(\mathbf{q}\) is the conditional probability that the outcome of the aggregator is \(x\) if the state of the world is \(x\). The above describes an epistemic social choice setting where the group is confronted with a maximum-likelihood estimation task in a dichotomous choice situation (see [10]).

### 2.2 Voting and market mechanisms

We turn now to the description of the mechanisms we are concerned with.

#### 2.2.1 Voting mechanisms

After observing their private signal, agents decide whether to vote for \(A\) or \(B\) according to Equation (3).
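For illustration, the belief-formation and sincere-voting pipeline of Equations (1) and (3) can be sketched as follows; this is our own rendering of the model, not code from the paper.

```python
import random

def posterior_belief(signal, q):
    """Equation (1): with a uniform prior, the posterior belief in A is
    q after observing signal A, and 1 - q after observing signal B."""
    return q if signal == "A" else 1.0 - q

def sincere_vote(belief):
    """Equation (3): vote for the alternative believed more likely.
    Individual beliefs never equal 0.5 here, so no individual ties."""
    return 1 if belief > 0.5 else 0

def draw_signal(true_state, q, rng=random):
    """A private signal that matches the true state with probability q."""
    if rng.random() < q:
        return true_state
    return "B" if true_state == "A" else "A"
```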
A weighted majority rule is then applied to these votes to determine the group's choice:

\[M^{\mathbf{w}}(\mathbf{v})=\begin{cases}\{1\}&\text{ if }\sum_{i\in N}w_{i}v_{i}>\frac{1}{2}\sum_{i\in N}w_{i}\\ \{0\}&\text{ if }\sum_{i\in N}w_{i}v_{i}<\frac{1}{2}\sum_{i\in N}w_{i}\\ \{0,1\}&\text{ otherwise}\end{cases} \tag{4}\]

We will be working in particular with three variants of Equation (4) defined by three different weight profiles: the egalitarian weight profile \(\mathbf{1}\); the weight profile allocating to each agent \(i\) a weight proportional to \(q_{i}-0.5\); and the weight profile allocating to each agent \(i\) a weight proportional to \(\log\frac{q_{i}}{1-q_{i}}\). The rule induced by the egalitarian weight profile is the _simple majority_ rule. We will see that the second weight profile simulates decision-making according to the average belief. The third weight profile can be inferred from Bayes theorem and induces the weighted majority rule which we refer to as _perfect majority_, and which has been proven to optimize the truth-tracking ability of the group:

**Theorem 1** ([11]).: _For any accuracy profile \(\mathbf{q}\), \(Q(M^{\mathbf{w}},\mathbf{q})\) is maximal if \(\mathbf{w}\) is such that \(w_{i}\propto\ln\left(\frac{q_{i}}{1-q_{i}}\right)\) for all \(i\in N\)._

#### 2.2.2 Markets

The market model we use is borrowed from [14, 5]. Two symmetric Arrow securities are traded: securities of type \(A\), which cost \(p^{A}\in[0,1]\) and pay 1 unit of currency if \(x=A\), and 0 otherwise; and securities of type \(B\), which cost \(p^{B}\in[0,1]\) and pay 1 unit if \(x=B\) and 0 otherwise. After observing their private signal, agents decide what fraction of their endowment to invest in which securities. We assume that all agents have the _same endowment_, consisting of 1 unit of currency. We also assume that agents _invest in at most one_ of these securities, so if \(s^{A}>0\) then \(s^{B}=0\) and vice versa. We call agents investing in \(A\) _\(A\)-traders_ and agents investing in \(B\) _\(B\)-traders_. In our setting, this assumption is without loss of generality (see Proposition 1 in the appendix2). When the true state of the world is revealed, the market resolves and payouts based on the agents' investments are distributed. We refer to tuples \(\mathbf{s}^{A}=\left(s^{A}_{1},\ldots,s^{A}_{n}\right)\) (respectively, \(\mathbf{s}^{B}=\left(s^{B}_{1},\ldots,s^{B}_{n}\right)\)) as investment profiles in \(A\)-securities (respectively, \(B\)-securities). We refer to a pair \(\mathbf{s}=\left(\mathbf{s}^{A},\mathbf{s}^{B}\right)\) as an _investment profile_. We proceed now to define the notions of price, utility and equilibrium.

Footnote 2: We are indebted to Marcus Pivato for bringing this issue to our attention.

Market mechanism. We assume that when the market opens all purchasing orders for each security are executed by the market operator, who therefore sells all requested securities to agents when the market opens and pays the winning securities out immediately when the market resolves, that is, when it is determined whether \(A\) or \(B\) is the case. We further assume that the operator makes no profits and incurs no losses. So, for every \(A\)-security sold at price \(p^{A}\) a \(B\)-security is sold at price \(p^{B}=1-p^{A}\) and vice versa. In other words, the price of the risk-less asset consisting of one of each security is \(p^{A}+p^{B}=1\).
Under the above assumptions, the market clears3 when the total amount of individual wealth invested in \(A\)-securities, divided by the price of \(A\)-securities (demand of \(A\)-securities), matches the amount of individual wealth invested in \(B\)-securities, divided by the price of \(B\)-securities (demand of \(B\)-securities), that is:4

Footnote 3: A market is normally said to clear when supply and demand match. In our model, supply and demand are implicit in the following way: purchasing one \(A\)-security at price \(p^{A}\) (i.e., reducing one's endowment to \(1-p^{A}\)) is equivalent to selling one \(B\)-security at price \(p^{B}=1-p^{A}\).

Footnote 4: It may be worth observing that by the above design we are effectively treating the operator as an extra trader in the market, who holds a risk-less asset consisting of \(\frac{1}{p^{A}}\sum_{i\in N}s^{A}_{i}\) \(A\)-securities and \(\frac{1}{1-p^{A}}\sum_{i\in N}s^{B}_{i}\) \(B\)-securities. We are indebted to Marcus Pivato for this observation.

\[\frac{1}{p^{A}}\sum_{i\in N}s^{A}_{i}=\frac{1}{1-p^{A}}\sum_{i\in N}s^{B}_{i}. \tag{5}\]

It follows that, given an investment profile \(\mathbf{s}\), solving Equation (5) for \(p^{A}\) yields the clearing price \(\frac{\sum_{i\in N}s^{A}_{i}}{\sum_{i\in N}s^{A}_{i}+\sum_{i\in N}s^{B}_{i}}\), to which we refer as \(p(\mathbf{s})\). Note that the demands in Equation (5) are undefined if either \(p^{A}=0\) or \(p^{A}=1\). We come back to this issue in Remark 2. When the market resolves, each agent receives a different payout depending on how much of each security she owns, how the market resolves, and how much of her endowment is not invested. The payout, that is, the amount of wealth obtained by an agent with a given strategy \(s_{i}^{A}\) investing in \(A\) under a price \(p^{A}\), is defined as follows (note that it includes the uninvested part \(1-s_{i}^{A}\) of the endowment):

\[z(p^{A},s_{i}^{A})=\left\{\begin{array}{ll}\frac{s_{i}^{A}}{p^{A}}+1-s_{i}^{A}&\quad\text{$A$ is correct}\\ 1-s_{i}^{A}&\quad\text{otherwise}\end{array}\right. \tag{6}\]

The payout for an investment in \(B\)-securities is defined in the same manner.

**Remark 1**.: _In what follows, to simplify notation, we will refer to the price of \(A\)-securities as \(p\) instead of \(p^{A}\) and to the price of \(B\)-securities as \(1-p\) instead of \(p^{B}\)._

Utility. We study price \(p\) by making assumptions on how much utility agents extract from their payout at that price. We consider two types of utility functions:

**Naive**: Given a price \(p\in[0,1]\), the naive utility function of an \(A\)-trader \(i\) is \(u(p,s_{i}^{A})=z(p,s_{i}^{A})\). Similarly, for a \(B\)-trader, it is \(u(1-p,s_{i}^{B})=z(1-p,s_{i}^{B})\). The expected utility for investment in \(A\)-securities is then:

\[U_{i}^{A}(p,s_{i}^{A})=\mathbb{E}[u(p,s_{i}^{A})]=b_{i}\left(\frac{s_{i}^{A}}{p}-s_{i}^{A}+1\right)+(1-b_{i})(1-s_{i}^{A}). \tag{7}\]

The expected utility for investment in \(B\)-securities is, correspondingly, \(b_{i}(1-s_{i}^{B})+(1-b_{i})\left(\frac{s_{i}^{B}}{1-p}-s_{i}^{B}+1\right)\). We will refer to markets under a naive utility assumption as _Naive markets_.

**Kelly**: Given a price \(p\in[0,1]\), the Kelly [13] utility function of an \(A\)-trader \(i\) is \(u(p,s_{i}^{A})=\ln(z(p,s_{i}^{A}))\), and mutatis mutandis for \(B\)-traders. The expected Kelly utility for an \(A\)-trader is therefore:

\[U_{i}^{A}(p,s_{i}^{A})=\mathbb{E}[u(p,s_{i}^{A})]=b_{i}\ln\left(s_{i}^{A}\frac{1-p}{p}+1\right)+(1-b_{i})\ln(1-s_{i}^{A}). \tag{8}\]
Correspondingly, the expected utility of investment \(s_{i}^{B}\) for a \(B\)-trader is \(b_{i}\ln(1-s_{i}^{B})+(1-b_{i})\ln\left(s_{i}^{B}\frac{p}{1-p}+1\right)\). We will refer to markets under such logarithmic utility assumption as _Kelly markets_. Investing with a logarithmic utility function is known as Kelly betting and is known to maximize the bettor's wealth growth over time [13]. Information market traders with Kelly utilities have been studied, for instance, in [5].

Equilibria. For each of the above models of utility we will work with the notion of equilibrium known as competitive equilibrium [16]. This equilibrium assumes that agents optimize the choice of their investment strategy \(s_{i}\) under the balancing assumption of Equation (5), while not considering the effect of their choice on the price (they behave as 'price takers').

**Definition 1** (Competitive equilibrium).: _Given a belief profile \(\mathbf{b}\), an investment profile \(\mathbf{s}\) is in competitive equilibrium with respect to price \(p\) if and only if:_

1. _Equation (_5_) holds, that is,_ \(p=p(\mathbf{s})\)_;_
2. _for all_ \(i\in N\)_, if_ \(i\) _is a_ \(t\)_-trader in_ \(\mathbf{s}\) _then_ \(s_{i}^{t}\in\arg\max_{x\in[0,1]}U_{i}^{t}(p^{t},x)\)_, for_ \(t\in\{A,B\}\)_._

So, when the investment profile \(\mathbf{s}\) is in equilibrium with respect to price \(p\), no agent would like to purchase more securities of any type when the price of \(A\)-securities is \(p\). If \(\mathbf{s}\) is in equilibrium with respect to \(p(\mathbf{s})\) we say that \(\mathbf{s}\) is an equilibrium. If equilibria always exist and are all such with respect to one unique price, then such price can be interpreted as the market's belief that the state of the world is \(A\), given the agents' underlying beliefs \(\mathbf{b}\). We can therefore view a market as a belief merger \(F^{\mathbf{w}}:[0,1]^{n}\rightarrow[0,1]\) mapping belief profiles to the equilibrium price.

**Remark 2** (Null price).: _Under Equation (5) a price \(p=0\) (respectively, \(p=1\)) implies that there are no \(A\)-traders (respectively, no \(B\)-traders). In such cases Equations (6), (7) (Naive utility) and (8) (Kelly utility) would be formally undefined. Such situations, however, cannot occur in equilibrium because as \(p\) approaches \(0\) (respectively, \(1\)), the utility for \(s_{i}^{A}>0\) (respectively, \(s_{i}^{B}>0\)) approaches \(\infty\) under both utility models. No investment profile can therefore be in equilibrium with respect to prices \(p=0\) or \(p=1\)._

## 3 Equilibrium price in Naive and Kelly markets

In order to see markets as belief aggregators we need to show that the above market types always admit equilibria and, ideally, that equilibrium prices are unique, thereby making the aggregator resolute. The present section is concerned with these issues.

### Equilibrium \(p\) in Naive markets is the \((1-p)\)-quantile belief

Let us start by observing that under naive utility agents maximize their utility by investing all their wealth, unless their belief equals the price, in which case any level of investment yields the same utility to them.
**Lemma 1**.: _For any \(\mathbf{q}\) and \(\mathbf{b}\in\mathcal{B}_{\mathbf{q}}\), and \(p\in[0,1]\) we have that, for any \(i\in N\):_

\[\operatorname*{arg\,max}_{x\in[0,1]}U_{i}^{A}(p,x)=\begin{cases}\{1\}&\text{if }p<b_{i}\\ \{0\}&\text{if }p>b_{i}\\ [0,1]&\text{if }p=b_{i}\end{cases}\ ;\ \ \operatorname*{arg\,max}_{x\in[0,1]}U_{i}^{B}(p,x)=\begin{cases}\{1\}&\text{if }(1-p)<(1-b_{i})\\ \{0\}&\text{if }(1-p)>(1-b_{i})\\ [0,1]&\text{if }p=b_{i}\end{cases}\]

Proof.: We reason for \(A\); the argument for \(B\) is symmetric. Observe first of all that Equation (7) can be rewritten as \(U_{i}^{A}(p,s_{i}^{A})=\frac{b_{i}}{p}(s_{i}^{A}(1-p)+p)+(1-b_{i})(1-s_{i}^{A}).\) So, the utility for strategy \(s_{i}^{A}=1\) is \(\frac{b_{i}}{p}\) and for \(s_{i}^{A}=0\) is \(1\). If \(\frac{b_{i}}{p}>1\), \(U_{i}^{A}(p,s_{i}^{A})\in[1,\frac{b_{i}}{p}]\) and so \(s_{i}^{A}=1\) maximizes Equation (7). By our assumptions, we therefore also have \(s_{i}^{B}=0\). If \(\frac{b_{i}}{p}<1\), instead, \(U_{i}^{A}(p,s_{i}^{A})\in[\frac{b_{i}}{p},1]\) and \(s_{i}^{A}=0\) maximizes Equation (7). The agent then takes the opposite side of the bet and maximizes \(U_{i}^{B}(p,s_{i}^{B})\) by setting \(s_{i}^{B}=1\). Finally, if \(\frac{b_{i}}{p}=1\), all investment strategies yield a utility of \(1\).

So, if \(\mathbf{s}\) is in competitive equilibrium with respect to price \(p(\mathbf{s})\) in a Naive market, then for each agent \(i\): \(s_{i}^{A}=1\) if \(b_{i}>p(\mathbf{s})\), \(s_{i}^{A}=0\) if \(b_{i}<p(\mathbf{s})\), and \(s_{i}^{A}\in[0,1]\) if \(b_{i}=p(\mathbf{s})\), and correspondingly for \(s_{i}^{B}\). Let us denote by \(\mathit{NC}(\mathbf{b})\) the set of investment profiles \(\mathbf{s}\) in competitive equilibrium (under naive utilities). We show now that such equilibria always exist and are unique.

**Lemma 2**.: _For any \(\mathbf{q}\) and \(\mathbf{b}\in\mathcal{B}_{\mathbf{q}}\), \(|\mathit{NC}(\mathbf{b})|\geq 1\)._

Proof.: We prove the claim by construction via Algorithm 1, showing that the algorithm outputs an investment profile which is in competitive equilibrium. The algorithm consists of two routines: lines 1-7 and lines 8-17. We first show that the conditions of the loops of the two routines are such that an output is always obtained. The two routines compare entries in two vectors: the \(n\)-long vector of beliefs \((b_{1},\ldots,b_{n})\), assumed to be ordered by decreasing values (thus, stronger beliefs first); and the \((n+1)\)-long vector \((0,\frac{1}{n},\frac{2}{n},\ldots,\frac{n}{n})\), ordered therefore by increasing values. The two vectors define two functions from \(\{0,\ldots,n\}\) to \([0,1]\) (we postulate \(b_{0}=1\)). Because the first function is non-increasing, and the second one is increasing and its image contains both \(0\) and \(1\), there exists \(i\in\{0,\ldots,n\}\) such that the two segments \([b_{i+1},b_{i}]\) and \([\frac{i}{n},\frac{i+1}{n}]\) have a non-empty intersection. There are four ways in which the two segments can overlap, giving rise to two exhaustive cases: \(\frac{i}{n}\) lies in \([b_{i+1},b_{i}]\), in which case the condition of the first routine applies; or \(b_{i+1}\) lies in \([\frac{i}{n},\frac{i+1}{n}]\), in which case the condition of the second routine applies.
```
input : A belief profile b = (b_1, ..., b_n) ordered from highest to lowest beliefs
output: An investment profile s = (s^A, s^B)

1   s^A ← (0, ..., 0)                 /* we start by assuming no agent invests in A */
2   for 1 ≤ i < n do
3       if b_i ≥ i/n ≥ b_{i+1} then
4           s^A ← (1, ..., 1, 0, ..., 0) with i ones;  s^B ← (0, ..., 0, 1, ..., 1) with i zeros
5           return (s^A, s^B) and exit
6       end if
7   end for
8   for 1 ≤ i ≤ n do
9       if (i-1)/n < b_i < i/n then
10          x ← solve (1/b_i)((i-1) + x) = (1/(1-b_i))(n-i)    /* partial A investment */
11          if x ≥ 0 then
12              s_i^A ← x
13              s^A ← (1, ..., 1, s_i^A, 0, ..., 0) with i-1 ones;  s^B ← (0, ..., 0, 1, ..., 1) with i zeros
14              return (s^A, s^B) and exit
15          end if
16      end if
17  end for
```
**Algorithm 1**: Competitive equilibria in Naive markets

It remains to be shown that the outputs of the two routines are equilibria. The output of the first routine is an investment profile \(\mathbf{s}=(\mathbf{s}^{A},\mathbf{s}^{B})\) where \(i\) agents fully invest in \(A\) and \(n-i\) agents fully invest in \(B\), yielding a price \(p(\mathbf{s})=\frac{i}{n}\in[b_{i+1},b_{i}]\). By Lemma 1 such a profile is an equilibrium. The output of the second routine is an investment profile \(\mathbf{s}\) where \(i-1\) agents fully invest in \(A\), \(n-i\) agents fully invest in \(B\), and agent \(i\), whose belief equals the price, invests partially in either \(A\) or \(B\) in order for the market to clear (Equation (5)). Again by Lemma 1, we conclude that the profile is in equilibrium with respect to \(b_{i}\).

Observe that the price constructed by Algorithm 1 is either equal to the belief of the \(i\)-th agent (ordered from stronger to weaker beliefs) or falls in the interval between the beliefs of the \(i\)-th and the \((i+1)\)-th agents.

**Lemma 3**.: _For any \(\mathbf{q}\) and \(\mathbf{b}\in\mathcal{B}_{\mathbf{q}}\), \(|\mathit{NC}(\mathbf{b})|\leq 1\)._

Proof.: Assume towards a contradiction that there exist \(\mathbf{s}\neq\mathbf{t}\in\mathit{NC}(\mathbf{b})\). It follows that \(p(\mathbf{s})\neq p(\mathbf{t})\). Assume w.l.o.g. that \(p(\mathbf{s})<p(\mathbf{t})\). By Equation (5) and the definition of competitive equilibrium, it follows that \(\sum_{i\in N}s_{i}^{A}\leq\sum_{i\in N}t_{i}^{A}\) (larger \(A\)-investment in \(\mathbf{t}\)). By Lemma 1 it follows that there are more agents \(i\) such that \(b_{i}>p(\mathbf{t})\) than agents \(i\) such that \(b_{i}>p(\mathbf{s})\), and therefore that \(p(\mathbf{t})<p(\mathbf{s})\). Contradiction.

We can thus conclude that the equilibrium price in Naive markets is unique.

**Theorem 2**.: _For any \(\mathbf{q}\) and \(\mathbf{b}\in\mathcal{B}_{\mathbf{q}}\), \(\mathit{NC}(\mathbf{b})\) is a singleton._

We will refer to such equilibrium profile as \(\mathbf{s}_{NC}(\mathbf{b})\) and to its equilibrium price as \(p_{NC}(\mathbf{b})\). An interesting consequence of the above results is that such equilibrium price behaves like a quantile of \(\mathbf{b}\), splitting the belief profile into segments roughly proportional to the price.
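For concreteness, here is a runnable rendering of Algorithm 1. It is our own sketch and assumes, as in the algorithm's input, that beliefs are sorted in decreasing order.

```python
def naive_equilibrium(b):
    """Competitive equilibrium of the Naive market (Algorithm 1).
    `b` is a list of beliefs sorted in decreasing order; returns (s_A, s_B)."""
    n = len(b)
    # First routine: i agents fully in A, n-i fully in B, price i/n.
    for i in range(1, n):
        if b[i - 1] >= i / n >= b[i]:
            s_A = [1.0] * i + [0.0] * (n - i)
            s_B = [0.0] * i + [1.0] * (n - i)
            return s_A, s_B
    # Second routine: agent i's belief equals the price; solve Eq. (5)
    # for her partial investment x: (1/b_i)((i-1) + x) = (1/(1-b_i))(n-i).
    for i in range(1, n + 1):
        bi = b[i - 1]
        if (i - 1) / n < bi < i / n:
            x = bi * (n - i) / (1.0 - bi) - (i - 1)
            if x >= 0:
                s_A = [1.0] * (i - 1) + [x] + [0.0] * (n - i)
                s_B = [0.0] * i + [1.0] * (n - i)
                return s_A, s_B
    raise ValueError("no equilibrium found (should not happen, per Lemma 2)")

# The equilibrium price is then sum(s_A) / (sum(s_A) + sum(s_B)), as in Eq. (5).
```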
**Corollary 1**.: _Fix \(\mathbf{q}\). For any belief profile \(\mathbf{b}\in\mathcal{B}_{\mathbf{q}}\) there are \(n\cdot p_{NC}(\mathbf{b})\) agents \(i\) such that \(b_{i}\geq p_{NC}(\mathbf{b})\) and there are \(n\cdot(1-p_{NC}(\mathbf{b}))\) agents \(i\) such that \(b_{i}\leq p_{NC}(\mathbf{b})\)._

The equilibrium price \(p_{NC}(\mathbf{b})\) corresponds to the \((1-p_{NC}(\mathbf{b}))\)-quantile of \(\mathbf{b}\).5

Footnote 5: A similar observation, but for a continuum of players (\(N=[0,1]\)) and for subjective beliefs, is reported in [15].

### The average belief is the equilibrium price in Kelly markets

The two following lemmas are known results, which we restate here for completeness.

**Lemma 4** ([13]).: _For any \(b_{i}\in[0,1]\) and \(p\in[0,1]\):_

\[\operatorname*{arg\,max}_{x\in[0,1]}U_{i}^{A}(p,x)=\begin{cases}\frac{b_{i}-p}{1-p}&\text{if }p<b_{i}\\ 0&\text{otherwise}\end{cases};\;\;\operatorname*{arg\,max}_{x\in[0,1]}U_{i}^{B}(p,x)=\begin{cases}\frac{p-b_{i}}{p}&\text{if }(1-p)<(1-b_{i})\\ 0&\text{otherwise}\end{cases}\]

So, a strategy profile \(\mathbf{s}\) is in Kelly competitive equilibrium with respect to price \(p(\mathbf{s})\) whenever Equation (5) is satisfied together with the 'Kelly conditions' of Lemma 4. Unlike in the case of Naive markets, it is easy to see that such equilibrium is unique. So, for a given belief profile \(\mathbf{b}\), let us denote by \(\mathbf{s}_{KC}(\mathbf{b})\) such competitive equilibrium and by \(p_{KC}(\mathbf{b})\) the price at such equilibrium. We then have:

**Lemma 5** ([5]).: _For any \(\mathbf{q}\) and \(\mathbf{b}\in\mathcal{B}_{\mathbf{q}}\), \(p_{KC}(\mathbf{b})=\frac{1}{|N|}\sum_{i\in N}b_{i}\)._

## 4 Truth-Tracking via Equilibrium Prices

In this section we show how competitive equilibria in Naive and Kelly markets correspond to election by simple majority and, respectively, by a majority in which agents carry weight proportional to their competence minus \(0.5\).

### Simple majority and Naive markets

Simple majority is implemented in competitive equilibrium by a Naive market:

**Theorem 3**.: _For any \(\mathbf{q}\) and \(\mathbf{b}\in\mathcal{B}_{\mathbf{q}}\): \(M^{\mathbf{1}}(\widehat{\mathbf{b}})=\widehat{p_{NC}(\mathbf{b})}.\)_

Proof.: The claim follows from the observation that, by Corollary 1, \(p_{NC}(\mathbf{b})>0.5\) if and only if there exists a majority of traders whose beliefs are higher than the price, from which we conclude that \(\widehat{\mathbf{b}}\) determines a voting profile with a majority of votes for \(A\).

Put otherwise, the theorem tells us that the outcome of simple majority always coincides with the security in which the \((1-p)\)-quantile belief (where \(p\) is the equilibrium price) would invest in equilibrium when the market is naive. So we can treat \(\mathit{NC}\) as a belief aggregator \([0,1]^{n}\rightarrow[0,1]\) mapping belief profiles to prices induced by competitive equilibria. In other words, for any belief profile \(\mathbf{b}\) induced by independent individual competences in \((0.5,1]\), the diagram on the right commutes.

**Remark 3**.: _It is worth observing that, by Theorem 3, known extensions of the Condorcet Jury Theorem with heterogeneous competences [11] directly apply to Naive markets in competitive equilibrium.
In particular, with \(N\rightarrow\infty\), the probability that \(p_{NC}(\mathbf{b})\) is correct approaches \(1\) for any \(\mathbf{b}\) induced by a given competence profile._

### Weighted majority and Kelly markets

A weighted majority rule with weights for each \(i\) proportional to \(q_{i}-0.5\) is implemented in competitive equilibrium by Kelly markets. Intuitively, such markets implement a majority election where individuals' weights are proportional to how much better the individual is compared to an unbiased coin.

**Theorem 4**.: _For any \(\mathbf{q}\), and \(\mathbf{b}\in\mathcal{B}_{\mathbf{q}}\): \(M^{\mathbf{w}}(\widehat{\mathbf{b}})=\widehat{p_{KC}(\mathbf{b})},\) where \(\mathbf{w}\) is s.t. for all \(i\in N\), \(w_{i}\propto 2q_{i}-1\)._

Intuitively, by implementing the average of the beliefs, the competitive equilibrium price for Kelly utilities behaves like a weighted majority where agents' weights are a linear function of their individual competence (\(2q_{i}-1\)). So, for any belief profile \(\mathbf{b}\) induced by a competence profile \(\mathbf{q}\) and weights \(w_{i}=2q_{i}-1\), the diagram on the right commutes.

## 5 Markets for Perfect Elections

In this section we show how, by introducing a specific tax scheme, we can modify Kelly markets in such a way as to make their equilibrium price implement a perfect weighted majority, that is, a majority in which the weight of each individual is proportional to the natural logarithm of their competence ratio. The intuition of our approach is the following: Theorem 4 has shown that Kelly markets correspond to elections where individuals are weighted proportionally to their competence in excess of \(0.5\); in order to bring such weights closer to the ideal values of Theorem 1 we therefore need to allow more competent agents to exert substantially more influence on the equilibrium price; we do so by designing a tax scheme which achieves such an effect asymptotically in one parameter of the scheme.

### Taxing payouts

We modify Equation (8) by building in the effects of a tax scheme \(T\) on utility as follows:

\[U_{i}^{A}(p,s_{i})=b_{i}\ln\left(T\left(s_{i}\frac{1-p}{p}\right)+1\right)+(1-b_{i})\ln(1-s_{i}) \tag{9}\]

where

\[T(x)=\frac{1-e^{-kx\frac{p}{1-p}}}{k\frac{p}{1-p}} \tag{10}\]

with \(k\in\mathbb{R}^{>0}\). Observe that as parameter \(k\) approaches \(0\), \(T(x)\) approaches \(x\), that is, null taxation is approached. The best way to gain an intuition of the working of function \(T\) is to observe its effects on the agent's optimal investment strategy supposing a price of \(0.5\). For \(p=0.5\) the optimal strategy of a Kelly trader is \(2b_{i}-1\) (Lemma 4). Function \(T\) makes that strategy asymptotically proportional to \(\ln\left(\frac{b_{i}}{1-b_{i}}\right)\) (Figure 2) as \(k\) grows. We call markets under the utility in Equation (9) _taxed markets_ and denote their equilibrium price by \(p_{TC}(\mathbf{b})\), for any belief profile \(\mathbf{b}\).

### Equilibria in taxed Kelly markets

Like for Naive and Kelly markets, we first determine the optimal strategy of the traders. We do that for \(A\)-traders, as the lemma for \(B\)-traders is symmetric.

**Lemma 6**.: _For any \(i\in N\), if \(b_{i}>p\), then as \(k\to\infty\),_

\[\operatorname*{arg\,max}_{x\in[0,1]}U_{i}^{A}(p,x)\propto\ln\left(\frac{1-p}{p}\cdot\frac{b_{i}}{1-b_{i}}\right).\]

Proof.: We start from \(i\)'s utility, given by Equation (9).
By setting \(\frac{dU_{i}^{A}}{ds_{i}}=0\) (first-order condition) we obtain:

\[\frac{b_{i}\,T^{\prime}\!\left(s_{i}\frac{1-p}{p}\right)\frac{1-p}{p}}{1+T\!\left(s_{i}\frac{1-p}{p}\right)}=\frac{1-b_{i}}{1-s_{i}} \tag{11}\]

Figure 2: Left: returns after taxation by \(T\) as a function of investment (Equation (9)). Right: investment strategy (red) approximating \(\ln\left(\frac{b_{i}}{1-b_{i}}\right)\frac{1}{k}\) (blue) as \(k\) grows when price equals \(0.5\). Functions plotted for \(k\in\{0.1,0.2,1,2,10,20\}\).

If we plug Equation (10) into Equation (11), we obtain:

\[\frac{b_{i}e^{-ks_{i}}\frac{1-p}{p}}{1+\frac{1-e^{-ks_{i}}}{k\frac{p}{1-p}}}=\frac{1-b_{i}}{1-s_{i}} \tag{12}\]

and therefore

\[\frac{kb_{i}e^{-ks_{i}}}{k\frac{p}{1-p}+1-e^{-ks_{i}}}=\frac{1-b_{i}}{1-s_{i}}. \tag{13}\]

As \(k\) approaches infinity, \(s_{i}\) approaches zero. For this reason we rescale strategies by \(k\) and consider the value \(y=s_{i}k\). This allows us to understand the form to which strategies tend as they approach zero. We thus obtain:

\[\frac{kb_{i}e^{-y}}{k\frac{p}{1-p}+1-e^{-y}}=\frac{1-b_{i}}{1-\frac{y}{k}}. \tag{14}\]

As \(k\) approaches infinity this approaches:

\[\frac{b_{i}e^{-y}}{\frac{p}{1-p}}=1-b_{i} \tag{15}\]

which can be rewritten in turn as

\[y=\ln\left(\frac{1-p}{p}\cdot\frac{b_{i}}{1-b_{i}}\right) \tag{16}\]

from which we conclude \(s_{i}=\frac{1}{k}\ln\left(\frac{1-p}{p}\cdot\frac{b_{i}}{1-b_{i}}\right)\), as desired.

As \(k\) tends to infinity, the optimal investment strategy will tend to \(0\) for all agents. However, it will do so in such a way that, as \(k\) grows, the optimal investment strategy tends to be proportional to \(\ln\left(\frac{1-p}{p}\cdot\frac{b_{i}}{1-b_{i}}\right)\), as desired.

So, as \(k\) grows large, a strategy profile \(\mathbf{s}\) is in competitive equilibrium in a taxed market with respect to price \(p(\mathbf{s})\) whenever Equation (5) is satisfied together with the condition identified by Lemma 6. We denote by \(\mathbf{s}_{TC}(\mathbf{b})\) such competitive equilibrium and by \(p_{TC}(\mathbf{b})\) the price at such equilibrium. We then have:

**Lemma 7**.: _For any \(\mathbf{q}\) and \(\mathbf{b}\in\mathcal{B}_{\mathbf{q}}\), as \(k\rightarrow\infty\),_

\[\ln\left(\frac{p_{TC}(\mathbf{b})}{1-p_{TC}(\mathbf{b})}\right)\propto\sum_{i\in N}\ln\left(\frac{b_{i}}{1-b_{i}}\right).\]

Proof.: To lighten notation we write \(p\) for \(p_{TC}(\mathbf{b})\). From the equilibrium condition (Equation (5)) and Lemma 6 we have that

\[\frac{1}{p}\sum_{i\in N^{A}}\ln\frac{b_{i}}{1-b_{i}}=\frac{1}{1-p}\sum_{i\in N^{B}}\ln\frac{1-b_{i}}{b_{i}} \tag{17}\]

where \(N^{A}=\{i\in N\mid b_{i}>p\}\) and \(N^{B}=\{i\in N\mid b_{i}<p\}\). From the above we obtain:

\[0=\sum_{i\in N}\ln\left(\frac{1-p}{p}\cdot\frac{b_{i}}{1-b_{i}}\right) \tag{18}\]

which rewrites to

\[\ln\left(\frac{p}{1-p}\right)=\frac{1}{|N|}\sum_{i\in N}\ln\left(\frac{b_{i}}{1-b_{i}}\right) \tag{19}\]

as desired.

That is, the equilibrium price ratio between \(A\)- and \(B\)-securities in a taxed market tends to be proportional, in logarithmic scale, to the average belief ratio.

**Theorem 5**.: _For any \(\mathbf{q}\), \(\mathbf{b}\in\mathcal{B}_{\mathbf{q}}\) and \(k\to\infty\):_

\[M^{\mathbf{w}}(\widehat{\mathbf{b}})=\widehat{p_{TC}(\mathbf{b})}\]

_where \(\mathbf{w}\) is s.t. for all \(i\in N\), \(w_{i}\propto\ln\frac{q_{i}}{1-q_{i}}\)._

This last result shows that elections that are perfect from a truth-tracking perspective (Theorem 1) can be implemented increasingly faithfully by Kelly markets once the taxation scheme \(T\) is applied and the taxation parameter \(k\) in Equation (10) grows larger.
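To see the correspondences of Theorems 4 and 5 numerically, the following is a small self-contained sketch (our own illustration; the example beliefs and competences are made up, not taken from the paper).

```python
import math

def weighted_majority(votes, weights):
    """Equation (4): returns 1, 0, or 'tie'."""
    lhs = sum(w * v for v, w in zip(votes, weights))
    half = 0.5 * sum(weights)
    return 1 if lhs > half else 0 if lhs < half else "tie"

def compare(beliefs, competences):
    votes = [1 if b > 0.5 else 0 for b in beliefs]
    # Kelly market: price = average belief (Lemma 5); the induced choice
    # matches weighted majority with weights 2q - 1 (Theorem 4).
    p_kelly = sum(beliefs) / len(beliefs)
    m_kelly = weighted_majority(votes, [2 * q - 1 for q in competences])
    # Taxed Kelly market (k -> infinity): log-odds of the price equal the
    # average log belief ratio (Lemma 7), matching log-odds weights (Thm 5).
    log_odds = sum(math.log(b / (1 - b)) for b in beliefs) / len(beliefs)
    p_taxed = 1 / (1 + math.exp(-log_odds))
    m_taxed = weighted_majority(votes, [math.log(q / (1 - q)) for q in competences])
    return (p_kelly, m_kelly), (p_taxed, m_taxed)

# Example: beliefs (0.9, 0.4, 0.4) induced by competences (0.9, 0.6, 0.6).
# Both market prices exceed 0.5 and both weighted majorities elect A,
# whereas simple (egalitarian) majority would elect B.
```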
So, for any belief profile \(\mathbf{b}\) induced by a competence profile \(\mathbf{q}\) and weights \(w_{i}\propto\ln\frac{q_{i}}{1-q_{i}}\), the diagram on the right commutes as \(k\) tends to infinity and, therefore, taxation grows.

## 6 Conclusions and Outlook

Our paper is, to the best of our knowledge, the first to establish a formal link between voting and information markets from an epistemic social choice perspective. The link consists specifically of correspondence results between weighted majority voting on the one hand, and information markets under three types of utility on the other. Such results open up the possibility of implementing weighted majority voting with strong epistemic guarantees even without having access to individual competences, because such information becomes indirectly available in the market via the equilibrium price. Notice, in particular, that while it may be difficult to elicit truthful weights from agents, investment strategies are subject to the natural incentive of maximizing investment returns. Whether this can prove advantageous also in practice, for instance in the setting of classification markets [4] or voting-based ensembles [8], should be an object of future research.

The study we presented is subject to at least three main limitations. First, our analysis inherits all assumptions built into standard jury theorems, in particular: jurors' independence; homogeneous priors; and equivalence of type-1 and type-2 errors in jurors' competences. Future research should try to lift our correspondences to more general settings relaxing the above assumptions (see [9] for a recent overview). Secondly, our study limited itself to one-shot interactions. However, markets, and specifically Kelly betting, make most sense in a context of iterated decisions. Extending our results to the iterated setting, along the lines followed for instance in [5], is also a natural avenue for future research. Thirdly, our market model makes use of the notion of competitive equilibrium. Although this notion of equilibrium is standard in information markets, it responds to the intuition that individuals operate in a large group and, therefore, behave as price takers. We consider it interesting to study, at least experimentally, how different notions of equilibrium that do not make such an assumption (e.g., Nash equilibrium) would behave within our framework.

Acknowledgments. This research was (partially) funded by the Hybrid Intelligence Center, a 10-year programme funded by the Dutch Ministry of Education, Culture and Science through the Netherlands Organisation for Scientific Research, [https://hybrid-intelligence-centre.nl](https://hybrid-intelligence-centre.nl), grant number 024.004.022. Davide Grossi wishes to also thank Université Paris Dauphine and the Netherlands Institute for Advanced Studies (NIAS), where parts of this research were completed. The authors also wish to thank the anonymous reviewers of COMSOC'23 for several helpful suggestions.
2305.04666
Power Distribution Grid Enhancement via Online Feedback Optimization
The rise in residential photovoltaics and other distributed energy sources poses unprecedented challenges for the operation of power distribution grids. When high amounts of active power are injected into the grid by such power sources, the overall power flow is often limited because of voltages reaching their upper acceptable limits. Volt/VAr control aims to raise this power flow limit by controlling the voltage using reactive power. This way, more active power can be transmitted safely without physically reinforcing the grid. In this paper, we use real consumption and generation data on a low-voltage CIGR\'E grid model and an experiment on a real distribution grid feeder to analyze how different Volt/VAr methods can enhance grid capacity, i.e., by how much they can improve the grid's capability to transmit active power without building new lines. We show that droop control enhances the grid but vastly underutilizes the reactive power resources. We discuss how the effectiveness of droop control can be partially improved by employing machine-learning techniques to tune the droop coefficients, but we demonstrate that local control laws are inherently unable to achieve optimal grid enhancement. In contrast, methods that coordinate the use of reactive power resources across the grid, such as Online Feedback Optimization (OFO), can enhance the grid to its full potential. A numerical study performed on data from an entire year using a realistic grid model suggests that OFO can enable another 9\% of maximum active power injections compared to droop control. To achieve that, OFO only requires voltage magnitude measurements, minimal model knowledge, and communication with the reactive power sources. A real-life experiment provides a demonstration of the practical feasibility of the proposed approach and enhanced the grid by another 10.5\% compared to droop control.
Jonas G. Matt, Lukas Ortmann, Saverio Bolognani, Florian Dörfler
2023-05-08T12:42:01Z
http://arxiv.org/abs/2305.04666v2
# Virtual Power Grid Reinforcement ###### Abstract The rise in residential photovoltaics (PV) as well as other distributed energy sources poses unprecedented challenges for the operation of distribution grids. The high active power infeed of such sources during times of peak production is a stress test to which distribution grids have usually not been exposed in the past. When high amounts of active power are injected into the grid, the overall power flow is often limited because of voltages reaching their upper acceptable limits. Volt/VAr methods aim to raise this power flow limit by controlling the voltage using reactive power. This way, more active power can be transmitted safely without physically reinforcing the grid. In this paper, we use real consumption and generation data on a low-voltage CIGRE grid model and an experiment on a real distribution grid feeder to analyze how different Volt/VAr methods can virtually reinforce the distribution grid. We show that droop control and machine-learning-improved droop control virtually reinforce the grid but do not utilize the reactive power resources to their full extent. In contrast, methods which coordinate the usage of reactive power resources across the grid, such as Online Feedback Optimization (OFO), can reinforce the grid to its full potential. The simulation study performed on data of an entire year suggests that OFO can enable another 9% of maximum active power injections. To achieve that, OFO only requires voltage magnitude measurements, minimal model knowledge and communication with the reactive power sources. A real-life experiment provides a demonstration of how OFO acts at the level of a single device, and proves the practical feasibility of the proposed approach. ## I Introduction Electrifying the transportation sector and building heating will lead to much higher peak electric power consumption in distribution grids. At the same time, the shift towards decentralized energy resources and residential PV in particular will increase the peak electric power generation in distribution grids. While shifting power consumption in time and storing power generation in local batteries can reduce the peak power flows, some parts of the distribution grids will still reach or have already reached their capacity limit, which is determined both by the maximal permissible current and by the maximal acceptable voltage deviation. Which of these two constraints becomes violated first depends on the topology of the grid. On electrically shorter lines, the maximum current constraints are usually reached before the voltage deviates too much. On electrically longer lines, however, the voltages reach their lower or upper limit before the current reaches its limit. Today's distribution grids typically consist of electrically long lines, which means that the voltage deviations determine the grid's capacity. A study found that 75% of the necessary grid reinforcement in the low voltage distribution grids in Germany is due to voltage problems [1, page 172]. Therefore, controlling the voltage to enforce voltage limits can allow more active power to flow and hence virtually reinforce the grid. One of the most economical ways to control the voltage is Volt/VAr control, i.e. controlling the amount of reactive power injected or absorbed at certain nodes in the grid.
Conveniently, the inverters of photovoltaic systems, heat pumps, or electric vehicle chargers can typically choose their reactive power setpoints rather freely and thus actively participate in the mitigation of any voltage deviations that their active power generation or consumption might cause. Most modern grid codes hence stipulate that some form of Volt/VAr control be performed, e.g. by PV inverters [2, 3, 4]. The state-of-the-art is droop control, a scheme in which every inverter selects its reactive power setpoint according to a predefined function of the local voltage measurement. We consider the virtual reinforcement provided by Volt/VAr control to be optimal when the voltage limits are enforced whenever the available reactive power resources allow it, as this maximizes the possible active power flows and hence the grid capacity. This feasibility problem can be phrased as an Optimal Reactive Power Flow (ORPF) problem. It was shown that local controllers do not necessarily find the solution of this ORPF problem and that coordination is needed for optimal virtual reinforcement [5]. Fortunately, the advance of communication capabilities in distribution grids renders coordination-based controllers a promising alternative to conventional local methods. However, a communication infrastructure makes the control setup more complicated. We are therefore interested in how much more virtual reinforcement coordinated Volt/VAr methods can provide in comparison to local Volt/VAr methods. The local methods we analyze are droop control, as the state-of-the-art, and an approach based on machine learning (ML), called ML-tuned droop. This approach attempts to learn the ORPF solution from data [6] and therefore tries to achieve optimal virtual reinforcement without using a communication channel. It solves the ORPF problem offline to generate data points, and an ML approach then takes these to synthesize droop curves for each reactive power source, trying to mimic the ORPF solution. The coordinated methods we present also aim at solving the ORPF problem. The first one is a standard ORPF solver. However, it needs an accurate grid model, is not robust to model mismatch, and needs to measure or estimate all active and reactive consumption and production in the grid. This means that communication is needed not only with the reactive power resources but also with sensors everywhere in the grid that currently do not exist. These requirements are unrealistic in a distribution grid setup, and therefore the ORPF solver can only serve as the benchmark for optimal virtual grid reinforcement. The second coordinated method is called OFO and circumvents the problems of a standard ORPF solver by using measurements from the grid as feedback. It uses this feedback to iteratively update the reactive power setpoints until it has converged to the solution of the ORPF problem. The method is nearly model-free and robust, and it requires only voltage magnitude measurements. It comes with theoretical convergence guarantees, strictly respects constraints in steady state, and has already been tested experimentally on power system testbeds [7, 8, 9]. OFO controllers can be implemented as centralized or distributed controllers. A centralized controller gathers measurements from and sends setpoints to the reactive power sources [7]. A distributed controller can be implemented in the reactive power resources with distributed communication between them [8].
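To make the state-of-the-art concrete, here is a minimal sketch of such a local droop law in Python; the deadband, voltage limits, and reactive power limit are illustrative placeholders, not values prescribed by the paper or by any particular grid code.

```python
def droop_q(v, v_db_low=0.97, v_db_high=1.03,
            v_min=0.95, v_max=1.05, q_max=1.0):
    """Piecewise-linear Volt/VAr droop curve (illustrative parameters).

    Maps a local voltage measurement v (p.u.) to a reactive power
    setpoint (positive = injection): zero inside the deadband, linear
    ramps outside it, and saturation at +/- q_max once the voltage
    limits are violated.
    """
    if v <= v_min:
        return q_max                     # full injection to raise voltage
    if v >= v_max:
        return -q_max                    # full absorption to lower voltage
    if v < v_db_low:
        return q_max * (v_db_low - v) / (v_db_low - v_min)
    if v > v_db_high:
        return -q_max * (v - v_db_high) / (v_max - v_db_high)
    return 0.0                           # deadband around nominal voltage

for v in [0.94, 0.96, 1.0, 1.04, 1.06]:
    print(v, round(droop_q(v), 3))
```

In control terms this is the nonlinear P-controller discussed later in Section III: the gain is the slope of each linear segment.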
Overall, we analyze the virtual reinforcement capabilities of three Volt/VAr control methods: droop control, ML-tuned droop control, and OFO. We are interested in how much more active power can be transported through the grid when using each of these three controllers. A special focus is on the difference between local and coordinated methods. We implement the three controllers on a CIGRE benchmark grid and quantify their performance using real consumption and PV generation data. Finally, we show results from a proof-of-concept experiment of OFO on a real distribution grid feeder to demonstrate its viability in a real-life setup and its capability to achieve grid reinforcement. The remainder of this paper is structured as follows: In Section II, we describe the setup of the numerical study. The three different Volt/VAr controllers are introduced in Section III. We present the simulation results in Section IV and the experimental results in Section V. Finally, we conclude the paper in Section VI. ## II Simulation and data setup The comparison of the different control algorithms is conducted on a benchmark distribution grid, using real load and PV generation data to obtain meaningful time-series results. The simulation framework is based on Python and the open-source package pandapower [10]. All code has been made available here [11]. ### _Benchmark Low Voltage Grid_ At the foundation of the simulation framework lies the benchmark low voltage distribution grid with European layout, as proposed by CIGRE in 2014 [12]. It is available as a native implementation in pandapower and has been found to be a good testbed for the analysis of voltage constraint violations. As shown in Fig. 1, only the residential subnetwork is used, which is characterized by a radial structure, underground cable transmission, a line-to-line voltage of 400 V and a system frequency of 50 Hz. A transformer connects the network to the external MV grid, modeled as a constant voltage source at 1 p.u. The grid layout comprises 18 measurement buses, 5 of which are the connection points of loads with local PV generation. Each load bus aggregates a small neighborhood of buildings with nominal load values ranging from 15 to 55 kW per bus. ### _Data_ To quantify how the different Volt/VAr methods behave, real data were used for both household electricity consumption and PV generation. Among the limited range of publicly available datasets suited for this purpose, Dataport by Pecan Street Inc. [13] was identified as the best one according to the following criteria: a large number of different households at a single location with matching load and PV generation data, high temporal resolution, and a total duration which is sufficient to reveal seasonal effects. A more detailed overview of all reviewed datasets can be found in [14]. The dataset available under free academic licensing is from the year 2018 and contains data from 25 different households in Austin, USA, with a temporal resolution of 1 sec and a total duration of 1 year. It was downsampled to a resolution of 1 min to deal with the high cost of computing the power flows at each time step. The houses available in Dataport were assigned to the 5 load buses of the benchmark grid such that the peak demand and generation values were in accordance with the nominal values stated by CIGRE. Fig. 2 shows the resulting total load and generation at each bus for one exemplary day.
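A minimal sketch of how such a study can be set up with pandapower is shown below. It loads the built-in CIGRE LV benchmark and runs a single power flow; the paper's actual framework additionally restricts the network to the residential feeder and replaces the nominal loads with the Dataport time series.

```python
import pandapower as pp
import pandapower.networks as pn

# Built-in CIGRE low-voltage benchmark grid (all three feeders).
net = pn.create_cigre_network_lv()

pp.runpp(net)                         # AC power flow at nominal loading
print(net.res_bus.vm_pu.describe())   # bus voltage magnitudes in p.u.

# Count overvoltages against an illustrative 1.05 p.u. limit.
violations = (net.res_bus.vm_pu > 1.05).sum()
print(f"buses above 1.05 p.u.: {violations}")
```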
### _Scenarios for PV integration_ To investigate the impact of increasing power injections from DERs into distribution grids, three scenarios have been designed which differ in the amount of installed PV capacity.

Fig. 1: The CIGRE low-voltage distribution grid with European layout. The residential subnetwork is highlighted and was used for this work. Figure adapted from [12].

Starting from the base scenario given by the actual PV data from the year 2018, PV capacity (both active and reactive power resources) is increased by factors of 2 and 3.5 to create the PV integration scenarios 2030 and 2035, respectively. These values agree with current average predictions for the global increase in PV installations [15, 16, 17, 18]. ## III Reviewed Methods This work compares three different controllers which are suited for virtual grid reinforcement and are therefore benchmarked with the proposed simulation framework. Particular attention is given to the importance of communication for achieving optimal control and constraint satisfaction. The current state-of-the-art, local droop control with standard parameter tuning, serves as the lower baseline, whereas an ORPF solver defines the theoretical limit of what can be achieved using reactive power resources for voltage control. Relative to these two benchmarks, the performance of two recently proposed methods is evaluated. On the one hand, we evaluate a controller implemented according to [6], which utilizes a data-driven procedure to learn optimal local droop curves, using the outputs of the ORPF as training data. On the other hand, we evaluate OFO, which relies on full communication but is essentially model-free and utilizes only voltage magnitude measurements to steer the reactive power setpoints to the solution of the ORPF problem. ### _Droop Control_ Droop control is a purely local control scheme in which every PV inverter is required to follow a predefined droop curve that maps the bus voltage at its own point of connection to a reactive power setpoint. Usually, these so-called droop curves take the form of a piece-wise linear function, characterized by a deadband around the nominal voltage and maximum injection/consumption of reactive power whenever the voltage constraints are violated. In control language, droop control behaves like a nonlinear P-controller, whose gain is given by the slope of the droop curve. ### _Optimal Reactive Power Flow_ ORPF is the most obvious, and in theory ideal, candidate for an optimization-based Volt/VAr controller. Assuming full communication, it centrally solves an optimization problem to determine and communicate back the reactive power setpoints for all PV inverters in the grid. However, it relies on perfect knowledge of the grid topology and cable parameters and on measurements of the active and reactive power consumption and production at each bus, all of which are unachievable in practice. Furthermore, solving an ORPF problem is computationally expensive, which can render it infeasible for real-time settings. Nevertheless, in the simulation framework a flawless model of the grid is available, and an ORPF approach can be implemented. It represents the theoretical limit of what can be achieved by any controller and will hence serve as the upper benchmark. Its objective is to find the vector of reactive power setpoints \(q\) that solves the constrained optimization problem \[\min_{q} \ \frac{1}{2}q^{T}q\] (1) s.t.
\[v_{\text{min}}\leq v_{h}(q,w)\leq v_{\text{max}} \text{at all inverters }h\] \[q_{\text{min}}\leq q_{h}\leq q_{\text{max}} \text{at all inverters }h.\] Hence, the ORPF approach attempts to keep all voltages within the constraints while using a minimum amount of squared reactive power. If at some point the combined reactive power capabilities of all PV inverters are insufficient to achieve constraint satisfaction, the ORPF problem becomes infeasible. In that case, the reactive power resources operate at the reactive power limit, thereby keeping the voltages as close to the constraints as possible. In practice, a secondary control scheme such as the curtailment of active power would need to become active under these circumstances. This ensures that the voltage constraints are never violated but comes at the expense of a higher associated cost, which is why, in this paper, we focus on reactive power compensation only. ### _ML-tuned Droop_ As an improved local controller, a data-driven droop control based on [6] has been added to the comparison. The idea is to learn optimized droop curves by using the setpoints generated by the ORPF approach as training inputs. In principle, the resulting controller should then approximate the ORPF solution through local control.

Fig. 2: One exemplary day (July 2, 2018) from the resulting dataset, after assigning the available Dataport data to the load buses in the CIGRE LV grid.

The proposed algorithm consists of fitting a piece-wise linear function to the training data, i.e. the Volt/VAr pairs generated by the ORPF controller, for each inverter separately and offline. During operation, the resulting mappings are used in a manner analogous to standard droop control. As the behavior of ORPF can be described as bang-bang, this procedure would result in the deployment of droop curves with very steep slopes. However, it has been shown that droop curves which are too steep lead to system instability [19], and the set of allowed functions has hence been limited. To still enable a larger negative slope and thereby leave more freedom to the algorithm when fitting the data, the output of the ML-tuned droop controller has been low-pass filtered according to \[q_{h}(t)=(1-\beta)q_{h}(t-1)+\beta f_{h,\text{ml-droop}}(v_{h}(t)), \tag{2}\] with \(f_{h,\text{ml-droop}}\) being the learned, piecewise linear droop curve for bus \(h\) and with a smoothing factor \(\beta=0.8\). Overall, the resulting lower limit for the slope is the marginal value for which the closed-loop system remains stable under application of the low-pass filter and has been determined to be -4.5 VAr/V, both experimentally and theoretically according to [19]. Furthermore, the droop curves are forced to exhibit only non-positive slopes and to go through the points \((v_{min},q_{max})\) and \((v_{max},q_{min})\). The latter ensures that the reactive power source is utilizing its full reactive power when the measured voltage is outside the limits. The resulting droop curves are shown in Figure 3. Depending on the amount of training data, the algorithm can become computationally intensive, leading to significant runtimes. However, once the curves have been determined offline, the controller works just like standard droop and does not require any further online computations. Nonetheless, if there are any changes to the system, the complete procedure must be run again.
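A minimal sketch of the filtered update in Equation (2), with a stand-in piecewise-linear curve in place of the actually learned one, could look as follows; the voltage samples are hypothetical.

```python
import numpy as np

def filtered_droop_step(q_prev, v, droop_curve, beta=0.8):
    # Equation (2): low-pass filtered output of the learned droop curve
    return (1 - beta) * q_prev + beta * droop_curve(v)

# Stand-in for a learned curve: non-positive slope through
# (v_min, q_max) = (0.95, 1.0) and (v_max, q_min) = (1.05, -1.0).
learned_curve = lambda v: np.interp(v, [0.95, 1.00, 1.05], [1.0, 0.0, -1.0])

q = 0.0
for v in [1.02, 1.04, 1.06, 1.06, 1.06]:  # hypothetical voltage measurements
    q = filtered_droop_step(q, v, learned_curve)
    print(round(q, 3))
```

The low-pass filter smooths the otherwise bang-bang-like output, which is what allows steeper fitted slopes without destabilizing the closed loop.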
Moreover, the droop curves must be updated for every reactive power source and, since this is meant to be a local control method, there might not be communication in place to do so.

Fig. 3: Training points for the ML-tuned droop and the optimized droop curves.

### _Online Feedback Optimization_ OFO is a novel control method which can steer a system to the solution of a constrained optimization problem. In the presented case, the ORPF problem (1) is used as the steady-state specification for the OFO design. However, we remark that, compared to the ORPF controller, the resulting feedback law does not require a detailed model of the system. While the ORPF approach typically calculates the model-based solution in one step and operates the system in an open-loop manner, OFO exploits voltage measurements to gradually update the inputs towards the optimum. In this paper, we use the OFO controller from [7], which employs a dual ascent strategy to deal with constraints. Two dual variables per inverter integrate constraint violations at each time step according to \[\lambda_{\text{min}}(t+1)=[\lambda_{\text{min}}(t)+\alpha(v_{\text{min}}-v)]_{\geq 0} \tag{3}\] \[\lambda_{\text{max}}(t+1)=[\lambda_{\text{max}}(t)+\alpha(v-v_{\text{max}})]_{\geq 0}. \tag{4}\] The new reactive power setpoints are then determined based on the current values of the dual variables and a sensitivity matrix \(H\) as follows: \[\begin{split} q_{\text{unc}}&=H^{T}(\lambda_{\text{min}}(t+1)-\lambda_{\text{max}}(t+1))\\ q(t+1)&=\arg\min_{q\in\mathcal{Q}}{(q-q_{\text{unc}})^{T}(q-q_{\text{unc}})},\end{split} \tag{5}\] where \(\mathcal{Q}=\{q\mid q_{\text{min}}\leq q\leq q_{\text{max}}\}\). More specifically, the matrix \(H\) captures how the bus voltages change for a change in reactive power at each bus. It represents a linearization of the power flow equations and can be computed analytically based on an approximate grid model [20], or experimentally. In practice, OFO controllers are typically robust against an inaccurate choice of \(H\). It may be determined for a nominal and known operating point, such as the one without any load and generation, as was done in the presented case. However, it has been shown that the algorithm performs well even when ignoring any prior information on the grid topology (that is, setting \(H\) equal to the identity matrix) [7]. The OFO algorithm is further characterized by the step size \(\alpha\), which determines the size of the dual ascent update and functions as an integral control gain. Larger values of \(\alpha\) increase the convergence speed, whereas excessively large values can render the system unstable. Overall, this yields a tradeoff, and tuning the step size is hence required to obtain good performance. However, given that it is a single scalar value, this is not a very challenging task in practice. In this work, \(\alpha\) has been chosen to be 4.0. Finally, we note that OFO is computationally cheap and can hence be run at small time intervals and on any low-power microcontroller.
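The following sketch condenses Equations (3)-(5) into a single OFO iteration and closes the loop against a crude linear stand-in for the grid (a real implementation would instead read voltage measurements from the feeder); the sensitivity matrix, base voltages, and limits are illustrative, not taken from the paper's setup.

```python
import numpy as np

def ofo_step(q, lam_min, lam_max, v, v_min, v_max, q_min, q_max, H, alpha=4.0):
    """One OFO iteration following Equations (3)-(5)."""
    lam_min = np.maximum(lam_min + alpha * (v_min - v), 0.0)   # Eq. (3)
    lam_max = np.maximum(lam_max + alpha * (v - v_max), 0.0)   # Eq. (4)
    q_unc = H.T @ (lam_min - lam_max)                          # Eq. (5), unconstrained
    return np.clip(q_unc, q_min, q_max), lam_min, lam_max      # projection onto Q

H = 0.2 * np.eye(3)                    # hypothetical voltage/VAr sensitivity
v_base = np.array([1.06, 1.04, 1.02])  # voltages without any reactive control
q, lam_min, lam_max = np.zeros(3), np.zeros(3), np.zeros(3)
for _ in range(100):
    v = v_base + H @ q                 # stand-in for grid measurements
    q, lam_min, lam_max = ofo_step(q, lam_min, lam_max, v,
                                   0.95, 1.05, -1.0, 1.0, H)
print(np.round(v_base + H @ q, 4))     # first bus is pulled back to ~1.05 p.u.
```

Note how only the bus with a violated constraint accumulates a nonzero dual variable and hence a nonzero reactive power setpoint, mirroring the "use reactive power only when needed" behavior discussed below.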
## IV Numerical study on Virtual Grid Reinforcement This section presents the results of the numerical study. First, the characteristic behavior of the different control methods is illustrated based on intraday simulations of a particular summer day in the 2035 scenario. Subsequently, the performance of the controllers is compared across the three PV integration scenarios, pointing out the importance of improving the optimality of Volt/VAr control. Finally, a different perspective on the matter is presented: we quantify by how much the PV generation fed into distribution grids can be increased by applying each of the reviewed Volt/VAr control methods. This is to reveal the economic impact that optimal Volt/VAr control can have for grid operators. ### _Controller Characteristics_ In 2035, the active PV power fed into the grid is expected to reach levels at which overvoltages occur frequently, despite the larger availability of reactive power. In fact, none of the proposed Volt/VAr controllers is able to keep the voltages within bounds at all times. However, the amount of constraint violations varies drastically for different controllers. This makes this scenario well-suited to observe how the different methods cope with voltages approaching and exceeding the constraints and, at the same time, shows the future need for better Volt/VAr control methods. Figure 4 shows the intra-day evolution of the voltage and reactive power injection at each bus. Negative values of the latter correspond to consumption of reactive power by the PV inverters. The amounts of reactive power which are injected or absorbed at each bus are commanded by droop control (Figure 4b), ML-tuned droop control (Figure 4c), OFO (Figure 4d) and ORPF control (Figure 4a), respectively. The irradiation and active power consumption data are from July 2, 2018 and represent a typical warm and sunny day. This is the data shown in Figure 2, except that PV generation was scaled by 3.5 in the 2035 scenario. Around noon, the irradiance peak leads to high PV active power production, causing overvoltages to occur across the grid. In the evening, high power consumption and low irradiance give rise to marginal undervoltages. In the following, we will focus on the controller behavior with respect to overvoltages. Figure 4a shows the results for the ideal ORPF approach, which corresponds to the best performance achievable by any Volt/VAr controller under perfect knowledge of the grid model and all active and reactive consumption and generation. This is unachievable in practice, rendering the ORPF approach an idealized upper benchmark for the other control methods. The ORPF approach solves problem (1) at each timestep and hence uses the reactive power resources optimally. When no constraints are violated, no reactive power is used. Once the voltages exceed the constraints, the ORPF approach uses the minimum amount of reactive power that is required to keep them right at the constraints. Once the optimization problem becomes infeasible, i.e. constraint violations become unavoidable, ORPF control uses the full reactive power resources to keep the voltages as close to the constraints as possible. Standard droop control applies control inputs based solely on the local bus voltage. Maximum reactive power is used only if the local voltage constraints at a bus are violated. However, the inverters which are connected to a bus that is close to the external grid never experience such high voltages. Hence, they do not absorb much reactive power even though there might be overvoltages occurring further into the grid. Thus, constraint violations persist even if problem (1) is feasible, meaning that the voltage violations could have been mitigated by using the other reactive power sources differently. Additionally, some reactive power is used unnecessarily at times when all voltages are within bounds and would have remained admissible with less or even no control effort.
Using ML-tuned droop control, the local controllers aim to emulate the global behavior of ORPF, but there is still no coordination between the participants. By adapting the local droop curves based on the inputs that an ideal ORPF approach would apply, the inverters close to the external grid are instructed to contribute slightly more than under standard droop control. Furthermore, the waste of reactive power at uncritical times is smaller compared to standard droop control. However, ML-tuned droop is not able to come close to the optimality of the ORPF approach whose behavior it tries to imitate. Rather, its performance with respect to the duration and size of constraint violations is not significantly better than that of standard droop. Lastly, the OFO controller exhibits a performance which is comparable to that of the ideal ORPF approach, as shown in Figure 4d. Importantly, it achieves this by using only the sensitivity matrix \(H\) and voltage magnitude measurements. Active and reactive power consumption and generation need not be measured. As with the ORPF approach, the control inputs are coordinated, yielding near-optimal performance. Because OFO is an integral-like controller, its closed-loop convergence is not immediate, as constraint violations must be integrated for at least one time step to initiate an update of the setpoints. Hence, high-frequency changes in the grid can cause temporary constraint violations until OFO has converged and then satisfies all constraints. In our experiments, the OFO controller was operated every 10 seconds, yielding convergence after 1 minute at most. ### _Overall Controller Performance_ This section analyzes the performance of the controllers over a complete yearly cycle. For this purpose, all the available data is used, corresponding to a full year of household power consumption and PV power production. This is to reveal the actual implications which different Volt/VAr control strategies have for grid operation. Figure 5 summarizes the results. In the 2020 scenario, corresponding to the PV capacity which is present in distribution grids today, no significant overvoltages occur over the entire year. This indicates why local droop control, currently the state-of-the-art Volt/VAr control method, has been sufficient until now. However, as PV capacity increases towards the 2030 and 2035 scenarios, the size and duration of constraint violations increase drastically. Even though the reactive power capacities and hence the control capabilities increase by the same factor as active power production, they no longer suffice to mitigate the overvoltages. This shows the importance of moving toward better Volt/VAr control methods in the future and of using the available reactive power resources as effectively as possible. Standard droop control yields the worst performance in all three scenarios, allowing the most constraint violations while using the largest amount of reactive energy over the yearly period. ML-tuned droop achieves slightly more efficient control with respect to the reactive energy used, but allows about the same amount of constraint violations as standard droop control. The proposed OFO controller outperforms both local droop controllers significantly in terms of both control effectiveness and efficiency. It achieves near-optimal constraint satisfaction that is almost equal to that of the ideal ORPF approach.
### _Virtual Grid Reinforcement_ In this section, we approach the analysis of the different Volt/VAr controllers from a different perspective. The goal is to evaluate, for each of the three controllers, how much power can be fed into the grid without exceeding the upper voltage limit at any bus. Active power production is assumed to be equal at all load buses and is increased linearly until the critical point and beyond. Figure 6 presents the results. In the absence of any Volt/VAr control, the first overvoltages occur at a maximum infeed of 407 kW. Using local droop control (whether standard or ML-tuned), the critical amount is about 490 kW. Both the ideal ORPF approach and OFO are able to increase the infeed limit to 535 kW. Hence, OFO achieves the optimal virtual reinforcement of the grid. Compared to local droop control, OFO increases the maximum grid capacity by 45 kW or more than 9%. This emphasizes the potential economic value of establishing communication to achieve better Volt/VAr control in distribution grids.

Fig. 4: Voltages and reactive power injections throughout a sunny summer day for the 2035 scenario for each of the considered controllers.

## V Experimental Results on Virtual Grid Reinforcement The virtual reinforcement capabilities of droop control and OFO were also experimentally tested on a distribution grid in Roskilde, Denmark. A detailed description of the experimental setup and the OFO implementation can be found in [7]. In short, an active power source was connected at the end of a long feeder, see Figure 7. Its active power injections lead to a voltage increase that limits the active power that can be injected. This power source and two reactive power sources along the feeder perform Volt/VAr control to regulate the voltage. Figure 8 shows the voltage at the end of the feeder for different active power injections. The lines correspond to the different control methods used for the Volt/VAr control. The figure shows that without any Volt/VAr control, the voltage limit of 1.05 p.u. is reached at 8.77 kW. Using droop control, the limit is reached at 10.16 kW. When OFO is used for control, the grid can virtually be reinforced to 11.23 kW, which is another 10.5% on top of the capacity achieved using droop control. Finally, we note that the pink line, corresponding to the ideal ORPF approach, violates the voltage constraint. This is due to a model mismatch that occurred even though the cable types and parameters were known and all generation and consumption were measured.

Fig. 5: Visualization of the yearly reactive power usage, voltage distributions, and constraint violations for droop, ML-tuned droop, OFO, and ORPF for all three scenarios.

This showcases that feedforward methods fail to enforce constraints in real systems even under excellent model knowledge. Robust optimization methods can be used to guarantee constraint satisfaction, but they would not utilize the resources to their full extent, as they must leave some slack for any potential model mismatch. In contrast, OFO guarantees constraint satisfaction even under model mismatch and virtually reinforces the grid to its full potential using only limited model information. ## VI Conclusion The results of the experiment and simulations show that droop control, as currently used in many grid codes, helps to control the voltage. This enables more power to be transmitted before voltage constraints are violated. However, reactive power is often used even if the voltages are within their limits, leading to unnecessary losses.
The simulations further reveal that the grid can be virtually reinforced by another 9% when communication is added. Furthermore, we showed that OFO can reach this maximum level of virtual reinforcement using only minimal model information, voltage magnitude measurements, and communication channels for these measurements. Also, OFO only uses reactive power when it is really needed, which helps to minimize losses. Finally, we demonstrated OFO's practical viability on a real distribution grid feeder. Overall, our analysis suggests that OFO can extend the capacity of a distribution grid by around 10% beyond the state of the art, which would enable distribution grid operators to mitigate or postpone the physical grid reinforcements that will be required in the future.
2301.07840
Multifrequency microwave imaging of weak transients from the quiet solar corona
Understanding the dynamics of the quiet solar corona is important for answering key questions including the coronal heating problem. Multiple studies have suggested small-scale magnetic reconnection events may play a crucial role. These reconnection events are expected to involve acceleration of electrons to suprathermal energies, which can then produce nonthermal observational signatures. However, due to the paucity of sensitive high-fidelity observations capable of probing these nonthermal signatures, most studies were unable to quantify their nonthermal nature. Here we use joint radio observations from the Very Large Array (VLA) and the Expanded Owens Valley Solar Array (EOVSA) to detect transient emissions from the quiet solar corona in the microwave (GHz) domain. While similar transients have been reported in the past, their nonthermal nature could not be adequately quantified due to the unavailability of broadband observations. Using a much larger bandwidth available now with the VLA and EOVSA, in this study, we are able to quantify the nonthermal energy associated with two of these transients. We find that the total nonthermal energy associated with some of these transients can be comparable to or even larger than the total thermal energy of a nanoflare, which underpins the importance of nonthermal energy in the total coronal energy budget.
Surajit Mondal, Bin Chen, Sijie Yu
2023-01-19T01:18:48Z
http://arxiv.org/abs/2301.07840v2
# Multifrequency microwave imaging of weak transients from the quiet solar corona ###### Abstract Understanding the dynamics of the quiet solar corona is important for answering key questions including the coronal heating problem. Multiple studies have suggested small-scale magnetic reconnection events may play a crucial role. These reconnection events are expected to involve acceleration of electrons to suprathermal energies, which can then produce nonthermal observational signatures. However, due to the paucity of sensitive high-fidelity observations capable of probing these nonthermal signatures, most studies were unable to quantify their nonthermal nature. Here we use joint radio observations from the Very Large Array (VLA) and the Expanded Owens Valley Solar Array (EOVSA) to detect transient emissions from the quiet solar corona in the microwave (GHz) domain. While similar transients have been reported in the past, their nonthermal nature could not be adequately quantified due to the unavailability of broadband observations. Using a much larger bandwidth available now with the VLA and EOVSA, in this study, we are able to quantify the nonthermal energy associated with two of these transients. We find that the total nonthermal energy associated with some of these transients can be comparable to or even larger than the total thermal energy of a nanoflare, which underpins the importance of nonthermal energy in the total coronal energy budget. Surajit Mondal, Bin Chen, Sijie Yu ## 1 Introduction The solar corona is generally classified into active regions and the quiet solar corona. Active regions are defined as those regions of the solar corona which harbor strong magnetic fields and are bright in X-rays and extreme ultraviolet bands. They are responsible for large explosive events like solar flares, coronal mass ejections, etc. On the other hand, the quiet solar corona has weak magnetic fields and does not show such explosive behavior. However, with the development of each new generation of instruments, it has become increasingly clear that the quiet Sun also shows a plethora of dynamics. The most recent example of such dynamics is the observation of "campfires" (Berghmans et al., 2021) by the Extreme Ultraviolet Imager (Rochus et al., 2020) onboard the Solar Orbiter (Müller et al., 2020). Quiet Sun dynamics have also been reported by other authors (e.g., Krucker et al., 1997; Kuhar et al., 2018; Innes, 2001; Harrison et al., 1999; Berghmans et al., 1998; Parnell and Jupp, 2000; Chitta et al., 2021; Mandal et al., 2021; Chen et al., 2019; Rouppe van der Voort et al., 2016; Joshi et al., 2020; Phillips et al., 2000). We refer readers to reviews by Shibasaki et al. (2011); Madjarska (2019); Nindos et al. (2022) for a more detailed discussion on these topics. Observations of such extensive dynamics in the quiet Sun are increasingly making the name "quiet" a misnomer. There are also theoretical reasons to expect a dynamic quiet solar corona. The most important of these reasons probably comes from the efforts made so far to understand the coronal heating problem (see Klimchuk, 2015, for a review). All of the notable theories which try to explain this process suggest some dynamics in the quiet solar corona, although the exact dynamics can vary from theory to theory.
For example, the nanoflare hypothesis (Parker, 1988) suggests that a large number of nanoflares are continuously happening throughout the quiet corona, which in turn are responsible for maintaining the million-degree coronal temperature. This scenario requires the presence of ubiquitous small-scale magnetic-reconnection-driven nanoflare events throughout the corona. During the process of magnetic reconnection, particles get accelerated, although the extent to which this energization happens depends intimately on the details of the reconnection process (see, e.g., Bakke et al., 2018; James and Subramanian, 2018; Glesener et al., 2020; Frogner et al., 2020). These accelerated particles can either escape into interplanetary space and contribute to the suprathermal particle population observed in the solar wind (Wang et al., 2016; Hou et al., 2021; Mitchell et al., 2020; Hill et al., 2020) or get thermalized in the solar atmosphere and contribute to the energy budget of the quiet solar corona (e.g. De Pontieu et al., 2011; Frogner et al., 2020, etc.). However, due to the rarity of studies regarding the energetics of these accelerated nonthermal electrons from small microflares and nanoflares, their exact contribution to coronal heating is, as of now, unknown. Thanks to the availability of new instruments with high angular resolution, the recent detection of small-scale reconnection-driven spicules (Samanta et al., 2019) and "campfires" (Berghmans et al., 2021), and the suggestions that they might be related to the hypothesized nanoflares, also make studies regarding the detection of nonthermal electrons particularly interesting and timely. While the presence of nonthermal electrons in the quiescent corona has been inferred using observations at (extreme) ultraviolet wavelengths (e.g., Testa et al., 2014), a more direct means of detecting these nonthermal electrons is through their emission at X-ray and radio wavelengths. Detection of X-ray transients from the quiet solar corona has already been reported (e.g. Kuhar et al., 2018; Vadawale et al., 2021; Paterson et al., 2022, etc.). Confirmed detection of nonthermal electrons has also been reported for weak microflares with the Reuven Ramaty High Energy Solar Spectroscopic Imager and the Nuclear Spectroscopic Telescope Array (NuSTAR) X-ray telescopes, thanks to their high sensitivity (Qiu et al., 2004; Kundu et al., 2006; Stoiser et al., 2007; Hannah et al., 2008; Glesener et al., 2020; Cooper et al., 2021). Radio observations have also presented strong evidence regarding the presence of nonthermal electrons in the quiet solar corona. Mondal et al. (2020); Sharma et al. (2022); Mondal et al. (2023) reported the detection of Weak Impulsive Narrowband Quiet Sun Emissions (WINQSEs), which the authors hypothesize are weaker cousins of the well-known type III radio bursts and originate due to coherent plasma emission from nonthermal electrons generated in nanoflares (Mondal, 2021). Recent studies at millimeter wavelengths have also revealed the presence of small-scale transients in the quiescent solar chromosphere (e.g. Nindos et al., 2020; Eklund et al., 2020; Nindos et al., 2021, etc.). While these chromospheric transients generally occur due to thermal free-free emission, they can be driven by the precipitation of nonthermal electrons produced in weak energy release events.
For producing observable nonthermal gyrosynchrotron emission at these frequencies, a sufficiently large electron population with energy in the MeV range is required, which is not expected in the case of small-scale flares (White and Kundu, 1992). Studies of these weak radio transients from the transition region and the low corona provide mixed results. While most studies suggest that these transients are free-free in nature (see Madjarska, 2019, for a review), some suggest that some fraction of them are nonthermal in origin (Krucker et al., 1997; Kundu et al., 2006), implicating the acceleration of nonthermal electrons in small-scale energy release events. However, these studies were done a few decades ago, when broadband radio imaging spectroscopy observations were not available. This posed challenges in identifying the emission mechanism of these sources. Additionally, due to the weak nature of these sources, image fidelity has always been an issue. Here we use simultaneous detection of such sources with two instruments or over wide bandwidths to increase the fidelity of the detections. In this paper, we use simultaneous broadband radio imaging spectroscopy data from the Very Large Array (VLA; Perley et al., 2011) and the Expanded Owens Valley Solar Array (EOVSA; Gary et al., 2018) to detect weak radio transient sources and also obtain their spectral and temporal behavior when possible. We supplement these data with those obtained from the Atmospheric Imaging Assembly (AIA; Lemen et al., 2012) onboard the Solar Dynamics Observatory (SDO; Pesnell et al., 2012) to search for counterparts of these sources in the Extreme Ultraviolet (EUV) and use the EUV properties of these sources to delve deeper into their nature than had been possible earlier. This paper is structured as follows. In Section 2, we briefly describe the observation. Section 3 describes the details of the data analysis procedure. In Section 4 we present the results obtained, which are then discussed in a broader context in Section 5. Finally, we conclude with a brief summary in Section 6. ## 2 Observations and Context The data presented here were acquired using the EOVSA and the VLA on February 1, 2020, between 19:00:00-23:00:00 UT. The Sun was extremely quiet on this day. No X-ray flare was reported by the Geostationary Operational Environmental Satellite (GOES). There was only one active region, designated as National Oceanic and Atmospheric Administration (NOAA) 12757, present on the west limb. In addition, a bipolar but spotless magnetic structure was located on the disk (at S10W10). Fig. 1 shows contours of the full-day integrated radio map from the EOVSA1 at 3 GHz over an image from the Solar X-ray Telescope (XRT; DeLuca et al., 2000) onboard Hinode with the Al-poly filter (left panel) and an SDO/AIA 171 Å image (right panel). A radio counterpart is detected from each region against the quiet Sun disk. Footnote 1: [http://ovsa.njit.edu/browser/?suntoday_date=2020-02-01](http://ovsa.njit.edu/browser/?suntoday_date=2020-02-01) The VLA observed the Sun during this time in the subarray mode in its C-configuration, with one subarray operating in the P band and another operating in the L band (0.994-2.006 GHz). Here we present data from the L band. The entire band is divided into 8 spectral windows, each with a bandwidth of 128 MHz. Each spectral window has 128 channels and each channel has a width of 1 MHz. The maximum baseline length is 3.2 km, corresponding to an angular resolution of 14''-28'' at 1-2 GHz.
We have also analyzed EOVSA data from this time period, focusing primarily on the band spanning 1.425-1.749 GHz, which is closest in frequency to the analyzed VLA L-band data. ## 3 Data Analysis ### Calibration, imaging and source detection The VLA observation on this day did not have a flux calibrator observation. Hence, approximate flux densities were estimated assuming that the flux of the phase calibrator, J1941-1524, is the same as that given in the VLA Calibrator manual2 at all frequencies. The flux densities at 20 cm and 6 cm were used to determine the spectral index, which was then used to obtain the frequency-dependent flux density at each spectral window. All analysis was done using the Common Astronomy Software Applications (CASA, McMullin et al., 2007). At the time of the observation, spectral windows 1 (centered at 1.185 GHz) and 4 (centered at 1.557 GHz) were heavily affected by radio frequency interference (RFI) and were excluded from further analysis. Details of the calibration and data analysis procedure are provided in the Appendix. Calibrated EOVSA data were obtained from the observatory for this day following the standard calibration procedure. Footnote 2: [http://www.vla.nrao.edu/astro/calib/manual/cosource.html](http://www.vla.nrao.edu/astro/calib/manual/cosource.html) Due to the large volume of data, automated techniques were used both for deconvolution and source identification purposes. For imaging the VLA data, we divided the entire duration into 15-minute chunks and imaged each chunk using the image deconvolution task tclean with the auto-masking mode. Unless specified otherwise, the time integration and frequency integration are 15 minutes and 128 MHz, respectively. Details of the imaging procedure are provided in the Appendix. However, we find this technique is not very useful for processing the EOVSA data for the weak radio transients due to the particularly high sidelobe levels (\(\sim 70\%\)). Hence the EOVSA data were imaged manually and sources were identified by visual inspection. ### Filtering scheme to remove probable spurious sources Careful consideration was given to ensuring that no spurious source is included in the imaging results used for further analysis. The automasking algorithm of tclean, which was used to identify real sources during the deconvolution procedure, was tuned such that it only identifies sources that are higher than 1.5 times the maximum sidelobe level of the brightest sources identified earlier. During the image deconvolution, frequent major cycles (Schwab, 1984)3 were performed to ensure that the artifacts due to inaccurate subtraction of the instrumental point spread function are minimal. However, in the spirit of additional caution, we considered the possibility that some sources might still be sidelobes of real sources. The possibility of spurious source detection is much higher in the EOVSA data. The sidelobe levels are very high (\(\sim 70\%\) at \(\sim\)1' away from the main beam at 1.5 GHz) and hence it often becomes impossible to determine the true location of a source when multiple closely spaced sources are present in the image.

Figure 1: The full day integrated EOVSA 3 GHz radio map overlaid on the Hinode/XRT soft X-ray map (left panel) and the SDO/AIA 171Å EUV map (right panel).
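As an aside on the flux-scale bootstrapping described at the start of this section, the snippet below extrapolates a calibrator's flux density to each spectral window from two reference measurements via a power-law spectral index; the reference values here are hypothetical, not the actual VLA Calibrator Manual entries for J1941-1524.

```python
import numpy as np

# Hypothetical reference flux densities (Jy) at 20 cm and 6 cm.
s20, s6 = 2.0, 1.2
nu20, nu6 = 1.5, 5.0  # approximate band-center frequencies in GHz

alpha = np.log(s6 / s20) / np.log(nu6 / nu20)  # spectral index

def flux_density(nu_ghz):
    """Power-law extrapolation S(nu) = S_20cm * (nu / nu_20cm)**alpha."""
    return s20 * (nu_ghz / nu20) ** alpha

for nu in [1.057, 1.313, 1.441, 1.685, 1.941]:  # L-band spw centers (GHz)
    print(f"{nu:.3f} GHz: {flux_density(nu):.2f} Jy")
```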
While care was taken to minimize these effects, we found that the best way to find true sources is to compare the images obtained using data having very different uv-distributions and PSFs. Both the uv-distribution and image artifacts change significantly between images with very different frequencies. Hence if a source is detected at different frequencies, it is considered to be a real source. If a source is detected at a single frequency in the VLA/EOVSA, we investigate if the source is also detected by the other instrument. For additional caution, we also compare the flux density of the source in both images if it is detected by both instruments. Due to the difference between the frequency and flux scale of the two instruments, we assume a 15% and 10% systematic flux uncertainty in the case of EOVSA and VLA, respectively. If under these assumptions the flux and location of the source match in the EOVSA and VLA, it is considered to be a real source. Additionally, the brightest source in each image is also believed to be real, as it is highly unlikely that other real sources in the image conspire such that their sidelobes merge together to produce the brightest source in the image. To summarize, a source is considered to be real if it satisfies at least one of the following criteria: 1. It is the brightest source in the deconvolved image. 2. The same source is detected at multiple frequencies over a wide bandwidth. 3. The same source is detected in both EOVSA and VLA deconvolved images and their flux densities are consistent within uncertainties. ## 4 Results Using the techniques described in the previous section, we have detected several sources with high confidence, some lying in coronal holes, some associated with coronal bright points, some located beyond the limb, and some lying above the network regions. Below we discuss examples of these different types of sources. We have not included sources associated with active regions, as these sources have been studied quite extensively and are not the focus of this work. ### Beyond the limb source While the integration time of EOVSA is 1 second, we have averaged the data over 15 minutes for this work. This was done in order to increase the image signal-to-noise ratio (SNR), and also for the ease of exploring the entire 4 hours of data with manual imaging. This source was first identified at 1.5 GHz using EOVSA data by averaging over 15 minutes between 21:15:00-21:30:00. It was also detected at a few other frequencies, although significant spatial averaging was needed for this. In the right panel of Fig. 2, radio contours of this source are overlaid on a nearly simultaneous AIA 171 Å image. The spectrum of the source is shown in the left panel. The spectrum has been determined by taking the peak value of this source after smoothing each image to a resolution of \(150^{{}^{\prime\prime}}\times 150^{{}^{\prime\prime}}\). At this resolution, the source becomes unresolved, and hence the peak value (in units of Jy/beam) is equal to the integrated flux density (in units of Jy). The error bars were obtained by adding the rms of the image and a 15% flux uncertainty in quadrature. We have done a differential emission measure analysis using publicly available code4 following Hannah & Kontar (2012, 2013) and use its output to calculate the expected free-free emission using the code developed in Fleishman et al. (2021). The expected flux density due to free-free emission is shown using a black dashed line. It has been calculated directly from the model solar emission, by summing the pixel values inside the outermost contour shown in the right panel of Fig. 2.
It is evident that the expected flux density is much lower than that observed in the EOVSA radio images. However, it should be noted that there can be small differences because the EUV limb and the radio limb do not lie at the same location. Additionally, the EUV images do not seem to have a counterpart to the observed radio source. We have also simulated an EOVSA observation using the free-free emission model and produced simulated images using the same procedure as that followed while producing the observed EOVSA map. No structure similar to that observed in the EOVSA map is detected in the simulated map, even though the rms of both the simulated and observed maps are comparable. Based on this we rule out free-free emission as the dominant emission mechanism behind the observed source. Footnote 4: [https://github.com/ianan/demreg/tree/master/python](https://github.com/ianan/demreg/tree/master/python) Next, we investigate if a gyrosynchrotron model can explain the observed spectrum. The gyrosynchrotron model was obtained using the code available in Kuznetsov & Fleishman (2021); Fleishman et al. (2021). The source is assumed to be homogeneous and isotropic. The nonthermal electron distribution is assumed to be a power law between \(E_{min}\) and \(E_{max}\) with power-law index \(\delta\). The magnetic field (B), \(\delta\), and the number density of nonthermal electrons were kept as free parameters during the fitting procedure. We also manually adjusted other parameters to obtain a good fit to the data. The densities of the thermal and nonthermal electrons are denoted as \(n_{\rm th}\) and \(n_{\rm nth}\), respectively. The angle between the LOS and the magnetic field is denoted as \(\theta\). The parameters used for modeling the spectrum are given in Table 1 in the top row, marked with the label S1. The fitted spectrum is shown in the left panel of Fig. 2 using a blue line. It appears that for this source \(n_{\rm nth}\) is greater than \(n_{\rm th}\), which, while rather uncommon, has been reported earlier in some flare events (Krucker et al., 2010; Fleishman et al., 2016). We have been unable to find a valid model in which \(n_{\rm nth}\) is smaller than the assumed thermal density, even when the magnetic field is allowed to be as high as 5 kG. While we have provided the formal uncertainties for the fitted parameters, due to the insufficient frequency sampling and bandwidth coverage, the fit is highly under-constrained. The fit parameters given in the table are certainly non-unique and should only be taken as representative values. Thus the uncertainties in those fit parameters are dominated by the systematics of the under-constrained fitting. Using these representative fit parameters, we calculate that the total nonthermal power of this source is \(2.6\times 10^{26}\,\rm ergs\,s^{-1}\). Assuming a constant power over the entire integration time, we estimated the total nonthermal energy associated with this source to be \(2.3\times 10^{29}\,\rm ergs\), which is comparable to that estimated in the case of microflares based on X-ray data (Hannah et al., 2008; Glesener et al., 2020). ### Source associated with coronal bright point This source was detected at 4 frequencies in the VLA data between 19:00:00-19:15:00. Radio contours of this source (S2 in Table 1) are overlaid on an SDO/AIA 193 Å image at 19:06:16 in the left panel of Fig. 3.
This source also had a signature at soft X-ray wavelengths, shown on the right, where the radio 1.441 GHz contour is overlaid on a Hinode/XRT Al-poly image. This source was only detected at 1.4 GHz in the later time period 19:15:00-19:30:00 and is not detected at any frequency at later times. In Fig. 4, we show the 193 Å and 1.4 GHz light curves. The 1.4 GHz light curve was obtained by making images with a time integration of 5 minutes. The coronal bright point (CBP) shows a clearly decreasing flux, which implies that it was in the decay phase during this time. Interestingly, we find that during the time when radio data were available and the source was detected, the 1.4 GHz light curve of this source and the EUV light curve show very similar trends. In the upper panel of Fig. 5, the spectrum of this source is shown. The absolute value of the Stokes V/I spectrum of this source is also shown in the bottom panel. The error bars have been determined by adding a 10% systematic flux uncertainty in quadrature to the rms of the image. The red triangles denote the upper limits obtained using 3 times the error in the estimated V/I and are shown when Stokes V is not detected with at least an SNR of 3. The blue curve shows a model gyrosynchrotron spectrum which can account for the detected flux and the degree of circular polarization at different frequencies. It is evident that the model predicts a higher degree of circular polarization at 1.68 GHz than that allowed by the upper limit at that frequency. Such behavior is possible if there are inhomogeneities in the source, leading to a decrease in the observed circular polarization. However, exploring the details of this is beyond the scope of this work. The parameters used for modeling the spectrum are given in Table 1, in the row marked with the label S2. The nonthermal power estimated using the model parameters is \(7\times 10^{22}\,\rm erg\,s^{-1}\). Again assuming a constant power over the entire integration time, we estimate the nonthermal energy associated with this source as \(6\times 10^{25}\,\rm ergs\). The average thermal energy estimated from DEM analysis for this source is \(5\times 10^{25}\,\rm ergs\), comparable to the estimated nonthermal energy of the microwave source. Past studies of CBPs have assumed that the radio emission associated with them is generated due to free-free emission (Habbal et al., 1986). A high degree of circular polarization, if detected, was explained by propagation effects through a magnetized plasma. We investigate this possibility using simulated free-free emission maps, generated by the procedure described in Section 4.1. We assume that the medium has a uniform magnetic field of 100 G. The black dashed line shows the expected degree of circular polarization from this simulation. It is evident that the simulated values are much lower than those observed. Hence we conclude that free-free emission cannot be the emission mechanism behind the observed source. ### Sources associated with coronal holes While several sources were detected in coronal holes, three sources were chosen for further analysis because these are the brightest sources in the respective images and hence were unlikely to be significantly affected by the sidelobes of other sources. The radio contours of these three sources have been overlaid on SDO/AIA images at similar times, and are shown in the first row of Fig. 6. The sources in the right, middle, and left panels are detected at 1.313, 1.941, and 1.941 GHz, respectively.
In the second and third rows, the light curves of these radio sources and those of the EUV sources (centers of the cyan dashed circles drawn in the upper panels) are shown. The EUV sources have been identified through visual inspection such that they are located close to the radio source and their light curves show qualitative similarity with those of the corresponding radio sources. However, it is quite possible that multiple EUV sources lying within the instrumental resolution element of the radio instruments contribute to the observed radio emission. Hence, while the identification of an EUV source bolsters the fidelity of the detected sources, their exact location is only accurate up to the instrumental resolution of the radio data. In the fourth row, the GOES light curve is shown. The black dotted line marks the time when the radio flux density is maximum. It should be noted that while this line has a spread of 5 minutes, which is the integration time of the radio images, the spread has not been shown in these plots. Interestingly, for each source we find an X-ray peak at the same time as the radio peak. However, it should also be noted that at these low flux levels the GOES light curve has a lot of contamination due to electrons, and in the absence of independent confirmation, the mere presence of these X-ray peaks does not confirm an X-ray signature.

\begin{table}
\begin{tabular}{|l|c|c|c|c|c|c|c|c|c|c|}
\hline
 & Magnetic field (G) & \(\delta\) & \(E_{max}\) (MeV) & \(\theta\) (deg) & \(\log_{10}(n_{\rm th}\,({\rm cm^{-3}}))\) & Temperature (MK) & LOS & Area (arcsec\({}^{2}\)) & \(\log_{10}(n_{\rm nth}\,({\rm cm^{-3}}))\) & \(E_{min}\) (keV) \\
\hline
S1 & 47\(\pm\)7 & 6.4\(\pm\)0.3 & 10 & 50 & 9.5 & 3 & 6.41 & 326 & 9.8\(\pm\)0.2 & 30 \\
\hline
S2 & 129\(\pm\)12 & 5.7\(\pm\)0.2 & 10 & 70 & 9.4 & 3 & 5 & 100 & 9\(\pm\)0.7 & 1 \\
\hline
\end{tabular}
\end{table}
Table 1: Parameters used to obtain the model gyrosynchrotron spectra shown in Figs. 2 and 5. Note that due to the under-constrained nature of the spectral fitting, they should only be regarded as representative values.

Figure 3: Radio contours of the source (S2 in Table 1) described in Sec. 4.2 are overlaid on an AIA 193Å image (left panel) and an XRT Al-poly image (right panel). The blue, white, magenta and cyan contours correspond to images made at 1.057, 1.313, 1.441 and 1.685 GHz respectively.

Figure 2: Left panel: The red points show the observed spectrum of the above-the-limb source (S1 in Table 1) between 21:15–21:30. The blue line shows the fitted gyrosynchrotron model. The black dashed line shows the expected spectrum if the emission is due to optically thin free-free radiation. Right panel: Contours of the radio source at 1.6 GHz overlaid on an SDO/AIA 171Å image.

Movies showing the variability in EUV at the location of these three sources are provided in the supplementary material, where we overlay contours of these three radio images over AIA 171Å base difference images. The cyan circles shown are the same ones shown in the upper panels of Fig. 6. The movies span the same time duration as the light curves shown in the second row of Fig. 6, with a cadence of 1 minute. Still frames of these movies are given in Figs. 7, 8 and 9. The name of each movie is the same as the figure number of its still frame. 
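For readers who wish to reproduce such frames, a base-difference image of the kind used in these movies can be generated with sunpy roughly as follows (a minimal sketch; the file names are placeholders and the actual movie-making code may differ):

```python
import sunpy.map

# Load the reference ("base") AIA 171 A image and a later frame
# (placeholder file names; one frame per minute of the movie).
base = sunpy.map.Map("aia_171_base.fits")
frame = sunpy.map.Map("aia_171_frame.fits")

# Base difference: subtract the base image from the frame, keeping the
# frame's WCS metadata. (Exposure-time normalization, if needed, would
# be applied to both images before subtracting.)
diff = sunpy.map.Map(frame.data.astype(float) - base.data.astype(float),
                     frame.meta)
diff.peek()  # radio contours and the cyan circles are then overplotted
```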
We find that the radio sources are located near coronal bright points in the coronal hole region. From the top panel, it is evident that the EUV source is co-located with the radio source and also shows variability at similar times. The variability is always observed in the 171Å band, sometimes in the 193Å and 211Å bands, but not in the other AIA wavebands. It should however be noted that, due to the huge difference between the instrumental resolution of AIA and the radio observations presented here, it is extremely hard to identify a unique EUV counterpart to the radio source, and it is possible that the observed radio emission has a contribution from other nearby sources as well. All of these radio sources are circularly polarized, with the one in the left column showing a degree of circular polarization of \(-56\pm 11\%\). The sources in the middle and right columns are only detected in the right circular polarization, and using 5\(\sigma\) as the upper limit of the flux in the left circular polarization we estimate that the Stokes V fraction is greater than 11% and 23% respectively. The high circular polarization of these sources can be explained either by optically thin free-free emission combined with propagation effects through the magnetized plasma (e.g. Habbal et al., 1986; Sastry, 2009), or by gyrosynchrotron emission. Extremely impulsive narrowband emissions with flux densities comparable to the sources studied here have been detected in only one circular polarization (Bastian, 1991) and were attributed to a plasma emission mechanism. While those emissions were solely associated with active regions, plasma emission from weak reconnection events happening in the quiet Sun has been reported at lower frequencies (Mondal et al., 2020, 2023; Sharma et al., 2022) and can happen at our observing frequencies as well. Hence the polarization data alone are insufficient to determine the emission mechanism of these sources.

Figure 4: Light curve at 193Å from a 24″\(\times\)14″ box centered on the black cross shown in Fig. 3, shown in blue. The 1.4 GHz light curve is shown in red.

Figure 5: Red and blue indicate the observed values and the modeled gyrosynchrotron spectrum, respectively, for the source described in Sec. 4.2. The red triangles denote the upper limits obtained as 3 times the rms in the Stokes V image. The black dashed line shows the expected value assuming free-free emission and a medium with a magnetic field of 100 G.

However, for the first source, shown in the left column, we can rule out free-free emission by combining the polarization and spectral information. Fig. 10 shows the observed flux density and the corresponding degree of circular polarization at 1.313 GHz, where we have a confident detection. The upper limits on the flux density of the source at neighboring frequencies are also computed, as 5 times the noise in the image, and are shown as red triangles. While the source spectrum (the observed flux at 1.313 GHz and the upper limits at other frequencies) may be explained by optically thick free-free emission, the high degree of circular polarization cannot be. On the other hand, while the polarization signal can be explained with optically thin free-free emission modified by propagation effects, the spectral structure cannot be explained, as the spectrum is not flat. However, a gyrosynchrotron model can explain both properties simultaneously. 
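The lower bounds on the circular polarization fraction quoted above for the two sources detected only in right circular polarization follow from the 5\(\sigma\) ceiling on the undetected left-circular flux. A minimal sketch of that arithmetic (the example numbers are illustrative, not the measured values):

```python
def min_circular_pol(flux_rcp, rms_lcp):
    """Lower bound on |V|/I when only RCP is detected.
    With V = R - L and I = R + L, setting L to its 5-sigma ceiling
    gives the smallest possible polarization fraction."""
    l_max = 5 * rms_lcp
    return (flux_rcp - l_max) / (flux_rcp + l_max)

# Illustration: an RCP flux of 8 mJy with a 1 mJy LCP image rms
# implies |V|/I > 0.23, i.e. a polarization fraction above 23%.
print(min_circular_pol(8.0, 1.0))
```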
A representative gyrosynchrotron spectrum which satisfies the spectral constraints and also shows a high degree of circular polarization is shown as the blue curve in Fig. 10. Note that the model spectrum shown here is not a best-fit spectrum, due to the lack of spectral constraints. It is provided with the sole intention of demonstrating that a nonthermal gyrosynchrotron spectrum can meet the observational constraints in a much better manner than a thermal free-free emission mechanism.

### Sources associated with network regions

Several other radio transients were detected in quiet Sun network regions where no apparent dynamic features were detected in either EUV or soft X-ray. Here we show two examples of these sources in Fig. 11. The source in the left panel is detected both in the EOVSA and VLA data, while the source in the right panel is detected in multiple VLA bands. Both of these are therefore high-fidelity sources and were chosen for further analysis using the EUV and X-ray data. In Fig. 11 the details of these two sources are shown in the same format as that in Fig. 6. In the top left panel, the VLA and EOVSA data are shown with blue and white contours respectively. In the top right panel, blue, white, and magenta contours correspond to images at 1.057, 1.313 and 1.441 GHz respectively. Dashed cyan circles have been drawn such that their centers show the location from where the EUV light curves shown in the bottom panel are extracted. Movies showing the EUV variability at the location of these sources are provided in the supplementary material. The format of these movies is the same as that in Figs. 7, 8 and 9. Still frames of the movies are provided in Figs. 12 and 13. For the source in the left panel, we see a clear peak in the X-ray light curve at the same time as the peak in the radio. For the source in the right panel, while we have marked the numerical peak of the radio light curve, the fluxes are comparable at the times when the source was detected. Additionally, there are multiple X-ray peaks during this time interval, making it hard to identify a probable X-ray counterpart to the radio source. However, as mentioned earlier, due to the high contamination from electrons, these signatures in the GOES light curve should not be treated as confirmatory.

Figure 6: Radio transients in the coronal hole region detected by the VLA. First row: Overlay of radio contours on SDO/AIA images at similar times. Second row: Light curves of the radio sources at 1.313 (left panel) and 1.987 GHz (middle and right panels). The red points show the peak flux at the location of the radio source. The blue triangles show the 3\(\sigma\) value, where \(\sigma\) is the estimated noise in the corresponding radio image. Third row: Light curves of the box centered on the red dashed point shown in the upper panel in the SDO/AIA 171 Å image. Fourth row: GOES light curves during the relevant times. The dotted black line marks the peak of the radio light curve. Each column corresponds to the same source.

Radio observations have long been used to measure the coronal magnetic field, but such measurements have largely been confined to active regions, flares, and coronal mass ejections (Alissandrakis & Gary, 2021). To the best of our knowledge, this work is the second work that constrains the coronal magnetic field in the quiet on-disk solar corona using radio observations (the first one was done by Habbal et al. (1986)). 
The availability of more sensitive and wideband radio instruments thus raises the hope that this technique of using quiet Sun radio transients can be used in a much more routine manner to constrain the magnetic field in the quiescent solar corona.

Figure 8: Base difference EUV image at 171Å corresponding to the source shown in the middle panel of Fig. 6. This figure is available as an animation in the supplementary material. The animation shows the variability seen in the difference image with time. The animation spans 19:35:00–20:10:00 UT and shows the AIA images at a cadence of 1 minute.

Figure 10: Flux density spectrum of the source in the left panel of Fig. 6. The red circular points show the observed values. The red triangles show the upper limits on the flux density, estimated as 5 times the noise in the image. The blue line shows a representative gyrosynchrotron model which satisfies the observational constraints.

Figure 7: Base difference EUV image at 171Å corresponding to the source shown in the left panel of Fig. 6. This figure is available as an animation in the supplementary material. The animation shows the variability seen in the difference image with time. The animation spans 19:54:00–20:27:00 UT and shows the AIA images at a cadence of 1 minute.

Figure 9: Base difference EUV image at 171Å corresponding to the source shown in the right panel of Fig. 6. This figure is available as an animation in the supplementary material. The animation shows the variability seen in the difference image with time. The animation spans 20:35:00–21:10:00 UT and shows the AIA images at a cadence of 1 minute.

Figure 11: The images shown are in the same format as that in Fig. 6, for the sources described in Section 4.4. The left panel corresponds to the source detected in both VLA and EOVSA images and the right panel corresponds to the source detected at multiple frequencies.

## 6 Conclusion

In this work, we have presented a study of weak radio transients from the quiet Sun in the microwave range. We take advantage of imaging spectroscopy observations made by the VLA and EOVSA, along with available EUV data, to unravel the nature of some of these transients. We show that at least 3 of the detected radio transients observed at 1-2 GHz are powered by nonthermal gyrosynchrotron emission. We have modeled the spectrum using a gyrosynchrotron model whenever possible and used the model parameters to estimate the nonthermal electron energy, which varies between \(10^{26}-10^{29}\) ergs. While this result is obtained from only a handful of events, our results strongly suggest the presence of a significant nonthermal electron population even during these quiet times. For the two sources where we were able to calculate both thermal and nonthermal energy, we find that they are comparable. This suggests that the nonthermal energy budget should also be taken into account when calculating the total coronal energy budget. We also present the first detection of radio transients associated with coronal holes and show that at least in one instance the source is due to gyrosynchrotron emission. We suggest that the source may be due to interchange reconnection in the vicinity of coronal holes, a scenario suggested to explain solar wind formation from coronal holes. In addition, this work also shows that these radio transients can be used to constrain the coronal magnetic field in the quiet Sun. 
Recent advances with the EOVSA have already demonstrated the technique of using microwave imaging spectroscopy to constrain the coronal magnetic field based on nonthermal gyrosynchrotron radiation theories (e.g. Chen et al., 2020; Kuroda et al., 2020; Yu et al., 2020; Wei et al., 2021; Chen et al., 2021; Fleishman et al., 2022; Zhang et al., 2022). However, due to the limited sensitivity, dynamic range, and image fidelity of current instruments, such studies have largely been limited to active regions and solar flares. We hope that with the availability of more sensitive and broadband instruments with a much lower sidelobe level and higher image fidelity, similar studies can be extended even to the quiet solar corona. While, due to the isolated nature of these radio transients, this technique cannot be used to map the coronal magnetic field of the full solar disk, these measurements will add new constraints which we hope will ultimately lead to a better understanding of the magnetic field structure of the solar corona.

Figure 12: Base difference EUV image at 171Å corresponding to the source shown in the left panel of Fig. 11. This figure is available as an animation in the supplementary material. The animation shows the variability seen in the difference image with time. The animation spans 20:00:00–20:45:00 UT and shows the AIA images at a cadence of 1 minute.

Figure 13: Base difference EUV image at 171Å corresponding to the source shown in the right panel of Fig. 11. This figure is available as an animation in the supplementary material. The animation shows the variability seen in the difference image with time. The animation spans 20:55:00–21:40:00 UT and shows the AIA images at a cadence of 1 minute.

This work makes use of public VLA data from the observing program VLA/19B-338. The NRAO is a facility of the National Science Foundation (NSF) operated under a cooperative agreement by Associated Universities, Inc. The authors acknowledge Dr. Tim Bastian for leading the VLA observing program and for helpful discussions. SM also acknowledges Dr. Stephen White for discussions regarding the intricacies of the GOES data. This work is supported by NSF grant AGS-1654382 and NASA grants 80NSSC20K0026, 80NSSC20K1283, and 80NSSC21K0623 to NJIT. EOVSA operations are supported by NSF grants AST-1910354 and AGS-2130832 to NJIT. This research used version 4.0.5 (Mumford et al., 2022) of the SunPy open source software package (The SunPy Community et al., 2020). This research used version 0.6.4 (Barnes et al., 2020) of the aiapy open source software package (Barnes et al., 2020). This research has made use of NASA's Astrophysics Data System Bibliographic Services.

## Appendix A VLA data analysis and imaging procedure

All analyses were done using the Common Astronomy Software Applications package (CASA, McMullin et al., 2007). A combination of manual flagging and an automated flagging routine, _tfcrop_, was used for the initial flagging. Calibration was non-trivial as the phase calibrator becomes resolved at baselines above \(3k\lambda\), a threshold below which many antennas had very few baselines. Hence, to determine robust and accurate antenna gains, a self-calibration technique was applied to the phase calibrator data. 
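Schematically, one round of such a self-calibration loop in CASA looks as follows (a sketch with placeholder file names and solution intervals, not the exact script used here):

```python
# One self-calibration iteration on the phase calibrator (schematic CASA calls).
tclean(vis='calibrator.ms', imagename='cal_selfcal_1',
       niter=500, savemodel='modelcolumn')           # image and store the model
gaincal(vis='calibrator.ms', caltable='cal_selfcal_1.gcal',
        solint='int', calmode='p')                   # phase-only gain solutions
applycal(vis='calibrator.ms', gaintable=['cal_selfcal_1.gcal'])
# Iterate, switching to amplitude+phase solutions (calmode='ap') once the
# phase solutions have converged.
```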
We initially calibrated the data using baselines smaller than \(\sim 5k\lambda\) at 1.05 GHz (which translates to a scale of \(\sim 40^{{}^{\prime\prime}}\)) and used progressively higher thresholds at higher frequencies, so as to ensure that sufficient baselines were present for all antennas within this threshold. Each spectral window was calibrated independently, assuming that the calibrator source flux is constant within each spectral window. After the initial calibration of the bandpass and the temporal variability of the gain using baselines below the chosen threshold, the obtained gains were refined using a self-calibration based approach. During self-calibration, all baselines were used. We also assumed that the bandpass obtained earlier using the uv-based threshold is correct and time-invariant. Flagging was done using the task _rflag_ when needed. After the self-calibration converged, we fitted a Gaussian function to the source using the task _imfit_ and obtained a scaling factor to match the observed flux to the true flux obtained from the VLA calibrator manual. The gains obtained from the phase calibrator and the complex gains due to the 20 dB attenuators were then applied to the solar data5. A round of flagging was done using the task _rflag_ to remove bad data in the solar scans.

Footnote 5: See Chen (2013) for a more detailed discussion on the use and calibration of the 20 dB attenuators.

Imaging was done using the task _tclean_. We have used the auto-masking mode so as to deconvolve only "real" sources. However, the point spread function, or synthesized beam, generally had a sidelobe level of about 40%, which is larger than that typically observed in full-day synthesis images. Hence different auto-masking parameters of the task _tclean_ were tuned so that the emission could be better captured. Specifically, sidelobethreshold, noisethreshold, minbeamfrac, and cyclefactor were set to 1.5, 3, 0.2, and 5, respectively6.

_Software:_ Numpy (Harris et al., 2020), Scipy (Virtanen et al., 2020), Python 3 (Van Rossum & Drake, 2009), Matplotlib (Hunter, 2007), Sunpy (The SunPy Community et al., 2020), Astropy (Astropy Collaboration et al., 2013, 2018, 2022), Aiapy (Barnes et al., 2020)

_Facilities:_ OVRO:SA, SDO, GOES
2304.04229
On the half-quantized Hall conductance of massive surface electrons in magnetic topological insulator films
In topological insulators, massive surface bands resulting from local symmetry breaking are believed to exhibit a half-quantized Hall conductance. However, such scenarios are obviously inconsistent with the Thouless-Kohmoto-Nightingale-Nijs theorem, which states that a single band in a lattice with a finite Brillouin zone can only have an integer-quantized Hall conductance. To explore this, we investigate the band structures of a lattice model describing the magnetic topological insulator film that supports the axion insulator, Chern insulator, and semi-magnetic topological insulator phases. We reveal that the gapped and gapless surface bands in the three phases are characterized by an integer-quantized Hall conductance and a half-quantized Hall conductance, respectively. This result is distinct from the previous consensus that the gapped surface band is responsible for the half-quantized Hall conductance and the gapless band should exhibit zero Hall response. We propose an effective model to describe the three phases and show that the low-energy dispersion of the surface bands inherits from the surface Dirac fermions. The gapped surface band manifests a nearly half-quantized Hall conductance at low energy near the center of Brillouin zone, but is compensated by another nearly half-quantized Hall conductance at high energy near the boundary of Brillouin zone because a single band can only have an integer-quantized Hall conductance. The gapless state hosts a zero Hall conductance at low energy but is compensated by another half-quantized Hall conductance at high energy, and thus the half-quantized Hall conductance can only originate from the gapless band. Moreover, we calculate the layer-resolved Hall conductance of the system. The conclusion suggests that the individual gapped surface band alone does not support the half-quantized surface Hall effect in a lattice model.
Rui Chen, Shun-Qing Shen
2023-04-09T12:48:44Z
http://arxiv.org/abs/2304.04229v1
On the half-quantized Hall conductance of massive surface electrons in magnetic topological insulator films

###### Abstract

In topological insulators, massive surface bands resulting from local symmetry breaking are believed to exhibit a half-quantized Hall conductance. However, such scenarios are obviously inconsistent with the Thouless-Kohmoto-Nightingale-Nijs theorem, which states that a single band in a lattice with a finite Brillouin zone can only have an integer-quantized Hall conductance. To explore this, we investigate the band structures of a lattice model describing the magnetic topological insulator film that supports the axion insulator, Chern insulator, and semi-magnetic topological insulator phases. We reveal that the gapped and gapless surface bands in the three phases are characterized by an integer-quantized Hall conductance and a half-quantized Hall conductance, respectively. This result is distinct from the previous consensus that the gapped surface band is responsible for the half-quantized Hall conductance and the gapless band should exhibit zero Hall response. We propose an effective model to describe the three phases and show that the low-energy dispersion of the surface bands inherits from the surface Dirac fermions. The gapped surface band manifests a nearly half-quantized Hall conductance at low energy near the center of the Brillouin zone, but is compensated by another nearly half-quantized Hall conductance at high energy near the boundary of the Brillouin zone, because a single band can only have an integer-quantized Hall conductance. The gapless state hosts a zero Hall conductance at low energy but is compensated by another half-quantized Hall conductance at high energy, and thus the half-quantized Hall conductance can only originate from the gapless band. Moreover, we calculate the layer-resolved Hall conductance of the system. The conclusion suggests that the individual gapped surface band alone does not support the half-quantized surface Hall effect in a lattice model.

## I Introduction

The half-quantized Hall effect occurs on the topological insulator surface once the surface Dirac fermion acquires a mass due to local symmetry breaking [1; 2; 3; 4; 5; 6; 7; 8], for three reasons: (i) The bulk value of the axion angle \(\theta=\pi\) in topological insulators allows either gapless or gapped surface states [1; 9; 10; 11; 12]. In the gapped case, there must be a half-quantized Hall effect; (ii) The half-quantized Hall effect can be captured by calculating the layer-resolved Hall conductance [9; 13; 14; 15; 1]; and (iii) The gapped surface state could be described by the massive Dirac equation, which hosts a half-quantized Hall conductance [16; 17; 18; 19; 20; 21]. Therefore, it is widely believed that the half-quantized Hall conductance originates from the gapped surface state in magnetic topological insulators [22; 23; 24; 25]. However, there are still concerns: the axion term (i) and the layer-resolved Hall conductance (ii) do not guarantee that the half-quantized Hall effect originates from the gapped surface state, and the half quantization in the massive Dirac equation (iii) contradicts the common belief that a single band on a two-dimensional finite Brillouin zone can only have an integer-quantized Hall conductance. Moreover, our recent works focus on "parity anomalous semimetals", in which massive and massless Dirac fermions coexist [26; 27; 28]. 
The half-quantized Hall conductance in the parity anomalous semimetal is attributed to the massless Dirac fermions, rather than, as in the previous consensus, to the massive Dirac fermions. Therefore, how the half quantization is manifested in magnetic topological insulator films is still an open question. The half-quantized Hall effect is proposed to be realized in magnetically doped topological insulators [29; 24; 22; 26; 40; 41] and intrinsic antiferromagnetic topological insulators [9; 18; 23; 25; 42–63]. According to the electronic band structures, the magnetic topological insulator can be divided into three different phases: the semi-magnetic topological insulator phase [22; 26; 40] consists of a gapped and a gapless Dirac cone on its top and bottom surfaces [Fig. 1(b)]; the axion [23; 24; 33; 60; 61] and Chern [16; 29; 30; 58; 59; 60] insulator phases host two gapped Dirac cones on the top and bottom surfaces [Figs. 1(a) and 1(c)]. Specifically, the gapped Dirac cones in the axion (Chern) insulator phase were believed to be characterized by a half-quantized Hall conductance with the opposite (same) sign on the opposite surface, and they combine to yield a zero (quantized) Hall conductance. However, such scenarios are obviously inconsistent with the Thouless-Kohmoto-Nightingale-Nijs theorem, which states that a single band in a lattice with a finite Brillouin zone can only have an integer-quantized Hall conductance. In the semi-magnetic topological insulator phase, the magnetic gap is characterized by a half-quantized Hall conductance contributed by the gapped surface. The gapless surface is believed to exhibit no effect on the total Hall conductance. In the past years, the three phases have received enormous attention. The axion insulator possesses a unique electromagnetic response from the massive Dirac fermions, which leads to the quantized topological magnetoelectric effect [13; 64; 65; 66; 67; 68]. The semi-magnetic topological insulator provides a condensed-matter realization of the parity anomaly and opens the door to the experimental realization of the single Dirac fermion [22; 26; 40]. The realization of the Chern insulator phase can lead to the development of novel spintronics devices [29; 30]. In this work, we systematically study the spectrum, probability distribution, and the corresponding Berry curvature distribution in the three phases of magnetic topological insulator films, namely the axion insulator, Chern insulator, and semi-magnetic topological insulator phases. We explore how the band structures manifest the half-quantized Hall conductance in a lattice model. In the axion and Chern insulator phases, each gapped surface state contributes a nearly half-quantized Hall conductance at low energy near the center of the Brillouin zone, but must be compensated by another nearly half-quantized Hall conductance at high energy near the boundary of the Brillouin zone, because a single band can only have an integer-quantized Hall conductance in a lattice model according to the Thouless-Kohmoto-Nightingale-Nijs theorem [69]. 
In the semi-magnetic topological insulator phase, the gapless state hosts a zero Hall conductance at low energy but must be compensated by another half-quantized Hall conductance at high energy, because the half-quantized Hall conductance can only originate from the gapless state. Moreover, we propose an effective four-band model to depict the three phases. We confirm that the low-energy surface states in these phases inherit from the Dirac fermions. The states at low and high energies are always coupled in energy scale and cannot be treated separately. Due to this coupling, the exact half quantization is only revealed in the gapless surface state in the semi-magnetic topological insulator phase, but is not found in the gapped surface states in the axion and Chern insulator phases from the band calculation. Moreover, we adopt the layer-resolved Hall conductance to study the three phases. The three phases are characterized by distinct patterns of the half-quantized surface Hall effect. The results indicate that the half-quantized surface Hall effect is contributed by all the occupied bands, rather than by the individual gapped surface bands.

## II Model

To carry out numerical studies on the magnetic topological insulator, we consider a tight-binding Hamiltonian on a cubic lattice for an isotropic three-dimensional topological insulator [70; 16; 20] \[\mathcal{H}=\sum_{i}c_{i}^{\dagger}\mathcal{M}_{0}c_{i}+\sum_{i,\alpha=x,y,z} \left(c_{i}^{\dagger}\mathcal{T}_{\alpha}c_{i+\alpha}+c_{i+\alpha}^{\dagger} \mathcal{T}_{\alpha}^{\dagger}c_{i}\right), \tag{1}\] where \(\mathcal{T}_{\alpha}=B\sigma_{z}\tau_{0}-i\frac{A}{2}\sigma_{x}\tau_{\alpha}\) and \(\mathcal{M}_{0}=(M-6B)\sigma_{z}\tau_{0}+\Delta\left(z\right)\sigma_{0}\tau_{z}\), with the lattice spacing taken to be unity. Near the \(\mathbf{k}=0\) point in momentum space (i.e., the low-energy regime), this model reduces to a Dirac-like model in the absence of the magnetic effect. \(\sigma\) and \(\tau\) are Pauli matrices. The magnetic effect is represented by a layer-dependent Zeeman splitting \(\Delta\left(z\right)\). We take \(\Delta\left(z\right)=\Delta_{b}\) for the bottom surface with \(z=1,2\), \(\Delta\left(z\right)=\Delta_{t}\) for the top surface with \(z=n_{z}-1,n_{z}\), and \(\Delta\left(z\right)=0\) elsewhere. The axion and Chern insulator phases are characterized by \(\Delta_{t}\Delta_{b}<0\) and \(\Delta_{t}\Delta_{b}>0\), respectively. The semi-magnetic topological insulator phase is characterized by either \(\Delta_{b}\neq 0\) and \(\Delta_{t}=0\) or \(\Delta_{b}=0\) and \(\Delta_{t}\neq 0\). In the subsequent calculations, we fix the parameters as \(A=0.5\), \(B=0.25\), and \(M=0.4\). Figure 1: (a) Schematic illustration of the axion insulator phase. The arrows indicate the propagating direction of the surface chiral currents. Blue and red correspond to the top and bottom surfaces, respectively. (d) Energy spectra of the axion insulator phase. Here the color scheme of the bands indicates the wave function distribution. (g) Numerically calculated Hall conductance \(\sigma_{1}^{c,v}\) (blue), \(\sigma_{2}^{c,v}\) (red), \(\sigma_{3}^{c,v}\) (green), and \(\sigma_{t}\) (orange) as functions of \(E_{F}\). Here, \(\sigma_{t}\) depicts the conductance contributed from all the bands, \(\sigma_{1}^{c,v}\) depict the conductance contributed from the lowest conduction and highest valence bands, \(\sigma_{2}^{c,v}\) depict the conductance contributed from the second lowest conduction and second highest valence bands, and so on. 
The dashed black line corresponds to \(\sigma=-e^{2}/2h\). In (d) and (g), the yellow and light blue regions correspond to the magnetic gap on the top and bottom surfaces, respectively. (b, e, h) and (c, f, i) are the same as (a, d, g), except that they depict the semi-magnetic topological insulator phase and the Chern insulator phase, respectively. We take \(\Delta_{t}=0.05\) for the axion insulator phase, \(\Delta_{t}=0\) for the semi-magnetic topological insulator phase, and \(\Delta_{t}=-0.05\) for the Chern insulator phase. The Zeeman splitting term for the bottom surface is \(\Delta_{b}=-0.2\) for all three phases. The film thickness is taken as \(n_{z}=10\).

When magnetization is introduced to a certain surface, it is believed that the gapless Dirac cone will open an energy gap characterized by a half-quantized Hall conductance, with its sign depending on the magnetization direction. For both the Chern and axion insulator phases, the Zeeman effect is introduced on the top and bottom surfaces. Therefore the surface states open energy gaps at the \(\Gamma\) point on both the top and bottom surfaces [Figs. 1(a) and 1(c)]. When the two surfaces have antiparallel magnetization alignment [Fig. 1(a)], the system is characterized by half-quantized Hall conductances with opposite signs on opposite surfaces, leading to the emergence of the axion insulator phase. Because the top and bottom surfaces have parallel magnetization alignment in the Chern insulator phase, the system is characterized by half-quantized Hall conductances with the same sign [Fig. 1(c)], and they combine to yield a quantized Hall conductance. The semi-magnetic topological insulator phase is characterized by a half-quantized Hall conductance, contributed by the gapped Dirac cone on the magnetic bottom surface [Fig. 1(b)]. However, the above scenarios seem to contradict the common belief that a single band in a lattice model can only host an integer-quantized Hall conductance in units of \(e^{2}/h\). Next, we study the spectrum and the corresponding probability distribution of the three phases in the lattice model in Eq. (1), and explore how the band structures reconcile the contradiction and how the half quantization is manifested.

## III Spectrum and Hall conductance

The second and third rows in Fig. 1 show the numerically calculated energy spectra and the corresponding Hall conductance for the three phases, respectively. We take \(\Delta_{b}=-0.2\) and \(\Delta_{t}=\pm 0.05\) for the axion and Chern insulator phases, respectively, and \(\Delta_{b}=-0.2\) and \(\Delta_{t}=0\) for the semi-magnetic topological insulator phase. The results are as expected. In the axion insulator phase, the spectrum opens a gap [Fig. 1(d)] characterized by a zero Hall conductance [the orange line in Fig. 1(g)]. The spectrum of the Chern insulator phase [Fig. 1(f)] is the same as that of the axion insulator phase, except that the gap is characterized by a quantized Hall conductance [the orange line in Fig. 1(i)]. The spectrum of the semi-magnetic topological insulator phase is gapless [Fig. 1(e)], and the system is characterized by a half-quantized Hall conductance when the Fermi energy is located in the magnetic gap of the bottom surface [the orange line in Fig. 1(h)]. 
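For concreteness, the quasi-2D spectra in Fig. 1 follow from diagonalizing the film Hamiltonian of Eq. (1) at each in-plane momentum. The following is a minimal numpy sketch of that construction (our own basis ordering, phase conventions, and variable names; the authors' implementation may differ):

```python
import numpy as np

s0, sx = np.eye(2), np.array([[0, 1], [1, 0]])
sy, sz = np.array([[0, -1j], [1j, 0]]), np.diag([1.0, -1.0])
A, B, M, nz = 0.5, 0.25, 0.4, 10   # parameters quoted in the text

def t_hop(tau):
    """Hopping matrix T_alpha = B s_z x t_0 - (iA/2) s_x x t_alpha of Eq. (1)."""
    return B * np.kron(sz, s0) - 0.5j * A * np.kron(sx, tau)

def slab_h(kx, ky, dt=0.05, db=-0.2):
    """4*nz x 4*nz Bloch Hamiltonian of the film at in-plane momentum (kx, ky).
    dt, db: Zeeman splittings on the two top / two bottom layers (0-indexed z)."""
    def delta(z):
        return db if z < 2 else (dt if z >= nz - 2 else 0.0)
    inplane = sum(t_hop(t) * np.exp(1j * k) + t_hop(t).conj().T * np.exp(-1j * k)
                  for t, k in [(sx, kx), (sy, ky)])
    H = np.zeros((4 * nz, 4 * nz), dtype=complex)
    for z in range(nz):
        onsite = (M - 6 * B) * np.kron(sz, s0) + delta(z) * np.kron(s0, sz)
        H[4*z:4*z+4, 4*z:4*z+4] = onsite + inplane
        if z < nz - 1:   # hopping between adjacent layers along z
            H[4*z:4*z+4, 4*(z+1):4*(z+1)+4] = t_hop(sz)
            H[4*(z+1):4*(z+1)+4, 4*z:4*z+4] = t_hop(sz).conj().T
    return H

bands = np.linalg.eigvalsh(slab_h(0.0, 0.0))   # spectrum at the Gamma point
```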
Here, the conductance for the \(m\)-th band is calculated by [19] \[\sigma_{m}\left(E_{F}\right) =-i\hbar e^{2}\sum_{n\neq m}\int\frac{d^{2}k}{\left(2\pi\right)^{ 2}}\frac{\left\langle m\right|v_{x}\left|n\right\rangle\left\langle n\right|v _{y}\left|m\right\rangle}{E_{m}-E_{n}}\times\frac{f\left(E_{F}-E_{n}\right)-f\left(E_{F}-E_{m}\right)} {E_{m}-E_{n}+i\eta}, \tag{2}\] where \(\left|m\right\rangle\) is the eigenstate of energy \(E_{m}\) for the quasi-2D system confined along the \(z\) direction. \(v_{x}\) and \(v_{y}\) are the velocity operators, \(f\left(x\right)\) is the Fermi distribution, and \(\eta\) is an infinitesimal quantity. The total conductance contributed from all the bands has the form \(\sigma_{t}\left(E_{F}\right)=\sum_{m}\sigma_{m}\left(E_{F}\right)\). When the Fermi energy is located in the magnetic surface gaps [the yellow region in Figs. 1(d-f)], the topological nature of the three phases is dominated by the lowest conduction and highest valence bands [see the blue and orange lines in Figs. 1(g-i)]. The higher conduction and lower valence bands make no contribution to the total Hall conductance [the red and green lines in Figs. 1(g-i)]. Figures 2(a-c) show the probability distributions \(|\psi_{i=1,2,3}^{v}\left(k_{x},z\right)|^{2}\) of the valence bands as functions of \(z\) for different \(k_{x}\) in the axion insulator phase, where \(\psi_{i}^{v}\) is the wave function of the \(i\)-th highest valence band. It is noted that the probability distributions of the semi-magnetic topological insulator phase and the Chern insulator phase are similar to those of the axion insulator phase [as shown in Figs. 1(d-f)]. The probability distribution of the highest valence band \(|\psi_{1}^{v}\left(k_{x},z\right)|^{2}\) is mainly distributed at the top surface for the low-energy states near \(k_{x}=0\) [the blue lines in Fig. 2(a)], and is mainly distributed at the bottom surface for the high-energy states near \(k_{x}=\pi\) [the red lines in Fig. 2(a)]. The second and third highest valence bands make no contribution to the total Hall conductance when the Fermi energy resides in the magnetic gap. Their probability distributions \(|\psi_{2,3}^{\mathrm{v}}\left(k_{x},z\right)|^{2}\) are mainly distributed at the bottom surface for the low-energy states near \(k_{x}=0\) [the blue lines in Figs. 2(b) and 2(c)], and are mainly distributed in the bulk for the high-energy states near \(k_{x}=\pi\) [the red lines in Figs. 2(b) and 2(c)].

Figure 2: (a) Probability distribution \(|\psi_{1}^{v}\left(k_{x},z\right)|^{2}\) as a function of layer index \(z\) for the highest valence band of the axion insulator phase for different \(k_{x}\). (b) and (c) are the same as (a), except that they depict the second and third highest valence bands. The probability distributions of the semi-magnetic topological insulator phase and the Chern insulator phase are similar to those of the axion insulator phase. (d) The logarithm of the energy gap between the first and second valence bands as a function of \(\Delta_{t}\). Each point is obtained by searching for the minimum energy difference between the first and second valence bands by scanning the two-dimensional Brillouin zone. The number of \(k\) points used in the calculation is \(N_{k}\times N_{k}\). The Zeeman splitting for the bottom surface is taken as \(\Delta_{b}=-0.2\). The film thickness is \(n_{z}=10\). In (a-c), we take \(\Delta_{t}=0.05\). 
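The band-resolved conductances plotted in Figs. 1(g-i) can be obtained, at zero temperature with the Fermi energy in a gap, from the Brillouin-zone integral of each band's Berry curvature, to which Eq. (2) reduces up to sign conventions. A minimal sketch, reusing slab_h from the previous code block (coarse grid, finite-difference velocities; not the authors' implementation):

```python
def band_berry_curvature(m, kx, ky, dk=1e-4):
    """Berry curvature of band m at (kx, ky), built from finite-difference
    velocity operators of slab_h (the constructor defined above)."""
    E, U = np.linalg.eigh(slab_h(kx, ky))
    vx = U.conj().T @ ((slab_h(kx + dk, ky) - slab_h(kx - dk, ky)) / (2 * dk)) @ U
    vy = U.conj().T @ ((slab_h(kx, ky + dk) - slab_h(kx, ky - dk)) / (2 * dk)) @ U
    dE2 = (E[m] - E) ** 2
    dE2[m] = np.inf                      # exclude the n = m term
    return -2 * np.sum(np.imag(vx[m, :] * vy[:, m]) / dE2)

def sigma_band(m, nk=60):
    """Hall conductance of band m in units of e^2/h: the Brillouin-zone
    integral of its Berry curvature divided by 2*pi (coarse demo grid)."""
    ks = np.linspace(-np.pi, np.pi, nk, endpoint=False)
    total = sum(band_berry_curvature(m, kx, ky) for kx in ks for ky in ks)
    return total * (2 * np.pi / nk) ** 2 / (2 * np.pi)
```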
Moreover, to confirm that the first and second valence bands are well separated in energy scale, we plot \(\log_{10}\left(E_{g}\right)\) as a function of \(\Delta_{t}\) in Fig. 2(d), where \(E_{g}\) is the minimum energy difference between the first and second valence bands obtained by scanning the whole two-dimensional Brillouin zone. \(E_{g}\) converges with the increasing density of \(k\) points. The results indicate that the first and second valence bands are well separated in energy scale, except at the two special points \(\Delta_{t}=\pm\Delta_{b}\). The above scenario indicates that the highest valence band, which dominates the topology of the three phases when the Fermi energy resides in the magnetic gap, is localized at one surface at low energy near the Dirac point and at the other surface at high energy near the Brillouin zone boundary. In contrast to the previous consensus [13; 64], the topological nature of the three phases is dominated by the two surface states near the \(\Gamma\) point.

## IV Effective model

The lowest four bands of the magnetic topological insulator thin film can be effectively described by a \(4\times 4\) Hamiltonian in the two-dimensional Brillouin zone [20; 26; 70; 27]: \[H=\left(\begin{array}{cc}h\left(\mathbf{k}\right)+V_{t}\tau_{z}&j\left[m_{0 }\left(\mathbf{k}\right)/T\right]m_{0}\left(\mathbf{k}\right)\\ j\left[m_{0}\left(\mathbf{k}\right)/T\right]m_{0}\left(\mathbf{k}\right)&-h \left(\mathbf{k}\right)+V_{b}\tau_{z}\end{array}\right), \tag{3}\] where \(m_{0}\left(\mathbf{k}\right)=M-4B\left(\sin^{2}k_{x}/2+\sin^{2}k_{y}/2\right)\), \(V_{t/b}\left(k\right)=\Delta_{t/b}j\left[-m_{0}\left(\mathbf{k}\right)/T\right] +\Delta_{t/b}^{\prime}j\left[m_{0}\left(\mathbf{k}\right)/T\right]\), and \(h\left(\mathbf{k}\right)=A\left(\tau_{y}\sin k_{x}-\tau_{x}\sin k_{y}\right)\) describes the massless Dirac fermion. The Fermi-Dirac-distribution-like factor \(j(x)=\left[\exp\left(x\right)+1\right]^{-1}\) describes the process by which the surface states merge into the bulk states. \(\Delta_{t/b}^{\prime}/\Delta_{t/b}\simeq n_{z}^{\mathrm{Mag}}/n_{z}\ll 1\), where \(n_{z}^{\mathrm{Mag}}\) is the thickness of the magnetically doped film. For thick films with a large \(n_{z}\), we have \(\Delta_{t/b}^{\prime}\to 0\). The coefficient \(T=0.05\) is a model-specific parameter. The first row in Fig. 3 compares the spectra obtained from the numerical tight-binding calculations and the analytical effective model in Eq. (3). The effective Hamiltonian captures the band structures of the three phases in the magnetic topological insulator. Moreover, in the following, we will show that this effective Hamiltonian also captures the band topologies of the three topological phases. The Brillouin zone is divided into two regimes by the sign of \(m_{0}(\mathbf{k})\). In the low-energy regime near the center of the Brillouin zone, i.e., \(k^{2}<k_{c}^{2}\), we have \(j\left[m_{0}\left(\mathbf{k}\right)/T\right]\to 0\) and \(j\left[-m_{0}\left(\mathbf{k}\right)/T\right]\to 1\), where the value of \(k_{c}\) is given by \(m_{0}(\mathbf{k}_{c})=0\). The effective Hamiltonian in Eq. (3) reduces to \[H=\left(\begin{array}{cc}h^{\prime}\left(\mathbf{k}\right)+\Delta_{t}\tau_{z }&0\\ 0&-h^{\prime}\left(\mathbf{k}\right)+\Delta_{b}\tau_{z}\end{array}\right), \tag{4}\] with \(h^{\prime}\left(\mathbf{k}\right)=A\left(\tau_{y}k_{x}-\tau_{x}k_{y}\right)\). The low-energy effective model describes two decoupled Dirac fermions, with the Dirac masses determined by \(V_{t}\) and \(V_{b}\), respectively. 
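For orientation, the half quantization attributed to each decoupled block of Eq. (4) can be made explicit. For a single massive Dirac cone \(h^{\prime}\left(\mathbf{k}\right)+\Delta\tau_{z}\), the Berry curvature of the valence band and its integral over the infinite \(k\)-plane are (up to an overall sign fixed by conventions)

\[\Omega\left(\mathbf{k}\right)=-\frac{A^{2}\Delta}{2\left(A^{2}k^{2}+\Delta^{2}\right)^{3/2}},\qquad\sigma=\frac{e^{2}}{h}\frac{1}{2\pi}\int d^{2}k\,\Omega\left(\mathbf{k}\right)=-\frac{e^{2}}{2h}\,\mathrm{sgn}\left(\Delta\right).\]

The half-integer arises only because the integral extends over an unbounded momentum plane; on a lattice, where the Brillouin zone is finite, the same band must pick up a compensating contribution elsewhere, which is precisely the point examined below.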
The low-energy effective Hamiltonian reproduces the key feature of the band structures, namely that the spectra are dominated by the two decoupled Dirac fermions near the \(\Gamma\) point [the second row in Fig. 3]. In the high-energy regime near the boundary of the Brillouin zone, i.e., \(k^{2}>k_{c}^{2}\), we have \(j\left[m_{0}\left(\mathbf{k}\right)/T\right]\to 1\) and \(j\left[-m_{0}\left(\mathbf{k}\right)/T\right]\to 0\). The effective Hamiltonian in Eq. (3) reduces to \[H=\left(\begin{array}{cc}h\left(\mathbf{k}\right)+\Delta_{t}^{\prime}\tau_{z }&m_{0}\left(\mathbf{k}\right)\\ m_{0}\left(\mathbf{k}\right)&-h\left(\mathbf{k}\right)+\Delta_{b}^{\prime}\tau_{ z}\end{array}\right). \tag{5}\] The top and bottom surface states are coupled via the term \(m_{0}(\mathbf{k})\) in the high-energy regime.

Figure 3: (a, d) Energy spectrum of the axion insulator phase with \(\Delta_{t}=0.05\) and \(\Delta_{b}=-0.2\). The red dashed lines are obtained by numerical calculations (where only the two lowest conduction and two highest valence bands are illustrated). The solid lines in (a) and (d) are obtained from the effective Hamiltonians in Eqs. (3) and (4), respectively. (b, e) and (c, f) are the same as (a, d), except that they depict (b, e) the semi-magnetic topological insulator phase with \(\Delta_{t}=0\) and (c, f) the Chern insulator phase with \(\Delta_{t}=-0.05\), respectively.

## V Berry curvature distribution

Now, we investigate the Berry curvature distributions and search for the underlying half quantization in the three systems. The first row in Fig. 4 shows the Berry curvature distribution \(\Omega(k_{x},k_{y})\) as a function of \(k_{x}\) and \(k_{y}\). The Berry curvature distributions exhibit distinct behaviors for \(k_{r}<k_{c}\) (the low-energy regime) and \(k_{r}>k_{c}\) (the high-energy regime), where \(k_{r}\) is the radius of a circle centered at the origin of the Brillouin zone and \(k_{c}\) corresponds to the critical radius [see the red circle in Fig. 4(b)]. The second column shows the conductance of the \(m\)-th band \(\sigma_{m}\left(k_{r}\right)\) as a function of \(k_{r}\) at \(E_{F}=0\), where \[\sigma_{m}\left(k_{r}\right)=-i\hbar e^{2}\sum_{n\neq m}\int_{k^{2}<k_{r}^{2}}\frac{d^{2}k}{\left(2\pi\right)^{2}}\frac{\left\langle m\right|v_{x}\left|n\right\rangle\left\langle n\right|v_{y}\left|m\right\rangle}{E_{m}-E_{n}}\times\frac{f\left(-E_{n}\right)-f\left(-E_{m}\right)}{E_{m}-E_{n}+i\eta}. \tag{6}\] Let us focus on the low-energy regime (i.e., \(k_{r}<k_{c}\)). The highest valence band is characterized by \(\sigma_{1}^{v}\left(k_{c}\right)=0.47e^{2}/h\) for the axion insulator phase [Fig. 4(f)], \(\sigma_{1}^{v}\left(k_{c}\right)=0\) for the semi-magnetic topological insulator [Fig. 4(g)], and \(\sigma_{1}^{v}\left(k_{c}\right)=-0.47e^{2}/h\) for the Chern insulator phase [Fig. 4(h)]. The second and third highest valence bands are characterized by \(\sigma_{2}^{v}\left(k_{c}\right)=-0.38e^{2}/h\) [Fig. 4(i)] and \(\sigma_{3}^{v}\left(k_{c}\right)=0.35e^{2}/h\) [Fig. 4(j)] for all three phases. In the low-energy regime, the Hall conductance calculated from the analytical low-energy effective model in Eq. (4) [the blue circle points in Figs. 4(f)-4(i)] reproduces the numerically calculated Hall conductance from the tight-binding model [the solid red lines in Figs. 4(f)-4(i)]. This further confirms that, in the low-energy regime, the topologies of the systems are dominated by the Dirac fermions on the top and bottom surfaces. 
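Numerically, Eq. (6) amounts to restricting the Berry-curvature sum of the earlier sketch to momenta inside the disk \(k^{2}<k_{r}^{2}\) around the \(\Gamma\) point; a minimal sketch under that assumption:

```python
def sigma_band_partial(m, kr, nk=60):
    """sigma_m(k_r) of Eq. (6): the sum of sigma_band restricted to the
    disk k^2 < k_r^2 around Gamma (units of e^2/h; coarse demo grid)."""
    ks = np.linspace(-np.pi, np.pi, nk, endpoint=False)
    total = sum(band_berry_curvature(m, kx, ky)
                for kx in ks for ky in ks if kx**2 + ky**2 < kr**2)
    return total * (2 * np.pi / nk) ** 2 / (2 * np.pi)

# e.g. sigma_band_partial(m, kr=0.5) accumulates the low-energy
# contribution near Gamma; increasing kr sweeps out the curve sigma_m(k_r).
```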
When the whole Brillouin zone is considered, each band in the axion and Chern insulator phases can only host a quantized Hall conductance, i.e., \(\sigma_{1}^{v}\left(\pi\right)=0\) for the axion insulator phase [Fig. 4(f)] and \(\sigma_{1}^{v}\left(\pi\right)=e^{2}/h\) for the Chern insulator phase [Fig. 4(h)]. The gapless band in the semi-magnetic topological insulator is characterized by an exactly half-quantized Hall conductance, with \(\sigma_{1}^{v}\left(\pi\right)=e^{2}/2h\) [Fig. 4(g)]. The second and third highest valence bands are characterized by \(\sigma_{2,3}^{v}\left(\pi\right)=0\) for all three phases [Figs. 4(i) and 4(j)].

## VI Half-quantized Hall conductance

In previous studies based on the low-energy effective model, the surface states were believed to host a half-quantized Hall conductance when local time-reversal symmetry is broken. However, this is not realizable in a realistic system, where each band can only host an integer-quantized Hall conductance. The above scenarios explain how this contradiction is reconciled in a lattice model.

Figure 4: (a) \(\Omega_{1}^{v}(k_{x},k_{y})\) as a function of \(k_{x}\) and \(k_{y}\), where \(\Omega_{1}^{v}(k_{x},k_{y})\) corresponds to the Berry curvature of the highest valence band in the axion insulator phase. (f) \(\sigma\left(k_{r}\right)\) as a function of \(k_{r}\). (b, g) and (c, h) are the same as (a, f), except that they depict the semi-magnetic topological insulator and the Chern insulator phase, respectively. The solid red line is obtained by numerical calculations. The blue circle and green triangle points are obtained from the effective Hamiltonians in Eqs. (3) and (4), respectively. The black dashed lines correspond to the half-quantized values \(\sigma_{1}^{v}\left(k_{r}\right)=\pm e^{2}/2h\). The red dashed line corresponds to \(k_{r}=k_{c}\). (d, i) and (e, j) are the same as (a, f), except that they depict the second and third highest valence bands for the axion insulator phase. The results for the semi-magnetic topological insulator phase and the Chern insulator phase are similar to those for the axion insulator phase. The parameters are \(\Delta_{b}=-0.2\) and \(\Delta_{t}=0.05\) for the axion insulator phase, \(\Delta_{t}=0\) for the semi-magnetic topological insulator phase, and \(\Delta_{t}=-0.05\) for the Chern insulator phase. The film thickness is taken as \(n_{z}=10\).

In the axion and Chern insulator phases, the highest valence band is dominated by a nearly half-quantized Hall conductance from one surface at low energy, which must be compensated by another nearly half-quantized Hall conductance from the other surface at high energy. In the semi-magnetic topological insulator phase, the highest valence band is dominated by a zero Hall conductance from one surface at low energy, which is compensated by a half-quantized Hall conductance from the other surface at high energy. This is because the half-quantized Hall conductance can only originate from the symmetry-protected gapless highest valence band, and all the remaining bands can only contribute a quantized Hall conductance. Moreover, due to the coupling between the low-energy and high-energy states, the exact half quantization cannot be observed in the axion and Chern insulator phases. As shown in Fig. 
4(f), the contribution to the Hall conductance \(\sigma\left(k_{r}\right)\) from the low-energy massive Dirac cone (i.e., the blue circle dots) increases with increasing \(k_{r}\). The exact half quantization \(\sigma\left(k_{r}\right)\to e^{2}/2h\) would be achieved only if \(k_{r}\rightarrow\infty\). However, the high-energy states also contribute to \(\sigma\left(k_{r}\right)\) when \(k_{r}>k_{c}\), which prohibits the observation of the exact half quantization. Thus, only a nearly half-quantized Hall conductance \(\sigma_{1}^{v}\left(k_{c}\right)=0.47e^{2}/h\) is observed at \(k_{r}=k_{c}\). In contrast, the exact half quantization can only be observed in the semi-magnetic topological phase, because its low-energy state does not contribute to the Hall conductance.

## VII Layer-resolved Hall conductance

The axion insulator phase can be distinguished from the normal insulator phase by calculating the layer-resolved Hall conductance. The layer-resolved Hall conductance of the \(m\)-th occupied band is given by [9; 13; 14; 15; 11] \[\tilde{\sigma}_{m}\left(z\right)=\frac{e^{2}}{2\pi h}\int d^{2}\mathbf{k} \mathcal{F}_{xy}^{mm}\left(\mathbf{k},z\right), \tag{7}\] where \[\mathcal{F}_{\alpha\beta}^{mn}\left(\mathbf{k}\right)=\partial_{\alpha} \mathcal{A}_{\beta}^{mn}\left(\mathbf{k}\right)-\partial_{\beta}\mathcal{A}_{ \alpha}^{mn}\left(\mathbf{k}\right)+i\left[\mathcal{A}_{\alpha}^{mn}\left( \mathbf{k}\right),\mathcal{A}_{\beta}^{mn}\left(\mathbf{k}\right)\right] \tag{8}\] is the non-Abelian Berry curvature in terms of \(\mathcal{A}_{\alpha}^{mn}\left(\mathbf{k},z\right)\), with band indices \(m\) and \(n\). The layer-resolved Hall conductance of all occupied bands is given by \(\tilde{\sigma}\left(z\right)=\sum_{E_{m}<E_{F}}\tilde{\sigma}_{m}\left(z\right)\). Moreover, the net Hall conductance of the \(m\)-th band and the layer-resolved Hall conductance of the \(m\)-th band are connected through \(\sigma_{m}=\sum_{z=1}^{n_{z}}\tilde{\sigma}_{m}\left(z\right)\). Figure 5(a) shows \(\tilde{\sigma}\left(z\right)\) as a function of \(z\) for the axion insulator (\(\Delta_{t}=0.05\)), semi-magnetic topological insulator (\(\Delta_{t}=0\)), and Chern insulator phases (\(\Delta_{t}=-0.05\)). The bottom surface layers have the same magnetization alignment in the three phases, and thus they host the same layer-resolved Hall conductance [\(\tilde{\sigma}\left(z=1,2,3\right)\) in Fig. 5(a)]. The top surface layers have opposite magnetization alignments in the axion insulator (green) and Chern insulator (blue) phases, and thus the corresponding layer-resolved Hall conductances have opposite signs in the two phases [\(\tilde{\sigma}\left(z=8,9,10\right)\) in Fig. 5(a)]. The top surface layers are non-magnetic in the semi-magnetic topological insulator (red) phase, and thus the corresponding layer-resolved Hall conductance is zero. This is observed more clearly in Fig. 5(b), where we show \(\tilde{\sigma}_{t/b}\) as functions of \(\Delta_{t}\). \(\tilde{\sigma}_{t}=\sum_{z=8}^{10}\tilde{\sigma}\left(z\right)\) and \(\tilde{\sigma}_{b}=\sum_{z=1}^{3}\tilde{\sigma}\left(z\right)\) depict the layer-resolved Hall conductances of the top and bottom surface layers, respectively. With increasing \(\Delta_{t}\), \(\tilde{\sigma}_{b}\) remains unchanged but \(\tilde{\sigma}_{t}\) increases from \(-0.5\) to \(0\) and then to \(0.5\), which corresponds to the phase crossovers from the Chern insulator phase to the semi-magnetic topological insulator phase and then to the axion insulator phase. 
The above calculations take into account the contributions from all the occupied bands, and we show that the half quantization can be extracted in all three phases. However, we find that the half quantization cannot be extracted if only one single band is considered. This is shown in Figs. 5(c-d) and Figs. 5(e-f), which are the same as Figs. 5(a-b), except that the numerically calculated layer-resolved Hall conductance only takes into account the contributions from the first valence band and the second valence band, respectively. The layer-resolved Hall conductances of a single band have divergent values for each layer. This implies that the half-quantized surface Hall effect is a consequence of all the occupied bands, rather than of the individual gapped surface bands.

Figure 5: (a) The layer-resolved Hall conductance \(\tilde{\sigma}\left(z\right)\) as a function of layer \(z\) for different \(\Delta_{t}\). Green, red, and blue correspond to the axion insulator (\(\Delta_{t}=0.05\)), semi-magnetic insulator (\(\Delta_{t}=0\)), and Chern insulator phases (\(\Delta_{t}=-0.05\)), respectively. (b) \(\tilde{\sigma}_{t/b}\) as functions of \(\Delta_{t}\). Here, \(\tilde{\sigma}_{t/b}\) depicts the layer-resolved Hall conductance contributed from the top/bottom surface layers. In (a-b), the calculations take into account the contributions from all the occupied bands. (c-d) and (e-f) are the same as (a-b), except that the obtained layer-resolved Hall conductance only takes into account the contributions from the first valence band in (c-d) and from the second valence band in (e-f), respectively. Here, we take the Fermi energy \(E_{F}=0\), the film thickness \(n_{z}=10\), and \(\Delta_{b}=-0.2\).

## VIII Different thickness and parameters

Above, we considered a thin-film case with \(n_{z}=10\). Now, we show that the conclusions are independent of the film thickness. Figures 6(a-c) show the probability distributions of the first, second, and third highest valence bands for different \(k_{x}\) with \(n_{z}=50\), respectively. The conclusions are similar to the thin-film case shown in Figs. 2(a-c). Figure 6(d) shows the logarithm of the energy gap between the first and second valence bands as a function of \(\Delta_{t}\) for different thicknesses \(n_{z}\). Though the energy difference decreases with increasing film thickness, the first and second valence bands are still well separated in energy scale, as long as \(\Delta_{t}\neq\pm\Delta_{b}\).

Figure 6: (a) Wave function distribution as a function of layer index \(z\) for the highest valence band of the axion insulator phase for different \(k_{x}\). (b) and (c) are the same as (a), except that they depict the second and third highest valence bands. The wave function distributions of the semi-magnetic topological insulator phase and the Chern insulator phase are similar to those of the axion insulator phase. (d) The logarithm of the energy gap between the first and second valence bands as a function of \(\Delta_{t}\) for different thicknesses \(n_{z}\). Each point is obtained by finding the minimum energy difference between the first and second valence bands by scanning the two-dimensional Brillouin zone. The number of \(k\) points used in the calculation is \(N_{k}\times N_{k}\), with \(N_{k}=200\). The parameters are taken as \(\Delta_{b}=-0.2\). 
In (a-c), we take \(\Delta_{t}=0.05\) and \(n_{z}=50\).

Figure 7: (a) Energy spectra of the axion insulator phase. Here the color scheme of the bands indicates the wave function distribution. (d) Numerically calculated Hall conductance \(\sigma_{1}^{c,v}\) (blue), \(\sigma_{2}^{c,v}\) (red), \(\sigma_{3}^{c,v}\) (green), and \(\sigma_{t}\) (orange) as functions of \(E_{F}\). Here, \(\sigma_{t}\) depicts the conductance contributed from all the bands, \(\sigma_{1}^{c,v}\) depict the conductance contributed from the lowest conduction and highest valence bands, \(\sigma_{2}^{c,v}\) depict the conductance contributed from the second lowest conduction and second highest valence bands, and so on. The solid black line corresponds to \(\sigma=e^{2}/2h\). In (a) and (d), the yellow and light blue regions correspond to the magnetic gap on the top and bottom surfaces, respectively. (b, e) and (c, f) are the same as (a, d), except that they depict the semi-magnetic topological insulator phase and the Chern insulator phase, respectively. (g) Probability distribution as a function of layer index \(z\) for the highest valence band of the axion insulator phase for different \(k_{x}\). (h) and (i) are the same as (g), except that they depict the second and third highest valence bands. The wave function distributions of the semi-magnetic topological insulator phase and the Chern insulator phase are similar to those of the axion insulator phase. We take \(\Delta_{t}=0.05\) for the axion insulator phase, \(\Delta_{t}=0\) for the semi-magnetic topological insulator phase, and \(\Delta_{t}=-0.05\) for the Chern insulator phase. The Zeeman splitting term for the bottom surface is \(\Delta_{b}=-0.1\) for all three phases. The film thickness is taken as \(n_{z}=50\).

Moreover, we consider the case \(\Delta_{b}=-0.1\). The first row in Fig. 7 shows the spectra of the three topological phases, which are similar to those shown in the second row of Fig. 1, except that the high-energy parts of the first and second highest valence and lowest conduction bands merge into the bulk. This can be observed more clearly in the third row of Fig. 7, which shows the probability distributions of the first, second, and third highest valence bands, respectively. Thus, the probability distribution of the highest valence band at high energy with \(k_{x}=\pi\) can be localized not only at the surface [the red lines in Figs. 2(a) and 6(a)], but also in the bulk [the red line in Fig. 7(g)]. Moreover, by checking the Hall conductance (the second row in Fig. 7), the probability distributions of the states at high energy do not change the fact that the Hall conductance is dominated by the lowest conduction and highest valence bands when the Fermi energy resides inside the magnetic gap.

## IX Conclusion

In this work, we have investigated the energy spectra, wave function distributions, and the corresponding Berry curvatures of the three distinct topological phases in magnetic topological insulator films: the axion insulator phase, the semi-magnetic topological insulator phase, and the Chern insulator phase. In the axion and Chern insulator phases, a nearly half-quantized Hall conductance is observed at low energy near the \(\Gamma\) point for the gapped Dirac cone (which is in accordance with the previous consensus); however, another nearly half-quantized Hall conductance must compensate it at high energy away from the \(\Gamma\) point, to ensure that the total Hall conductance is an integer-quantized number in units of \(e^{2}/h\). 
In the semi-magnetic topological insulator phase, the gapless Dirac cone hosts a vanishing Hall conductance at low energy near the \(\Gamma\) point, as expected; however, another half-quantized Hall conductance emerges at high energy away from the \(\Gamma\) point. This reconciles the contradiction between the previous consensus based on the low-energy effective model and the scenarios based on a lattice model. Moreover, due to the coupling between the low-energy and high-energy states, the exact half quantization is only revealed in the gapless surface state of the semi-magnetic topological insulator phase; it is not found in the gapped surface states of the axion and Chern insulator phases from the band calculation. On the other hand, we adopt the layer-resolved Hall conductance to characterize the three phases. The half-quantized surface Hall effect is revealed in all three topological phases, and it is contributed by all the occupied bands. This is distinct from the previous consensus that the half quantization originates from an individual gapped band.

###### Acknowledgements.

We thank Bo Fu and Huan-Wen Wang for helpful discussions. This work was supported by the Research Grants Council, University Grants Committee, Hong Kong under Grant Nos. C7012-21G and 17301220 and the National Key R&D Program of China under Grant No. 2019YFA0308603.
2304.02880
Band engineered bilayer Haldane model: Evidence of multiple topological phase transitions
We have studied the evolution of the topological properties of a band-engineered AB-stacked bilayer honeycomb structure in the presence of a Haldane flux. Without a Haldane flux, band engineering makes the band touching points (the so-called Dirac points) move towards each other and eventually merge into one at an intermediate $\mathbf{M}$ point in the Brillouin zone. Here the dispersion is linear along one direction and quadratic along the other. In the presence of a Haldane flux, the system acquires topological properties, and finite Chern numbers can be associated with the pairs of the conduction and the valence bands. The valence band closer to the Fermi level ($E_F$) possesses Chern numbers equal to $\pm2$ and $\pm1$, while the one further away from $E_F$ corresponds to Chern numbers $\pm1$. The conduction bands are associated with similar properties, except their signs are reversed. The Chern lobes shrink in the band-engineered model, and we find evidence of multiple topological phase transitions, where the Chern numbers discontinuously jump from $\pm2$ to $\mp2$, $\pm1$ to $\mp1$, $\pm1$ to $0$ to $\pm2$ and $\pm2$ to $\pm1$. These transitions are supported by the presence or absence of the chiral edge modes in a nanoribbon bilayer geometry and the vanishing of the plateau in the anomalous Hall conductivity. Different phases are further computed for different hopping amplitudes across the layers, which shows the shrinking of the Chern lobes for large interlayer tunneling.
Sayan Mondal, Saurabh Basu
2023-04-06T06:00:49Z
http://arxiv.org/abs/2304.02880v1
# Band engineered bilayer Haldane model: Evidence of multiple topological phase transitions

###### Abstract

We have studied the evolution of the topological properties of a band-engineered AB-stacked bilayer honeycomb structure in the presence of a Haldane flux. Without a Haldane flux, band engineering makes the band touching points (the so-called Dirac points) move towards each other and eventually merge into one at an intermediate \(\mathbf{M}\) point in the Brillouin zone. Here the dispersion is linear along one direction and quadratic along the other. In the presence of a Haldane flux, the system acquires topological properties, and finite Chern numbers can be associated with the pairs of the conduction and the valence bands. The valence band closer to the Fermi level (\(E_{F}\)) possesses Chern numbers equal to \(\pm 2\) and \(\pm 1\), while the one further away from \(E_{F}\) corresponds to Chern numbers \(\pm 1\). The conduction bands are associated with similar properties, except their signs are reversed. The Chern lobes shrink in the band-engineered model, and we find evidence of multiple topological phase transitions, where the Chern numbers discontinuously jump from \(\pm 2\) to \(\mp 2\), \(\pm 1\) to \(\mp 1\), \(\pm 1\) to \(0\) to \(\pm 2\), and \(\pm 2\) to \(\pm 1\). These transitions are supported by the presence or absence of the chiral edge modes in a nanoribbon bilayer geometry and the vanishing of the plateau in the anomalous Hall conductivity. Different phases are further computed for different hopping amplitudes across the layers, which shows the shrinking of the Chern lobes for large interlayer tunneling.

## I Introduction

The Haldane model is a toy model which showed that one can achieve the quantum Hall effect in a two-dimensional honeycomb lattice even in the absence of an external magnetic field [1]. To achieve such a scenario, the time-reversal symmetry (TRS) of the system needs to be broken, which can be done via chiral complex next-nearest-neighbour hopping amplitudes. The spectral bands of such a system possess a non-zero topological invariant known as the Chern number, and hence the system is known as a Chern insulator. Furthermore, the band structure of a semi-infinite ribbon geometry hosts chiral edge modes, which are the signature of its topological character. The system also exhibits the quantum anomalous Hall effect, which shows a plateau structure in the vicinity of zero Fermi energy. Haldane's work has triggered extensive studies on both theoretical and experimental fronts. For example, there have been reports of Haldane-like spectra and non-trivial phases realized in the dice lattice [2], the Kagome lattice [3; 4; 5; 6], the checkerboard lattice [7], the Lieb lattice [8; 9; 10; 11], the buckled lattice [12], etc. Experimentally, the Haldane model has been realized in cold atoms in optical lattices, where the complex second-neighbour hopping can be created by means of standing-wave laser beams [13; 14; 15], in ultracold fermions in optical honeycomb lattices [16], etc. Also, a two-dimensional honeycomb structure of Fe-based insulators, such as \(X\)Fe\({}_{2}\)(PO\({}_{4}\))\({}_{2}\) with \(X\) being K, Cs, or La [17], demonstrates similar non-trivial topological phases with a non-zero Chern number. Further, non-zero Chern numbers have also been found in acoustic Chern insulators [18], at the interface between the two trivial ferromagnetic insulators EuO and GdN [19], etc.
In recent years, there have been studies of the Haldane model in coupled two-dimensional systems, for example, bilayer materials [20; 21; 22; 23]. In parallel, there are studies on band engineering in various systems, such as single-layer graphene [24], spin Hall insulators [25], and a dice lattice [26]. Such band engineering has been incorporated via the introduction of an anisotropy among the nearest-neighbour (NN) hoppings. Hopping anisotropies of this kind are included between the neighbouring sites lying along a particular direction (say, with amplitude \(t_{1}\)), while the remaining NN hoppings of the honeycomb lattice are kept at \(t\). As the value of \(t_{1}\) is varied, the band extrema at the two Dirac points move closer to each other and finally merge, with a vanishing band gap, at the \(\mathbf{M}\) point in the Brillouin zone (BZ) for a particular value of \(t_{1}\), namely \(t_{1}=2t\), which is called the semi-Dirac limit. In the process, the topological properties of the system also vanish at the gap-closing hopping amplitude \(t_{1}=2t\). It should be noted that the band structure of the system in the absence of the complex NNN hopping (Haldane flux) shows a semi-Dirac dispersion, that is, linear along the \(k_{x}\)-direction and quadratic along the \(k_{y}\)-direction. Experimentally, the semi-Dirac dispersion has been observed in many materials, such as multilayered structures of TiO\({}_{2}\)/VO\({}_{2}\) [27; 28], monolayer phosphorene in the presence of doping and pressure [29; 30], BEDT-TTF\({}_{2}\)I\({}_{3}\) organic salts under pressure [31; 32], black phosphorus doped with potassium atoms by means of _in situ_ deposition [33], etc. One can also achieve the semi-Dirac dispersion by applying a uniaxial strain to a system, which changes the bond lengths along the direction parallel to the applied strain. The hopping energies along those bonds are thereby modified, while the hopping along the other directions remains unaltered. Such a method has been employed in a monolayer honeycomb structure, namely Si\({}_{2}\)O, which yields a semi-Dirac dispersion [34]. However, the effect of band engineering in a multilayered system, such as bilayer graphene, has never been studied. Needless to say, bilayers possess a richer phase diagram comprising a larger parameter space. The topological properties of such an engineered system are interesting, since the existence of the edge modes and the quantized Hall conductivity have never been studied there. A more interesting issue is that, owing to the larger number of bands present in the band structure of a bilayer system, higher values of the Chern number are realized. A higher Chern number implies a higher value of the anomalous Hall conductivity, together with a larger number of chiral edge modes present in a semi-infinite system. Higher Chern numbers are in general interesting and can be realized in a host of systems, such as Dirac [35] and semi-Dirac [36] systems in the presence of longer-range hopping, multi-orbital triangular lattices [37], star lattices or decorated honeycomb lattices [38], honeycomb lattices in the presence of spin-orbit coupling [39; 40], ultracold gases in triangular lattices [41; 42], etc. Further, topological insulators doped with magnetic materials [43] and Cr-doped thin laminar sheets of Bi\({}_{2}\)(Se,Te)\({}_{3}\) [44] also demonstrate higher values of the Chern number.
Further, MnBi\({}_{2}\)Te\({}_{4}\) at high temperature [46; 47], multilayered structures of doped (with magnetic materials) and undoped topological insulators arranged alternately [48], and classical systems, such as sonic crystals prepared using acoustic components [45], show non-trivial phases with higher Chern numbers. In this work, we focus on a bilayer graphene with broken TRS, that is, a coupled bilayer Haldane model. The stacking of the two layers is such that the B sublattice of the upper layer lies exactly above the A sublattice of the lower layer. Such stacking is known as AB or Bernal stacking. We shall see that the Chern numbers associated with the various bands reveal interesting properties. For example, some of the bands possess both Chern numbers \(\pm 2\) and \(\pm 1\), while the rest are associated with Chern numbers \(\pm 1\). Such a scenario needs to be assessed for a band-engineered system. Specifically, we wish to address the ramifications of the band deformation caused via asymmetric hopping amplitudes on the topological properties, and to ascertain whether such deformation induces a topological phase transition. In our bilayer model, the band engineering is incorporated via asymmetric NN hopping amplitudes in each of the layers, while the tunneling amplitude across the layers is left unaltered. Our subsequent discussions are arranged as follows. Sec. II introduces the tight-binding Hamiltonian of bilayer graphene. Sec. III discusses the band structure of the system with the interlayer coupling (\(t_{\perp}\)) and the anisotropic NN hopping amplitude (\(t_{1}\)) as parameters. Sec. IV deals with the phase diagrams that are obtained by computing the Chern numbers associated with the bands. In Sec. V, the presence (or absence) of the chiral edge modes in a ribbon geometry is presented. Next, the numerical computations of the anomalous Hall conductivity are shown in Sec. VI. Finally, a brief summary of the results is included in the concluding section (Sec. VII).

Figure 1: A bilayer graphene is shown in (a) with the interlayer coupling \(t_{\perp}\) between the B sublattice of the upper layer and the A sublattice of the lower layer. In both layers, the A and B sublattices are denoted by the red and blue filled circles. In (b), the other planar hoppings are shown. To distinguish the sublattices in each layer, we have denoted the A and B sublattices in the lower layer by circles in red and blue, respectively. The subscripts \(l\) and \(u\) in A\({}_{l,u}\) and B\({}_{l,u}\) refer to the lower and upper layer, respectively. All the bonds and NNN hoppings in the lower layer are shown by dashed lines and dashed arrows, respectively. The NN hopping strength along the \(\mathbf{\delta}_{1}\) direction (shown via the yellow arrow) is \(t_{1}\), while it is \(t\) along the \(\mathbf{\delta}_{2,3}\) directions (the \(\mathbf{\delta}_{i}\) are defined in the text). The NNN hopping is \(t_{2}e^{i\phi}\) (\(t_{2}e^{-i\phi}\)) for the clockwise (anti-clockwise) direction.
## II The Hamiltonian

The tight-binding Hamiltonian of a bilayer honeycomb lattice can be written as follows,

\[H= \sum_{p\in l,u}\left[\sum_{\langle ij\rangle}t_{ij}c_{i}^{p\dagger}c_{j}^{p}+t_{2}\sum_{\langle\langle im\rangle\rangle}e^{i\phi_{p}^{im}}c_{i}^{p\dagger}c_{m}^{p}+\text{h.c.}\right]+\left[t_{\perp}\sum_{\langle q,r\rangle_{\perp}}c_{q}^{l\dagger}c_{r}^{u}+\text{h.c.}\right] \tag{1}\]

where \(c_{i}^{p\dagger}\) (\(c_{i}^{p}\)) is the creation (annihilation) operator corresponding to site \(i\), which belongs to the layer \(p\). Here \(p=l,u\) represents the lower and the upper layer, respectively. The first term on the right-hand side denotes the nearest-neighbour (NN) hopping, with the amplitude \(t_{ij}\) being either \(t_{1}\), when sites \(i\) and \(j\) lie along the \(\mathbf{\delta}_{1}=a_{0}(0,1)\) direction, or \(t\), when they lie along the \(\mathbf{\delta}_{2}=a_{0}(\sqrt{3}/2,-1/2)\) and \(\mathbf{\delta}_{3}=a_{0}(-\sqrt{3}/2,-1/2)\) directions, as shown in Fig. 1. The second term represents the complex next-nearest-neighbour (NNN) hopping with amplitude \(t_{2}\) and phase \(\phi_{p}^{im}\). We label the Haldane fluxes corresponding to the lower and upper layers as \(\phi_{l}^{im}\) and \(\phi_{u}^{im}\), respectively. If an electron hops in the counter-clockwise direction, \(\phi_{p}^{im}\) assumes a positive sign, while for the clockwise direction it acquires a negative sign. The third term is the hopping between the two layers with coupling strength \(t_{\perp}\). It should be kept in mind that the interlayer hopping is between the B sublattice of layer \(u\) (\(r\in\text{B}_{u}\)) and the A sublattice of layer \(l\) (\(q\in\text{A}_{l}\)) (AB or Bernal stacking). In our calculations, we have varied \(t_{1}\) in both layers from \(t\) to \(2t\) (semi-Dirac) and have even considered \(t_{1}>2t\). Now we Fourier transform the Hamiltonian and write it in the four-sublattice basis \(\{\text{A}_{l},\text{B}_{l},\text{A}_{u},\text{B}_{u}\}\) as

\[H(\mathbf{k})=\begin{pmatrix}h_{+}(\mathbf{k},\phi_{l})&h_{xy}(\mathbf{k},t_{1})&0&t_{\perp}\\ h_{xy}^{*}(\mathbf{k},t_{1})&h_{-}(\mathbf{k},\phi_{l})&0&0\\ 0&0&h_{+}(\mathbf{k},\phi_{u})&h_{xy}(\mathbf{k},t_{1})\\ t_{\perp}&0&h_{xy}^{*}(\mathbf{k},t_{1})&h_{-}(\mathbf{k},\phi_{u})\end{pmatrix} \tag{2}\]

where \(h_{\pm}\) is defined as \(h_{\pm}(\mathbf{k},\phi_{p})=h_{0}(\mathbf{k},\phi_{p})\pm h_{z}(\mathbf{k},\phi_{p})\), and the element \(h_{xy}(\mathbf{k},t_{1})\) has the form \(h_{xy}(\mathbf{k},t_{1})=h_{x}(\mathbf{k},t_{1})-ih_{y}(\mathbf{k},t_{1})\). The expressions for the \(h_{i}\) are

\[h_{0}(\mathbf{k},\phi_{p})=2t_{2}\cos\phi_{p}\left\{2\cos\frac{\sqrt{3}k_{x}}{2}\cos\frac{3k_{y}}{2}+\cos\sqrt{3}k_{x}\right\} \tag{3}\]

\[h_{z}(\mathbf{k},\phi_{p})=-2t_{2}\sin\phi_{p}\left\{2\sin\frac{\sqrt{3}k_{x}}{2}\cos\frac{3k_{y}}{2}-\sin\sqrt{3}k_{x}\right\} \tag{4}\]

\[h_{x}(\mathbf{k},t_{1})=t_{1}\cos k_{y}+2t\cos\frac{k_{y}}{2}\cos\frac{\sqrt{3}k_{x}}{2}, \tag{5}\]

and

\[h_{y}(\mathbf{k},t_{1})=-t_{1}\sin k_{y}+2t\sin\frac{k_{y}}{2}\cos\frac{\sqrt{3}k_{x}}{2}. \tag{6}\]

Throughout our work, the amplitude of the NNN hopping, \(t_{2}\), is kept fixed at \(0.1t\), and two different values of the interlayer hopping strength are chosen, namely \(t_{\perp}=0.5t\) and \(t_{\perp}=0.1t\) [49].
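For readers who want to reproduce the band structures and phase diagrams below, the following is a minimal NumPy sketch of the Bloch Hamiltonian of Eq. (2), built from Eqs. (3)-(6) with \(a_{0}=1\). The function and parameter names are ours, and the default values follow the parameter choices quoted in the text.

```python
import numpy as np

def bloch_hamiltonian(kx, ky, t=1.0, t1=1.0, t2=0.1, tp=0.5,
                      phi_l=np.pi/2, phi_u=np.pi/2):
    """4x4 Bloch Hamiltonian of Eq. (2) in the basis {A_l, B_l, A_u, B_u}."""
    def h0(phi):   # Eq. (3)
        return 2*t2*np.cos(phi)*(2*np.cos(np.sqrt(3)*kx/2)*np.cos(3*ky/2)
                                 + np.cos(np.sqrt(3)*kx))
    def hz(phi):   # Eq. (4)
        return -2*t2*np.sin(phi)*(2*np.sin(np.sqrt(3)*kx/2)*np.cos(3*ky/2)
                                  - np.sin(np.sqrt(3)*kx))
    hx = t1*np.cos(ky) + 2*t*np.cos(ky/2)*np.cos(np.sqrt(3)*kx/2)   # Eq. (5)
    hy = -t1*np.sin(ky) + 2*t*np.sin(ky/2)*np.cos(np.sqrt(3)*kx/2)  # Eq. (6)
    hxy = hx - 1j*hy
    return np.array(
        [[h0(phi_l) + hz(phi_l), hxy, 0, tp],
         [np.conj(hxy), h0(phi_l) - hz(phi_l), 0, 0],
         [0, 0, h0(phi_u) + hz(phi_u), hxy],
         [tp, 0, np.conj(hxy), h0(phi_u) - hz(phi_u)]], dtype=complex)
```

Diagonalizing this matrix with `np.linalg.eigh` at each \(\mathbf{k}\) yields the four bands, which in ascending order of energy correspond to band-v1, band-v2, band-c2, and band-c1 introduced below.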
The values of \(\phi_{l}\) and \(\phi_{u}\) are taken such that \(\phi_{l}=\phi_{u}=\pi/2\). Now, for \(\phi_{u}=\phi_{l}\), we obtain the following dispersion relations,

\[E_{\pm}^{c}=h_{0}+\sqrt{\frac{t_{\perp}^{2}}{2}+|h_{xy}|^{2}+h_{z}^{2}\pm\frac{t_{\perp}}{2}\sqrt{t_{\perp}^{2}+4|h_{xy}|^{2}}} \tag{7}\]

\[E_{\pm}^{v}=h_{0}-\sqrt{\frac{t_{\perp}^{2}}{2}+|h_{xy}|^{2}+h_{z}^{2}\pm\frac{t_{\perp}}{2}\sqrt{t_{\perp}^{2}+4|h_{xy}|^{2}}} \tag{8}\]

where \(E_{\pm}^{c}\) denote the two conduction bands and \(E_{\pm}^{v}\) the two valence bands of the bilayer.

## III Spectral properties

In this section, we discuss how the spectral properties evolve as we interpolate between the Dirac and the semi-Dirac limits. We show the band structure for two different values of \(t_{\perp}\). The first one is \(t_{\perp}=0.5t\), as shown in Fig. 2. As can be seen, there are four bands, which we label as follows. The upper conduction band is labeled band-c1, while the lower conduction band is band-c2. Similarly, the lower and the upper valence bands are labeled band-v1 and band-v2, respectively. When \(t_{2}=0\) (no Haldane flux), band-c2 and band-v2 touch each other at the Fermi level at the \(\mathbf{K}\) and \(\mathbf{K}^{\prime}\) points (see Figs. 2a-2d). These points are referred to as the Dirac points. With an increase in the value of \(t_{1}\), we deviate from the Dirac limit, and the band touching points move close to each other, finally merging at \(t_{1}=2t\). Beyond this value, that is, for \(t_{1}>2t\), a gap opens up at the \(\mathbf{M}\) point. Now, if we switch on \(t_{2}\) (see Figs. 2e-2i), the spectral gap remains open for \(t\leq t_{1}<2t\) and \(t_{1}>2t\), while the gap vanishes exactly at the semi-Dirac limit, namely \(t_{1}=2t\). The gap-closing scenario of the bilayer graphene is thus similar to the case of single-layer graphene, where the energy gap between the conduction and valence bands vanishes at the semi-Dirac limit, that is, at \(t_{1}=2t\) [24]. Further, we have presented the band structure in Fig. 3 for a smaller value of \(t_{\perp}\), namely \(t_{\perp}=0.1t\). It is obvious from Eqs. 7 and 8 that the separation between the conduction bands (band-c1 and band-c2) and that between the valence bands (band-v1 and band-v2) decrease with decreasing \(t_{\perp}\). Moreover, the low-energy dispersions of band-c2 and band-v2 about the band touching points show a linear behaviour, which was quadratic for \(t_{\perp}=0.5t\). Thus, the massive electrons become progressively massless as we lower the value of \(t_{\perp}\). Further, with the decrease in \(t_{\perp}\), the spectral gap between band-c2 and band-v2 increases. For example, when \(t_{\perp}=0.1t\), the band gap is \(\Delta E_{g}\simeq 1.0390t\) and \(0.3124t\) for \(t_{1}=t\) and \(1.8t\), respectively, while for \(t_{\perp}=0.5t\), \(\Delta E_{g}\simeq 1.0335t\) and \(0.1406t\) for \(t_{1}=t\) and \(1.8t\), respectively. Thus, the difference in energy is more noticeable as we move towards the semi-Dirac limit, that is, at large values of \(t_{1}\).
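As a quick sanity check on both the code sketch above and Eqs. (7)-(8), one can compare the closed-form bands (valid for \(\phi_{l}=\phi_{u}\)) with a direct diagonalization of \(H(\mathbf{k})\). This snippet continues the previous one and assumes the `bloch_hamiltonian` function defined there.

```python
def analytic_bands(kx, ky, t=1.0, t1=1.0, t2=0.1, tp=0.5, phi=np.pi/2):
    """Closed-form bands of Eqs. (7)-(8), valid for phi_l = phi_u = phi."""
    h0 = 2*t2*np.cos(phi)*(2*np.cos(np.sqrt(3)*kx/2)*np.cos(3*ky/2)
                           + np.cos(np.sqrt(3)*kx))
    hz = -2*t2*np.sin(phi)*(2*np.sin(np.sqrt(3)*kx/2)*np.cos(3*ky/2)
                            - np.sin(np.sqrt(3)*kx))
    hx = t1*np.cos(ky) + 2*t*np.cos(ky/2)*np.cos(np.sqrt(3)*kx/2)
    hy = -t1*np.sin(ky) + 2*t*np.sin(ky/2)*np.cos(np.sqrt(3)*kx/2)
    habs2, hz2 = hx**2 + hy**2, hz**2
    inner = 0.5*tp*np.sqrt(tp**2 + 4*habs2)
    r_plus = np.sqrt(tp**2/2 + habs2 + hz2 + inner)   # outer bands c1, v1
    r_minus = np.sqrt(tp**2/2 + habs2 + hz2 - inner)  # inner bands c2, v2
    return np.sort([h0 - r_plus, h0 - r_minus, h0 + r_minus, h0 + r_plus])

kx, ky = 0.7, -0.3   # arbitrary test point
assert np.allclose(analytic_bands(kx, ky),
                   np.linalg.eigh(bloch_hamiltonian(kx, ky))[0])
```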
## IV Chern number and phase diagram

In this section, we calculate the Chern number as a function of the Haldane fluxes of the two layers. Owing to the broken TRS, the bands possess non-zero Chern numbers, which can be calculated by integrating the Berry curvature over the BZ [50; 51],

\[C=\frac{1}{2\pi}\int\int_{\rm BZ}\Omega(k_{x},k_{y})\,{\rm d}k_{x}{\rm d}k_{y} \tag{9}\]

where \(\Omega(k_{x},k_{y})\) is the \(z\)-component of the Berry curvature [52], which is obtained from the following relation,

\[\Omega(k_{x},k_{y})=-2\,{\rm Im}\left[\left\langle\frac{\partial\psi(k_{x},k_{y})}{\partial k_{x}}\middle|\frac{\partial\psi(k_{x},k_{y})}{\partial k_{y}}\right\rangle\right] \tag{10}\]

where \(\psi(k_{x},k_{y})\) is the periodic part of the Bloch wave corresponding to the Hamiltonian defined in Eq. 2, and Im denotes the imaginary part.
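In practice, Eq. (9) is most conveniently evaluated on a discrete grid. The sketch below uses the standard gauge-invariant plaquette (Fukui-Hatsugai-Suzuki) discretization rather than differentiating Eq. (10) directly, and assumes the `bloch_hamiltonian` function sketched in Sec. II; the grid size and the band labeling are our choices.

```python
def chern_number(band, N=80, **pars):
    """Chern number of one band (0 = band-v1, 1 = band-v2, 2 = band-c2,
    3 = band-c1, in ascending energy) from gauge-invariant Berry fluxes
    on an N x N grid covering one BZ parallelogram spanned by b1, b2."""
    # real-space lattice vectors a1 = d1 - d2, a2 = d1 - d3 (a0 = 1)
    a = np.array([[-np.sqrt(3)/2, 1.5], [np.sqrt(3)/2, 1.5]])
    b = 2*np.pi*np.linalg.inv(a).T              # rows are b1 and b2
    u = np.empty((N+1, N+1, 4), dtype=complex)  # eigenvector of the band
    for i in range(N+1):
        for j in range(N+1):
            kx, ky = (i*b[0] + j*b[1])/N
            u[i, j] = np.linalg.eigh(bloch_hamiltonian(kx, ky, **pars))[1][:, band]
    link = lambda u1, u2: np.vdot(u1, u2)/abs(np.vdot(u1, u2))
    flux = 0.0
    for i in range(N):
        for j in range(N):   # gauge-invariant Berry flux per plaquette
            flux += np.angle(link(u[i, j], u[i+1, j])
                             * link(u[i+1, j], u[i+1, j+1])
                             * link(u[i+1, j+1], u[i, j+1])
                             * link(u[i, j+1], u[i, j]))
    return flux/(2*np.pi)   # converges to an integer away from gap closings

# e.g., Chern numbers of band-v1 and band-v2 in the Dirac limit t1 = t
print(round(chern_number(0)), round(chern_number(1)))
```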
Hence, we calculate the Chern numbers as functions of the fluxes \(\phi_{l}\) and \(\phi_{u}\) corresponding to the lower and the upper layers, respectively, for various values of \(t_{1}\), as shown in Fig. 4. Here, the value of \(t_{\perp}\) is chosen to be \(0.5t\), and the phase diagrams shown correspond to band-v1. We have denoted the Chern insulating regions by two colors: the regions in red denote the \(C=+1\) phase, while the blue ones denote the \(C=-1\) phase. The trivial phases with \(C=0\) are shown by the white regions. It is evident from Fig. 4a that the areas of the Chern insulating regions are maximal for \(t_{1}=t\) (the Dirac case). Upon engineering the band structure, that is, with an increase in the value of \(t_{1}\), the areas of the topological regions (called Chern lobes) gradually shrink. We have shown the phase diagram up to a certain value, namely \(t_{1}=1.9t\) (see Fig. 4d), beyond which the topological regions can hardly be seen. When \(t_{1}\) becomes equal to \(2t\), the Chern number (\(C\)) vanishes completely for all values of \(\phi_{l}\) and \(\phi_{u}\), owing to a gapless scenario between band-c2 and band-v2. Although band-v1 remains separated from band-v2, its Chern number still vanishes. For \(t_{1}>2t\), a gap opens up; however, the Chern numbers continue to be zero, and thus the gap is trivial.

Figure 4: The phase diagrams corresponding to the lowest occupied band, that is, band-v1, presented for \(t_{\perp}=0.5t\). The white regions denote the trivial phase with Chern number zero, while the colored regions indicate the non-trivial phases with non-zero Chern numbers. The non-zero values are indicated at the top of the figure.

Further, we have presented the phase diagrams corresponding to band-v2 (the one closer to the Fermi level) in Fig. 5. As can be seen, additional phases with higher Chern numbers (\(C=\pm 2\)) appear. We have denoted the \(C=+2\) and \(C=-2\) phases with cyan and green colors, respectively. The red and blue colors continue to denote the \(C=+1\) and \(C=-1\) phases, respectively. Thus, both \(C=\pm 2\) and \(C=\pm 1\) phases occur at different parameter values in the same phase diagram. Further, the topological regions shrink with the increase in \(t_{1}\), and finally vanish at \(t_{1}=2t\), where the gap between band-v2 and band-c2 vanishes. For \(t_{1}>2t\), the gap reopens, but the Chern number remains zero for all values of \(\phi_{l}\) and \(\phi_{u}\). The phase diagrams for band-c1 and band-c2 are identical in shape to those of band-v1 and band-v2, respectively, except that the Chern numbers have opposite signs.

Figure 5: The phase diagrams corresponding to band-v2, presented for \(t_{\perp}=0.5t\). The white regions denote the trivial phases with zero Chern number, while the colored regions indicate the non-trivial phases with non-zero Chern numbers. Again, the values are indicated at the top of the figure.

In order to visualize the gap-closing scenarios corresponding to the different phase transitions occurring in the phase diagrams, the band structures are presented in Fig. 7 for particular values of \(t_{1}\) and \(t_{\perp}\), namely \(t_{1}=t\) and \(t_{\perp}=0.5t\). The values of \(\phi_{l}\) and \(\phi_{u}\) are such that they lie along the four lines \(L_{1}\), \(L_{2}\), \(L_{3}\), and \(L_{4}\) in the phase diagrams depicted in Fig. 6, and are denoted by \(\eta_{i}\,(i=1,\dots,6)\), \(\gamma_{j}\,(j=1,\dots,9)\), and \(\chi_{s}\,(s=1,2,3)\). Along \(L_{1}\), a topological phase transition occurs between \(C=+2\) and \(C=-2\) for band-v2, while the transition between \(C=+1\) and \(C=-1\) occurs along \(L_{2}\) for both band-v2 and band-v1. These results have to be understood in conjunction with the corresponding band structures, as shown in Figs. 7a-7c and 7d-7f, respectively.

Figure 6: The phase diagrams corresponding to band-v2 are shown in (a) and (b), and those for band-v1 are presented in (c) and (d). In (a) and (c), the \(\eta\) points are marked along the \(L_{1}\) and \(L_{2}\) lines, whereas the \(\gamma\) and \(\chi\) points are marked along the \(L_{3}\) and \(L_{4}\) lines in both (b) and (d). Along these lines, multiple phase transitions occur. For example, along \(L_{3}\), the Chern number corresponding to band-v2 has values +1, 0, +2, 0, +1 at the points \(\gamma_{1}\), \(\gamma_{3}\), \(\gamma_{5}\), \(\gamma_{7}\), and \(\gamma_{9}\), respectively. The phase transitions take place at \(\gamma_{2}\), \(\gamma_{4}\), \(\gamma_{6}\), and \(\gamma_{8}\), where band-v2 touches either band-v1 or band-c2. The values of \(t_{\perp}\) and \(t_{1}\) are taken as \(0.5t\) and \(t\), respectively.

The band structures corresponding to \(\eta_{1}\) and \(\eta_{3}\) are identical, and the Chern numbers of band-v2 there are \(+2\) and \(-2\), respectively (Fig. 6a). At \(\eta_{2}\), band-v2 and band-c2 touch each other at both the Dirac points (Fig. 7b), and hence there is a phase transition at \(\eta_{2}\). However, band-v1 remains isolated from band-v2 at these \(\eta\) points, and its Chern numbers are zero along \(L_{1}\), as evident from its phase diagram (Fig. 6c). Further, the band structures corresponding to \(\eta_{4}\) and \(\eta_{6}\) have similar features; however, in this case \(C\) has values \(+1\) and \(-1\), respectively, for band-v2, while for band-v1, \(C\) has the same magnitude but opposite signs. At the phase transition occurring at \(\eta_{5}\), band-v2 and band-v1 touch each other at the \(\mathbf{K}\) point in the BZ (Fig. 7e), and hence a topological phase transition takes place at this point for both bands. Further, along \(L_{3}\), again multiple phase transitions occur (see Figs. 6b and 6d), and the corresponding dispersions are shown in Figs. 7g-7o. At \(\gamma_{1}\), band-v2 and band-v1 show \(C=+1\) and \(C=-1\), respectively, which drop to zero at \(\gamma_{2}\), where the gap between those bands closes at the \(\mathbf{K}\) point, as shown in Fig. 7h. At \(\gamma_{3}\), the gap reopens, but the Chern numbers corresponding to these bands remain zero. The gap between band-v2 and band-c2 vanishes at \(\gamma_{4}\), where again a phase transition takes place, since along the line connecting \(\gamma_{4}\) and \(\gamma_{6}\) the Chern number has the value \(+2\). The band structure at an intermediate point, namely \(\gamma_{5}\), is shown in Fig. 7k.
Similarly, phase transitions take place at \(\gamma_{6}\) and \(\gamma_{8}\), where the gaps vanish at the \(\mathbf{K}^{\prime}\) point. At \(\gamma_{7}\) and \(\gamma_{9}\), \(C\) assumes the values zero and \(+1\), respectively, for band-v2. It should be noted that band-v1 has a vanishing Chern number along the segment between \(\gamma_{2}\) and \(\gamma_{8}\) (see Fig. 6d); it never touches band-v2, which results in the absence of any phase transition there. Next, we show the phase transitions between the \(C=-2\) and \(C=-1\) phases along \(L_{4}\). The corresponding band structures are shown in Figs. 7p-7r. At \(\chi_{2}\), band-v2 and band-v1 remain isolated from each other; however, they possess Chern numbers \(C=-2\) and \(C=0\), respectively. At \(\chi_{1}\) and \(\chi_{3}\), these two bands touch each other at the \(\mathbf{K}^{\prime}\) and the \(\mathbf{K}\) points in the BZ, respectively, where topological phase transitions take place. Beyond \(\chi_{1}\) and \(\chi_{3}\), the gap reopens, and both bands possess non-trivial phases with \(C=-1\). Further, along the \(\phi_{u}=-\phi_{l}\) line, a semi-metallic phase exists for all the bands. In the vicinity of \(\phi_{u}=\phi_{l}\), only the phase diagrams of band-v1 show trivial regions with \(C=0\), while those for band-v2 demonstrate non-trivial phases with either \(C=+2\) or \(C=-2\). Moreover, in order to see the effects of \(t_{\perp}\) on the topological phases, we have shown the phase diagrams corresponding to band-v1 and band-v2 in Figs. 8a-8d and 8e-8h, respectively. It is evident that the areas of the Chern insulating regions are enhanced for lower values of \(t_{\perp}\). Also, the shapes of the topological regions differ from those of the \(t_{\perp}=0.5t\) case. Further, the areas occupied by the \(C=\pm 2\) regions in the phase diagram corresponding to band-v2 are mostly taken over by \(C=\pm 1\) regions. However, the feature that remains unaltered is the trivial phase along the \(\phi_{u}=\pm\phi_{l}\) lines for band-v1 and along the \(\phi_{u}=\phi_{l}\) line for band-v2. For both \(t_{\perp}=0.5t\) and \(t_{\perp}=0.1t\), the Chern insulating regions gradually shrink with the increase in the value of \(t_{1}\) and finally vanish at the semi-Dirac limit, namely \(t_{1}=2t\).

Figure 7: The band structures corresponding to the points \(\eta_{1}\)-\(\eta_{6}\) (shown in Figs. 6a and 6c) are depicted in (a)-(f). The spectra for the points \(\gamma_{1}\)-\(\gamma_{9}\) (shown in Figs. 6b and 6d) are shown in (g)-(o), and those for the points \(\chi_{1}\)-\(\chi_{3}\) are presented in (p)-(r). The values of \(t_{1}\) and \(t_{\perp}\) for all the band structures are kept fixed at \(t\) and \(0.5t\), respectively.

## V Edge states

To show the existence (and the vanishing) of the edge modes, in this section we present the band structure of the system for a semi-infinite nanoribbon. The ribbon has a finite width along the \(y\)-direction, while it is infinite along the \(x\)-direction [53; 54]. Further, we label the sites along the \(y\)-direction as A\({}_{1}^{l}\), B\({}_{1}^{l}\), A\({}_{2}^{l}\), B\({}_{2}^{l}\), ..., A\({}_{N}^{l}\), B\({}_{N}^{l}\), A\({}_{1}^{u}\), B\({}_{1}^{u}\), A\({}_{2}^{u}\), B\({}_{2}^{u}\), ..., A\({}_{N}^{u}\), B\({}_{N}^{u}\). Since the periodicity along the \(x\)-direction remains preserved, we can Fourier transform the operators along that direction. This results in a set of four coupled equations, as shown below.
\[\begin{split} E_{k}a^{u}_{k,n}=&-\left[t\left\{1+e^{(-1)^{n}ik}\right\}b^{u}_{k,n}+t_{1}b^{u}_{k,n-1}\right]\\ &-2t_{2}\left[\cos(k+\phi)\,a^{u}_{k,n}+e^{(-1)^{n}\frac{ik}{2}}\cos\left(\frac{k}{2}-\phi\right)\left\{a^{u}_{k,n-1}+a^{u}_{k,n+1}\right\}\right]\end{split} \tag{11}\]

\[\begin{split} E_{k}b^{u}_{k,n}=&-\left[t\left\{1+e^{(-1)^{n+1}ik}\right\}a^{u}_{k,n}+t_{1}a^{u}_{k,n+1}\right]\\ &-2t_{2}\left[\cos(k-\phi)\,b^{u}_{k,n}+e^{(-1)^{n+1}\frac{ik}{2}}\cos\left(\frac{k}{2}+\phi\right)\left\{b^{u}_{k,n-1}+b^{u}_{k,n+1}\right\}\right]\\ &+t_{\perp}\left[\xi_{1}e^{-ik}+\xi_{2}\right]a^{l}_{k,n}\end{split} \tag{12}\]

\[\begin{split} E_{k}a^{l}_{k,n}=&-\left[t\left\{1+e^{(-1)^{n}ik}\right\}b^{l}_{k,n}+t_{1}b^{l}_{k,n-1}\right]\\ &-2t_{2}\left[\cos(k+\phi)\,a^{l}_{k,n}+e^{(-1)^{n}\frac{ik}{2}}\cos\left(\frac{k}{2}-\phi\right)\left\{a^{l}_{k,n-1}+a^{l}_{k,n+1}\right\}\right]\\ &+t_{\perp}\left[\xi_{1}e^{ik}+\xi_{2}\right]b^{u}_{k,n}\end{split} \tag{13}\]

\[\begin{split} E_{k}b^{l}_{k,n}=&-\left[t\left\{1+e^{(-1)^{n+1}ik}\right\}a^{l}_{k,n}+t_{1}a^{l}_{k,n+1}\right]\\ &-2t_{2}\left[\cos(k-\phi)\,b^{l}_{k,n}+e^{(-1)^{n+1}\frac{ik}{2}}\cos\left(\frac{k}{2}+\phi\right)\left\{b^{l}_{k,n-1}+b^{l}_{k,n+1}\right\}\right]\end{split} \tag{14}\]

where \(a^{l,u}_{k,n}\) and \(b^{l,u}_{k,n}\) are the amplitudes of the wave functions corresponding to the sublattices A and B, respectively. The superscripts \(l\) and \(u\) refer to the lower and upper layers, respectively. Here \(k=\sqrt{3}k_{x}a_{0}\) is the dimensionless momentum, and \(n\) denotes the site index, which assumes integer values in the range \([1:N]\), with \(N\) being the total number of unit cells along the \(y\)-direction. We chose \(N=128\), which gives the width as \(79\sqrt{3}a_{0}\). In Eqs. 12 and 13, \(\xi_{1}\) and \(\xi_{2}\) denote quantities that depend on the site index \(n\) via \(\xi_{1}=[1-(-1)^{n}]/2\) and \(\xi_{2}=[1+(-1)^{n}]/2\), respectively. By solving Eqs. 11-14, we obtain the band structure of the nanoribbon for various values of \(t_{1}\), as presented in Fig. 9. It can be noticed that a pair of edge modes from the valence bands (band-v2) traverse the Fermi level \(E_{F}\) (shown via the red dashed line) and merge with the conduction bands (band-c2), while another pair crosses the Fermi level in the opposite direction. Such crossings of the edge modes lead to a quantized Hall conductivity, should the Fermi level lie in the bulk gap.

Figure 8: The phase diagrams corresponding to band-v1 are shown in (a)-(d), while those for band-v2 are shown in (e)-(h). The value of \(t_{\perp}\) is kept fixed at \(0.1t\). The values of \(t_{1}\) are such that \(t_{1}=t\) in (a) and (e), \(t_{1}=1.5t\) in (b) and (f), \(t_{1}=1.8t\) in (c) and (g), and \(t_{1}=1.9t\) in (d) and (h). The white regions denote trivial phases with zero Chern numbers, while the colored regions indicate the non-trivial phases with non-zero Chern numbers. The values are indicated at the top of the figure.

Figure 9: The edge state spectra are shown for (a) \(t_{1}=t\), (b) \(t_{1}=1.5t\), (c) \(t_{1}=2t\), and (d) \(t_{1}=2.2t\). The green shaded regions represent the bulk gap in (a), (b), and (d) (there is no bulk gap in (c)). The Fermi levels (\(E_{F}\)) are denoted by the red dashed lines, which are shown to lie in the bulk gap. \(E_{F}\) intersects the edge modes at the points denoted by the green dots, as shown in (a) and (b). For these, the edge currents are shown by the green arrows in the yellow panels located at the top right corner, which represent parts of the semi-infinite ribbon.
\(E_{F}\) intersects the edge modes (see Figs. 9a and 9b) at four points (marked by the green dots), whose corresponding edge currents are shown by the green arrows in the yellow panels located at the top right corners of the plots. The yellow panels represent a part of the semi-infinite ribbon. Since the velocity of the electrons is proportional to the slope of the band structure, that is, \(\partial E/\partial k\), there exists a pair of edge currents at each edge that move in the same direction. However, such pairs of currents propagate in opposite directions at the two edges of the ribbon. Hence, these modes are called chiral edge modes. It should be noted that, because of the pair of chiral edge modes, we should obtain the Hall conductivity quantized with a plateau at \(2e^{2}/h\), with the factor '2' arising due to the doubling of the number of chiral edge modes [55]. Such chiral edge modes exist as long as the value of \(t_{1}\) remains less than \(2t\). Since the bulk gap vanishes at \(t_{1}=2t\) (see Fig. 9c), the edge current vanishes there. For \(t_{1}>2t\), the edge modes get detached from the bulk bands, as shown in Fig. 9d for \(t_{1}=2.2t\), thereby resulting in a zero edge current. These results are consistent with the corresponding Chern numbers obtained in the phase diagram. For example, we observe non-zero edge currents for \(t_{1}<2t\), where the corresponding Chern number is found to be \(|C|=2\). For \(t_{1}>2t\), the Chern numbers vanish, and so do the edge currents. The figures presented here are for \(t_{\perp}=0.5t\). For \(t_{\perp}=0.1t\), we observe similar features in the spectrum, except that the bulk gaps get reduced. We have skipped the discussion of the latter for brevity.

## VI Hall conductivity

In this section, we calculate the anomalous Hall conductivity as a function of the Fermi energy \(E_{F}\). The prerequisite is the computation of the Berry curvature using Eq. 10, after which we use the following formula to calculate the anomalous Hall conductivity (\(\sigma_{xy}\)) [56; 57], namely,

\[\sigma_{xy}=\frac{\sigma_{0}}{2\pi}\sum_{\lambda}\int_{\rm BZ}{\rm d}k_{x}{\rm d}k_{y}\,f\left(E^{\lambda}_{k_{x},k_{y}}\right)\Omega_{\lambda}(k_{x},k_{y}) \tag{15}\]

where \(f(E^{\lambda}_{k_{x},k_{y}})=\left[1+\exp\left\{(E^{\lambda}_{k_{x},k_{y}}-E_{F})/k_{B}T\right\}\right]^{-1}\) is the Fermi-Dirac distribution function at energy \(E^{\lambda}_{k_{x},k_{y}}\), with \(\lambda\) being the band index and \(\Omega_{\lambda}\) the Berry curvature of band \(\lambda\). Here \(E_{F}\) refers to the Fermi energy, and \(T\) is the absolute temperature. The constant \(\sigma_{0}\) is equal to \(e^{2}/h\), which sets the scale for \(\sigma_{xy}\).
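At \(T=0\), the Fermi function reduces to a step, and Eq. (15) can be evaluated with the same plaquette discretization used for the Chern numbers: one simply restricts the Berry-flux sum to occupied states. The sketch below assumes the `bloch_hamiltonian` function from Sec. II; approximating the occupation factor at one corner of each plaquette is our simplification.

```python
def hall_conductivity(EF, N=100, **pars):
    """sigma_xy(EF) in units of sigma0 = e^2/h at T = 0: plaquette-
    discretized Eq. (15), summing the Berry flux of every occupied
    state over one BZ parallelogram."""
    a = np.array([[-np.sqrt(3)/2, 1.5], [np.sqrt(3)/2, 1.5]])
    b = 2*np.pi*np.linalg.inv(a).T
    E = np.empty((N+1, N+1, 4))
    u = np.empty((N+1, N+1, 4, 4), dtype=complex)
    for i in range(N+1):
        for j in range(N+1):
            kx, ky = (i*b[0] + j*b[1])/N
            E[i, j], u[i, j] = np.linalg.eigh(bloch_hamiltonian(kx, ky, **pars))
    link = lambda u1, u2: np.vdot(u1, u2)/abs(np.vdot(u1, u2))
    sigma = 0.0
    for lam in range(4):                       # band index lambda
        for i in range(N):
            for j in range(N):
                if E[i, j, lam] < EF:          # T = 0 occupation factor
                    v = [u[i, j, :, lam], u[i+1, j, :, lam],
                         u[i+1, j+1, :, lam], u[i, j+1, :, lam]]
                    sigma += np.angle(link(v[0], v[1])*link(v[1], v[2])
                                      * link(v[2], v[3])*link(v[3], v[0]))
    return sigma/(2*np.pi)

# |sigma_xy| should plateau near 2 when EF lies in the bulk gap (t1 < 2t)
print(hall_conductivity(EF=0.0))
```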
Now, we compute \(\sigma_{xy}\) numerically as a function of \(E_{F}\) at zero temperature (\(T=0\)) for various values of \(t_{1}\), as shown in Fig. 10. As can be seen from Fig. 10a, when the Fermi energy \(E_{F}\) lies in the bulk gap, \(\sigma_{xy}\) becomes quantized at the value \(2\sigma_{0}\). The width of the plateau is equal to the width of the bulk gap in the dispersion spectrum of Fig. 2. As soon as \(E_{F}\) intersects the bands (either both the conduction or both the valence bands), \(\sigma_{xy}\) starts to decrease, since the integral is performed over the occupied states. This also results in a diminishing of the plateau width with an increase in the value of \(t_{1}\), which happens because the energy gap between band-c2 and band-v2 shrinks. The plateau and the Hall conductivity vanish completely at \(t_{1}=2t\), where the spectrum becomes gapless. For the hopping asymmetry engineered beyond the semi-Dirac limit, that is, \(t_{1}>2t\), the bands become gapped again; however, the Hall conductivity remains zero. These results are consistent with the corresponding information coming from the Chern numbers. The Hall plateaus are observed as long as the system remains a Chern insulator, that is, for \(t_{1}<2t\). Further, the factor '2' in \(2\sigma_{0}\) denotes the value of the Chern number (equivalently, the number of chiral edge modes), which vanishes for \(t_{1}>2t\). We have also presented the Hall conductivity for a smaller value of \(t_{\perp}\), namely \(t_{\perp}=0.1t\), in Fig. 10b. In this case, the plateau widths corresponding to different values of \(t_{1}\) are larger compared to those for the \(t_{\perp}=0.5t\) case, since the corresponding band gaps are enhanced, as shown in Fig. 3. However, similar to the previous case, the plateau width decreases with the increase of \(t_{1}\) and finally vanishes at \(t_{1}=2t\) and beyond. Thus, a topological phase transition takes place across the gap-closing point at the semi-Dirac limit, namely \(t_{1}=2t\).

Figure 10: The anomalous Hall conductivities are shown as functions of \(E_{F}\) for various values of \(t_{1}\) in (a) and (b), for \(t_{\perp}=0.5t\) and \(t_{\perp}=0.1t\), respectively. The plateau width decreases as \(t_{1}\) deviates from \(t\).

## VII Conclusion

We have investigated the topological properties of a band-engineered bilayer Haldane model. By tuning one of the three NN hopping amplitudes, the band extrema, which were located at the \(\mathbf{K}\) and the \(\mathbf{K}^{\prime}\) points in the Dirac case, migrate towards each other and finally merge at an intermediate \(\mathbf{M}\) point in the BZ in the semi-Dirac limit, that is, at \(t_{1}=2t\). We have calculated the Chern numbers for various values of \(t_{1}\) and plotted them in the \(\phi_{u}\)-\(\phi_{l}\) plane, which demonstrates that the higher Chern numbers (\(C=\pm 2\)) are associated only with band-v2. However, the Chern numbers corresponding to both bands vanish across the semi-Dirac point \(t_{1}=2t\); that is, there are topological phase transitions where the Chern numbers discontinuously change from \(C=\pm 2\) to \(C=0\) and from \(C=\pm 1\) to \(C=0\). Also, there are multiple phase transitions within the phase diagram, such as \(+2\rightarrow-2\), \(+1\rightarrow-1\), \(\pm 2\rightarrow\pm 1\), and \(\pm 1\rightarrow 0\rightarrow\pm 2\). These phase transitions are confirmed by the opening and closing of the energy gaps (semi-metallic phase) in the dispersion spectrum. Further, we have also computed the band structure of a nanoribbon, where we observe that a pair of chiral edge modes along the edges of the ribbon exists as long as \(t_{1}\) remains less than \(2t\). Also, for the anomalous Hall conductivity, the width of the quantized plateau at \(2\sigma_{0}\) gradually decreases with increasing \(t_{1}\) and finally vanishes at \(t_{1}=2t\). Thus, a bilayer Haldane model, similar to its monolayer analogue, exhibits a topological phase transition at the semi-Dirac point. However, here we have larger values of the Chern number and a doubling of the edge modes at the edges of the bilayer nanoribbon.
Further, the phase transitions are supported by the vanishing of the Chern numbers, the chiral edge modes, and the anomalous Hall conductivity.
2303.09972
Neighborhood Averaging for Improving Outlier Detectors
We hypothesize that similar objects should have similar outlier scores. To our knowledge, all existing outlier detectors calculate the outlier score for each object independently, regardless of the outlier scores of the other objects. Therefore, they do not guarantee that similar objects have similar outlier scores. To verify our proposed hypothesis, we propose an outlier score post-processing technique for outlier detectors, called neighborhood averaging (NA), which pays attention to objects and their neighbors and guarantees that they have more similar outlier scores than their original scores. Given an object and its outlier score from any outlier detector, NA modifies its outlier score by combining it with its k nearest neighbors' scores. We demonstrate the effectiveness of NA by using the well-known k-nearest neighbors (k-NN). Experimental results show that NA improves all 10 tested baseline detectors by 13% (from 0.70 to 0.79 AUC) on average, evaluated on nine real-world datasets. Moreover, even outlier detectors that are already based on k-NN are also improved. The experiments also show that in some applications, the choice of detector is no longer significant when detectors are jointly used with NA, which may pose a challenge to the generally considered idea that the data model is the most important factor. We open our code on www.outlierNet.com for reproducibility.
Jiawei Yang, Susanto Rahardja, Pasi Franti
2023-03-17T13:44:52Z
http://arxiv.org/abs/2303.09972v1
# Neighborhood Averaging for Improving Outlier Detectors

###### Abstract

We hypothesize that _similar objects should have similar outlier scores_. To our knowledge, all existing outlier detectors calculate the outlier score for each object independently, regardless of the outlier scores of the other objects. Therefore, they do not guarantee that similar objects have similar outlier scores. To verify our proposed hypothesis, we propose an outlier score post-processing technique for outlier detectors, called neighborhood averaging (NA), which pays attention to objects and their neighbors and guarantees that they have more similar outlier scores than their original scores. Given an object and its outlier score from any outlier detector, \(\mathrm{NA}\) modifies its outlier score by combining it with its \(k\) nearest neighbors' scores. We demonstrate the effectiveness of \(\mathrm{NA}\) by using the well-known \(k\)-nearest neighbors (\(k\)-\(\mathrm{NN}\)). Experimental results show that \(\mathrm{NA}\) improves all 10 tested baseline detectors by 13% (from 0.70 to 0.79 \(\mathrm{AUC}\)) on average, evaluated on nine real-world datasets. Moreover, even outlier detectors that are already based on \(k\)-\(\mathrm{NN}\) are also improved. The experiments also show that in some applications, the choice of detector is no longer significant when detectors are jointly used with \(\mathrm{NA}\), which may pose a challenge to the generally considered idea that the data model is the most important factor. We open our code on www.outlierNet.com for reproducibility.

Keywords: Outlier detection, neighborhood averaging, \(k\)-\(\mathrm{NN}\), outlier score.

## I Introduction

Outliers are objects that significantly deviate from other objects. Outliers can indicate useful information, which can be applied in applications such as fraud detection [1, 2], abnormal time series [3, 4], and traffic patterns [5, 6]. Outliers can also be harmful because they are generally unwanted, can be considered _errors_, and may bias statistical analyses for applications like clustering [7, 8]. Recently, outlier detection has also been applied to manufacturing data [9] and industrial applications [10]. For these reasons, outliers need to be detected. Most outlier detectors calculate the so-called _outlier score_ for every object independently and then apply a threshold: scores that deviate significantly from the others are labeled as outliers [11]. To improve the results of baseline outlier detectors, _ensemble techniques_ have been developed that combine the outcomes of multiple detectors to obtain a more accurate detector. An example is the _average ensemble_ [1], which calculates the average outlier score from multiple baseline detectors. This can potentially improve outlier detection by smoothing the result of a weak detector and placing more emphasis on detectors that agree on an individual object. However, the existing ensemble techniques merely use more detectors; they do not attempt to ensemble the outlier scores of neighboring objects. Their success is also bounded by the reliability of the baseline detectors. The _outlier score_ is a fundamental concept in all score-based outlier detectors. All outlier detectors assume that outlier objects should have significantly higher or lower outlier scores [1]. Except for that, no attention has been paid to the relationship between objects and their outlier scores. Because outlier objects are directly decided by their outlier scores, it is vital to understand their relationship. In this paper, we address this problem.
Fig. 1: Outlier scores given by three detectors on the task of detecting outlier eggs from a Robin. The results of Detector 3 can be obtained from the results of Detector 2 using the proposed method, as shown in Fig. 7.

In Fig. 1, all detectors successfully assign significantly higher scores to the outlier eggs (red triangles), but the scores alone cannot guide the selection of the best detector. We can see that egg A is distinctive and has the highest score; Detector 2 and Detector 3 are therefore better than Detector 1. Similarly, because eggs C, D, E, and F have the same color and size, they should have the same outlier scores. In this case, Detector 3 is better than Detector 2. Therefore, we can conclude that Detector 3 is the best among the three by comparing the objects' similarities. Based on the case in Figure 1, we conclude that _similar objects should have similar outlier scores_. Although this could be seen as obvious, none of the state-of-the-art outlier detectors uses this. Many detectors simply make use of the objects' neighborhood in the process (especially all \(k\)-NN-based detectors), but they do not consider the resulting scores of the neighbors. For example, object B in Figure 1 has a high outlier score, whereas all the objects near it have low scores. It should therefore have a lower score than object A, which has no normal objects in its vicinity. To address the problem, we propose a novel _neighborhood averaging_ (NA) technique. It post-processes the outlier scores provided by any existing outlier detector by averaging each object's score with the scores of its neighbors. In other words, if an object is an outlier, it is more likely that its near neighbors are also outliers; in this case, the predicted score is enhanced. On the contrary, if the neighboring objects have low outlier scores (predicted as normalities), the score of the object is reduced accordingly. The beauty of NA is that it can serve as an additional and independent post-processing technique. It is different from ensemble techniques because, rather than operating on the results of multiple detectors for a single object, NA operates on the results of a single detector for multiple objects, as shown in Fig. 2. Neighborhood averaging is conceptually and fundamentally different from the ensemble techniques. It is also complementary to the ensembles, and the two approaches can be used jointly. While ensembles cannot assure that similar objects have similar outlier scores, NA can achieve this. Fig. 3 demonstrates all the combinations that can be constructed from NA and the existing outlier detection methods, including ensemble techniques. At the top, we have the typical situation where dataset X is input into an outlier detector, which produces scores that are further processed by a threshold component to determine outliers. The second case is the multi-detector ensemble, where the dataset is input into two outlier detectors to produce two separate scores, which are then combined by the ensemble component before they are processed by the threshold component to determine the outliers. The third case is the proposed NA, where the dataset is input into an outlier detector, after which the scores are averaged before they are processed by the threshold component. The last case is a combination of the multi-detector ensemble and NA, where two outlier detectors produce scores that are first combined by the ensemble and then post-processed by NA.
To summarize this paper's contribution, we use \(k\)-NN to post-process the existing outlier scores to produce more reliable and consistent scores. While there are already many \(k\)-NN-based methods, they all operate in the feature space. In contrast, NA operates in the score space by modifying existing scores without any additional information besides the neighborhood graph. The method is not limited to geographical data [34] or any other single application; it can be applied in any application domain. It can improve any existing score-based outlier detector or ensemble technique, and it is not limited to use with \(k\)-NN-based outlier detectors. We organize this paper as follows. In Section II, we recall several state-of-the-art outlier detectors from several categories; they are later used as our baseline detectors. In Section III, we introduce the proposed hypothesis and NA. The experimental setup is described in Section IV, and the results are shown in Section V. In Section VI, we describe our conclusions.

Fig. 2: Difference between NA and ensembles. Ensembles use multiple detectors' predictions of the _same_ object (on the bottom), while NA uses a single detector's predictions of _different_ (neighboring) objects (with gray background).

Fig. 3: Outlier detection process.

## II Outlier detectors

By constructing the _reference set_ [1] for the calculation of outlier scores, outlier detectors can be grouped into global detectors and local detectors. Global detectors use all objects in the dataset as the reference set, while local detectors use only a small subset of objects, such as the \(k\)-NN. We next review 12 well-known and state-of-the-art outlier detectors, including six \(k\)-NN-based detectors. In distance-based outlier detectors [12, 13, 14], outlier objects are essentially expected to be located far away from other objects. The detector in [12] computes the distance between an object and its \(k^{\text{th}}\) nearest neighbor as the outlier score; this detector is referred to as KNN [12]. A variant that evaluates the average distance to all \(k\) neighbors was proposed in [13]. The method in [15] calculates the distance to the average of the \(k\)-NN. It uses spatial features to determine the neighbors and the other features for the outlier detection. Instead of considering distance, the detector in [14] counts the number of objects within a predefined distance threshold of the object; the count is used as the outlier score. _Outlier detection using indegree of nodes_ (ODIN) [13] is also based on the \(k\)-NN graph. It uses the number of times an object occurs as another object's neighbor as the outlier score. _Reverse unreachability_ (NC, as defined in [16]) is a detector based on representation. A given object is represented by its \(k\)-NN with a weight matrix corresponding to the contribution from each neighbor. The negative weights carry information on the possibility of being an outlier, and the occurrence of negative weights is used as the outlier score. _Mean-shift outlier detection_ (MOD) [7, 17, 18] replaces an object with the mean of its \(k\)-NN; this process is repeated three times. The distance between the original and the modified value of an object is the outlier score. This approach works well especially when a dataset contains a large number of outliers [7].
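As an illustration of the simplest of these baselines, the following sketch computes the KNN score of [12] (the distance to the \(k^{\text{th}}\) nearest neighbor); the use of scikit-learn here is our choice, not the paper's.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_outlier_score(X, k=5):
    """KNN detector [12]: outlier score = distance to the k-th nearest
    neighbor (column 0 of the query result is the object itself)."""
    dist, _ = NearestNeighbors(n_neighbors=k + 1).fit(X).kneighbors(X)
    return dist[:, -1]
```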
In density-based detectors [19, 20], outlier objects have considerably lower densities than their neighbors. The _local outlier factor_ (LOF) [16] evaluates the density of an object relative to that of its \(k\)-NN as the outlier score. In [21], it was reported to be the best detector in a comparison against 12 other \(k\)-NN-based detectors. The _minimum covariance determinant_ (MCD) [22] is based on statistical analysis and is a robust estimator for evaluating the mean and covariance matrix. It finds the 50% of objects whose covariance matrix has the smallest determinant and then uses the distance from an object to the center of these objects as the outlier score. _Isolation-based anomaly detection_ (IFOREST, as defined in [23]) builds trees over the dataset. It recursively separates the objects into two parts with a random threshold on a randomly selected feature. To remove the bias of randomness, it repeats the process several times. The average number of splits needed to isolate an object from the other objects is its outlier score. An improved version of IFOREST can be found in [24]. The _support vector machine_ (SVM) has been widely applied to pattern recognition tasks. The _one-class support vector machine_ (OCSVM) [25] treats the objects as training data and creates a one-class model; the distance to the trained model is then used as the outlier score. _Principal component analysis_ (PCA) is an established data-mining technique that can extract the principal structure of the data. The _principal-component-analysis-based outlier detection method_ (PCAD) [26] reconstructs objects using the eigenvectors and measures the reconstruction errors; the normalized errors are the outlier scores. _Angle-based outlier detection_ (ABOD) [27] calculates the angles between objects, and the variance of these angles is used as the outlier score. It was viewed in [27] as overcoming high dimensionality better than distance-based measures. _Multiple-objective generative-adversarial active learning_ (MO-GAAL) [28] is proposed to overcome the sparsity of data in high-dimensional space by generating additional data objects. MO-GAAL first trains a neural network to classify the generated and real data objects. The outlier score is calculated as the probability of the object being real. The _copula-based outlier detector_ (COPOD) [29, 30] predicts the tail probabilities of each object by constructing an empirical copula; the probability is used as the outlier score.

## III Neighborhood Averaging

In this section, we present the general framework of NA. In general, outlier detectors utilize different assumptions to produce outlier scores, such as distance or density. Beyond such an assumption, however, we do not set any requirements; we rely on the existing detectors and their assumptions.

### _General averaging framework_

The example in Fig. 1 shows that _similar objects should have similar outlier scores_. Although Detector 1 can find the two outliers (with the proper threshold), by plotting the outlier scores in Fig. 4, we can see that there is a local peak in the distribution of the outlier scores, which does not match reality. Fig. 5 shows that the local peak will cause either a false positive or a false negative, regardless of which threshold value is selected. It is therefore necessary to remove the local peak. In recommendation systems [31], a related hypothesis for collaborative filtering techniques states that _similar users should have similar preferences_. Both of these hypotheses rely on defining the similarity of the objects in the feature space. However, there is one important difference between them: while collaborative filtering does not involve any score calculations, the definition of the outlier score is the key to outlier detection. Fig. 6 shows three types of similar objects.

Fig. 4: Local variance in outlier scores: the relative outlier scores do not match the relative degrees of being outliers.

Fig. 5: Visualization of how the local variance (local peak) affects the accuracy of outlier detection. The blue line and green line have local variances. We can see that no matter how we adjust the threshold value, the local variance affects the accuracy by causing either a false positive or a false negative.

Fig. 6: The definition of the similarity of objects can differ with different data. In feature space, it can be based on the distance between objects (left); it can be based on the nodes' common neighbors (middle); and it can be based on which level an object is located at in the structure (right).
### _Neighborhood averaging (NA)_

The proposed NA technique is simple: we take any baseline outlier detector and use it to compute the preliminary outlier score for every object. We then modify the objects' outlier scores so that scores within a neighborhood are closer to one another, which smooths the baseline outlier detector's results. The main advantage of the technique is its applicability to any existing outlier detector or technique. While we use \(k\)-NN in this paper, it should be noted that any neighborhood model can be applied. Neighborhood averaging uses two steps to modify the outlier scores \(\mathrm{S}\) of a given dataset \(\mathrm{X}\) produced by any detector. In the first step, for each object \(\mathrm{X}_{i}\), NA looks for its \(k\)-NN: \(k\)-\(\mathrm{NN}(\mathrm{X}_{i})\). In the second step, NA revises each outlier score \(\mathrm{S}_{i}\) to \(\mathrm{S}_{i}^{*}\), which is the average over the object's own score and the scores of its \(k\)-NN:

\[\mathrm{S}_{i}^{*}\leftarrow\frac{1}{k+1}\left(\mathrm{S}_{i}+\sum_{j=1}^{k}\mathrm{S}_{j}\right);\ \mathrm{X}_{j}\in k\text{-}\mathrm{NN}(\mathrm{X}_{i}),\ \mathrm{S}_{j}\in\mathrm{S}\qquad(1)\]

Algorithm 1 shows the pseudo-code, and Fig. 7 demonstrates NA's two steps. Considering the red object (object B in Fig. 4), NA first searches for its \(k\)-NN and then calculates the average score within its neighborhood. As a result, the peak in the outlier scores in Fig. 4 is removed. Visualization examples with and without NA are shown in Fig. 8. We can see that the LOF detector (with \(k=40\)) falsely detects many boundary objects as outliers (crosses), but it succeeds after using NA. Neighborhood averaging updates the outlier score of an object with the average of the scores of its neighbors. Because an object is typically also a neighbor of other objects, NA can be applied for multiple iterations; in each iteration, only the scores from the previous iteration are used to revise each object's score.
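A compact implementation of Eq. (1) is sketched below, together with an example of applying it on top of LOF. The use of scikit-learn, the synthetic data, and the self-neighbor handling (the query point itself is dropped from its own neighbor list) are our implementation choices.

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor, NearestNeighbors

def neighborhood_averaging(X, scores, k=100, iterations=1):
    """NA, Eq. (1): revise each outlier score to the average over the
    object itself and its k nearest neighbors in feature space."""
    # query k+1 neighbors because the first returned neighbor is the point itself
    _, idx = NearestNeighbors(n_neighbors=k + 1).fit(X).kneighbors(X)
    idx = idx[:, 1:]                        # drop the self-neighbor
    s = np.asarray(scores, dtype=float)
    for _ in range(iterations):             # each pass uses the previous scores
        s = (s + s[idx].sum(axis=1)) / (k + 1)
    return s

# example: post-process LOF scores (oriented so higher = more outlying)
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (500, 2)), rng.uniform(-6, 6, (25, 2))])
raw = -LocalOutlierFactor(n_neighbors=40).fit(X).negative_outlier_factor_
revised = neighborhood_averaging(X, raw, k=100, iterations=1)
```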
In general, all _k_-NN-based detectors use _k_-NN to _produce_ the outlier scores for the objects, as shown at the top of Fig. 9, while NA uses _k_-NN to _revise_ the outlier scores produced by any detector, including all _k_-NN-based detectors, as shown at the bottom of Fig. 9. Using _k_-NN as a _detector_ to produce outlier scores is a well-known approach, but it is novel to use it as a _post-processing technique_ for tuning the scores.

Fig. 8: Visualization of the outlier scores (top) and the detected outliers (bottom). The results at left and right are given by the LOF detector with and without NA, respectively (_k_ = 40). LOF falsely detects boundary objects as outliers (crosses) when evaluated on a noisy A1 dataset [33], while NA improves the result of LOF significantly.

Fig. 7: Illustration of the averaging process: an object B from Fig. 4 (red), and all the outlier scores. NA first finds the 2 nearest objects of B and then calculates the average score within its neighborhood as the revised score: (100+1+1)/3 = 34. As a result, the local peak has been successfully removed by NA.

Fig. 9: Difference between _k_-NN-based detectors, ensembles, and NA.

It is worth noting that other detectors in [7, 13, 15] also utilize \(k\)-NN and the _average_ operation. However, these are stand-alone detectors and cannot serve as add-ons to other detectors, while NA is an add-on to other detectors and cannot be used as a stand-alone detector. Ensemble techniques are also related and share the _combination_ operation. Besides this commonality, NA has three fundamental differences. First, ensemble techniques combine several poor detectors to obtain a better one [1], as shown in the revised outlier score of the ensemble in Fig. 9, while NA removes local variance. Second, ensemble techniques need to compute the outlier score for the same object multiple times, while NA does not. Third, ensemble techniques cannot be applied to a single detector, but NA can. Neighborhood averaging and ensemble techniques are not exclusive, and they can be applied jointly. Their similarity is that both aim to smooth the outlier scores; an ensemble operates across the detectors while NA operates across the objects. Considering the two detectors (the blue and green lines) in Fig. 5, ensemble techniques can improve these two poorly performing detectors only if the two peaks occur at the same location (the same objects) and differ by a suitable amount. It is worthwhile to note that NA may be suitable for other score-based data-mining tasks, because similar input should produce similar output; if similar input does not produce similar output, the model is not consistent. If we define any technique involving a _combination_ operation as an ensemble technique, then we already have feature ensembles (feature bagging), detector ensembles, parameter ensembles, and object ensembles (NA). These ensembles should be applicable to data-mining tasks other than outlier detection. Recently, Ke et al. [34] proposed a method called the group similarity system (GSS) for unsupervised outlier detection, and Yang et al. [35] proposed a data pre-processing technique called neighborhood representative (NR) to detect collective outliers using existing outlier detectors. GSS partitions the data into non-overlapping groups and judges the groups as outliers by considering the mean of the outlier scores of the objects in each group.
NR scores the representative objects sampled from each group and judges the groups as outliers by considering the scores of the representative objects in each group. Neighborhood averaging targets individual outliers rather than collective outliers, making it different from GSS and NR.

## IV Experimental setup

We used nine public, real-world, semantically meaningful static datasets, which can be found in the UCI repository or in [21]. The dimensionality of the datasets varies from 8 to 259. They contain outliers ranging from 0.40% to 75.40% of the data and have between 195 and 60,632 objects, as summarized in Table I. For preprocessing, all data were scaled by subtracting the mean and dividing by the standard deviation of each attribute. The outlier detectors' performance was measured mainly by the _area under_ the _receiver operating characteristic_ (ROC) _curve_ (AUC). The ROC curve is drawn by plotting the true positive rate against the false positive rate over various threshold values. The AUC is a single value between 0 and 1; the larger the value, the better the performance. While AUC measures the average performance, we also tested the performance when a selected thresholding method was applied. For the threshold, we used the known number of outliers in the dataset; this is known as the _top-k method_. The result was measured by the _F1-score_, the harmonic mean of _precision_ and _recall_. Precision is the ability to minimize false positives and recall is the ability to find all the positive samples. For \(k\)-NN-based outlier detectors, we used the value of \(k\) that provided the best results with \(k\) ranging from 2 to 100. The default parameters found in the literature were used for the other detectors. The proposed NA was tested with all values of \(k\) from 1 to 100. We used \(k=100\) as the default value. Neighborhood averaging was iterated 10 times to study the effect of _iterations_.
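Under this protocol, evaluating a detector reduces to a few lines; the following is a minimal sketch assuming scikit-learn metrics, with the function name ours.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, f1_score

def evaluate(scores, y_true):
    """AUC plus top-k F1: flag the k highest-scoring objects as outliers,
    where k is the known number of outliers in the dataset."""
    auc = roc_auc_score(y_true, scores)
    k = int(np.sum(y_true))                    # known outlier count
    y_pred = np.zeros_like(y_true)
    y_pred[np.argsort(scores)[::-1][:k]] = 1   # top-k thresholding
    return auc, f1_score(y_true, y_pred)
```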
## V Results

### _The overall effect of NA_

We varied the neighborhood size \(k\) in NA from 1 to 100 to find the best results and compared them with the results obtained using the default value \(k=100\). The average AUC and F1-score results are summarized in Table II. The AUC results per dataset are summarized in Table III. Based on the results, we can make the following observations. First, based on the AUC results in Table II, the proposed NA significantly improved all the detection results. On average, all the detectors evaluated on all the datasets improved by \(+0.04\) (from 0.70 to 0.74) with the default \(k\), and by \(+0.06\) with the best \(k\). A similar observation holds for the F1-score: neighborhood averaging improved all outlier detectors by \(+0.02\) (from 0.73 to 0.75) on average when using the default value of \(k\), and by \(+0.03\) when using the best value of \(k\). Second, NA provided the largest AUC improvement with the NC detector, from 0.62 to 0.77. The most significant individual improvement was \(+0.28\), for **HeartDisease** and **KDD-Cup99**. This observation is interesting, as NC was originally one of the worst detectors; when used with NA, however, it became competitive. This indicates that NC and NA utilize different properties and are complementary. It also suggests that poorly performing detectors evaluated previously may have been seriously underestimated. Third, the default setting with \(k=100\) performed almost as well as the best \(k\). This shows that NA is robust to the choice of the parameter \(k\). Fourth, as shown in the columns labeled _original_ in Table II, and excepting MO-GAAL, the average AUC of the detectors without NA ranges from 0.62 to 0.75. With NA, however, the range becomes much smaller, from 0.75 to 0.79. This indicates that when NA was not used the choice of detector mattered, but when NA was used it mattered less. This may pose a challenge to the generally accepted idea that _the data model is the most important factor_ [1]. For MO-GAAL, the ROC AUC is near 0.50, which is close to random guessing. This may be because MO-GAAL needs more samples to train the neural network. In Table III, we can see that all detectors on all datasets improved for both the default \(k\) and the best \(k\). The only exception is the result for **Arrhythmia**, which weakened by 0.02 when using the default \(k\). Most datasets improved by \(+0.03\) to \(+0.15\) on average. The most significant individual improvement was for **HeartDisease**, at \(+0.17\) on average. Neighborhood averaging did not help much with datasets containing only a few outliers or when the original detector already performed well. For example, MOD, KNN, IFOREST, OCSVM, and PCAD all achieved AUC = 0.99 for **KDD-Cup99**.

### _Effect of the iterations_

Neighborhood averaging can be iterated several times. Next, we varied the _iteration_ parameter from 1 to 10 to study its effect on the result. The value _iteration_ = 0 corresponds to the original detector without NA. The average AUC results of all detectors evaluated on all datasets, of a selected detector (MOD), and on a selected dataset (**HeartDisease**) are summarized in Table IV, Table V, and Table VI, respectively. The average results in Table IV show that the first iteration achieved the largest improvement (\(+0.06\)). The second iteration achieved a further improvement (\(+0.01\)), but beyond that the effect remained rather small (\(<+0.03\)). Nevertheless, by applying NA for multiple iterations the performance improved from 0.70 to 0.79 AUC. The results for the individual datasets with MOD are reported in Table V. All the datasets evaluated with the MOD detector improved except **Arrhythmia**, which started to deteriorate during the second iteration. This might have been caused by the so-called _curse of dimensionality_ in high-dimensional data, as **Arrhythmia** has 259 dimensions, while all the other datasets have 60 or fewer. Most other datasets improved even when iterated 10 times. Another exception was **Pima**, for which the result started to deteriorate after the fourth iteration. This indicates that the _iteration_ parameter needs to be tuned per dataset if an optimal value is desired. To be conservative, we set the default value to _iteration_ = 1 despite knowing that some datasets, such as **SpamBase** and **HeartDisease**, would benefit from more iterations. The results for the individual detectors on **HeartDisease** are reported in Table VI. It shows that all detectors can benefit from _iteration_ = 2. To summarize, we conclude that the optimal number of iterations in applying NA depends on the dataset and the detector used, and it is not trivial to optimize. Our recommended choice is _iteration_ = 1.

### _The value of \(k\)_

To study the effect of \(k\) in NA, we varied it from 1 to 100. The average AUC values across all the datasets are shown in Fig. 10. The results on a selected individual dataset (**HeartDisease**) are also shown in Fig. 11.
The value \(k=1\) corresponds to the original detector without NA. The results show that, when increasing \(k\), all detectors improved and reached their best performance at \(k=100\). We therefore recommend \(k=100\) as the default value. Neighborhood averaging is proposed as an independent component to improve single outlier detectors. We note that all \(k\)-NN-based outlier detectors also need to select the value of \(k\), and we considered using the same \(k\) value both for the baseline detectors and for NA. We performed additional experiments with the \(k\)-NN-based detectors, varying \(k\) from 3 to 100 to find the best AUC. The average results over all datasets are summarized in Table VII. They show that NA significantly improved the detectors, by \(+0.05\) on average. The largest improvement was achieved with NC (\(+0.11\)). Further minimal improvements might be achieved on some datasets if \(k\) were increased further. However, some datasets do not have enough data to go much beyond 100, and the results would eventually start to degrade. The main result is that we can achieve good performance with rather small \(k\) values.

\begin{table}
\begin{tabular}{l|ccc|ccc}
\hline
 & \multicolumn{3}{c|}{**AUC**} & \multicolumn{3}{c}{**F1-score**} \\
**Name** & original & NA (default \(k\)) & NA (best \(k\)) & original & NA (default \(k\)) & NA (best \(k\)) \\
\hline
MOD [7] & 0.73 & 0.77 & 0.78 & 0.74 & 0.76 & 0.77 \\
LOF [19] & 0.71 & 0.76 & 0.77 & 0.74 & 0.76 & 0.78 \\
ODIN [13] & 0.67 & 0.74 & 0.75 & 0.71 & 0.75 & 0.76 \\
NC [16] & 0.62 & 0.74 & 0.77 & 0.71 & 0.75 & 0.77 \\
KNN [12] & 0.75 & 0.76 & 0.79 & 0.74 & 0.76 & 0.77 \\
ABOD [27] & 0.69 & 0.72 & 0.75 & 0.73 & 0.75 & 0.75 \\
MCD [22] & 0.71 & 0.75 & 0.77 & 0.72 & 0.75 & 0.76 \\
IFOREST [23] & 0.74 & 0.77 & 0.79 & 0.74 & 0.76 & 0.77 \\
OCSVM [25] & 0.71 & 0.75 & 0.76 & 0.71 & 0.75 & 0.76 \\
PCAD [26] & 0.72 & 0.75 & 0.76 & 0.73 & 0.75 & 0.75 \\
MO-GAAL [28] & 0.55 & 0.58 & 0.60 & 0.67 & 0.69 & 0.69 \\
COPOD [30] & 0.75 & 0.77 & 0.79 & 0.76 & 0.77 & 0.78 \\
\hline
**AVG** & 0.70 & 0.74 & 0.76 & 0.73 & 0.75 & 0.76 \\
\hline
\end{tabular}
\end{table}
TABLE II: Average AUC and F1-score over all datasets.
\begin{table}
\begin{tabular}{l|ccccccccccc}
\hline
 & \multicolumn{11}{c}{**Iteration**} \\
**Detector** & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 \\
\hline
MOD & 0.73 & 0.78 & 0.80 & 0.81 & 0.81 & 0.81 & 0.81 & 0.81 & 0.81 & 0.81 & 0.81 \\
LOF & 0.71 & 0.77 & 0.78 & 0.79 & 0.80 & 0.80 & 0.80 & 0.80 & 0.80 & 0.80 & 0.80 \\
ODIN & 0.67 & 0.75 & 0.77 & 0.78 & 0.79 & 0.79 & 0.80 & 0.80 & 0.80 & 0.80 & 0.80 \\
NC & 0.62 & 0.77 & 0.76 & 0.76 & 0.75 & 0.75 & 0.74 & 0.74 & 0.74 & 0.74 & 0.74 \\
KNN & 0.75 & 0.79 & 0.81 & 0.81 & 0.81 & 0.81 & 0.81 & 0.81 & 0.81 & 0.81 & 0.81 \\
ABOD & 0.69 & 0.75 & 0.78 & 0.79 & 0.79 & 0.79 & 0.79 & 0.79 & 0.79 & 0.79 & 0.79 \\
MCD & 0.71 & 0.77 & 0.78 & 0.79 & 0.79 & 0.79 & 0.79 & 0.79 & 0.79 & 0.79 & 0.79 \\
IFOREST & 0.74 & 0.79 & 0.80 & 0.82 & 0.82 & 0.82 & 0.82 & 0.82 & 0.82 & 0.82 & 0.83 \\
OCSVM & 0.71 & 0.76 & 0.79 & 0.81 & 0.82 & 0.82 & 0.81 & 0.82 & 0.82 & 0.82 & 0.82 \\
PCAD & 0.72 & 0.76 & 0.77 & 0.78 & 0.78 & 0.78 & 0.78 & 0.78 & 0.78 & 0.78 & 0.78 \\
MO-GAAL & 0.55 & 0.60 & 0.61 & 0.61 & 0.61 & 0.61 & 0.61 & 0.61 & 0.61 & 0.61 & 0.62 \\
COPOD & 0.76 & 0.79 & 0.82 & 0.83 & 0.84 & 0.84 & 0.85 & 0.85 & 0.85 & 0.85 & 0.85 \\
\hline
**AVG** & 0.70 & 0.76 & 0.77 & 0.78 & 0.78 & 0.78 & 0.78 & 0.79 & 0.79 & 0.79 & 0.79 \\
\hline
\end{tabular}
\end{table}
TABLE IV: Average AUC results over all datasets as a function of the number of NA iterations.

### _Outlier ensembles_

Next, we tested the effect of combining NA with an existing outlier ensemble technique. We used the average ensemble method [1] with different baseline detector combinations. The results are summarized in Table VIII. We observe that the results of the outlier ensemble depend on the quality of the individual detectors. The best result is obtained by the combination of MOD and KNN, which reaches 0.75. Combining all 12 detectors reaches only 0.69. When applying NA jointly with the outlier ensemble, we observed the following. First, no matter which combination was used, NA always improved the result of the ensemble. Second, the best combination no longer depended on the quality of the individual detectors. The best combination (MOD and KNN) includes one of the weaker baseline detectors among those tested. This combination with NA reached the overall best result of 0.79, which is very close to the result (0.77) reached without optimizing the parameter \(k\). This indicates that NA provides a strong complementary component to ensembles.

### _Complementary to NR_

Since our previous work, NR, is a data pre-processing method for improving detectors, we wanted to know whether NA, as an outlier-score post-processing method, could further improve on NR. We tested LOF, NR+LOF, LOF+NA, and NR+LOF+NA, setting their parameter \(k\) to the same value, and plotted the results in Fig. 12 for the **Parkinson**, **HeartDisease**, and **Spambase** datasets. From Fig. 12, we observe that a relatively large \(k\), close to 200, worked well when NR and NA were used jointly. NR+LOF+NA further improved on NR+LOF by 31% on average (0.88 vs. 0.71 AUC), as shown on the right of Fig. 12. It is noteworthy that the performance of LOF on the **Spambase** dataset, at 0.49 AUC, was close to random guessing, but when used jointly with NR and NA it achieved 0.81 AUC. Another important observation is that NR+LOF+NA reached more than 0.90 AUC on **Parkinson** and **HeartDisease**, compared with LOF's results of less than 0.70, which is far better than any unsupervised result in the existing literature, to the best of our knowledge. In short, NA is strongly complementary to NR.

### _Computational complexity_

Neighborhood averaging requires O(\(N\log N\)) calculations to find the \(k\)-NN, using a KD-tree in low dimensions (D \(<\) 20) and a Ball-tree in higher dimensions (D \(>\) 20). However, since NA serves as a post-processing step, we care more about its gain relative to its additional cost. Table IX reports the average extra computing time and the average AUC improvement over all datasets. It shows that the \(k\)-NN-based detectors need only 4% extra time but can improve by 11% in AUC on average.
Non-_k_-NN-based detectors are usually significantly faster to begin with, so NA costs them 2,543% extra time on average to reach an average improvement of 7% in AUC. The main reason is that the _k_-NN-based detectors have already calculated the _k_-NN, which NA can directly reuse.

### _Discussion and limitations_

**Discussion.** Neighborhood averaging is not meant to be a stand-alone detector; rather, it is an add-on to any existing score-based outlier detector used to enhance its performance, as shown in the neighborhood attention example in Fig. 3. The add-on does not increase the complexity of _k_-NN-based detectors, as shown in Section V-E, but it can bring significant improvement, as shown in Section V-A. Neighborhood averaging has only one parameter, \(k\), which is not sensitive (not oscillating) to detectors or datasets, and it is easy to tune, as demonstrated in Section V-C. Hence, NA is very useful for practical applications.

**Limitations.** One limitation of the method is the _k_-NN graph. Some neighbors can be far away, and simple averaging may not be the best solution. Possible alternatives are the weighted average and the medoid. Different neighbor graphs could also be used; some alternatives include the mutual neighborhood [36], the _k_-MST [37], and XNN [38]. Nevertheless, NA is already successful, and we leave these ideas for future work. The method also has the same limitation as other distance-based methods: its performance starts to degrade when the dimensionality is large, as shown on the 259-dimensional Arrhythmia dataset. Neighborhood averaging still improved the result there, but the performance started to degrade when the method was iterated more than once. Such problems are common for distance-based pattern recognition methods operating in the raw attribute space and are often referred to as the "curse of dimensionality."

Fig. 12: Experimental results of LOF, NR+LOF, LOF+NA, and NR+LOF+NA over a range of \(k\). NA is complementary to NR.

## VI Conclusions

A novel post-processing technique called neighborhood averaging (NA) is proposed. The technique can be used to improve any existing outlier detector. Experiments showed that it significantly improved all 12 tested outlier detectors, from 0.70 to 0.79 AUC on average. The technique does not require any complicated parameter tuning; \(k\) is the only parameter. When used with a \(k\)-NN-based baseline detector, the \(k\)-NN need not be recalculated: the existing one, with the same \(k\) value as the detector, can be reused. With non-\(k\)-NN-based detectors, setting the value \(k=100\) was shown to provide good results for almost all datasets. It is worth noting that once NA is applied, even a poorly performing outlier detector becomes competitive. This can help practitioners, as they have one less design component to consider. Outlier detection is an important topic in data mining. In addition to its ability to detect outliers in static data, it can also handle dynamic cases such as time series and is therefore useful for applications like audio and video content analysis. In general, whenever the _similarity_ between objects can be properly predefined, whether the data are static or dynamic, the concept of a _neighborhood_ can be applied. Therefore, the proposed NA can be applied to enhance performance consistently and significantly. Neighborhood averaging has the potential to be widely adopted in a variety of applications in data mining and beyond.
2304.13755
No Tension: JWST Galaxies at $z > 10$ Consistent with Cosmological Simulations
Recent observations by JWST have uncovered galaxies in the very early universe via the JADES and CEERS surveys. These galaxies have been measured to have very high stellar masses with substantial star formation rates. There are concerns that these observations are in tension with the $\Lambda$CDM model of the universe, as the stellar masses of the galaxies are relatively high for their respective redshifts. Recent studies have compared the JWST observations with large-scale cosmological simulations. While they were successful in reproducing the galaxies seen in JADES and CEERS, the mass and spatial resolution of these simulations were insufficient to fully capture the early assembly history of the simulated galaxies. In this study, we use results from the Renaissance simulations, which are a suite of high resolution simulations designed to model galaxy formation in the early universe. We find that the most massive galaxies in Renaissance have stellar masses and star formation rates that are entirely consistent with the observations from the JADES and CEERS surveys. The exquisite resolution afforded by Renaissance allows us to model the build-up of early galaxies from stellar masses as low as 10$^4$ M$_\odot$ up to a maximum stellar mass of a few times 10$^{7}$ M$_\odot$. Within this galaxy formation paradigm, we find excellent agreement with JADES and CEERS. We find no tension between the $\Lambda$CDM model and current JWST measurements. As JWST continues to explore the high redshift universe, high resolution simulations, such as Renaissance, will continue to be crucial in understanding the formation history of early embryonic galaxies.
Joe McCaffrey, Samantha Hardin, John Wise, John Regan
2023-04-26T18:00:09Z
http://arxiv.org/abs/2304.13755v2
# No tension: JWST galaxies at \(z>10\) consistent with cosmological simulations ###### Abstract Recent observations by JWST have uncovered galaxies in the very early universe via the JADES and CEERS surveys. These galaxies have been measured to have very high stellar masses with substantial star formation rates. There are concerns that these observations are in tension with the \(\Lambda\)CDM model of the universe, as the stellar masses of the galaxies are relatively high for their respective redshifts. Recent studies have compared the JWST observations with large-scale cosmological simulations. While they were successful in reproducing the galaxies seen in JADES and CEERS, the mass and spatial resolution of these simulations were insufficient to fully capture the early assembly history of the simulated galaxies. In this study, we use results from the Renaissance simulations, which are a suite of high resolution simulations designed to model galaxy formation in the early universe. We find that the most massive galaxies in Renaissance have stellar masses and star formation rates that are entirely consistent with the observations from the JADES and CEERS surveys. The exquisite resolution afforded by Renaissance allows us to model the build-up of early galaxies from stellar masses as low as \(10^{4}\) M\({}_{\odot}\) up to a maximum stellar mass of a few times \(10^{7}\) M\({}_{\odot}\). Within this galaxy formation paradigm, we find excellent agreement with JADES and CEERS. We find no tension between the \(\Lambda\)CDM model and current JWST measurements. As JWST continues to explore the high redshift universe, high resolution simulations, such as Renaissance, will continue to be crucial in understanding the formation history of early embryonic galaxies. Subject headings: JWST, Galaxies, Cosmological Simulations, High-Redshift.

## 1 Introduction

With the launch of JWST, and now its first observations, the high-redshift universe is being unveiled in unprecedented detail. With exquisitely detailed measurements of distant early galaxies now within observational reach, it is possible to match these observations against high resolution simulations of early galaxy formation, a comparison that was previously intractable. The JADES survey (Bunker, 2019) has provided measurements on five galaxies with spectroscopically confirmed redshifts at \(z>10\). Three of these galaxies are the most distant yet detected. Robertson et al. (2022), Curtis-Lake et al. (2022) and Bunker et al. (2023) have constrained the physical properties of these five galaxies, finding that the galaxies lie at (mean) spectroscopic redshifts of \(z=10.38\) (GS-z10-0), \(z=11.58\) (GS-z11-0), \(z=12.63\) (GS-z12-0), \(z=13.20\) (GS-z13-0) and \(z=10.60\) (GN-z11). Additionally, the CEERS project (Finkelstein et al., 2022) has provided measurements for Maisie's galaxy, which has a redshift of 11.44 (Arrabal Haro et al., 2023). In total there are six spectroscopically confirmed galaxies against which we can directly compare. The JADES survey, performed using the NIRCam instrument on JWST, targets a region previously studied by the Hubble Space Telescope (Beckwith et al., 2006) in nine different wavelength ranges. JADES was conducted with the aim of detecting faint galaxies using the dropout technique (Bunker, 2019), allowing for fast identification of high redshift galaxy candidates.
However, the photometry alone is not enough to confirm a candidate's redshift, and a follow-up spectrum, using an instrument like NIRSpec, is needed to confidently quantify the redshifts of the candidates, as noted above. Similar to JADES, CEERS aims to study the first 500 Myr of galaxy evolution, again using the NIRCam instrument for fast identification followed by longer-duration follow-up with NIRSpec. Initial photometric measurements of Maisie's galaxy and of CEERS-93316 placed them at photometric redshifts of \(z_{\rm phot}=11.08\) (Finkelstein et al., 2022) and \(z_{\rm phot}=16.45\) (Donnan et al., 2022), respectively. Spectroscopic follow-up measurements of these galaxies confirmed their redshifts to be \(z_{\rm spec}=11.44\) and \(z_{\rm spec}=4.912\) (Arrabal Haro et al., 2023), respectively. Because the spectroscopic redshift of CEERS-93316 turned out to be much lower, we do not include it in the analysis in this paper and instead include only the high-redshift Maisie's galaxy. The spectra of the galaxies found in JADES and CEERS were analysed using Beagle (Chevallard & Charlot, 2016) to estimate the stellar mass and star formation rate of each galaxy, which we compare directly against our high resolution simulations. More details on the modelling procedure used to reduce the observational data can be found in the detection papers (e.g. Bunker et al., 2023; Arrabal Haro et al., 2023). In a recent study, Keller et al. (2023), hereafter K23, tested the capabilities of a variety of cosmological simulations by investigating whether these simulations were able to reproduce galaxies with properties similar to the galaxies observed in the JADES and CEERS surveys. To do this, K23 utilised EAGLE (McAlpine et al., 2016), Illustris (Vogelsberger et al., 2014), TNG100 (Naiman et al., 2018), RomulusC (Tremmel et al., 2018), Obelisk (Trebitsch et al., 2021) and Simba (Dave et al., 2019). K23 concluded that the cosmological simulations they examined were able to reproduce galaxies with stellar masses and star formation rates (SFRs) similar to the galaxies observed in the JADES survey, and that the observed galaxies are consistent with a flat \(\Lambda\)CDM model. However, the simulation datasets available to K23 lacked the mass and spatial resolution necessary to probe both the cosmic star formation history and the assembly history of these early galaxies. In this paper, we build on the work of K23 by comparing the JADES and CEERS results against high resolution simulations that were specifically designed to examine a high-redshift environment and to model the early assembly history of the first galaxies. This study accomplishes this using the Renaissance simulation suite (Xu et al., 2013; Chen et al., 2014; O'Shea et al., 2015). The Renaissance simulations model early galaxy formation in three regions which differ from each other in their level of overdensity (see O'Shea et al., 2015, for details). Using a similar methodology to K23, we compare the results from these high resolution simulations to the JADES and CEERS galaxy property estimates, observationally validating our simulation results and also determining the likelihood of such massive galaxies forming early in a \(\Lambda\)CDM cosmology. There have been significant concerns that the early measurements by JWST are in conflict with the \(\Lambda\)CDM model of the universe.
In particular, there are claims that the stellar masses of the galaxies observed by JWST are simply too massive to be explained via a \(\Lambda\)CDM cosmology (e.g. Haslbauer et al., 2022) and that the masses of the JWST galaxies, as measured at redshifts between 7 and 10 by Labbe et al. (2022) in particular, are testing the upper limits on the baryonic mass available according to \(\Lambda\)CDM (Boylan-Kolchin, 2023). This paper addresses these concerns by showing that simulations (based on a \(\Lambda\)CDM cosmology) are able to reproduce galaxies entirely consistent with the early findings of JWST, at least at the very highest redshifts explored by JWST. If the Labbe et al. (2022) result holds under spectroscopic scrutiny and the galaxies are found to lie in typical regions of a \(\Lambda\)CDM universe, then, as pointed out by Boylan-Kolchin (2023), this represents a major challenge to standard cosmology. However, see also Prada et al. (2023), which offers an alternative explanation for the large stellar masses found by Labbe et al. (2022). The paper is laid out as follows: §2 describes the high-resolution Renaissance simulations as well as the methodology and code used to run the suite, §3 discusses the results of the analysis performed, and in §4 we discuss the implications of the results and the case for further analysis of the high redshift universe using cosmological simulations.

## 2. Methodology

### The Renaissance Simulations

The Renaissance simulations (Xu et al., 2013; Chen et al., 2014; O'Shea et al., 2015; Smith et al., 2018; Wise et al., 2019) were run using the massively parallel adaptive mesh refinement Enzo code (Bryan et al., 2014; Brummel-Smith et al., 2019). We briefly describe the methods used here, but refer the interested reader to the previous papers for a more complete discussion. The Renaissance simulation suite is composed of three zoom-in regions (see Figure 1) extracted from a parent volume of 40 Mpc on a side. The three separate zoom-in regions were named the Rarepeak (RP) region, the Normal region and the Void region. The zoom-in volumes ranged from 200 to 430 comoving Mpc\({}^{3}\). The RP region is centred on a \(3\times 10^{10}\) M\({}_{\odot}\) halo at \(z=6\) with an enclosing volume of \((3.8\times 5.4\times 6.6)\) Mpc\({}^{3}\). The Normal and Void volumes have comoving volumes of \((6.0\times 6.0\times 6.125)\) Mpc\({}^{3}\). All three regions have projected areas comparable to the NIRCam field of view. The Renaissance suite uses the cosmological parameters from the 7-year WMAP \(\Lambda\)CDM+SZ+LENS best fit (Komatsu et al., 2011) with \(\Omega_{\rm M}=0.266\), \(\Omega_{\Lambda}=0.734\), \(\Omega_{\rm b}=0.0449\), \(h=0.71\), \(\sigma_{8}=0.81\) and \(n=0.963\). The (dark matter) particle mass resolution of the Renaissance suite is \(2.9\times 10^{4}\) M\({}_{\odot}\) and the maximum spatial resolution afforded by the adaptive mesh is 19 comoving pc. This allows the Renaissance suite to resolve most of the minihaloes in which the first stars are expected to form (e.g. Machacek et al., 2001; Kulkarni et al., 2021; Chiaki et al., 2023). In particular, Renaissance employs a model for metal-free (Population III) star formation (Wise et al., 2012), allowing stochastic sampling of the formation of the first metal-free stars. The resulting metal enrichment driven by the collapse of the first stars results in the emergence of the second generation of stars, which ultimately leads to the birth of the first massive galaxies - the galaxies which JWST is now observing.
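For orientation, the WMAP7 parameters quoted above fix the mapping between redshift and cosmic time used throughout this comparison. A minimal sketch with astropy (our choice of library, not part of the Renaissance pipeline) evaluates the cosmic age at the final output redshifts quoted below.

```python
from astropy.cosmology import FlatLambdaCDM

# WMAP7 best-fit values quoted above (H0 = 100 h km/s/Mpc with h = 0.71)
cosmo = FlatLambdaCDM(H0=71.0, Om0=0.266, Ob0=0.0449)

# Cosmic age at the final output of each zoom-in region
for region, z_final in [("Rarepeak", 15.0), ("Normal", 11.6), ("Void", 8.0)]:
    print(f"{region}: z = {z_final}, age = {cosmo.age(z_final).to('Myr'):.0f}")
```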
The computational complexity of the Renaissance suite means that evolving these simulations to the present day is completely intractable. As such, the RP simulation was evolved to \(z=15\), the Normal simulation to \(z=11.6\) and the Void region to \(z=8\). As some of the JADES and CEERS results are at somewhat lower redshifts (compared to the RP and Normal runs), we extrapolate our results to the JADES and CEERS spectroscopic redshifts in some cases.

Figure 1.— Mass-weighted density projection of the (40 comoving Mpc)\({}^{3}\) exploratory dark matter simulation at redshift \(z=8\) used in the Renaissance suite. The survey volumes of the Rarepeak, Normal and Void regions are outlined. The Rarepeak region is centered on the most massive halo at \(z=6\). Due to projection effects, the Normal region appears as dense as the Rarepeak region. However, the average overdensity of the Normal region is only 9% higher than the mean matter density. Visualisation originally published in Xu et al. (2016).

As discussed in the Introduction, the comparison study undertaken by K23 uses the simulation datasets from EAGLE, Illustris, TNG100, RomulusC, Obelisk and Simba. In Table 1 we compare the simulation datasets used in K23 versus that used here (i.e. against Renaissance).

\begin{table}
\begin{tabular}{|c|c|c|c|}
\hline
Simulation & Box size [cMpc] & \(M_{\rm DM}\) [M\({}_{\odot}\)] & \(\Delta x_{\rm DM,*}\) [pc] \\
\hline
Simba & 147.7 & \(9.7\times 10^{7}\) & 500 \\
EAGLE & 100 & \(9.7\times 10^{6}\) & 2660 \\
TNG100 & 110.7 & \(7.5\times 10^{6}\) & 740 \\
Illustris & 106.5 & \(6.3\times 10^{6}\) & 710 \\
Obelisk & 142.0 & \(1.2\times 10^{6}\) & 540 \\
RomulusC & 50 & \(3.4\times 10^{5}\) & 250 \\
Renaissance & 40 & \(2.9\times 10^{4}\) & 19 \\
\hline
\end{tabular}
\end{table}
Table 1. The first column gives the simulation suite name, the second column the box-size length of each simulation used in K23, the third column the dark matter particle mass resolution, \(M_{\rm DM}\), and the fourth column the spatial resolution. The spatial resolution is based on the gravitational softening lengths for the SPH simulations and on the cell length for the AMR simulations. For the Renaissance suite we give the parent box size (40 cMpc), but note that the results we show here are for the zoom-in regions, which have box lengths of approximately 6 cMpc.

The simulations used in K23 do not have sufficient resolution to probe the formation of the first stars and can resolve, at best, the formation of the first atomic cooling haloes. The Renaissance suite allows us to probe the assembly processes involved in forming the haloes that appear in the simulations used in K23, as well as the building blocks of the galaxies now being observed with JWST.

### Extrapolating the Stellar Mass of the Simulated Galaxies based on the Star Formation History

To properly compare the simulated galaxy properties against observations, we need simulated values at the same redshifts as the JADES and CEERS measurements. However, the Normal and RP regions of the simulations only reach redshifts of \(z=11.6\) and \(z=15\), respectively. Therefore, the Normal region does not reach sufficiently low redshifts so as to be directly comparable against two of the JADES and CEERS galaxies, while the RP region cannot be directly compared, in terms of redshift, against any of the JADES and CEERS galaxies. To rectify this, we extrapolate the stellar masses of the most massive galaxy in both the Normal and RP regions forward in time, based on their respective SFRs, to connect with the observational redshifts. The definition of the specific star formation rate (sSFR) is

\[\Psi_{\rm S}\equiv\frac{\Psi}{M_{*}} \tag{1}\]

where \(\Psi\) is the SFR and \(M_{*}\) is the stellar mass. We use three values for \(\Psi_{\rm S}\) in our extrapolation method: a maximum value of \(10^{-7}\) yr\({}^{-1}\), a nominal value of \(10^{-8}\) yr\({}^{-1}\), and a minimum value of \(10^{-9}\) yr\({}^{-1}\). These values were chosen from the range of sSFR values found in Renaissance (see also Figure 4). The extrapolated mass is dictated by the differential equation

\[\Psi\equiv\frac{dM_{*}}{dt}=\Psi_{\rm S}M_{*}, \tag{2}\]

whose solution is

\[M_{*}(t)=M_{0}e^{\Psi_{\rm S}(t-t_{0})}. \tag{3}\]

Here, \(M_{0}\) is the final simulated stellar mass of the halo and \(t_{0}\) is the final simulated time. Using Equation (3) we can then predict the stellar mass of the galaxy past the final simulation time. We discuss the impact of this extrapolation method in more detail next.
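Equation (3) amounts to a one-line exponential; the following minimal sketch applies it under the stated constant-sSFR assumption, reusing the `cosmo` object from the earlier astropy sketch to convert redshifts into cosmic times (the function name and the example final mass are ours).

```python
import numpy as np

def extrapolate_stellar_mass(m0, z_sim, z_obs, ssfr=1e-8):
    """Equation (3): M*(t) = M0 * exp(ssfr * (t - t0)) at constant sSFR.

    m0    : final simulated stellar mass [Msun]
    z_sim : redshift of the final simulation output (e.g. 11.6 for Normal)
    z_obs : observed redshift to extrapolate to (z_obs < z_sim)
    ssfr  : specific star formation rate [1/yr]
    """
    dt = (cosmo.age(z_obs) - cosmo.age(z_sim)).to("yr").value
    return m0 * np.exp(ssfr * dt)

# Bracket the extrapolation with the minimum, nominal and maximum sSFRs,
# e.g. from the Normal region's last output (z = 11.6) to GN-z11 (z = 10.60),
# assuming an illustrative final stellar mass of 3e7 Msun.
lo, mid, hi = (extrapolate_stellar_mass(3e7, 11.6, 10.60, s)
               for s in (1e-9, 1e-8, 1e-7))
```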
## 3. Results

We begin here by comparing the most massive galaxy in each of the RP, Normal and Void regions against the JADES and CEERS results. We then follow this by comparing the global galaxy assembly history of all of the galaxies in the Renaissance suite against the JADES and CEERS results.

### Comparing the Most Massive Galaxies in Renaissance with the JADES and CEERS measurements

In Figure 2 we plot the stellar masses of the most massive galaxy in each region in Renaissance against the stellar masses of each of the JADES and CEERS survey galaxies. Because both the RP and Normal regions have a final output time earlier than some or all of the observed measurements, we plot an extrapolation of their stellar masses, depicted by the shaded regions and described in §2.2. Examining Figure 2, we can see that the most massive galaxy in the Normal region (blue line), which evolves to \(z=11.6\), has a stellar mass greater than GS-z10-0 and is consistent with, or within a factor of a few of, the stellar masses of the remaining five galaxies. The RP region, which does not overlap in redshift with any of the JADES and CEERS galaxies, is consistent in terms of stellar mass with each of the observed galaxies once extrapolation is considered. The upper bounds of both shaded regions are based on a constant sSFR value of \(10^{-7}\) yr\({}^{-1}\), the lower bounds on \(10^{-9}\) yr\({}^{-1}\), and the dashed lines show the extrapolated stellar masses based on the nominal sSFR value of \(10^{-8}\) yr\({}^{-1}\). The dashed line belonging to the RP region is entirely consistent with each of the observed galaxies. The Void region shows systematically lower stellar masses and does not achieve the same stellar masses as those found in JADES and CEERS. Large-scale overdensities are directly related to the overabundances of galaxies through the halo mass function and the halo mass - stellar mass relation, shown by Xu et al. (2016) for the Renaissance suite in particular. The observed stellar mass estimates suggest that the CEERS and JADES fields are not likely to be underdense regions. However, additional work would be needed to statistically conclude that the JADES and CEERS fields were observed in overdense regions. As of now, the observed galaxies are entirely consistent with both the Normal and RP regions.
To provide additional context and to quantify the rarity of haloes at these masses and redshifts, we plot lines in Figure 2 to represent the number of haloes of a certain (stellar) mass we expect JWST to see at a specific redshift. These lines are based on the baryon fraction obtained from WMAP7 and a star formation efficiency (SFE) of \(f_{*}\equiv M_{*}/[(\Omega_{\rm b}/\Omega_{\rm M})M_{\rm halo}]=0.1\). The value of \(dn(M_{*},z)/dz\) represents the number of galaxies of stellar mass \(M_{*}\) or greater we expect to see at redshift \(z\) in the NIRCam field of view. For example, \(dn/dz=10^{-3}\) tells us that we expect to see one galaxy with stellar mass \(M_{*}\) or greater at redshift \(z\) in every one thousand frames with the angular size of NIRCam. All of the galaxies are consistent with the \(dn/dz=1\) line except the GS-z11-0 galaxy, whose error bars are marginally above the \(dn/dz=1\) line. This means that we should expect to observe at least one galaxy of that stellar mass in the field of view of NIRCam. The lines at \(dn/dz=10^{-3}\) and \(10^{-6}\) represent mass scales that would be increasingly unlikely to observe at that redshift. The calculations used in the creation of these values are described in Appendix A.

Figure 2.— The most massive galaxy in each of the Rarepeak (RP, orange), Normal (blue) and Void (green) regions is shown. The six star-shaped symbols identify the JADES and CEERS galaxies, giving both their stellar masses and the redshift at which they were spectroscopically identified. The most massive galaxies in both the RP and the Normal regions are in excellent agreement with the JADES and CEERS observations. The shaded region denotes extrapolation of the stellar masses based on the SFR history of the respective galaxy, where the region is bounded from above by a specific star formation rate of \(10^{-7}\) yr\({}^{-1}\) and from below by an sSFR of \(10^{-9}\) yr\({}^{-1}\). The dashed line represents an extrapolated mass based on an sSFR of \(10^{-8}\) yr\({}^{-1}\). Finally, we plot as solid black lines the expectation value for finding a galaxy of a given stellar mass at a given redshift in a field of view comparable to NIRCam. For this we assume a gas mass corresponding to the baryon fraction and a star formation efficiency (SFE) of 0.1, using the Sheth-Tormen (Sheth & Tormen, 1999) halo mass function as the underlying framework. All of the JADES, CEERS and Renaissance haloes are consistent with \(\Lambda\)CDM predictions of finding at least one halo with these stellar masses in this volume. See text for further details on the consistency with respect to \(\Lambda\)CDM predictions.
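These expectation lines rest on a simple inversion of the assumed SFE relation: a stellar mass maps onto a halo mass via \(M_{\rm halo}=M_{*}/[f_{*}(\Omega_{\rm b}/\Omega_{\rm M})]\), after which the number counts follow from the Sheth-Tormen mass function (Appendix A). A minimal sketch of the inversion step, with our own function name:

```python
def implied_halo_mass(m_star, f_star=0.1, omega_b=0.0449, omega_m=0.266):
    """Halo mass implied by a stellar mass, inverting the assumed
    SFE relation f_* = M_* / [(Omega_b / Omega_M) * M_halo]."""
    return m_star / (f_star * omega_b / omega_m)

# A 1e8 Msun galaxy at f_* = 0.1 implies a ~6e9 Msun halo under WMAP7.
print(f"{implied_halo_mass(1e8):.2e}")
```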
Figure 3 shows the SFR history of the most massive galaxy in the RP, Normal and Void regions - the same galaxies as in Figure 2. Comparing the observed (coloured symbols) and simulated galaxies (solid coloured lines), it is clear that Renaissance is able to reproduce galaxies with star formation rates entirely consistent with the galaxies observed in JADES and CEERS. This is achieved due to the excellent mass and spatial resolution afforded by Renaissance, which gives it the unique capability to follow the formation of the galaxy stellar population(s) from approximately \(10^{4}\) M\({}_{\odot}\) up to the final simulated stellar masses of a few times \(10^{7}\) M\({}_{\odot}\). What we see is that the star formation rates start several orders of magnitude below those observed, but in very inefficient star-forming haloes. As the star formation efficiency increases (see also Figure 4), the simulated values quickly converge to the observed values, ultimately reaching values of approximately 1 M\({}_{\odot}\) yr\({}^{-1}\). It is the superior mass resolution of Renaissance, afforded through the use of adaptive grids, that allows us to follow the build-up of the galaxy from initially very small stellar masses (\(\sim 10^{4}\) M\({}_{\odot}\)) up to a final stellar mass of more than \(10^{7}\) M\({}_{\odot}\), while the spatial resolution of 19 comoving pc allows the internal structure and dynamics of these early galaxies to be well resolved. We observe an initial burst of star formation in the Renaissance galaxies plotted in Figure 3, with a leveling off in the SFR as the halo continues to evolve. This occurs because, in the initial stages of star formation in the haloes, the mass resolution of Renaissance is sufficient to capture the early assembly history. Once the halo mass exceeds the atomic cooling threshold and gas can cool via atomic line emission, the gas begins to collapse and form (Population II) stars more readily, and the star formation efficiency increases rapidly. The feedback from the (Population II) star formation heats up the surrounding environment, thus regulating further star formation. This results in an equilibrium in the SFR of the halo and is represented by the leveling off seen in Figure 3.

### Comparing all Galaxies in Renaissance with JADES Measurements and Simulation Results

Figure 4 shows the SFR of each galaxy in Renaissance as a function of its stellar mass, along with the galaxies from the simulations included in K23. By including the data from our simulations and the K23 simulations, we see how both lower and higher mass galaxies compare to the JADES and CEERS data, which are plotted in Figure 4 as coloured stars. The data on GS-z13-0, GS-z12-0, GS-z11-0 and GS-z10-0 are taken from Robertson et al. (2022), GN-z11 is taken from Bunker et al. (2023), while the data on Maisie's galaxy are taken from Arrabal Haro et al. (2023). In this comparison, we consider all galaxies in the RP, Normal, and Void regions of the Renaissance simulations, extending down to very low masses. The exquisite mass resolution provided by Renaissance allows us to see in detail the star formation history from the very beginning of galaxy formation, shown starting from \(10^{4}\) M\({}_{\odot}\) (blue hexbins), while the K23 simulations (grey hexbins) provide insight into the higher mass regime. As shown by Figure 4, the Renaissance and K23 simulations' galaxies align well with the JADES and CEERS data. We show lines of constant sSFR that are representative of the majority of the data in Figure 4. The figure shows that most galaxies at high redshift quickly assemble, with sSFRs lying in the range \(10^{-7}-10^{-9}\) yr\({}^{-1}\), corresponding to \(e\)-folding times of 10 Myr to 1 Gyr. While it is difficult to see any definite trends in the heatmap because of the differing simulations and zoom-in regions, this figure demonstrates that early galaxy formation progresses at a fairly rapid rate, with the lower-mass galaxies having higher sSFR values. The plot shows that the Renaissance simulations follow closely the trend shown in the K23 simulations. For both sets of data, we can see that the SFR increases with stellar mass at a constant slope.
The Renaissance data connects smoothly to the K23 and JADES data, showing how the galaxies in the Renaissance simulations could evolve over 200 Myr to become the JADES and CEERS galaxies. The figure clearly demonstrates that the Renaissance and K23 simulations are consistent with the observed sSFRs of \(\sim 10^{-8}\) yr\({}^{-1}\) and that these galaxies are not out of the ordinary.

Figure 4.— The star formation rates of the galaxies in the Renaissance simulations, which occupy the lower masses in the plot and are shown as blue hexbins, and the galaxies in the K23 simulations, which occupy the higher masses and are shown as grey hexbins, as a function of stellar mass. Lines of constant sSFR are plotted that are representative of the majority of the galaxies in the simulations. We plot the JADES and CEERS observations as stars, consistent with the simulation data.

Figure 3.— The star formation rates (SFRs) of the most massive galaxy in each of the RP (orange), Normal (blue) and Void (green) regions. The SFRs of the JADES and CEERS galaxies are also plotted as star symbols with associated error bars (Robertson et al., 2022; Arrabal Haro et al., 2023; Bunker et al., 2023). All of the modelled galaxies show an initial burst of star formation at the early stages of evolution, with a leveling off as each halo continues to evolve. In the cases where the observed galaxies overlap with the modelled galaxies, the SFRs found are consistent with each other within an order of magnitude. Both the Normal and RP regions show SFRs for their highest-mass galaxies between 0.5 and 1 M\({}_{\odot}\) yr\({}^{-1}\). The most massive galaxy in the Void region shows a lower SFR, as expected (e.g. Xu et al., 2016).

### Simulated Stellar Mass - Halo Mass Relation

Figure 5 shows the stellar mass of each galaxy in the Renaissance simulations against its host halo mass from all zoom-in regions. We plot data from all outputs in the simulation suite. We can combine these datasets because galaxy scaling relations do not vary greatly with redshift at these early times (Chen et al., 2014). We thus consider each galaxy as a representative selection from the high-redshift galaxy population. Here we focus on the trends between stellar mass and halo mass in this low-mass regime. In the raw data alone, there is a large scatter at all halo masses, resulting from the highly variable SFRs that arise from periodic gas expulsions driven by stellar feedback. To better show any trends, we show the median value (solid blue line) as a function of the halo mass for all galaxies. We depict the standard deviation as the shaded blue region, which remains fairly constant in log-space throughout this mass range. We also show lines of constant star formation efficiency (SFE), denoting the stellar mass at a given halo mass. As demonstrated in Figure 3, galaxy growth does not proceed at a constant SFR, and thus SFE, especially at these low masses when star formation can be bursty. Each galaxy makes its own unique path through this parameter space. When a population of bursty galaxies is considered, these variable tracks transform into the relationship shown in the figure. Throughout most of the plot, the slope of the median line is steeper than the slopes of the constant-SFE lines, meaning that the galaxies are, on average, forming stars more efficiently as they grow. This trend clearly shows an increase in the \(M_{*}-M_{\rm halo}\) slope around the atomic cooling limit at \(\sim 10^{8}\) M\({}_{\odot}\).
Gas cooling, and thus star formation, within these nascent galaxies is inefficient below this limit, with a median SFE \(f_{*}\sim 2\times 10^{-3}\). Once the gas can cool through atomic processes, star formation becomes more efficient, depicted by the increase in the slope and the associated SFE. By the time haloes reach \(10^{9}\) M\({}_{\odot}\), the median SFEs are a few per cent, similar to the more massive galaxies probed by the simulations highlighted in K23. We note that the turnover to a negative slope is an artefact of the limited galaxy sample at these highest halo masses in the Renaissance simulations. In principle, the stellar mass should continue to grow as the halo grows. These most massive haloes have lower stellar masses than some slightly less massive haloes. This is not unexpected because of the large scatter in the stellar mass - halo mass relation, caused by the stochastic nature of early galaxy formation.

Figure 5.— The stellar mass of each galaxy from the Renaissance simulations plotted against the halo mass of each galaxy over time. The green, yellow, orange, and purple dashed lines show the stellar mass given a constant SFE of 0.01, 0.1, 1, and 10 per cent, respectively. The blue solid line shows the median stellar mass, while the shaded blue region shows the standard deviation, each of them as a function of the halo mass.

## 4. Discussion and Conclusions

The goal of this study is to investigate whether or not the initial findings of JWST, via the JADES and CEERS surveys, are consistent with state-of-the-art high resolution simulations. Additionally, the high (mass) resolution simulations allow us to study in detail the assembly history of these galaxies and to connect the modelled galaxies with those observed in JADES and CEERS. We find, using the high spatial and mass resolution Renaissance simulations, that excellent agreement exists between observations and simulations. Our results are consistent with a similar study by Keller et al. (2023), who compared a range of somewhat coarser resolution simulations against the initial JADES and CEERS results. Our overall findings can be broken down as follows:

* The most massive haloes in Renaissance have stellar masses comparable to the JADES and CEERS galaxies. Comparing with the theoretical expectation for galaxies within a field of view identical to NIRCam, we find that the \(z>10\) galaxies detected in JADES and CEERS are consistent with what is expected from a \(\Lambda\)CDM cosmology.

* The star formation rates for the most massive galaxies in Renaissance are fully consistent with the JADES and CEERS measurements. Moreover, the star formation histories show specific star formation rates, as a function of stellar mass, entirely consistent with these latest \(z>10\) JWST observations.

* The mass resolution of Renaissance allows us to capture the rapid assembly of galaxies in the early universe. After inefficiently forming stars below the atomic cooling threshold at a median SFE of \(2\times 10^{-3}\), star formation becomes more vigorous yet feedback-regulated, reaching levels of a few per cent at galaxy masses similar to the JADES and CEERS measurements.

The stellar masses of the most massive galaxies in the Rarepeak and Normal regions need to be extrapolated so that they can be compared to the observational measurements. After extrapolation, we find that the galaxies need to maintain an sSFR of at least \(10^{-8}\) yr\({}^{-1}\) in order to have stellar masses comparable to the galaxies in the large-scale simulations and the JADES and CEERS observations. We conclude that both coarser and finer resolution simulations agree that the JADES and CEERS measurements are not in tension with current galaxy formation models. JWST has for the first time enabled a detailed view of the early Universe.
Initial findings of massive early galaxies have surprised many, with some discussion in the literature that the JWST results may be in conflict with \(\Lambda\)CDM (Haslbauer et al., 2022; Boylan-Kolchin, 2023). However, what we find is that, in the context of a \(\Lambda\)CDM universe, there is no tension between theory and observation at the very highest redshifts that we can currently probe. As more measurements are made with JWST and future record-breaking observatories, there will be ever more opportunity to utilise high-resolution simulations like Renaissance and further stress test the \(\Lambda\)CDM model.

## Acknowledgements

We thank Peter Coles for useful discussions during the course of this work. We also thank Ben Keller for providing the data points used in Figure 4. JM acknowledges the support from the John & Pat Hume Doctoral Awards Scholarship (Hume 2021-22). JR acknowledges support from the Royal Society and Science Foundation Ireland under grant number URF\(\backslash\)R1\(\backslash\)191132. JR also acknowledges support from the Irish Research Council Laureate programme under grant number IRCLA/2022/1165. JHW acknowledges support by NSF grants OAC-1835213 and AST-2108020 and NASA grants 80NSSC20K0520 and 80NSSC21K1053.
2305.04100
Rhetorical Role Labeling of Legal Documents using Transformers and Graph Neural Networks
A legal document is usually long and dense requiring human effort to parse it. It also contains significant amounts of jargon which make deriving insights from it using existing models a poor approach. This paper presents the approaches undertaken to perform the task of rhetorical role labelling on Indian Court Judgements as part of SemEval Task 6: understanding legal texts, shared subtask A. We experiment with graph based approaches like Graph Convolutional Networks and Label Propagation Algorithm, and transformer-based approaches including variants of BERT to improve accuracy scores on text classification of complex legal documents.
Anshika Gupta, Shaz Furniturewala, Vijay Kumari, Yashvardhan Sharma
2023-05-06T17:04:51Z
http://arxiv.org/abs/2305.04100v1
# Steno AI at SemEval-2023 Task 6: Rhetorical Role Labeling of Legal Documents using Transformers and Graph Neural Networks ###### Abstract A legal document is usually long and dense requiring human effort to parse it. It also contains significant amounts of jargon which make deriving insights from it using existing models a poor approach. This paper presents the approaches undertaken to perform the task of rhetorical role labelling on Indian Court Judgements as part of SemEval Task 6: understanding legal texts, shared subtask A (Modi et al., 2023). We experiment with graph based approaches like Graph Convolutional Networks and Label Propagation Algorithm, and transformer-based approaches including variants of BERT to improve accuracy scores on text classification of complex legal documents.

## 1 Introduction

Rhetorical Role Labelling for Legal Documents refers to the task of classifying sentences from court judgements into various categories depending on their semantic function in the document. This task is important as it not only has direct applications in the legal industry but also has the ability to aid several other tasks on legal documents, such as summarization and legal search. This task is still in its early stages, with huge scope for improvement over the current state-of-the-art. To facilitate automatic interpretation of legal documents by dividing them into topic-coherent components, a rhetorical role corpus was created for Task 6, sub-task A of The International Workshop on Semantic Evaluation (Modi et al., 2023). Several applications of legal AI, including judgment summarization, judgment outcome prediction, precedent search, etc., depend on this classification.

## 2 Related Works with Comparison

The predominant technique used in Rhetorical Role Labeling over large datasets is based on the use of transformer-based models like LEGAL-BERT (Chalkidis et al., 2020) and ERNIE 2.0 (Sun et al., 2020), augmented by various heuristics or neural network models. The accuracy of these approaches has remained low over the years. The results are summarized in Table 1.

\begin{table}
\begin{tabular}{l c}
\hline \hline
**Model** & **F1 score** \\
\hline
LEGAL-BERT & 0.557 \\
LEGAL-BERT + Neural Net & 0.517 \\
ERNIE 2.0 & 0.505 \\
\hline \hline
\end{tabular}
\end{table}
Table 1: Summary of related works on the task of rhetorical role labelling on legal text (Parikh et al., 2022).

The dataset (Parikh et al., 2022) used to implement the above approaches is relatively small, consisting of only a few hundred annotated documents and 7 sentence classes.

## 3 Dataset

The dataset (Kalamkar et al., 2022) is made up of publicly available Indian Supreme Court judgements. It consists of 244 train documents, 30 validation documents and 50 test documents, making a total of 36,023 sentences. For every document, each sentence has been categorized into one of 13 semantic categories, as follows:

1. **PREAMBLE**: The initial sentences of a judgement mentioning the relevant parties

2. **FAC**: Sentences that describe the events that led to the filing of the case

3. **RLC**: Judgments given by the lower courts based on which the present appeal was made to the present court

4. **ISSUE**: Key points mentioned by the court upon which the verdict needs to be delivered

5. **ARG_PETITIONER**: Arguments made by the petitioner

6. **ARG_RESPONDENT**: Arguments made by the respondent

7. **ANALYSIS**: Court discussion of the facts and evidence of the case

8. **STA**: Relevant statute cited
9. **PRE_RELIED**: Sentences where the precedent discussed is relied upon 10. **PRE_NOT_RELIED**: Sentences where the precedent discussed is not relied upon 11. **Ratio**: Sentences that denote the rationale/reasoning given by the Court for the final judgement 12. **RPC**: Sentences that denote the final decision given by the Court for the case 13. **None**: A sentence not belonging to any of the 12 categories ## 4 Proposed Techniques and Algorithms We try several different approaches for the task at hand. All our models use LEGAL-BERT as their base, and use various methods for further processing and refining of results. The LEGAL-BERT family of models is a modified pretrained model based on the architecture of BERT (Devlin et al., 2019). The variant used in this paper is LEGAL-BERT-BASE, a model with 12 layers, 768 hidden units, and 12 attention heads. It has a total of 110M parameters and is pretrained for 40 epochs on a corpus of 12 GB worth of legal texts. This model was fine-tuned on the task dataset for 2 epochs with a learning rate of 1e-5, using the Adam optimizer and cross-entropy loss. ### Direct Classification of CLS tokens First, we used the default classifier of LEGAL-BERT to find the first set of predictions, to establish a baseline for our further experiments. Our next step used the CLS tokens extracted from the final hidden layer of this trained model. Similar to the methodology of Gao et al. (2020) and Furniturewala et al. (2021), we utilised the CLS tokens from LEGAL-BERT for further classification models. This CLS token is a 768-dimensional semantic feature that represents BERT's understanding of the text input. It is a fixed embedding present as the first token in BERT's output to the classifier and contains all the useful extracted information present in the input text. We tried directly applying various multi-layer neural networks to the extracted CLS tokens. These two models served as a baseline to assess the efficacy of our methods. ### Graph-Based Approaches We implemented classification systems based on graph architectures. We modeled the data into a graph using cosine similarity on the CLS tokens generated by LEGAL-BERT. An edge was created between two sentences if and only if their CLS tokens had cosine similarity greater than 0.5, with the cosine similarity acting as edge weight. The threshold was included to minimize the presence of noise-heavy edges in the graph. \[\cos(\mathbf{x},\mathbf{y})=\frac{\sum_{i=1}^{n}\mathbf{x}_{i}\mathbf{y}_{i}}{\sqrt{\sum_{i=1}^{n}{(\mathbf{x}_{i})^{2}}}\sqrt{\sum_{i=1}^{n}{(\mathbf{y}_{i})^{2}}}} \tag{1}\] The cosine similarity between two nodes, X and Y, is defined in equation (1), where x and y are the CLS tokens for nodes X and Y respectively, and n is the length of the CLS token, i.e. 768 in this case. The function for the final adjacency matrix is defined in equation (2). \[A_{XY}=\begin{cases}\cos(\mathbf{x},\mathbf{y})&\text{if }\cos(\mathbf{x},\mathbf{y})>0.5\\ 0&\text{otherwise}\end{cases} \tag{2}\] On this graph, we performed the label diffusion algorithm (Zhou et al., 2003) to establish a graph-based baseline for our system. Random walk label diffusion assigns labels to an unlabeled node using the average of its neighbours, weighted by their distance from the node.
\[F^{t+1}=\alpha\cdot P\cdot F^{t}+(1-\alpha)\cdot Y \tag{3}\] \[P=D^{-1/2}\cdot A\cdot D^{-1/2} \tag{4}\] \[F^{*}=(1-\alpha)\cdot(I-\alpha P)^{-1}\cdot Y \tag{5}\] Figure 1: Extracting CLS Tokens (Furniturewala, 2021) To implement it, we combined the train and validation label arrays, one-hot encoded them and masked the validation labels. We then used equation (5) to generate predictions for each sentence. Here P is the normalised adjacency matrix, Y is the array of one-hot encoded labels, \(\alpha\) is a hyper-parameter, D is the degree matrix, and \(F^{*}\) is the array of predicted labels. The matrix P is obtained via equation (4), normalizing the adjacency matrix A using the square root inverse of the degree matrix D. For our experimentation, we used \(\alpha=0.5\). Furthermore, we used a two-layer Graph Convolution Network (GCN) [10] to perform classifications on the data. Inspired by the methodology of BERTGCN [12], we used the LEGAL-BERT embeddings of each sentence as the node representation for our graph, and then performed graph convolutions on it. The GCN architecture uses trainable weights to identify the optimal weightage that each neighbour of each node should have on its label. The use of two layers allows us to incorporate the context of one-hop neighbours into the label of a particular node. \[Z =f(X,A) \tag{6}\] \[=softmax(\hat{A}\cdot ReLU(\hat{A}XW^{(0)})W^{(1)}) \tag{7}\] We used equation (7) to predict the labels of the validation set. Here, \(\hat{A}\) represents the symmetrically normalized adjacency matrix, X is the feature vector, which in this case is the LEGAL-BERT embeddings of the nodes, and \(W^{(i)}\) is the matrix of trainable weights in layer \(i\). The calculations required for this approach were extremely computationally expensive, so we were not able to train the model on the entire training set on a V100 server. We used half of the training documents for graph building and the prediction of labels. However, the LEGAL-BERT embeddings were generated by fine-tuning the model on all training documents. ### Context-Based LEGAL-BERT Our final approach was a Context-Based LEGAL-BERT. We cleaned each sentence by removing all stopwords (such as 'a', 'an', 'the') using the NLTK library. Then we created a 5-sentence input for any given sentence by concatenating its two preceding sentences, the sentence itself, and its two succeeding sentences, in order. These 5 sentences were separated using LEGAL-BERT's separator token </s>. Sentences at the beginning or end of a document were padded using a string of <pad> tokens. These 5-sentence inputs were then tokenized using LEGAL-BERT's tokenizer and fed into the model using the baseline parameters. We used the default classifier to perform classification on these context-based inputs. ## 5 Results We trained the models and tested them on the validation set. The accuracy scores are reported in Table 2. We see that the performance of these models is significantly better than the previous attempts at this problem. The improvement over the results of previously studied models can be attributed to the increase in dataset size, along with other changes in the structure of the task. However, our Context-based LEGAL-BERT approach outperforms the other frameworks by a significant margin. This demonstrates that the context of each sentence is critically important in determining its label, and that we are successful in incorporating the context of each sentence into its representation.
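To make the context construction of Section 4.3 concrete, the following is a minimal Python sketch of the 5-sentence window assembly, assuming sentences have already been stopword-filtered; the separator and pad tokens are the ones named above, while the helper name and exact spacing are illustrative assumptions rather than the exact implementation.

```python
# Minimal sketch of the 5-sentence context input described in Section 4.3.
# `doc` is assumed to be a list of cleaned sentences from one judgement.
SEP, PAD = " </s> ", "<pad>"

def context_window(doc, i, width=2):
    """Return sentence i flanked by `width` neighbours on each side."""
    window = []
    for j in range(i - width, i + width + 1):
        window.append(doc[j] if 0 <= j < len(doc) else PAD)  # pad at edges
    return SEP.join(window)

doc = ["sentence one", "sentence two", "sentence three"]
print(context_window(doc, 0))
# "<pad> </s> <pad> </s> sentence one </s> sentence two </s> sentence three"
```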
We saw that graph-based approaches did not significantly improve performance compared to the current state-of-the-art models. However, it is important to note that we were unable to run the Graph Convolution Network on the entire train dataset due to compute constraints. Despite such constraints, there might be other reasons for the mediocre performance of graph-based models. One possible reason is that the representation of the sentences used for building the model was not able to capture the information necessary to make better predictions. This also explains why the Context-based LEGAL-BERT performed so much better - it improved the quality of the sentence representation, successfully capturing a wider range of features pertaining to the task at hand. ## 6 Conclusion and Future Work In this paper, we tried several different techniques to perform a sentence classification task on legal documents. Through our experiments, we show that incorporating context into the CLS tokens of sentences offers a significant improvement of 5.5 percentage points over LEGAL-BERT. Moreover, through our experiments on graph-based models, we show that improving the CLS tokens results in better classification compared to the regular CLS tokens used in a variety of different ways. The Context-based LEGAL-BERT model was not only more accurate but also less resource-intensive. For future improvements on these models, we could try the Graph Convolutional Network approach on the complete dataset. We could also try the various methods of classification, such as a custom neural network or label diffusion, on the context-based CLS tokens. Moreover, we could further try to incorporate more sentences as context for each target sentence. This would require the use of a Longformer model, since the total number of tokens passed into the model will increase.
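As a companion to the graph-based method of Section 4.2, the following is a minimal NumPy sketch of the closed-form label diffusion in equations (1)-(5); the treatment of self-loops and isolated nodes is an assumption, and the toy inputs are purely illustrative.

```python
import numpy as np

# Closed-form label diffusion over a thresholded cosine-similarity graph.
# `emb` holds CLS embeddings; `Y` holds one-hot labels with all-zero rows
# for the unlabelled (masked validation) sentences.
def label_diffusion(emb, Y, alpha=0.5, threshold=0.5):
    normed = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    A = normed @ normed.T                 # pairwise cosine similarity, eq. (1)
    A[A <= threshold] = 0.0               # drop noise-heavy edges, eq. (2)
    np.fill_diagonal(A, 0.0)              # no self-loops (assumption)
    d = A.sum(axis=1)
    d[d == 0] = 1.0                       # guard against isolated nodes
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    P = D_inv_sqrt @ A @ D_inv_sqrt       # normalised adjacency, eq. (4)
    n = A.shape[0]
    F = (1 - alpha) * np.linalg.solve(np.eye(n) - alpha * P, Y)  # eq. (5)
    return F.argmax(axis=1)

emb = np.random.rand(6, 768)                      # toy CLS tokens
Y = np.zeros((6, 13)); Y[0, 0] = Y[1, 2] = 1.0    # two labelled sentences
print(label_diffusion(emb, Y))                    # predicted class indices
```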
2307.02308
Multi-Scale Prototypical Transformer for Whole Slide Image Classification
Whole slide image (WSI) classification is an essential task in computational pathology. Despite the recent advances in multiple instance learning (MIL) for WSI classification, accurate classification of WSIs remains challenging due to the extreme imbalance between the positive and negative instances in bags, and the complicated pre-processing to fuse multi-scale information of WSI. To this end, we propose a novel multi-scale prototypical Transformer (MSPT) for WSI classification, which includes a prototypical Transformer (PT) module and a multi-scale feature fusion module (MFFM). The PT is developed to reduce redundant instances in bags by integrating prototypical learning into the Transformer architecture. It substitutes all instances with cluster prototypes, which are then re-calibrated through the self-attention mechanism of the Transformer. Thereafter, an MFFM is proposed to fuse the clustered prototypes of different scales, which employs MLP-Mixer to enhance the information communication between prototypes. The experimental results on two public WSI datasets demonstrate that the proposed MSPT outperforms all the compared algorithms, suggesting its potential applications.
Saisai Ding, Jun Wang, Juncheng Li, Jun Shi
2023-07-05T14:10:29Z
http://arxiv.org/abs/2307.02308v1
# Multi-Scale Prototypical Transformer for Whole Slide Image Classification ###### Abstract Whole slide image (WSI) classification is an essential task in computational pathology. Despite the recent advances in multiple instance learning (MIL) for WSI classification, accurate classification of WSIs remains challenging due to the extreme imbalance between the positive and negative instances in bags, and the complicated pre-processing to fuse multi-scale information of WSI. To this end, we propose a novel multi-scale prototypical Transformer (MSPT) for WSI classification, which includes a prototypical Transformer (PT) module and a multi-scale feature fusion module (MFFM). The PT is developed to reduce redundant instances in bags by integrating prototypical learning into the Transformer architecture. It substitutes all instances with cluster prototypes, which are then re-calibrated through the self-attention mechanism of the Transformer. Thereafter, an MFFM is proposed to fuse the clustered prototypes of different scales, which employs MLP-Mixer to enhance the information communication between prototypes. The experimental results on two public WSI datasets demonstrate that the proposed MSPT outperforms all the compared algorithms, suggesting its potential applications. Keywords:Whole slide image, Multiple instance learning, Multi-scale feature, Prototypical Transformer. ## 1 Introduction Histopathological images are regarded as the 'gold standard' in the diagnosis of cancers. With the advent of the whole slide image (WSI) scanner, deep learning has gained its reputation in the field of computational pathology [1, 2, 3]. However, WSIs are extremely large in size and lack pixel-level annotations, making it difficult to adopt traditional supervised learning methods for WSI classification [4]. To address this issue, multiple instance learning (MIL) has been successfully applied to the WSI classification task as a weakly supervised learning problem [5, 6, 7]. In this context, a WSI is considered as a bag, and the cropped patches within the slide are the instances in this bag. However, the lesion regions usually only account for a small portion of the WSI, resulting in a large number of negative patches. When the positive and negative instances in the bag are highly imbalanced, the MIL models are prone to incorrectly discriminate the positive instances when using simple aggregation operations. To this end, several attention-based MIL models, such as ABMIL [8] and DSMIL [9], apply variants of the attention mechanism to re-weight instance features. Thereafter, recent works have developed Transformer-based architectures to better model long-range instance correlations via self-attention [10; 11; 12; 13]. However, since the average bag size of a WSI is more than 8000 at 20\(\times\) magnification, it is computationally infeasible to use the conventional Transformer and other stacked self-attention network architectures in MIL-related tasks. Recently, prototypical learning has been applied in WSI analysis to identify representative instances in the bag [14]. Some works adopt \(K\)-means clustering on all instances in a bag to obtain \(K\) cluster centers, i.e., instance prototypes, and then use these prototypes to represent the bags [15; 16]. These clustering-based MIL algorithms can significantly reduce the redundant instances, thereby improving the training efficiency for WSI classification.
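As a concrete illustration of the clustering step used by these prototype-based MIL methods, the following is a minimal Python sketch in which a bag of instance features is replaced by \(K\) cluster centers; the feature dimension and bag size echo the numbers quoted later in the paper, but the data here is random and purely illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

# Replace a bag of n instance features with K cluster centres (prototypes).
def bag_to_prototypes(bag_features: np.ndarray, k: int = 16) -> np.ndarray:
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(bag_features)
    return km.cluster_centers_            # (k, d) prototype bag

bag = np.random.rand(5900, 512)           # ~one Camelyon16 bag at 20x (toy data)
prototypes = bag_to_prototypes(bag, k=16)
print(prototypes.shape)                   # (16, 512)
```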
However, it is difficult for \(K\)-means to specify the cluster number as well as the initial cluster centers, and different initial values may lead to different clustering results, thus affecting the performance of MIL. Besides, affected by the feature extractor, the clustering-based MIL algorithms may ignore the most important instances that contain critical diagnostic information. Therefore, it is necessary to develop a method that can fully exploit the potential complementary information between critical instances and prototypes to improve the representation learning of prototypes. On the other hand, when pathologists analyze WSIs, they always observe the tissues at various resolutions [17]. Inspired by this diagnostic manner, some works use multi-scale information of WSIs to improve diagnostic accuracy. For example, Li et al. [9] adopted a pyramidal concatenation mechanism to fuse the multi-scale features of WSIs, in which the feature vectors of low-resolution patches are replicated and concatenated with those of their corresponding high-resolution patches; Hou et al. [18] proposed a heterogeneous graph neural network to learn the hierarchical representation of WSIs from a heterogeneous graph, which is constructed from the feature and spatial-scaling relationships of multi-resolution patches. However, since the number of patches at each resolution is quite different, it requires complex pre-processing to spatially align feature vectors of patches at different resolutions. Therefore, it is significant to develop an efficient and effective patch aggregation strategy to learn multi-scale information from WSIs. In this work, we propose a Multi-Scale Prototypical Transformer (MSPT) for WSI classification. The MSPT includes two key components: a prototypical Transformer (PT) and a multi-scale feature fusion module (MFFM). The specifically developed PT uses a clustering algorithm to extract instance prototypes from the bags, and then re-calibrates these prototypes at each scale with the self-attention mechanism in the Transformer [19]. The MFFM is designed to effectively fuse multi-scale information of WSIs, which utilizes the MLP-Mixer [20] to learn effective representations by aggregating the multi-scale prototypes generated by the PT. The MLP-Mixer adopts two types of MLP layers to allow information communication in different dimensions of the data. The contributions of this work are summarized as follows: 1) A novel prototypical Transformer (PT) is proposed to learn superior prototype representation for WSI classification by integrating prototypical learning into the Transformer architecture. It can effectively re-calibrate the cluster prototypes as well as reduce the computational complexity of the Transformer. 2) A new multi-scale feature fusion module (MFFM) is developed based on the MLP-Mixer to enhance the information communication among prototypes. It can effectively capture multi-scale information in WSI to improve the performance of WSI classification. ## 2 Method ### MIL Problem Formulation MIL is a typical weakly supervised learning method, where the training data consists of a set of bags, and each bag contains multiple instances. The goal of MIL is to learn a classifier that can predict the label of a bag based on the instances in it. In binary classification, a bag can be marked as negative if all instances in the bag are negative; otherwise, the bag is labeled as positive with at least one positive instance.
In the MIL setting, a WSI is considered as a bag and the numerous cropped patches in the WSI are regarded as instances in the bag. A WSI dataset T can be defined as: \[T=\{x_{i},y_{i}\}_{i=1}^{N},\;x_{i}=\{I_{i}^{j}\}_{j=1}^{n} \tag{1}\] where \(x_{i}\) denotes a patient, \(y_{i}\) the label of \(x_{i}\), \(I_{i}^{j}\) is the \(j\)-th instance of \(x_{i}\), \(N\) is the number of patients and \(n\) is the number of instances. ### Multi-scale Prototypical Transformer (MSPT) The overall architecture of MSPT is shown in Fig. 1. A WSI is first divided into non-overlapping patches at different resolutions, and a pre-trained ResNet18 [21] is used to extract features from each patch. The learned multi-scale features are then fed into the proposed MSPT, which consists of a PT and an MFFM, to re-calibrate cluster prototypes at each scale and fuse multi-scale information of the WSI. Finally, a WSI-level classifier is trained to predict the bag label. **Pre-training.** It is a time-consuming and tedious task for pathologists to annotate the patch-level labels in gigapixel WSIs; thus, a common practice is to use a pre-trained encoder network to extract instance-level features, such as an ImageNet pre-trained encoder or a self-supervised pre-trained encoder. In this work, we follow [9] to adopt SimCLR [22] to pre-train the patch encoder at different resolutions. SimCLR is a self-supervised learning algorithm that pre-trains a network by maximizing the similarity between positive pairs and minimizing the similarity between negative pairs [22]. After pre-training, the extracted instances of different scales are fed into MSPT for prototype learning and multi-scale learning. **Prototypical Transformer (PT).** Most tissue regions in WSIs are redundant, and therefore we introduce instance prototypes to reduce redundant instances. Specifically, for each instance bag \(\textbf{X}_{bag}\in\mathbb{R}^{n\times d_{k}}\), the \(K\)-means clustering algorithm is applied on all instances to get \(K\) centers (prototypes). These cluster prototypes can be used as instances to represent a new bag \(\textbf{P}_{bag}\in\mathbb{R}^{k\times d_{k}}\). However, the \(K\)-means clustering algorithm is sensitive to the initial selection of cluster centers, i.e., different initializations can lead to different results, and the final result may not be the global optimal solution. It is essential to try different initializations and choose the one with the lowest error. However, a WSI dataset generally has long sequences of instances, which makes the clustering algorithms computationally expensive and slow as the size of the bag increases. To solve the issue above, we propose to apply the self-attention (SA) mechanism in the Transformer to re-calibrate these cluster prototypes. As shown in Fig. 1, the optimization process can be divided into two steps: 1) the initial cluster prototype bag \(\textbf{P}_{bag}\) is obtained in the pre-processing stage by using the \(K\)-means clustering on \(\textbf{X}_{bag}\); 2) PT uses \(\textbf{X}_{bag}\) to optimize \(\textbf{P}_{bag}\) via the self-attention mechanism in the Transformer.
The detailed process is as follows: \[\texttt{SA}\big{(}\textbf{P}_{bag},\textbf{X}_{bag}\big{)}=softmax\left(\frac{QK^{T}}{\sqrt{d_{k}}}\right)\cdot V=softmax\left(\frac{\textbf{P}_{bag}\textbf{W}_{q}(\textbf{X}_{bag}\textbf{W}_{k})^{T}}{\sqrt{d_{k}}}\right)\textbf{X}_{bag}\textbf{W}_{v}\to A_{map}\textbf{X}_{bag}\textbf{W}_{v}\rightarrow\hat{P} \tag{2}\] where \(\textbf{W}_{q}\), \(\textbf{W}_{k}\), \(\textbf{W}_{v}\in\mathbb{R}^{d_{k}\times d_{k}}\) are trainable matrices of the query \(\textbf{P}_{bag}\) and the key-value pair \((\textbf{X}_{bag},\textbf{X}_{bag})\), respectively, and \(\textbf{A}_{map}\in\mathbb{R}^{k\times n}\) is the attention matrix used to compute the weights of \(\textbf{X}_{bag}\). Thus, the computational complexity of SA is \(O(kn)\) instead of \(O(n^{2})\), where \(k\) is much less than \(n\). Specifically, for a single clustering prototype \(p_{k}\in\textbf{P}\), the SA layer scores the pairwise similarity between \(p_{k}\) and \(x_{n}\) for all \(x_{n}\in\textbf{X}\), which can be written as a row vector \([a_{k1},a_{k2},a_{k3},...,a_{kn}]\) in \(\mathbf{A}_{map}\). These attention scores are then applied to \(\mathbf{X}_{bag}\) to update \(p_{k}\in\mathbb{R}^{1\times d_{k}}\), completing the calibration of the clustering prototypes \(\mathbf{\widehat{P}}\in\mathbb{R}^{k\times d_{k}}\). Figure 1: Overview of the proposed MSPT. As mentioned above, existing clustering-based MIL methods use \(K\)-means clustering to identify instance prototypes in the bag, where the most important instances that contain the key semantic information may be ignored. On the contrary, our PT can efficiently use all the instances to update the cluster prototypes multiple times. Therefore, the combination of bag instances is no longer static and fixed, but diverse and dynamic. It means that different new bags can be fed into the MFFM each time. In addition, by applying PT to each scale, the number of cluster prototypes obtained at different scales is consistent, so there is no need for additional operations to align multi-scale features. **Multi-scale Feature Fusion Module (MFFM).** To fuse the output clustered prototypes at different scales in MSPT, we propose an MFFM, which consists of an MLP-Mixer and a Gated Attention Pooling (GAP). The MLP-Mixer is used to enhance the information communication of the prototype representation, and the GAP is used to get the WSI-level representation for WSI classification. As shown in Fig. 2, the Mixer layer of the MLP-Mixer contains one token-mixing MLP and one channel-mixing MLP, each consisting of two fully-connected layers and a GELU activation function [23]. The token-mixing MLP is a cross-location operation to mix all prototypes, while the channel-mixing MLP is a per-location operation to mix the features of each prototype. Thus, the MLP-Mixer allows information communication between different prototypes and prototype features to learn superior representations through information aggregation. Specifically, the procedure of MFFM is described as follows: We first perform the feature concatenation operation on the multi-scale output clustering prototypes \([\hat{P}_{20\times},\hat{P}_{10\times},\hat{P}_{5\times}]\) to construct a feature pyramid \(\hat{P}\): \[concat[\hat{P}_{20\times},\hat{P}_{10\times},\hat{P}_{5\times}]\to\hat{P}\in\mathbb{R}^{k\times 3d_{k}} \tag{3}\] where \(d_{k}\) is the feature vector dimension of the prototypes. Figure 2: The structure of MFFM.
Then, the \(\hat{P}\) is fed to the MLP-Mixer to obtain the corresponding hidden feature representation \(H\in\mathbb{R}^{k\times 3d_{k}}\) as follows: \[\begin{split}& H_{1}=\hat{P}^{T}+\mathbf{W_{2}}\sigma(\mathbf{W_{1}}\text{LN}(\hat{P}^{T}))\\ & H={H_{1}}^{T}+\mathbf{W_{4}}\sigma(\mathbf{W_{3}}\text{LN}({H_{1}}^{T}))\end{split} \tag{4}\] where LN denotes the layer normalization, \(\sigma\) denotes the activation function implemented by GELU, and \(\mathbf{W_{1}}\in\mathbb{R}^{k\times c}\), \(\mathbf{W_{2}}\in\mathbb{R}^{c\times k}\), \(\mathbf{W_{3}}\in\mathbb{R}^{3d_{k}\times d_{s}}\) and \(\mathbf{W_{4}}\in\mathbb{R}^{d_{s}\times 3d_{k}}\) are the weight matrices of the MLP layers. \(c\) and \(d_{s}\) are tunable hidden widths in the token-mixing and channel-mixing MLPs, respectively. Finally, the \(H\) is fed to the gated attention pooling (GAP) [8] to get the WSI-level representation \(Z\in\mathbb{R}^{1\times 3d_{k}}\) for WSI classification: \[\begin{split}& Z=GAP(H)\\ &\hat{Y}=softmax\big{(}MLP(Z)\big{)}\end{split} \tag{5}\] where \(\hat{Y}\in\mathbb{R}^{1\times d_{out}}\) is the class label probability of the bag, and \(d_{out}\) is the number of classes. ## 3 Experiments and Results ### Datasets To evaluate the effectiveness of MSPT, we conducted experiments on two public datasets, namely Camelyon16 [24] and TCGA-NSCLC. Camelyon16 is a WSI dataset for the automated detection of metastases in lymph node tissue slides. It includes 270 training samples and 129 testing samples. After pre-processing, a total of 2.4 million patches at 20\(\times\) magnification, 0.56 million patches at 10\(\times\) magnification, and 0.16 million patches at 5\(\times\) magnification were obtained, with an average of about 5900, 1400, and 400 patches per bag. The TCGA-NSCLC dataset includes two sub-types of lung cancer, i.e., Lung Squamous Cell Carcinoma (TCGA-LUSC) and Lung Adenocarcinoma (TCGA-LUAD). We collected a total of 854 diagnostic slides from the National Cancer Institute Data Portal ([https://portal.gdc.cancer.gov](https://portal.gdc.cancer.gov)). The dataset yields 4.3 million patches at 20\(\times\) magnification, 1.1 million patches at 10\(\times\) magnification, and 0.30 million patches at 5\(\times\) magnification, with an average of about 5000, 1200, and 350 patches per bag. ### Experiment Setup and Evaluation Metrics. In WSI pre-processing, each slide is cropped into non-overlapping 256 \(\times\) 256 patches at different magnifications, and a threshold is set to filter out background ones. After patching, we use a pre-trained ResNet18 model to convert each 256 \(\times\) 256 patch into a 512-dimensional feature vector. We selected accuracy (ACC) and area under the curve (AUC) as evaluation metrics. For the Camelyon16 dataset, we report the results on the official testing set. For TCGA-NSCLC, we conducted five-fold cross-validation on the 854 slides, and the results are reported in the format of mean \(\pm\) SD (standard deviation). ### Implementation Details. For the feature extractor, we employed the SimCLR encoder trained by Li et al. [9] for the Camelyon16 and TCGA datasets. Since [9] only trained SimCLR encoders at 20\(\times\) and 5\(\times\) magnification, we used the same settings to train a SimCLR encoder at 10\(\times\) magnification on both datasets to align with that setting. For the proposed MSPT, the Adam optimizer was used to update the model weights, with an initial learning rate of 1e-4 and a weight decay of 1e-5. The mini-batch size was set to 1.
The MSPT models were trained for 150 epochs, with early stopping if the loss did not decrease over the past 30 epochs. All models were implemented in Python 3.8 with PyTorch toolkit 1.11.0 on a platform equipped with an NVIDIA GeForce RTX 3090 GPU. ### Comparison Experiments **Comparison algorithms.** The proposed MSPT was compared to state-of-the-art MIL-based algorithms: 1) the traditional pooling operators, such as mean-pooling and max-pooling; 2) the attention-based algorithms, including ABMIL [8] and DSMIL [9]; 3) the Transformer-based algorithm TransMIL [11]; 4) the clustering-based algorithm ReMix [16]. **Experimental results.** Table 1 shows the comparison results on the Camelyon16 and TCGA-NSCLC datasets. On Camelyon16, it can be found that the proposed MSPT outperforms all the compared algorithms with the best accuracy of 0.9536 and AUC of 0.9869. Compared to the other algorithms, MSPT improves classification ACC and AUC by at least 0.78% and 1.07%, respectively, indicating the effectiveness of MFFM in learning the multi-scale information of WSIs. In addition, PT achieves the best classification results among the single-resolution methods and outperforms ReMix on all indices, which proves that PT can effectively re-calibrate the clustering prototypes. On TCGA-NSCLC, the proposed MSPT algorithm again outperforms all the compared algorithms on all indices. It achieves the best classification performance of 0.9289\(\pm\)0.011 and 0.9622\(\pm\)0.015 on the ACC and AUC. Moreover, MSPT improves by at least 0.78% and 1.03%, respectively, on the corresponding indices compared with all other algorithms. \begin{table} \begin{tabular}{l|c c|c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{2}{c|}{Camelyon16} & \multicolumn{2}{c}{TCGA-NSCLC} \\ \cline{2-5} & Accuracy & AUC & Accuracy & AUC \\ \hline Mean-Pooling & 0.8837 & 0.8916 & 0.8911\(\pm\)0.011 & 0.9230\(\pm\)0.010 \\ Max-Pooling & 0.9147 & 0.9666 & 0.9136\(\pm\)0.014 & 0.9441\(\pm\)0.016 \\ ABMIL [8] & 0.9302 & 0.9752 & 0.9123\(\pm\)0.015 & 0.9457\(\pm\)0.017 \\ DSMIL [9] & 0.9380 & 0.9762 & 0.9049\(\pm\)0.010 & 0.9359\(\pm\)0.011 \\ TransMIL [11] & 0.9225 & 0.9734 & 0.9095\(\pm\)0.014 & 0.9432\(\pm\)0.016 \\ ReMix [16] & 0.9458 & 0.9740 & 0.9167\(\pm\)0.013 & 0.9509\(\pm\)0.016 \\ PT (Ours) & 0.9458 & 0.9809 & 0.9257\(\pm\)0.011 & 0.9567\(\pm\)0.013 \\ **MSPT (Ours)** & **0.9536** & **0.9869** & **0.9289\(\pm\)0.011** & **0.9622\(\pm\)0.015** \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison results on the Camelyon16 and TCGA datasets. ### Ablation Study To evaluate the contribution of PT and MFFM in the proposed MSPT, we further conducted a series of ablation studies. #### 3.5.1 Investigation of the number of prototypes in PT. To evaluate the effectiveness of the PT, we first changed the number of prototypes \(K\) in the range of {1, 2, 4, 8, 16, 32} to get the optimal \(K\) for each dataset. Then, the following two variants were compared with PT: (1) Full-bag: the first variant was trained on all the instances; (2) Prototype-bag: the second variant was trained only on the cluster prototypes. As shown in Fig. 3, the horizontal axes denote the number of prototypes, and the vertical axes denote the classification accuracy. In the Camelyon16 dataset, the performance of both PT and Prototype-bag increases with increasing \(K\), achieving the best results with \(K\)=16. In the TCGA-NSCLC dataset, PT always outperforms the Full-bag and Prototype-bag.
These experimental results demonstrate that PT can effectively re-calibrate the clustering prototypes to achieve superior results. #### 3.5.2 Investigation of Multi-scale Fusion. We further compared our MFFM with several other fusion strategies, including (1) Concatenation: this variant concatenated the cluster prototypes of each magnification before the classifier; (2) MS-Max: this variant used max-pooling on the cluster prototypes for each magnification, and then added them; (3) MS-Attention: this variant used attention-pooling [8] on the cluster prototypes for each magnification, and then added them. Table 2 gives the results on the Camelyon16 and TCGA-NSCLC datasets. Compared with the other multi-scale variants, the proposed MSPT improves ACC by at least 0.78% and 0.85% on Camelyon16 and TCGA-NSCLC, respectively, which proves that the MLP-Mixer in MFFM can effectively enhance the information communication among prototypes and their features, thus improving the performance of feature aggregation. Figure 3: Ablation study on the number of prototypes. **More studies.** We provide more empirical studies, i.e., the effect of the multi-resolution scheme, the visualization results, and the training budgets, in the Supplementary Materials to better understand MSPT. ## 4 Conclusion In summary, we propose an MSPT for WSI classification that combines prototype-based learning and multi-scale learning to generate powerful WSI-level representations. The MSPT reduces redundant instances in WSI bags by replacing instances with updatable instance prototypes, and avoids complicated procedures to align patch features at different scales. Extensive experiments validate the effectiveness of the proposed MSPT. In the future, we will develop an attention mechanism based on the magnification level to re-weight the features from different scales before fusion in MSPT. **Acknowledgments** This work is supported by the National Natural Science Foundation of China (81871428) and 111 Project (D20031).
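As a companion illustration of the prototype re-calibration in equation (2), the following is a minimal PyTorch sketch in which \(K\) cluster prototypes query all \(n\) instances of a bag through single-head cross-attention; layer sizes and names are illustrative assumptions, and the full MSPT additionally applies this step per scale and fuses the results with the MFFM.

```python
import torch
import torch.nn as nn

# Single-head cross-attention: prototypes (queries) attend to all bag
# instances (keys/values), mirroring SA(P_bag, X_bag) in equation (2).
class PrototypeAttention(nn.Module):
    def __init__(self, d: int = 512):
        super().__init__()
        self.wq = nn.Linear(d, d, bias=False)
        self.wk = nn.Linear(d, d, bias=False)
        self.wv = nn.Linear(d, d, bias=False)
        self.scale = d ** -0.5

    def forward(self, p_bag: torch.Tensor, x_bag: torch.Tensor) -> torch.Tensor:
        q, k, v = self.wq(p_bag), self.wk(x_bag), self.wv(x_bag)
        attn = torch.softmax(q @ k.T * self.scale, dim=-1)  # (k, n) = A_map
        return attn @ v                             # re-calibrated prototypes

x_bag = torch.randn(5900, 512)   # instance features of one bag (toy data)
p_bag = torch.randn(16, 512)     # K-means-initialised prototypes (toy data)
print(PrototypeAttention()(p_bag, x_bag).shape)     # torch.Size([16, 512])
```

The O(kn) cost of this cross-attention, rather than the O(n²) of full self-attention over the bag, is what keeps the Transformer step tractable for bags of several thousand patches.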
2303.06499
Multiple Access in Constellation Domain by Non-Coherent Massive MIMO
Multiple access is the base for increasing the capacity in multi-user communication networks. However, the growing demand for higher data rates and the number of users who require communication services has led to the scarcity of orthogonal resources in current wireless communications. On the other hand, integrating the satellite within terrestrial networks as an initiative of 3GPP since its Release 15 entails the need for new forms of multiple access between terrestrial and non-terrestrial users. This paper studies constellation schemes as a new domain to enhance the state-of-the-art multiple-access techniques for future communication technologies employing non-coherent communications with massive MIMO. In addition, we propose a hybrid model between the classic access methods such as Time Division Multiple Access (TDMA) or Frequency Division Multiple Access (FDMA), the emerging models of non-orthogonal multiple access (NOMA) and the proposed domain of the constellation based on non-coherent massive multiple-input multiple-output (MIMO) schemes. This model is discussed for different scenarios in satellite communications that help increase the system's capacity and avoid interference between terrestrial and non-terrestrial users.
Victor Monzon Baeza
2023-03-11T21:25:26Z
http://arxiv.org/abs/2303.06499v1
# Multiple Access in Constellation Domain by Non-Coherent Massive MIMO ###### Abstract Multiple access is the base for increasing the capacity in multi-user communication networks. However, the growing demand for higher data rates and the number of users who require communication services has led to the scarcity of orthogonal resources in current wireless communications. On the other hand, integrating the satellite within terrestrial networks as an initiative of 3GPP since its Release 15 entails the need for new forms of multiple access between terrestrial and non-terrestrial users. This paper studies constellation schemes as a new domain to enhance the state-of-the-art multiple-access techniques for future communication technologies employing non-coherent communications with massive MIMO. In addition, we propose a hybrid model between the classic access methods such as Time Division Multiple Access (TDMA) or Frequency Division Multiple Access (FDMA), the emerging models of non-orthogonal multiple access (NOMA) and the proposed domain of the constellation based on non-coherent massive multiple-input multiple-output (MIMO) schemes. This model is discussed for different scenarios in satellite communications that help increase the system's capacity and avoid interference between terrestrial and non-terrestrial users. Non-coherent, massive MIMO, Multiple Access, SatCom. ## 1 Introduction The explosive use of communication technologies demanding huge amounts of data has led to a shortage of radio resources for sharing them across multiple users in a cell. Traditional radio resources such as time, frequency, or codes have allowed multiple users to access the provided services simultaneously. These resources were used for orthogonal multiple access up to the third generation of wireless communications (1G-3G). Orthogonal Frequency Division Multiplexing (OFDM) was employed for the fourth generation (4G) [1]. The scarcity of further orthogonal resources led the research community to explore non-orthogonal alternatives, giving rise to Non-Orthogonal Multiple Access (NOMA), evaluated in [2] and proposed for the fifth generation (5G) in [3]. The need for global coverage in mobile networks has led 3GPP to propose the integration of the space segment, or non-terrestrial networks (NTN), with terrestrial networks. A complete network in which the satellite transparently acts as one more member of the terrestrial system is the objective of the future sixth generation (6G). This need has opened a line of research for new multiple-access techniques. The problem of the scarcity of resources to simultaneously access the services is accentuated when we integrate more elements into the network, together with a new issue: the management of interference between users of the terrestrial network and those who access the non-terrestrial network. Advances in multiple access considering TDMA and NOMA have been proposed in a framework for satellites in [4]. However, the authors do not consider the integration with terrestrial networks. In addition, the 5G physical layer includes massive MIMO technology to increase spectral and energy efficiency [5]. The problem with this technology is the enormous amount of information that must be processed to estimate many channels. Therefore, to overcome this
2310.18947
Double-loop hysteresis of multisite dilute Sr(Y$_{1-x}$Dy$_x$)$_2$O$_4$ single crystal Kramers paramagnets: electron-phonon interaction, quantum tunneling and cross-relaxation
Experimental and theoretical studies of the dynamic magnetization in swept magnetic fields of the orthorhombic SrY$_2$O$_4$ single-crystals doped with the Dy$^{3+}$ Kramers ions (0.01 and 0.5 at.%) with natural abundances of even and odd Dy isotopes are presented. Impurity ions substitute for Y$^{3+}$ ions at two nonequivalent crystallographic sites with the same local $C_s$ symmetry but strongly different crystal fields. Well pronounced double-loop hysteresis is observed at temperatures 2, 4, 5 and 6 K for sweeping rates of 5 and 1 mT/s. The microscopic model of spectral, magnetic and kinetic properties of Dy$^{3+}$ ions is developed based on the results of EPR, site selective optical spectra and magnetic relaxation measurements. The derived approach to the dynamic magnetization in the sweeping field based on the numerical solution of generalized master equations with time-dependent transition probabilities induced by the electron-phonon interaction, quantum tunneling and cross-relaxation allowed us to reproduce successfully the evolution of the hysteresis loop shape with temperature, sweeping rate and concentration of paramagnetic ions.
Boris Z. Malkin, Roman V. Yusupov, Ildar F. Gilmutdinov, Ruslan G. Batulin, Airat G. Kiiamov, Bulat F. Gabbasov, Sergey I. Nikitin, Bernard Barbara
2023-10-29T09:13:10Z
http://arxiv.org/abs/2310.18947v1
Double-loop hysteresis of multisite dilute Sr(Y\({}_{1-x}\)Dy\({}_{x}\))\({}_{2}\)O\({}_{4}\) single crystal Kramers paramagnets: electron-phonon interaction, quantum tunneling and cross-relaxation ###### Abstract Experimental and theoretical studies of the dynamic magnetization in swept magnetic fields of the orthorhombic SrY\({}_{2}\)O\({}_{4}\) single-crystals doped with the Dy\({}^{3+}\) Kramers ions (0.01 and 0.5 at.%) with natural abundances of even and odd Dy isotopes are presented. Impurity ions substitute for Y\({}^{3+}\) ions at two nonequivalent crystallographic sites with the same local \(C_{s}\) symmetry but strongly different crystal fields. Well pronounced double-loop hysteresis is observed at temperatures 2, 4, 5 and 6 K for sweeping rates of 5 and 1 mT/s. The microscopic model of spectral, magnetic and kinetic properties of Dy\({}^{3+}\) ions is developed based on the results of EPR, site selective optical spectra and magnetic relaxation measurements. The derived approach to the dynamic magnetization in the sweeping field based on the numerical solution of generalized master equations with time-dependent transition probabilities induced by the electron-phonon interaction, quantum tunneling and cross-relaxation allowed us to reproduce successfully the evolution of the hysteresis loop shape with temperature, sweeping rate and concentration of paramagnetic ions. ## I Introduction Much attention has been paid to studies of the macroscopic quantum tunneling of magnetization of different Single Molecule Magnets (SMM) containing Dy\({}^{3+}\) ions ([1; 2; 3; 4; 5] and references therein). A variety of hysteresis loops and their transformations, in particular from single-loop to double-loop evolution with temperature or the sweeping rate of an external magnetic field, were observed. A peculiar feature of Dy compounds is the existence of odd and even Dy isotopes with comparable natural abundances. A conventional analysis of the quantum tunneling at anticrossings of hyperfine sublevels of the ground Kramers doublet in the energy spectra of odd isotopes allows one to understand the experimental data, at least qualitatively. However, similar hysteresis loops were observed in the samples isotopically enriched with the even \({}^{164}\)Dy isotope [6; 7; 8]. The step-wise loops are observed even in magnetically diluted samples, though anticrossings of sublevels of the electronic Kramers doublet in the sweeping field can be induced only by the external or intrinsic (dipolar or superhyperfine) transversal magnetic field. Understanding the nature of the experimental findings in strongly diluted Kramers rare-earth electronic systems remains a challenging problem. We report here on the experimental and theoretical studies of dynamic magnetization in dysprosium-doped SrY\({}_{2}\)O\({}_{4}\) single crystals. Complex oxides SrR\({}_{2}\)O\({}_{4}\) (where R is Y or a rare-earth (RE) ion) possess a specific quasi-one-dimensional structure of orthorhombic symmetry with the space group \(Pnam\)[9; 10]. The unit cell contains 4 formula units, the lattice constant \(c=0.341\) nm is about three times less than the lattice constants \(a=1.007\) nm and \(b=1.191\) nm, and each sublattice is formed by ionic chains running along the \(c\)-axis. The impurity Dy\({}^{3+}\) ions substitute for Y\({}^{3+}\) ions at two nonequivalent crystallographic sites Y1 and Y2 with the point symmetry group \(C_{s}\).
The physical properties of magnetically concentrated crystals SrDy\({}_{2}\)O\({}_{4}\) were investigated earlier with magnetometry [11], heat capacity measurements [12], neutron scattering [13; 14; 15; 16], ultrasound [17] and muon [18] spectroscopies. In the absence of an external magnetic field, magnetic ordering in this compound was not observed down to the lowest experimentally accessible temperatures [11]. Strong magnetic anisotropy and coexistence of fast and slow fluctuations of short- and long-range magnetic correlations in external magnetic fields were discovered. The observed manifestations of a classic spin-liquid behavior of SrDy\({}_{2}\)O\({}_{4}\) are undoubtedly related to the specific single-ion spectral and kinetic properties of the dysprosium subsystem. Our results of low-temperature dynamic magnetization measurements in swept magnetic fields reveal that the strongly diluted paramagnet SrY\({}_{2}\)O\({}_{4}\):Dy\({}^{3+}\) is the first example of an inorganic Kramers rare-earth compound exhibiting SMM-like properties, similar to the first non-Kramers rare-earth compound LiYF\({}_{4}\):Ho\({}^{3+}\)[19]. However, in contrast with a single stable holmium isotope \({}^{165}\)Ho with a nuclear spin \(I=7/2\) and the dominant role of the nuclear-driven quantum tunneling of the magnetization in LiYF\({}_{4}\):Ho\({}^{3+}\), dysprosium has several stable even isotopes (56.2 %) along with odd isotopes \({}^{161}\)Dy (\(I=5/2\), 18.9 %) and \({}^{163}\)Dy (\(I=5/2\), 24.9 %). We present here an original approach to simulation of the dynamic magnetization in a swept magnetic field for different temperatures, sweeping rates and concentrations of paramagnetic ions accounting for the electron-phonon interaction, cross-relaxation and Landau-Zener-Stuckelberg (LZS) non-adiabatic quantum tunneling [20; 21; 22]. The measured low-temperature dynamic magnetization and magnetic field dependencies of relaxation rates are presented in Section II, while Section III describes modeling of the observed hysteresis loops. The results of spectroscopic studies (electron paramagnetic resonance (EPR) and optical spectra) are presented in Supplemental Material [23] containing necessary references [24; 25; 26; 27; 28; 29]. The article ends with the Conclusions. ## II Experimental details and results In the present study, the magnetic and spectral characteristics of impurity Dy\({}^{3+}\) ions in Sr(Y\({}_{1-x}\)Dy\({}_{x}\))\({}_{2}\)O\({}_{4}\) (\(x=10^{-4}\) and \(x=5\cdot 10^{-3}\)) single crystals grown by the floating zone method [23; 30] were measured by means of the low-temperature EPR and site-selective laser spectroscopies. The analysis of the registered spectra based on the calculations of crystal-field (CF) parameters in the framework of the semi-phenomenological exchange charge model (ECM) [31; 32] allowed us i) to associate the observed EPR signals with exact quantum transitions between the sublevels of the CF ground state doublets and ii) to assign spectral lines in selectively excited optical spectra to transitions between well-defined CF sublevels of the ground and several excited multiplets of the impurity Dy\({}^{3+}\) ions at Y1 (Dy1) and Y2 (Dy2) lattice sites.
As a result, we obtained the total sets of self-consistent single-ion spectral and magnetic parameters (two sets of 15 CF parameters (Table S3 in [23]), g-factors (Table S1 in [23]) and hyperfine coupling constants), the energy level patterns (Table S2 in [23]) and the corresponding electronic and electron-nuclear wave functions in external magnetic fields. This has served as a base for a detailed modeling and interpretation of the measured dependencies of the dynamic magnetization on temperature, sweeping rate of the magnetic field and concentration of Dy\({}^{3+}\) ions. It is important to underline here some specific differences between the single-ion properties of the Dy1 and Dy2 centers in SrY\({}_{2}\)O\({}_{4}\):Dy crystals. Possessing the same \(C_{s}\) point symmetry, due to differences in the location of nearby oxygen ions (see Fig. S1 in [23]), these centers reveal different magnetic anisotropies (of easy-axis and easy-plane type at Y1 and Y2 sites, respectively) and strongly different energy gaps (energy barriers in the relaxation processes) between the first excited and the ground CF doublets, \(E_{2}-E_{1}=68\) K and 300 K for Dy1 and Dy2, respectively. The maximum principal value of the \(g\)-tensor of the ground doublet of Dy2 ions equals \(g_{2}=19.28\) (\(g_{1}\) and \(g_{3}\) are less than 0.1) with the principal direction in the (\(ab\))-plane slightly tilted (\(\sim\pm 9^{\circ}\) for magnetically nonequivalent ions) from the \(b\)-axis, contrary to Dy1 ions with the maximum \(g\)-factor \(g_{3}=13.6\) along the \(c\)-axis. The discovered magnetic hysteresis is the result of long spin-phonon relaxation times of the quasi-Ising-type Dy2 ions caused by the large energy gap mentioned above. Magnetization measurements were carried out using the vibrating sample magnetometer (VSM) option of the PPMS-9 universal system (Quantum Design, USA). Magnetization (\(M\)) was measured as a function of a magnetic field \(B_{b}\) applied along the \(b\)-axis. In order to assess the irreversible nature of magnetization, measurements were carried out in increasing and then decreasing field with a constant speed \(v=dB/dt\). The equilibrium magnetization was measured at a reduced number of field values by setting the target field, then leaving the sample to relax (typically for 5 minutes) and then taking the magnetization measurement. Figure 1 shows the dynamic magnetization \(M(B_{b})\) measured on the single crystal SrY\({}_{2}\)O\({}_{4}\):Dy\({}^{3+}\) (0.01 at.%) in a swept magnetic field applied along the \(b\)-axis as well as the equilibrium one at temperatures 2, 4, 5 and 6 K (\(B_{max}=0.6\) T, \(v=5\) and 1 mT/s). In the magnetic fields oriented along the crystallographic axes, four Y1 sites, as well as four Y2 sites, in the unit cell are magnetically equivalent. According to the results of EPR measurements [23], the corresponding \(g\)-factors of the Dy\({}^{3+}\) ions are \(g_{bb}\)(Dy1) = 2.7 and \(g_{bb}\)(Dy2) = 19.28, so the Dy2 ions provide the dominant contribution to the measured magnetization. All registered dependencies \(M(B_{b})\) in the swept fields (Fig. 1) demonstrate well pronounced double-loop hysteresis. Loop areas reduce monotonically with decreasing sweeping rate and increasing temperature. We note a change of the loop shape from quasi-rectangular upturns and downturns close to zero values of the applied field \(B_{b}\) to a rounded spindle-type one between 5 and 6 K for decreasing sweeping rates in the range \(5-1\) mT/s.
A similar profile of the hysteresis loops was found in the dynamic magnetization of the SrY\({}_{2}\)O\({}_{4}\):Dy single-crystal sample with much higher dysprosium concentration (0.5 at.%, Fig. 2). Relaxation rates of the magnetization in SrY\({}_{2}\)O\({}_{4}\):Dy\({}^{3+}\) single-crystals with different concentrations of impurity ions (0.01 and 0.5 at.%) at a given temperature \(T\) and fixed magnetic field \(B_{0}\) along the \(b\)-axis were measured using fast switching of the magnetic field from the initial value \(B_{in}\) in the equilibrium state of the sample to the \(B_{0}\) value (\(B_{0}<B_{in}\) or \(B_{0}>B_{in}\)). The magnetization evolution was tracked by periodic measurements (typically every second) after the target field \(B_{0}\) had been set. Each magnetization value was in fact an average over the integration time of the PPMS lock-in detector. The measured magnetization evolution during the thermalization processes at the temperatures 2 and 4 K gives evidence for two magnetic subsystems with fast and slow relaxation rates and strongly different contributions to the total magnetization. We identify these two subsystems with the Dy\({}^{3+}\) ions at Y1 and Y2 sites. In particular, the measured time dependencies of the magnetization after setting the field \(B_{0}\), shown in Fig. 3 for \(B_{in}=0\), are described by the equation \[M(t)=\left[M_{1}(B_{0})+M_{2}(B_{0})(1-e^{-t/\tau})\right]/2, \tag{1}\] where \(\tau\) is the relaxation time, and \(M_{1}(B_{0})\) and \(M_{2}(B_{0})\) are the equilibrium magnetic moments along the \(b\)-axis of Dy1 and Dy2 ions, respectively, at temperature \(T\) and magnetic field \(B_{0}\) (for example, at \(T=2\) K and \(B_{0}=0.07\) T, \(M_{1}(B_{0})=0.35\)\(\mu_{B}\)/Dy1, \(M_{2}(B_{0})=1.82\)\(\mu_{B}\)/Dy2, \(\tau=1444\) s). A small contribution of Dy1 ions to the total magnetization quickly achieves the equilibrium value \(M_{1}(B_{0})\), while variations of magnetic moments at Dy2 sites are well resolved and are successfully described by a single exponential relaxation model at relatively high magnetic fields \(B_{0}\geq 0.07\) T. The magnetic field dependence of the relaxation time \(\tau\propto B_{0}^{-4}\) at \(T=2\)-4 K (see Fig. 4, the slope of the straight dashed line 1 equals \(-3.89\)) evidences the dominant role of the direct spin-phonon relaxation process. However, when the target field \(B_{0}\) tends to zero, the measured relaxation time of Dy\({}^{3+}\) ions shortens and reaches values as short as about 20 s (\(T=2\) K) at Y2 sites. Figure 1: Dynamic magnetization of the SrY\({}_{2}\)O\({}_{4}\):Dy (0.01 at.%) single-crystal in magnetic fields \(\mathbf{B}\parallel\mathbf{b}\) for the sweeping rate of (a) 5 and (b) 1 mT/s at 2, 4, 5 and 6 K. Red and black lines represent the results of measurements and modeling, respectively. Dashed lines show the equilibrium magnetization. Magnetization curves for \(T=4\), 5 and 6 K are shifted upward by 3, 5 and 6.5 \(\mu_{B}\)/Dy\({}^{3+}\), respectively; \(\mu_{B}\) is the Bohr magneton. Figure 2: Measured (red lines) and simulated (black lines) dynamic magnetization of the single-crystal sample SrY\({}_{2}\)O\({}_{4}\):Dy (0.5 at.%). Dashed line represents the equilibrium magnetization.
In the considered range of temperatures, Orbach-type relaxation is ineffective because of a large, 300 K, gap between the ground and the first excited CF sublevels of the ground multipet of the Dy\({}^{3+}\) ions at Y2 sites. However, the observed abrupt change of the shape of the hysteresis loops at the elevated temperatures of \(5-6\) K as compared with the loops at \(2-4\) K (see Fig. 1) evidences for the two-phonon Raman relaxation processes with the strong temperature dependence of the relaxation rate. Quantum tunneling and cross-relaxation processes affect the dynamic magnetization only in the relatively narrow range of sweeping magnetic field \(-0.02\) T \(<B_{b}(t)<+0.02\) T comparable to widths of hyperfine structures of the ground Kramers doublets of odd dysprosium isotopes at Y2 sites. ## III Modeling of the observed hysteresis loops ### Master equations Assuming a homogeneous distribution of impurity ions among Y1 and Y2 sites and neglecting interactions between the Dy\({}^{3+}\) ions in the strongly dilute samples, we describe the observed dynamic magnetization along the applied field by the quantum statistical expression \[\left\langle M_{b}(t)\right\rangle=\sum_{\lambda=Dy1,Dy2}\mathrm{Tr}\left[M_{b,\lambda}\rho(\lambda,t)\right]/2. \tag{2}\] Here \(M_{\alpha.\lambda}\) are components of the magnetic moment operators of Dy\({}^{3+}\) ions, and \(\rho(\lambda,t)\) is the single-ion density matrix of an impurity ion at the site Y1 (\(\lambda=\) Dy1) or Y2 (\(\lambda=\) Dy2) satisfying the generalized master equation [32] with the time dependent relaxation terms. To model the dynamic magnetization, we use the secular approximation that is adapted for the considered system; the non-diagonal elements of \(\rho(\lambda,t)\) in the basis of eigenfunctions of the single-ion Hamiltonian \(H_{\lambda}(t)\) are neglected. The diagonal elements \(\rho_{nn}(\lambda,t)=\rho_{n}(\lambda,t)\) which determine populations of single-ion electronic (or electron-nuclear) states with energies \(E_{n}(\lambda,t)\) satisfy the equations of motion \[\frac{\partial\rho_{n}}{\partial t}=\\ \sum_{k(k\neq n)}\left[\rho_{k}W_{k\to n}(t)-\rho_{n}W_{n\to k}(t)+ \Gamma_{kn}(t)(\rho_{k}-\rho_{n})\right]+\\ \sum_{kpl(k\neq l,p\neq n)}W^{CR}_{n\leftrightarrow p,l\gets k }(t)(\rho_{p}\rho_{k}-\rho_{l}\rho_{n}), \tag{3}\] here and below the site index \(\lambda\) is dropped for simplicity. The time dependent coefficients at the right-hand side of Eq. (3) are transition probabilities between different states of a Dy\({}^{3+}\) ion induced by electron-phonon interaction (\(W_{k\to n}\)), LZS quantum tunneling (\(\Gamma_{kn}\)) and dipole-dipole interactions between paramagnetic ions (cross-relaxation (CR) processes with the probabilities \(W^{CR}_{n\gets p,l\gets k}\) of simultaneous transitions in a coupled pair of ions, \(p\to n\) in one ion and \(k\to l\) in another [33]). Because of large gaps between the first excited and the ground state doublets of Dy\({}^{3+}\) ions at Y1 and Y2 sites, we can neglect populations of all excited CF energy levels at low temperatures (\(T\leq 6\) K). In the case Figure 3: Registered (solid lines) relaxation to the equilibrium magnetization in SrY\({}_{2}\)O\({}_{4}\):Dy (0.01 at.%) sample after a fast setting of the external field \(B_{0}\) along the \(b\)-axis at \(T=2\) K; for each curve, a value of \(B_{0}\) is shown in the figure. Red dashed lines represent the fits of the data with single exponential slow evolution (1) of the magnetization of Dy2 ions. 
Figure 4: The relaxation time dependencies on the applied magnetic field at \(T=2\) K in SrY\({}_{2}\)O\({}_{4}\):Dy crystals with Dy concentrations of 0.01 at.% (1) and 0.5 at.% (2). Line (2) is a guide for the eye. In the case of magnetically equivalent ions, the system of equations (3) contains only two (even isotopes) or twelve (odd isotopes) equations for the relative populations of sublevels of the ground state doublet. The corresponding systems of differential nonlinear (in the general case) equations with time-dependent coefficients were solved numerically by the Newton method of successive approximations, modified by varied steps of the sweeping field in searches for stable solutions. Calculations were performed separately for each of the six contributions to the total dynamic magnetization from the even and two odd isotopes, weighted in accordance with their natural abundances, at sites Y1 and Y2. The initial step of the increasing (decreasing) magnetic field had a value of \(10^{-4}\) T. The normalization condition \(\sum_{n}\rho_{n}=1\) was checked at each step. The simulation starts from the numerical diagonalization of the single-ion Hamiltonian \(H(t=0)\) (see below) for an ion in the initial magnetic field \(B_{b}(0)=B_{min}\) or \(B_{b}(0)=B_{max}\) and the construction of the equilibrium density matrix \(\rho_{nk}(0)=N\delta_{nk}\exp(-E_{n}(0)/k_{B}T)\) at a fixed temperature \(T\) (\(E_{n}(0)\) are eigenvalues of the Hamiltonian \(H(0)\), \(k_{B}\) is the Boltzmann constant and \(N\) is the normalization factor). Transition probabilities in Eqs. (3) were calculated for fixed values of the magnetic field \(B_{b}(t)\) using the single-ion Hamiltonian operating in the basis of electronic (for even Dy isotopes) or electron-nuclear (odd isotopes) states of the ground multiplet \({}^{6}\)H\({}_{15/2}\). This effective Hamiltonian \[H(t)=H_{\rm CF}+H_{\rm Z}(t)+H_{\rm MHF}+H_{\rm QHF} \tag{4}\] contains the CF interaction (\(H_{\rm CF}\)) defined by 15 CF parameters for each of the two nonequivalent Y1 and Y2 sites determined from EPR and site-selective optical measurements [23], magnetic dipole (\(H_{\rm MHF}\)) and electric quadrupole (\(H_{\rm QHF}\)) hyperfine interactions, and the Zeeman interaction (\(H_{\rm Z}\)). The swept field is \(B_{b}(t)\), and the steady transversal fields \(B_{a}\) and \(B_{c}\) are considered as fitting parameters. We estimated the lower boundaries of the transversal fields \(B_{a}=0.284\cdot 10^{-5}\) T and \(B_{c}=0.336\cdot 10^{-5}\) T at Y2 sites from calculations of the mean-square dipolar fields [34] of nuclear magnetic moments \({\bf m}_{Y}=g_{n}\mu_{n}{\bf I}\) of \({}^{89}\)Y\({}^{3+}\) ions (\(g_{n}=-0.137\), \(\mu_{n}\) is the nuclear magneton, \(I=1/2\)), \[B_{\alpha}=\sum\frac{g_{n}\mu_{n}}{2r^{5}}\left[\left(3\alpha^{2}-r^{2}\right)^{2}+(3\alpha\beta)^{2}+(3\alpha\gamma)^{2}\right]^{1/2}, \tag{5}\] where the summation was taken over all Y sites with coordinates \(\alpha,\beta,\gamma\) (\(\alpha\neq\beta\neq\gamma\)) in the crystallographic frame with the origin at the Y2 site. In the present work, the self-consistent description of the dynamic magnetization was achieved using the transversal fields \(B_{a}=1.45\cdot 10^{-5}\) T and \(B_{c}=0.5\cdot 10^{-5}\) T (note, these fields are comparable to the Earth's magnetic field).
In the crystallographic frame (\(x\parallel a,\,y\parallel b,\,z\parallel c\)), \[H_{\rm Z}=\mu_{B}g_{J}\left[J_{x}B_{a}+J_{z}B_{c}+J_{y}B_{b}(t)\right], \tag{6}\] \[H_{\rm MHF}=A_{J}{\bf J}\cdot{\bf I}, \tag{7}\] \[H_{\rm QHF}=B_{Q}\left\{\left[3J_{z}^{2}-J(J+1)\right]\left[3I_{z}^{2}-I(I+1)\right]/3+\right.\] \[\left(J_{x}J_{y}+J_{y}J_{x}\right)\left(I_{x}I_{y}+I_{y}I_{x}\right)+\left(J_{x}J_{z}+J_{z}J_{x}\right)\left(I_{x}I_{z}+I_{z}I_{x}\right)+\] \[\left.\left(J_{y}J_{z}+J_{z}J_{y}\right)\left(I_{y}I_{z}+I_{z}I_{y}\right)+\left(J_{x}^{2}-J_{y}^{2}\right)\left(I_{x}^{2}-I_{y}^{2}\right)\right\}+\] \[P_{0}\left[3I_{z}^{2}-I(I+1)\right]+P_{2}(I_{x}^{2}-I_{y}^{2})+P_{-2}(I_{x}I_{y}+I_{y}I_{x}). \tag{8}\] Here, \(J_{\alpha}\) and \(I_{\alpha}\) are components of the total electronic angular and nuclear spin moment operators, respectively, with \(J=15/2\) and \(I=5/2\). The effective Lande factor as well as the magnetic hyperfine coupling constants were slightly corrected, relative to the values of the Russell-Saunders approximation, using the results of EPR measurements [23]: \(g_{J}({\rm Dy}1)=0.9925\cdot 4/3\), \(g_{J}({\rm Dy}2)=0.985\cdot 4/3\), \(A_{J}({}^{161}{\rm Dy})=-3.683\cdot 10^{-3}\) cm\({}^{-1}\), \(A_{J}({}^{163}{\rm Dy})=5.163\cdot 10^{-3}\) cm\({}^{-1}\). The upper three lines in Eq. (8) define the contributions to the electric field gradient at the nucleus from the \(4f\) electrons localized on a rare-earth ion; the parameters \(B_{Q}({}^{161}{\rm Dy})=0.7021\cdot 10^{-5}\) cm\({}^{-1}\) and \(B_{Q}({}^{163}{\rm Dy})=0.7319\cdot 10^{-5}\) cm\({}^{-1}\) were calculated according to the corresponding definition [35]. The lower line in Eq. (8) corresponds to the energy of the nuclear quadrupole moment at sites with the \(C_{s}\) symmetry in the ionic lattice; the parameters \(P_{k}\) were calculated using the nominal point ion charges, \(P_{0}({}^{161}{\rm Dy}1)=-1.092\), \(P_{2}({}^{161}{\rm Dy}1)=-2.543\), \(P_{-2}({}^{161}{\rm Dy}1)=-0.42\), \(P_{0}({}^{161}{\rm Dy}2)=1.935\), \(P_{2}({}^{161}{\rm Dy}2)=11.615\), \(P_{-2}({}^{161}{\rm Dy}2)=-0.164\) (in units of \(10^{-5}\) cm\({}^{-1}\)); for the \({}^{163}\)Dy isotope, the parameters \(P_{k}({}^{163}{\rm Dy}1)\) and \(P_{k}({}^{163}{\rm Dy}2)\) contain the additional factor \(Q({}^{163}{\rm Dy})/Q({}^{161}{\rm Dy})=1.043\). Note that the quadrupole hyperfine interaction only weakly affects the energies of the hyperfine sublevels of the Dy\({}^{3+}\) Kramers doublets, but it substantially increases the transition probabilities induced by the electron-phonon interaction (see Fig. S7 in [23] and Ref. [36]). ### Electron-phonon relaxation At low temperatures, we need consider only the interaction of the \(4f\) electrons with long-wavelength acoustic phonons.
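Before turning to the phonon-induced transition rates, it may help to see the electron-nuclear Hamiltonian (4) assembled explicitly. A minimal sketch for an odd isotope, using the Zeeman (6) and magnetic hyperfine (7) terms with the parameters quoted above; the crystal-field term is left as a zero placeholder (the 15 CF parameters of Ref. [23] are not reproduced here), so the resulting spectrum is purely structural:

```python
import numpy as np

muB = 0.46686   # Bohr magneton in cm^-1 per T

def ang_mom(j):
    """Return (Jx, Jy, Jz) for angular momentum j in the |j,m> basis."""
    m = np.arange(j, -j - 1.0, -1.0)
    cp = np.sqrt(j * (j + 1) - m[1:] * (m[1:] + 1))
    Jp = np.diag(cp, 1)              # raising operator J+
    return (Jp + Jp.T) / 2.0, (Jp - Jp.T) / 2.0j, np.diag(m)

J, I = 7.5, 2.5                      # ground multiplet 6H15/2, I = 5/2
Jx, Jy, Jz = ang_mom(J)
Ix, Iy, Iz = ang_mom(I)
dimJ, dimI = int(2 * J + 1), int(2 * I + 1)
one_I = np.eye(dimI)

gJ = 0.985 * 4.0 / 3.0               # Dy2 Lande factor quoted in the text
AJ = 5.163e-3                        # 163Dy hyperfine constant, cm^-1
H_CF = np.zeros((dimJ, dimJ))        # placeholder for the 15 CF parameters of [23]

def H_total(Ba, Bb, Bc):
    """Hamiltonian (4): CF placeholder + Zeeman (6) + magnetic hyperfine (7).
    The quadrupole term (8) is omitted in this sketch."""
    H_Z = muB * gJ * (Jx * Ba + Jz * Bc + Jy * Bb)   # frame of Eq. (6)
    H_MHF = AJ * (np.kron(Jx, Ix) + np.kron(Jy, Iy) + np.kron(Jz, Iz))
    return np.kron(H_CF + H_Z, one_I) + H_MHF

E = np.linalg.eigvalsh(H_total(1.45e-5, 0.01, 0.5e-5))
print(E[:12])   # lowest electron-nuclear states (illustrative with H_CF = 0)
```

The full modeling of course requires the fitted \(H_{\rm CF}\) and the quadrupole term (8); the sketch only shows how the \(96\times 96\) electron-nuclear matrix underlying the level diagrams is built.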
In this case, the Hamiltonian of the electron-phonon interaction (EPI) operating in the space of electron-nuclear states of the ground multiplet \({}^{6}\)H\({}_{15/2}\) is written as follows [37; 38] (the second term on the right-hand side corresponds to the electron-rotational interaction): \[H_{\rm EPI}=\sum_{\alpha,\beta}V_{\alpha\beta}e_{\alpha\beta}+i\left[H,({\bf J}+{\bf I})\cdot\boldsymbol{\theta}\right], \tag{9}\] where \(H\) is the single-ion Hamiltonian (4), \(e_{\alpha\beta}\) are components of the dynamic deformation tensor, and \(\boldsymbol{\theta}\) are the vectors of dynamic rotations. Both are linear in the phonon annihilation \(a_{{\bf q}j}\) and creation \(a^{+}_{{\bf q}j}\) operators, with amplitudes determined by the phonon mode variables \(Q_{{\bf q}j}\) and polarization vectors \(\varepsilon_{j}({\bf q})\); the explicit expansions, and the single-phonon transition probabilities \(W_{k\to n}\) that follow from them, can be found in Refs. [37; 38].
It should be noted that the specific butterfly hysteresis loops in the molecular complex V\({}_{15}\), with its effective spin \(S=1/2\) ground state, have been successfully described in Refs. [40; 41] by taking into account the phonon bottleneck effect; however, the degree of resonant phonon heating depends strongly on the spin-phonon coupling strength and the concentration of paramagnetic ions. The evolution of the nonequilibrium populations of energy levels driven by the electron-phonon interaction is determined by the relaxation matrix \(W\), \(\partial\rho_{n}(t)/\partial t=\sum_{k}W_{nk}(t)\rho_{k}(t)\), where \(W_{nk}=W_{k\to n}\), \(W_{kn}=W_{nk}\exp(-\hbar\omega_{kn}/k_{B}T)\) and \(W_{nn}=-\sum_{k\neq n}W_{kn}\). The eigenvalues of the relaxation matrix determine the set of relaxation rates; in particular, for a two-level system, the relaxation time is \(\tau=-(W_{11}+W_{22})^{-1}\). The calculated probabilities of the single-phonon transitions within the ground state doublet of the Dy\({}^{3+}\) ions at Y1 sites are large enough (in particular, for even isotopes in the field \(B_{b}=0.3\) T at \(T=2\) K, the relaxation time is \(\tau=7.7\cdot 10^{-4}\) s) to ensure quick thermalization of the Dy1 subsystem. Therefore, the simulated dynamic magnetization of the Dy1 ions follows the Curie law in all the experiments performed in the present work; the Dy1 subsystem does not contribute to the hysteresis loops because of its fast electron-phonon relaxation. For the Dy2 subsystem, we obtained single-phonon relaxation rates about three orders of magnitude slower (in particular, \(\tau=1.66\) s for even isotopes in the field \(B_{b}=0.3\) T at \(T=2\) K), comparable to the measured relaxation time of 5.3 s (see Fig. 4). To fit the measured dynamic magnetization at elevated temperatures (\(T=5-6\) K), we supplemented the relaxation matrix of the Dy\({}^{3+}\) ions at Y2 sites with terms corresponding to the Raman relaxation processes, which provide the relaxation rate of the ground Kramers doublet \(\tau_{R}^{-1}=7.2\cdot(T/10\,{\rm K})^{9}\) s\({}^{-1}\) at cryogenic temperatures [42]. At temperatures \(T<5\) K these terms are practically ineffective, but at the temperature of 6 K the Raman rate \(\tau_{R}^{-1}=0.073\) s\({}^{-1}\) remarkably exceeds the single-phonon relaxation rate of 0.0174 s\({}^{-1}\) in the magnetic field \(B_{b}=0.1\) T. The modeling of the low-field dynamics requires, in addition, two further relaxation channels, namely, the cross-relaxation and the quantum tunneling, which are effective at low magnetic fields in the region of crossing hyperfine or Zeeman sublevels of the ground state Kramers doublet of Dy\({}^{3+}\) ions at Y2 sites.
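Returning to the relaxation matrix introduced above: the statement that the relaxation rates are its nonzero eigenvalues is easy to make concrete. A minimal sketch with a hypothetical three-level spectrum and illustrative bare probabilities obeying the detailed-balance relation quoted in the text:

```python
import numpy as np

# Sketch: relaxation matrix W with W_{kn} = W_{nk} exp(-hbar*omega_{kn}/kB T)
# and W_{nn} = -sum_{k != n} W_{kn}; the relaxation rates are read off from
# its eigenvalues.  Energies and bare probabilities are placeholders.
kB_cm = 0.695                          # cm^-1 per K
T = 2.0
E = np.array([0.0, 0.1, 0.3])          # hypothetical level energies, cm^-1
G = np.array([[0.0, 1.0, 0.2],
              [1.0, 0.0, 0.5],
              [0.2, 0.5, 0.0]])        # symmetric bare probabilities, s^-1

n = len(E)
W = np.zeros((n, n))
for a in range(n):                     # W[a, b] is the probability of b -> a
    for b in range(n):
        if a != b:
            W[a, b] = G[a, b] * min(1.0, np.exp(-(E[a] - E[b]) / (kB_cm * T)))
np.fill_diagonal(W, -W.sum(axis=0))    # W_{nn} = -sum over the column

rates = np.sort(-np.linalg.eigvals(W).real)
print("relaxation rates (s^-1):", rates)
# One eigenvalue vanishes (the equilibrium distribution survives); for two
# levels the single nonzero rate reduces to -(W_11 + W_22), as in the text.
```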
### Cross-relaxation We take into account the cross-relaxation within the subsystems of the odd dysprosium isotopes with the multi-level hyperfine structures of the electronic doublets, and the cross-relaxation between even isotopes of Dy1 and Dy2 ions in the magnetic field \(B_{b}(t)\) crossing the zero value. Taking into account finite widths of energy levels and assuming the homogeneous distribution of Dy\({}^{3+}\) ions over Y1 and Y2 sites, we can write the transition probability (13) for ions belonging to the subsystems \(\lambda\) and \(\lambda^{\prime}\) in the following form [44; 45] (we consider here the magnetic dipolar interactions between ions with magnetic moments \(\mathbf{M}_{\lambda}=g_{\lambda,J}\mu_{B}\mathbf{J}_{\lambda}\)): \[W^{CR,\lambda\lambda^{\prime}}_{n\gets p,l\gets k}=2\pi C_{\lambda^{\prime}}\frac{(\mu_{B}^{2}g_{\lambda,J}g_{\lambda^{\prime},J})^{2}}{\hbar^{2}}\times\\ \sum_{\alpha\beta\gamma\delta}g^{CR}_{\alpha\beta\gamma\delta}(\omega_{pn}-\omega_{lk})\,k^{\lambda\lambda^{\prime}}_{\alpha\beta\gamma\delta}\,J^{(\lambda)}_{\alpha,np}J^{(\lambda^{\prime})}_{\beta,lk}J^{(\lambda)}_{\gamma,pn}J^{(\lambda^{\prime})}_{\delta,kl}, \tag{14}\] where either \(\lambda=\lambda^{\prime}=\,^{161}\)Dy1, \({}^{161}\)Dy2, \({}^{163}\)Dy1, \({}^{163}\)Dy2, or \(\lambda=\,^{even}\)Dy2 and \(\lambda^{\prime}=\,^{even}\)Dy1; \(C_{\lambda^{\prime}}\) is the concentration of \(\lambda^{\prime}\)-ions per Y lattice site, \(g^{CR}_{\alpha\beta\gamma\delta}(\omega)\) is the CR form-function, and \(k^{\lambda\lambda^{\prime}}_{\alpha\beta\gamma\delta}\) are the lattice sums \(k^{\lambda\lambda^{\prime}}_{\alpha\beta\gamma\delta}=\sum_{s}a_{\alpha\beta}(\mathbf{R}_{\lambda\lambda^{\prime},s})a_{\delta\gamma}(\mathbf{R}_{\lambda\lambda^{\prime},s})\) over the sites of a Bravais lattice, with \[a_{\alpha\beta}(\mathbf{R}_{\lambda\lambda^{\prime},s})=\frac{1}{R^{3}_{\lambda\lambda^{\prime},s}}\left(\delta_{\alpha\beta}-3\frac{x_{\lambda\lambda^{\prime},s\alpha}x_{\lambda\lambda^{\prime},s\beta}}{R^{2}_{\lambda\lambda^{\prime},s}}\right). \tag{15}\] Here, \(\mathbf{R}_{\lambda\lambda^{\prime},s}\) is the radius-vector of the site \(s\) in the subsystem \(\lambda^{\prime}\) in the coordinate frame with its origin at the site belonging to the subsystem \(\lambda\). The computed lattice sums used in the modeling of the dynamic magnetization are presented in the Supplemental Material [23]. The CR rates were calculated assuming the Gaussian line shape \(g^{CR}_{\alpha\beta\gamma\delta}=\frac{1}{\sqrt{2\pi}\Delta}\exp[-(\omega_{pn}-\omega_{lk})^{2}/2\Delta^{2}]\) of the spectral density of the energy reservoir corresponding to the interactions between the Dy\({}^{3+}\) ions. The standard deviation \(\Delta\) of the CR frequencies, which increases with temperature and with the concentration of paramagnetic ions, was estimated from the measured EPR linewidths (\(\Delta=140-300\) MHz). The most important result of the CR processes is the appearance of effective relaxation channels with rates of up to \(10^{7}\) s\({}^{-1}\) near the crossing points of the hyperfine sublevels of the Kramers doublets of the odd dysprosium isotopes. It should be noted that the calculated field dependencies of the CR-promoted relaxation rates might change remarkably if another CR line shape (the Lorentz distribution, in particular) is used. ### Quantum tunneling Computed values of the gaps \(G_{pq}\) (with a precision of \(10^{-7}\) cm\({}^{-1}\)) at the anticrossings of the hyperfine sublevels in the fields \(B_{b}(p,q)\) are presented in Table 2.
Transversal fields \(B_{a}\) and \(B_{c}\) of the order of \(10^{-5}\) T practically do not change the tunneling splittings \(G_{pq}\) in the spectra of the odd isotopes, but they slightly shift the \(B_{b}\) values of all crossing points. Large gaps, of the order of \(10^{-4}-10^{-3}\) cm\({}^{-1}\), between the anticrossing hyperfine sublevels \(p\) and \(q\) with the differences \(|m_{p}-m_{q}|=1\) between the corresponding nuclear spin projections on the swept field are induced by the magnetic hyperfine interaction (second-order effects of the order of \(A_{J}^{2}/\Delta E\), where \(A_{J}\) is the magnetic hyperfine coupling constant and \(\Delta E\) is the CF energy of the first excited sublevel of the ground multiplet). Note that the direction of the applied field does not coincide with the local quantization axes of the electronic angular momentum: it is tilted by \(\pm 9\) degrees in the \((ab)\)-plane from the principal directions of the \(g\)-factor \(g_{2}=19.28\) of the magnetically nonequivalent Dy2 ions. Gaps an order of magnitude smaller between the hyperfine sublevels with \(m_{p}=m_{q}\) are also induced by the magnetic hyperfine interaction. Additional, significantly smaller (by about two orders of magnitude) gaps are induced by the quadrupole hyperfine interaction; as follows from the calculations, this interaction additionally mixes electron-nuclear wave functions with nuclear spin projections \(m\) and \(m\pm 2\), and this mixing provides weak additional anticrossings.
Figure 6: Hyperfine structures of the ground doublets of (a) \({}^{163}\)Dy\({}^{3+}\) and (b) \({}^{161}\)Dy\({}^{3+}\) ions at sites Y2 in weak magnetic fields \(B_{b}\) applied along the \(b\)-axis. Indices \(p\) (\(q\)) enumerate hyperfine sublevels with increasing (decreasing) energies in the increasing field.
The calculated gap between the sublevels of the ground doublet of the even isotopes at Y2 sites, arising from the transversal magnetic fields introduced above, equals \(3.6\cdot 10^{-7}\) cm\({}^{-1}\), an order of magnitude less than, in particular, the value reported in Ref. [2]. The analytical expression for the transition probability (per unit time) between the anticrossing energy levels \(n\) and \(k\) with the minimal energy gap \(\Delta_{kn}\) at the sweeping field \(B_{b}(t)=B_{G}\), \[\Gamma_{kn}(B_{b})=\frac{\Delta_{kn}^{2}\tau_{kn}}{2\left\{\hbar^{2}+[g_{bb}\mu_{B}(B_{b}(t)-B_{G})\tau_{kn}]^{2}\right\}}, \tag{16}\] was derived in Refs. [47; 48; 49], accounting for the electron-phonon relaxation time \(\tau_{kn}\) defined by \(\tau_{kn}^{-1}=-(W_{kk}+W_{nn})/2\). The function (16) has a Lorentzian shape with a width at half maximum \[\Delta B_{b,kn}=\frac{2\hbar}{g_{bb}\mu_{B}\tau_{kn}} \tag{17}\] that does not depend on the gap \(\Delta_{kn}\). The measured relaxation time of Dy\({}^{3+}\) ions at Y2 sites in weak magnetic fields \(|B_{b}(t)|<0.02\) T is not less than 20 s. In accordance with (17), the width of the LZS regime then does not exceed \(6\cdot 10^{-14}\) T and is much less than the minimal step (about \(10^{-9}\) T) of the applied-field variation used in the simulations of the dynamic magnetization. As the duration of the passage of the sweeping field through the quantum tunneling regions is much smaller than the relaxation time, we used the asymptotic LZS expression for the nonadiabatic transition probabilities, \[P_{nk}=\exp\left(-\frac{\pi\Delta_{nk}^{2}}{2\hbar\mu_{B}g_{bb}|dB_{b}/dt|}\right), \tag{18}\] applied just after a passage of the field \(B_{b}\) through the anticrossing point.
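A sketch of how the probability (18) discriminates between the gap scales involved, using the \(g\)-factor quoted above, the even-isotope gap, and representative odd-isotope gaps from Table 2 (the sweep rates are illustrative):

```python
import numpy as np

# Sketch of the nonadiabatic LZS probability (18) for the gap scales of
# this system: the even-isotope gap 3.6e-7 cm^-1 and gaps of the size
# listed in Table 2 for odd isotopes.
hbar = 1.0546e-34          # J s
muB = 9.274e-24            # J/T
cm1 = 1.9864e-23           # 1 cm^-1 in J
g_bb = 19.28               # ground-doublet g-factor of Dy2 ions

def P_LZS(gap_cm1, sweep_T_per_s):
    """Probability (18) of a diabatic passage through the anticrossing."""
    Delta = gap_cm1 * cm1
    return np.exp(-np.pi * Delta**2
                  / (2.0 * hbar * muB * g_bb * sweep_T_per_s))

for gap in (3.6e-7, 1e-5, 2e-4):                    # gaps in cm^-1
    probs = [P_LZS(gap, v) for v in (0.002, 0.005, 0.02)]
    print(gap, ["%.3f" % p for p in probs])
# The tiny even-isotope gap is passed partly diabatically at these sweep
# rates, whereas the large hyperfine-induced gaps are crossed essentially
# adiabatically (P ~ 0): even and odd isotopes behave very differently.
```

These probabilities feed back into the populations of Eq. (3) immediately after each anticrossing is traversed.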
However, the corresponding evolution of the magnetization is strongly renormalized (see Fig. S6 in [23]) by the CR processes, which are effective within a magnetic field region of about \(5\cdot 10^{-4}\) T determined by the width of the CR form-function. Note that despite the huge differences between the maximal values (\(10^{11}\)-\(10^{15}\) s\({}^{-1}\)) of the quantum tunneling rate (16), \(\Delta_{kn}^{2}\tau_{kn}/2\hbar^{2}\), and the probabilities of the flip-flop CR processes, the corresponding accumulated changes of the populations of the ground doublet sublevels have comparable values, due to the inverse relation between the widths of the LZS and CR regimes. Eventually, after taking into account the quantum tunneling and the cross-relaxation, the simulated profiles of the hysteresis loops reproduce remarkably well the results of measurements at different temperatures and sweep rates of the magnetic field in the SrY\({}_{2}\)O\({}_{4}\) single-crystal samples doped with Dy\({}^{3+}\) (0.01 and 0.5 at.%) ions (Figs. 1 and 2). As an example, the separate contributions to the total magnetization from the even and odd (\({}^{163}\)Dy) isotopes at Y2 sites at the temperature of 2 K for the sweeping rate of 5 mT/s are shown in the Supplemental Material [23] (Fig. S8). The dynamic magnetization curves at low temperatures (2 and 4 K) in the increasing field from \(B_{min}=-B_{max}\) to \(B_{max}\) (as well as in the decreasing field from \(B_{max}\) to \(B_{min}\)) show two quasi-plateaus at the entrance to and exit from the critical region of fast evolution of the average magnetic moments per Dy\({}^{3+}\) ion, the region being determined by the widths of the hyperfine structures of the odd isotopes. When approaching this region, the value of the magnetic moment is determined by the electron-phonon relaxation rate; after passing through this region, the values of the magnetic moments are determined by the LZS tunneling renormalized by the CR processes. It should be underlined that the agreement of the modeling with the experimental data was achieved by introducing only two fitting parameters, namely, the components of the weak transversal magnetic field, comparable to the Earth's magnetic field, acting on the impurity Dy\({}^{3+}\) ions at Y2 sites. The unveiled low relaxation rates of impurity Dy\({}^{3+}\) ions at Y2 sites in the single-ion magnets SrY\({}_{2}\)O\({}_{4}\):Dy correlate with the unusual irreversibility of magnetic processes studied by means of the ultrasound technique in the concentrated SrDy\({}_{2}\)O\({}_{4}\) single crystal at very low temperatures [14]. Recently, hysteretic dynamic magnetization was observed at low temperatures in the paramagnetic phase of the concentrated inorganic dysprosium compounds DyScO\({}_{3}\) (\(T_{N}=3.1\) K) [50] and LiDyF\({}_{4}\) (\(T_{N}=0.56\) K) [51], where, most probably, the formation of the hysteresis loops was caused by the magnetocaloric effect. ## IV Conclusions The first observation and investigation of quantum tunneling of magnetization in an inorganic dilute rare-earth paramagnet, a single crystal of the tetragonal double fluoride LiYF\({}_{4}\) doped with non-Kramers Ho\({}^{3+}\) ions, was reported more than 20 years ago [19; 52].
\begin{table} \begin{tabular}{c|c c c c c c} \(p/q\) **(I)** & 1 & 2 & 3 & 4 & 5 & 6 \\ \hline 1 & 0.00 & 0.00 & 0.04 & 0.16 & 14.83 & 7.46 \\ 2 & 0.00 & 0.05 & 0.07 & 18.75 & 4.00 & 54.34 \\ 3 & 0.04 & 0.07 & 19.88 & 1.22 & 68.65 & 0.60 \\ 4 & 0.16 & 18.75 & 1.29 & 72.78 & 0.27 & 0.13 \\ 5 & 14.83 & 4.00 & 68.65 & 0.27 & 0.16 & 0.00 \\ 6 & 7.46 & 54.34 & 0.60 & 0.13 & 0.00 & 0.00 \\ \hline \(p/q\) **(II)** & 1 & 2 & 3 & 4 & 5 & 6 \\ \hline 1 & 0.00 & 0.00 & 0.12 & 0.61 & 38.65 & 5.99 \\ 2 & 0.00 & 0.16 & 0.27 & 48.97 & 4.08 & 10.36 \\ 3 & 0.12 & 0.27 & 51.98 & 1.44 & 13.12 & 0.14 \\ 4 & 0.61 & 48.97 & 1.44 & 13.92 & 0.06 & 0.03 \\ 5 & 38.65 & 4.08 & 13.12 & 0.06 & 0.04 & 0.00 \\ 6 & 5.99 & 10.36 & 0.14 & 0.03 & 0.00 & 0.00 \\ \end{tabular} \end{table} Table 2: Computed gaps \(G_{pq}\) (in units of \(10^{-5}\) cm\({}^{-1}\)) at the anticrossing points in the hyperfine structure of the ground state doublet of \({}^{163}\)Dy2 (I) and \({}^{161}\)Dy2 (II) ions.
As follows from the results of our study, the multi-sublattice oxide SrY\({}_{2}\)O\({}_{4}\) doped with Kramers Dy\({}^{3+}\) ions is the second inorganic dilute rare-earth paramagnet that exhibits a hysteretic behavior of the dynamic magnetization similar to SMMs. The approach to the magnetization dynamics developed here involved comprehensive experimental studies of the spectroscopic, magnetic and kinetic properties of the synthesized single-crystal samples with different concentrations of Dy\({}^{3+}\) ions, the subsequent analysis of the measured EPR and site-selective optical spectra and of the magnetic relaxation rates in the framework of the semi-phenomenological crystal field model, and numerical solutions of the master equations. This approach enabled us to successfully reproduce the observed shapes of the hysteresis loops as well as their transformation with temperature, sweeping field rate and concentration of Dy\({}^{3+}\) ions. An important result of our work is the demonstration of the strong CR effects on the dynamic magnetization, in particular, the renormalization of the LZS incoherent transition probabilities at the anticrossing points in the electron-nuclear manifold of states in swept magnetic fields. ###### Acknowledgements. This work was supported by the Russian Science Foundation (project No. 19-12-00244). BZM is grateful to O.A. Petrenko for useful discussions and to M.V. Vanyunin for help in developing a Matlab code for solving a system of nonlinear equations of motion. The authors are grateful to M.A. Cherosov for his assistance in magnetization measurements.
2306.14643
Universal scaling dimensions for highly irrelevant operators in the Local Potential Approximation
We study $d$-dimensional scalar field theory in the Local Potential Approximation of the functional renormalization group. Sturm-Liouville methods allow the eigenoperator equation to be cast as a Schrodinger-type equation. Combining solutions in the large field limit with the Wentzel-Kramers-Brillouin approximation, we solve analytically for the scaling dimension $d_n$ of high dimension potential-type operators $\mathcal{O}_n(\varphi)$ around a non-trivial fixed point. We find that $d_n = n(d-d_\varphi)$ to leading order in $n$ as $n \to \infty$, where $d_\varphi=\frac{1}{2}(d-2+\eta)$ is the scaling dimension of the field, $\varphi$, and determine the power-law growth of the subleading correction. For $O(N)$ invariant scalar field theory, the scaling dimension is just double this, for all fixed $N\geq0$ and additionally for $N=-2,-4,\ldots \,.$ These results are universal, independent of the choice of cutoff function which we keep general throughout, subject only to some weak constraints.
Vlad-Mihai Mandric, Tim R. Morris, Dalius Stulga
2023-06-26T12:30:01Z
http://arxiv.org/abs/2306.14643v2
# Universal scaling dimensions for highly irrelevant operators in the Local Potential Approximation ###### Abstract We study \(d\)-dimensional scalar field theory in the Local Potential Approximation of the functional renormalization group. Sturm-Liouville methods allow the eigenoperator equation to be cast as a Schrodinger-type equation. Combining solutions in the large field limit with the Wentzel-Kramers-Brillouin approximation, we solve analytically for the scaling dimension \(d_{n}\) of high dimension potential-type operators \(\mathcal{O}_{n}(\varphi)\) around a non-trivial fixed point. We find that \(d_{n}=n(d-d_{\varphi})\) to leading order in \(n\) as \(n\rightarrow\infty\), where \(d_{\varphi}=\frac{1}{2}(d-2+\eta)\) is the scaling dimension of the field, \(\varphi\), and determine the power-law growth of the subleading correction. For \(O(N)\) invariant scalar field theory, the scaling dimension is just double this, for all fixed \(N\geq 0\) and additionally for \(N=-2,-4,\ldots.\) These results are universal, independent of the choice of cutoff function which we keep general throughout, subject only to some weak constraints. _Keywords--_ Renormalization group, Local potential approximation, Scalar field ###### Contents * 1 Introduction * 2 Flow equations in LPA * 2.1 Asymptotic solutions * 2.2 SL analysis * 2.3 WKB analysis * 3 O(N) scalar field theory * 3.1 SL analysis * 3.2 WKB analysis * 4 Summary and discussion * 5 Acknowledgements Introduction The functional renormalization group (FRG) is one of the most widely used approaches to study quantum field theories in non-perturbative regimes, as evidenced by an extensive literature (see, for instance, the reviews [1, 2, 3, 4, 5, 6]). Various realizations of the FRG exist [7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18], but the most prevalent version [12, 13, 14, 15, 16, 17, 18] focuses on the flow of an appropriately defined Legendre effective action \(\Gamma_{\Lambda}\) (also referred to as the effective average action), with respect to an infrared cut-off scale \(\Lambda\). This flow equation is given by: \[\frac{\partial}{\partial\Lambda}\Gamma_{\Lambda}=-\frac{1}{2}\text{Tr}\left[ \frac{1}{\Delta_{\Lambda}}\frac{\partial\Delta_{\Lambda}}{\partial\Lambda} \left(1+\Delta_{\Lambda}\Gamma_{\Lambda}^{(2)}\right)^{-1}\right]\,. \tag{1.1}\] Here, Tr stands for a space-time trace and \(\Gamma_{\Lambda}^{(2)}\) is the Hessian with respect to the fields. The propagator \(\Delta_{\Lambda}(q)=C_{\Lambda}(q)/q^{2}\) is modified by the inclusion of a multiplicative infrared cutoff function \(C_{\Lambda}(q)=C(q^{2}/\Lambda^{2})\), which is non-negative, monotonically increasing, and satisfies \(C(0)=0\) and \(C(\infty)=1\). In practical applications, some form of approximation becomes necessary. One frequently employed approximation is the Local Potential Approximation (LPA) [19, 20, 21, 22, 23, 24, 25, 26, 27, 28], which simplifies the flow equations by disregarding the momentum dependence of the effective action, except for a local potential term, \(V_{\Lambda}\). For a scalar field \(\varphi\) in \(d\) Euclidean dimensions, the effective action then takes the form: \[\Gamma_{\Lambda}=\int\!\!d^{d}x\left(\frac{1}{2}(\partial_{\mu}\varphi)^{2}+V _{\Lambda}(\varphi)\right)\,. \tag{1.2}\] While an exact analytical solution to this truncated FRG formulation is still not possible in general, the LPA enables numerical treatments that provide valuable insights into the system's behaviour. 
It allows for numerical estimates of various physical quantities, including critical exponents and the scaling equation of state [1, 2, 3, 4, 5, 6, 23, 29, 30, 31]. Moreover, the LPA serves as the initial step in a systematic derivative expansion [1, 23, 29, 30, 31], which facilitates a more comprehensive exploration of the system's properties [1, 2, 3, 4, 5, 6, 30]. Nevertheless it is important to acknowledge the limitations of the LPA and more generally the derivative expansion. Since such truncations do not correspond to a controlled expansion in some small parameter, the errors incurred can be expected to be of the same order in general as the quantities being computed. Furthermore, quantities that should be universal, and thus independent of the specific form of the cutoff, are not. It has long been understood that an exception to this is the general form of a non-trivial fixed potential \(V(\varphi)\) in the large field regime [1, 23, 29, 31], which follows from asymptotic analysis: \[V(\varphi)=A|\varphi|^{d/d_{\varphi}}+\cdots\qquad\text{as}\qquad\varphi\to\pm \infty\,, \tag{1.3}\] where the ellipses stand for subleading terms (see later). The leading term coincides with the scaling equation of state precisely at the fixed point. It is a simple consequence of dimensional analysis on using the scaling dimension \(d_{\varphi}=\frac{1}{2}(d-2+\eta)\) for the field \(\varphi\) at the fixed point, \(\eta\) being its anomalous dimension. However asymptotic analysis does not fix the amplitude \(A\) or the anomalous dimension \(\eta\), which have to be found by other means, for example by numerical solution of truncated fixed point equations. In this paper, we will show that within LPA, asymptotic analysis combined with Sturm-Liouville (SL) and Wentzel-Kramers-Brillouin (WKB) analysis,1 also allows one to determine asymptotically the scaling dimension \(d_{n}\) of the highly irrelevant (\(d_{n}\gg 1\)) eigenoperators \(\mathcal{O}_{n}=\mathcal{O}_{n}(\varphi)\) of potential-type (those containing no spacetime derivatives). Ordering them by increasing scaling dimension, we will show that \(d_{n}=n(d-d_{\varphi})\) to leading order in \(n\). In the case of \(O(N)\) invariant scalar field theory with fixed \(N\geq 0\) the dimension \(d_{n}\) is doubled to \(d_{n}=2n(d-d_{\varphi})\). The scaling dimension is thus independent of \(N\). It agrees with the result for the single scalar field since these eigenoperators are functions of \(\varphi^{2}=\varphi^{a}\varphi^{a}\), and thus pick out only the even eigenoperators (those symmetric under \(\varphi\leftrightarrow-\varphi\)) in the \(N=1\) case. We also show that the scaling dimension is \(d_{n}=2n(d-d_{\varphi})\) whenever \(N=-2k\), where \(k\) is a non-negative integer. Footnote 1: See _e.g._ ref. [32] for textbook discussion of SL methods and ref. [33] for WKB methods. Once again these results are independent of the choice of cutoff and thus universal. Indeed in this paper, we will keep the cutoff function completely general throughout, subject only to some weak technical constraints that we derive later. Note that, like the fixed point equation of state (1.3), the \(d_{n}\) take the same form, independent of the choice of fixed point, provided only that \(d_{\varphi}>0\) and that the fixed point potential is non-vanishing. We also show that the next to leading correction to \(d_{n}\) behaves as a power of \(n\). The power is universal although the coefficient of the subleading correction is not. 
Actually this approach was first employed to determine the scaling dimension of highly irrelevant eigenoperators in an \(f(R)\) approximation [34, 35] to the asymptotic safety scenario [36, 37, 38] in quantum gravity. The \(f(R)\) approximation serves as a close analogue to the LPA in this context [39, 40, 41]. However, while the resulting scaling dimensions \(d_{n}\) exhibit a simple nearly-universal form for large values of \(n\), they nevertheless retained strong dependence on the choice of cutoff. This issue can be traced back [35] to the so-called single-metric (or background field) approximation [36], where the identification of the quantum metric with the background metric is made in order to close the equations. The present paper thus completes the circle by demonstrating that, indeed, without such an approximation, the results become truly universal. Additionally, it showcases the power of these methods in a simpler context. The paper is organised as follows. We first analyse the functional renormalization group equations for a single scalar field in the LPA. From the eigenoperator equation we write the resulting SL equation in Schrodinger form and thus, by taking the large field limit, deduce the asymptotic form of the renormalization group eigenvalues in the WKB limit. Sec. 3 extends the analysis to \(O(N)\) scalar field theory using the same approach. Finally in sec. 4 we conclude and discuss the results, placing them in a wider context. ## 2 Flow equations in LPA The LPA approximation amounts to setting the field \(\varphi\) in the Hessian to a spacetime constant, thus dropping from a derivative expansion all terms that do not take the form of a correction to the potential. The flow equation for \(V_{\Lambda}(\varphi)\) then takes the form: \[\left(\partial_{t}+d_{\varphi}\varphi\frac{\partial}{\partial\varphi}-d\right)V_{ \Lambda}(\varphi)=-\frac{1}{2}\int\frac{d^{d}q}{(2\pi)^{d}}\frac{\dot{\Delta}}{ \Delta}\frac{1}{1+\Delta V^{\prime\prime}_{\Lambda}(\varphi)}\,, \tag{2.1}\] where \(\partial_{t}=-\Lambda\partial_{\Lambda}\), \(t\) being the renormalization group 'time' which, following [7], we have chosen to flow towards the IR. Here the momentum, potential and field are already scaled by the appropriate power of \(\Lambda\) to make them dimensionless. Then \(\Delta=C(q^{2})/q^{2}\) no longer depends on \(\Lambda\). The same is true of \(\partial_{t}\Delta_{\Lambda}\), which after scaling we write as \(\dot{\Delta}\), where \[\dot{\Delta}=2\,C^{\prime}(q^{2})\,. \tag{2.2}\] Since \(C(q^{2})\) is monotonically increasing, we have that \(\dot{\Delta}>0\). The scaling dimension of the field is \(d_{\varphi}=\frac{1}{2}(d-2+\eta)\), where \(\eta\) is the anomalous dimension. Since \(\eta\) arises from the renormalization group running of the field, and is typically inferred from corrections to the kinetic term, one would naturally conclude that it vanishes in LPA [2, 7, 19, 20, 21, 22, 23, 24, 25, 26]. Nevertheless, as noticed in refs. [27, 28], this assumption is not necessary. The flow equation (2.1) is still a mathematically consistent equation with \(\eta\neq 0\). However, since we cannot determine \(\eta\) directly from (2.1), its value needs to be input from elsewhere (either from experiment or other theoretical studies). We will follow this strategy, in the expectation that it improves the accuracy of our final estimates for \(d_{n}\). At a FP (fixed point) \(V_{\Lambda}(\varphi)=V(\varphi)\), and \(\eta\), have no renormalization group time dependence. 
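Although the cutoff is kept general here, it may help to see how such a non-trivial fixed point is located in practice. A minimal shooting sketch, assuming for concreteness the optimised (Litim) cutoff, for which the momentum integral is analytic and, after absorbing an overall constant into \(V\) and \(\varphi\), the fixed-point condition becomes \(dV=d_{\varphi}\varphi V^{\prime}+1/(1+V^{\prime\prime})\); this specific form is an assumption of the sketch only, and is not used in the analysis below:

```python
import numpy as np

# Shooting sketch for a non-trivial LPA fixed point, assuming the Litim
# cutoff normalisation  d*V = d_phi*phi*V' + 1/(1 + V'').
# Generic initial data V(0) = v0, V'(0) = 0 hit a singularity at a finite
# phi_c(v0); the fixed point corresponds to the v0 maximising phi_c.
d, d_phi = 3.0, 0.5                      # eta = 0 in this illustration

def phi_c(v0, h=2e-4, phi_max=6.0):
    V, p, phi = v0, 0.0, 0.0             # p = V'(phi)
    while phi < phi_max:
        denom = d * V - d_phi * phi * p  # equals 1/(1 + V'') at a FP
        if denom < 1e-10:
            return phi                   # V'' has blown up: solution dies
        p2 = 1.0 / denom - 1.0           # V'' from the fixed-point equation
        V += h * p
        p += h * p2
        phi += h
    return phi_max                       # survived the whole range

v0s = np.linspace(0.2, 0.8, 121)         # guessed bracket for V(0)
cs = [phi_c(v) for v in v0s]
i = int(np.argmax(cs))
print("critical V(0) near %.4f (phi_c = %.2f)" % (v0s[i], cs[i]))
# Re-scanning a narrower window around the maximum refines v0; the
# surviving solution develops the asymptotics V ~ A*phi^6 of (1.3) at d = 3.
```

The universal results derived below require none of this machinery, only the existence of such a solution with \(A\neq 0\).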
The eigenoperator equation follows from linearising about a FP: \[V_{\Lambda}(\varphi)=V(\varphi)+\varepsilon\,v(\varphi)\,\mathrm{e}^{\lambda t }\,, \tag{2.3}\] \(\varepsilon\) being infinitesimal. Here \(\lambda\) is the RG eigenvalue. It is the scaling dimension of the corresponding coupling, and is positive (negative) for relevant (irrelevant) operators. The scaling dimension of the operator \(v(\varphi)\) itself is then \(d-\lambda\). We write the eigenoperator equation in the same form as refs. [34, 35, 41]: \[-a_{2}(\varphi)v^{\prime\prime}(\varphi)+a_{1}(\varphi)v^{\prime}(\varphi)+a_{ 0}(\varphi)v(\varphi)=(d-\lambda)v(\varphi)\,, \tag{2.4}\] where the \(\varphi\)-dependent coefficients multiplying the eigenoperators are given by: \[a_{0}(\varphi) =0\,, \tag{2.5}\] \[a_{1}(\varphi) =d_{\varphi}\varphi\,,\] (2.6) \[a_{2}(\varphi) =\frac{1}{2}\int\frac{d^{d}q}{(2\pi)^{d}}\frac{\dot{\Delta}}{(1+ \Delta V^{\prime\prime})^{2}}>0\,, \tag{2.7}\] and we have noted that \(a_{2}\) is positive. We can now repeat the analysis carried out in [34, 35, 41] to solve for \(\lambda\) in the case of high dimension eigenoperators. ### Asymptotic solutions For large \(\varphi\), the RHS of (2.1) can be neglected. Thus at a fixed point, the equation reduces to a first order ODE (ordinary differential equation) which is easily solved. It gives the first term (1.3) in an asymptotic series solution [29]: \[V(\varphi)=A|\varphi|^{m}+O\left(|\varphi|^{2-m}\right)\quad\text{as}\quad\varphi \rightarrow\pm\infty\,, \tag{2.8}\] where for convenience we introduce \[m=d/d_{\varphi}\,, \tag{2.9}\] and \(A\) is a real constant (that is determined by solving for the full FP solution). The subleading terms arise from iterating the leading order contribution to next order. Of course there is always the trivial \(V(\varphi)\equiv 0\) fixed point solution, corresponding to the Gaussian fixed point. We will not be interested in that (the scaling dimensions in that case are exactly known and reviewed in the discussion in sec. 4). Instead we focus on non-trivial FP solutions for which \(A\neq 0\). In principle, \(A\) could be different in the two limits \(\varphi\rightarrow\pm\infty\), although in practice the fixed point potentials (2.8) are symmetric. Anyway, we will see that \(A\) drops out of the analysis in a few further steps. It is helpful for the following to note that \(m>3\). Neglecting \(\eta\) (typically \(\eta\ll 1\), see _e.g._[42]), we see that \(m\) is a decreasing function of \(d\) for all \(d>2\). In practice, non-trivial FP solutions only exist for \(d<4\) (see _e.g._[23]). In the limit \(d\to 4^{-}\), \(\eta\to 0\) (by the \(\epsilon\) expansion [42]) and thus \(m\to 4\). In \(d=2\) dimensions, the asymptotic solution (2.8) corresponds to that of a unitary minimal model [43, 44]. The one with the largest anomalous dimension is that of the Ising model universality class which has \(\eta=1/4\), thus in \(d=2\) dimensions we have \(m\geq 8\) for all the unitary minimal models. Note that the solution (2.8) has a single free parameter even though the FP equation is a (non-linear) second order ODE. The second parameter, if it exists, can be deduced by linearising around (2.8), writing \(V(\varphi)\mapsto V(\varphi)+\delta V(\varphi)\), and solving the flow equation (2.1) at the FP this time for \(\delta V\). 
Since \(\delta V\) satisfies a _linear_ second order ODE and one solution is already known, namely \(\delta V=\partial_{A}V(\varphi)\), it is easy to find the solution that corresponds at the linearised level to the missing parameter [23, 29]. However, one then discovers that these 'missing' linearised solutions are rapidly growing exponentials. Such a linearised perturbation is not valid asymptotically since for diverging \(\varphi\) it is much larger than the solution (2.8) we perturbed around. Hence, the FP asymptotic solutions only have the one free parameter, \(A\). Substituting (2.8) into (2.7), we see that asymptotically \(a_{2}(\varphi)\) scales as follows: \[a_{2}(\varphi)=F\,|\varphi|^{2(2-m)}+O\left(|\varphi|^{3(2-m)}\right)\quad\text{as}\quad\varphi\rightarrow\pm\infty\,, \tag{2.10}\] where \(F\) is positive and cutoff dependent: \[F=\frac{1}{2\left(m(m-1)A\right)^{2}}\int\frac{d^{d}q}{(2\pi)^{d}}\frac{\dot{\Delta}}{\Delta^{2}}=-\frac{1}{\left(m(m-1)A\right)^{2}}\int\frac{d^{d}q}{(2\pi)^{d}}\,q^{4}\,\frac{\partial}{\partial q^{2}}C^{-1}(q^{2})\,. \tag{2.11}\] We will assume that the integral converges. This imposes some weak constraints on the cutoff profile. From (2.11), we see that we require \(C(q^{2})\) to vanish slower than \(q^{d+2}\) as \(q\to 0\), and \(C\to 1\) faster than \(1/q^{d+2}\) as \(q\rightarrow\infty\). This is true for example for the popular form of additive (_i.e._ mass-type) cutoff [13] (which was the one used in the analogous \(f(R)\) analyses in refs. [34, 35]): \[r(q^{2})=\frac{q^{2}}{\exp(aq^{2b})-1}\,,\quad a>0,\,b\geq 1\,, \tag{2.12}\] provided also we set \(b<\frac{1}{2}(d+2)\), the relation to \(C(q^{2})\) being \(q^{2}C^{-1}(q^{2})=q^{2}+r(q^{2})\). Given that \(a_{2}(\varphi)\) vanishes asymptotically, it is tempting to neglect the \(a_{2}\) term in (2.4). We will shortly justify this. By neglecting the \(a_{2}\) term, the ODE becomes linear first order, giving a unique solution up to normalization. Thus we deduce that the eigenoperators asymptotically scale as a power of the field: \[v(\varphi)\propto|\varphi|^{\frac{d-\lambda}{d_{\varphi}}}+\cdots\,, \tag{2.13}\] where the ellipses stand for subleading corrections. The neglect of the \(a_{2}\) term is justified as follows. The missing solution is one that grows exponentially (one for which \(a_{2}(\varphi)v^{\prime\prime}(\varphi)\) cannot be neglected). Since the ODE is linear, these are allowed solutions to (2.4), but they are ruled out because they do not evolve multiplicatively in the RG [30, 34, 35, 45, 46, 1]. Now, the asymptotic solution (2.13) imposes two boundary conditions (one for each limit \(\varphi\to\pm\infty\)) on the second order ODE (2.4), but since the ODE is linear this overconstrains the equation2 which thus leads to quantisation of the RG eigenvalue \(\lambda\). We index the solutions as \(v_{n}(\varphi)\), ordering them so that \(\lambda_{n}\) decreases as \(n\) increases. We can now perform an SL transformation and deduce the asymptotic dependence of the eigenvalues \(\lambda_{n}\) on \(n\), as \(n\to\infty\). Footnote 2: We can see this for example by imposing a normalization condition on \(v\). ### SL analysis We can rewrite the eigenvalue equation (2.4) in a SL form by multiplying it with the SL weight function \[w(\varphi)=\frac{1}{a_{2}(\varphi)}\exp\left\{-\int_{0}^{\varphi}\frac{a_{1}(\varphi^{\prime})}{a_{2}(\varphi^{\prime})}\,d\varphi^{\prime}\right\}\,, \tag{2.14}\] which is always positive due to the positivity of \(a_{2}\).
Then the eigenvalue equation becomes \[-(a_{2}(\varphi)w(\varphi)v^{\prime}(\varphi))^{\prime}=(d-\lambda)w(\varphi)v (\varphi)\,. \tag{2.15}\] The SL operator on the left, \(L=-\frac{d}{d\varphi}\left(a_{2}w\frac{d}{d\varphi}\,\right)\), is self adjoint when acting on the space spanned by the eigenoperators, _i.e._ it satisfies \[\int_{-\infty}^{\infty}\!\!\!d\varphi\,u_{1}(\varphi)\,Lu_{2}(\varphi)=\int_{ -\infty}^{\infty}\!\!\!d\varphi\,u_{2}(\varphi)\,Lu_{1}(\varphi)\,, \tag{2.16}\] when the \(u_{i}\) are linear combinations of the eigenoperators. This is so because the boundary terms at infinity, generated by integration by parts, vanish in this case. This follows because, from (2.13), the \(u_{i}\) diverge at worst as a power of \(\varphi\), whilst \(w(\varphi)\to 0\) exponentially fast as \(\varphi\to\pm\infty\). Thus from SL analysis [32], we know that the eigenvalues \(\lambda_{n}\) are real, discrete, with a most positive (relevant) eigenvalue and an infinite tower of ever more negative (more irrelevant) eigenvalues, \(\lambda_{n}\to-\infty\) as \(n\to\infty\)[30]. Let us define a 'coordinate' \(x\): \[x=\int_{0}^{\varphi}\frac{1}{\sqrt{a_{2}(\varphi^{\prime})}}\,d\varphi^{\prime} \tag{2.17}\] (always taking the positive root in fractional powers). Defining the wave-function as \[\psi(x)=a_{2}^{1/4}(\varphi)w^{1/2}(\varphi)v(\varphi)\,, \tag{2.18}\] enables us to recast (2.15) as: \[-\frac{d^{2}\psi(x)}{dx^{2}}+U(x)\psi(x)=(d-\lambda)\psi(x)\,. \tag{2.19}\] This is a one-dimensional time-independent Schrodinger equation for a particle of mass \(m=1/2\), with energy \(E=d-\lambda\)_i.e._ just the eigenoperator scaling dimension, and with potential [34, 35, 41]: \[U(x)=\frac{a_{1}^{2}}{4a_{2}}-\frac{a_{1}^{\prime}}{2}+a_{2}^{\prime}\left( \frac{a_{1}}{2a_{2}}+\frac{3a_{2}^{\prime}}{16a_{2}}\right)-\frac{a_{2}^{ \prime\prime}}{4}\,, \tag{2.20}\] where the terms on the right hand side are functions of \(\varphi\). From the limiting behaviour of \(a_{2}(\varphi)\), (2.10), we see that asymptotically the coordinate \(x\) scales as \[x=\int_{0}^{\varphi}\left(\frac{|\varphi^{\prime}|^{m-2}}{\sqrt{F}}+O(1) \right)d\varphi^{\prime}=\pm\frac{|\varphi|^{m-1}}{(m-1)\sqrt{F}}+O(|\varphi| )\quad\text{as}\quad\varphi\to\pm\infty\,, \tag{2.21}\] so in particular when \(\varphi\to\pm\infty\) we have \(x\to\pm\infty\). On the right hand side of (2.20), the first term dominates at leading order (LO) and next-to-leading order (NLO). Since asymptotically, \[\frac{a_{1}^{2}(\varphi)}{4a_{2}(\varphi)}=\frac{d_{\varphi}^{2}}{4F}|\varphi| ^{2m-2}+O(|\varphi|^{m})\,, \tag{2.22}\] we thus find that \[U(x)=\frac{1}{4}(d-d_{\varphi})^{2}x^{2}+O(|x|^{1+\frac{1}{m-1}})\quad\text{ as}\quad x\to\pm\infty\,. \tag{2.23}\] To LO, this is the potential of a simple harmonic oscillator of the form \(\frac{1}{2}m\omega^{2}x^{2}\), where \[\omega=d-d_{\varphi}=\frac{1}{2}(d+2-\eta)\,. \tag{2.24}\] ### WKB analysis We can now use WKB analysis to compute the asymptotic form of the energy levels, a.k.a. operator scaling dimensions, \(E_{n}\), at large \(n\). This follows from solving the equality \[\int_{-x_{n}}^{x_{n}}\!\!\!\!dx\,\sqrt{E_{n}-U(x)}=\left(n+\frac{1}{2}\right) \pi\,, \tag{2.25}\] for the total phase of the wave oscillations described by \(\psi(x)\), in the limit of large \(E_{n}\)[33]. Here \(x_{n}\) are the classical turning points, _i.e._ such that \(E_{n}=U(\pm\,x_{n})\). Now, the above integral is dominated by the regions close to the turning points, where we can substitute the asymptotic form (2.23). 
Including the subleading correction proportional to some constant \(\gamma\) (that depends on the cutoff profile) the integral is \[\frac{\omega}{2}\int_{-x_{n}}^{x_{n}}\!\!\!\!dx\,\sqrt{x_{n}^{2}+\gamma x_{n}^ {1+\frac{1}{m-1}}-x^{2}-\gamma|x|^{1+\frac{1}{m-1}}}=\frac{\omega}{2}x_{n}^{2} \int_{-1}^{1}\!\!\!dy\,\sqrt{1-y^{2}+\gamma x_{n}^{\frac{1}{m-1}-1}(1-|y|^{1+ \frac{1}{m-1}})}\,. \tag{2.26}\] Since the \(x_{n}\) are also large we can now evaluate the right hand side and thus from (2.25) we get the asymptotic relation between \(x_{n}\) and \(n\): \[\frac{\omega\pi}{4}x_{n}^{2}+O\left(x_{n}^{1+\frac{1}{m-1}}\right)=n\pi\,. \tag{2.27}\] Hence, using (2.23), (2.24) and (2.27), the scaling dimension of the eigenoperators takes the form \[d_{n}=E_{n}=d-\lambda_{n}=U(x_{n})=n\omega+O\left(n^{\frac{m}{2(m-1)}}\right)= n(d-d_{\varphi})+O\left(n^{\frac{m}{2(m-1)}}\right)\quad\text{as}\quad n\to \infty\,. \tag{2.28}\] The subleading correction to the critical exponents contain information about the cutoff via the constant \(\gamma\) introduced in (2.26). However, at leading order the result is independent of the cutoff, and is hence universal. ## 3 O(N) scalar field theory Now let us apply the same treatment to \(N\) scalar fields \(\varphi^{a}\) (\(a=1,\ldots,N\)) with an \(O(N)\) invariant potential \(V_{\Lambda}(\varphi^{2})=V_{\Lambda}(\rho)\), in the LPA. We use the shorthand \(\rho=\varphi^{a}\varphi^{a}=\varphi^{2}\). The flow equation (2.1) becomes [47, 31]: \[\left(\partial_{t}-d+2d_{\varphi\rho}\frac{\partial}{\partial\rho}\right)V_{ \Lambda}(\rho)=-\frac{1}{2}\int\frac{d^{d}q}{(2\pi)^{d}}\frac{\dot{\Delta}}{ \Delta}\left(M^{-1}\right)^{aa}\,, \tag{3.1}\] where the matrix \(M\) is given by: \[M^{ab}=\delta^{ab}+\Delta\frac{\partial^{2}V_{\Lambda}(\rho)}{\partial\varphi ^{a}\partial\varphi^{b}}=\delta^{ab}+2\Delta\left[\delta^{ab}V_{\Lambda}^{ \prime}(\rho)+2\varphi^{a}\varphi^{b}V_{\Lambda}^{\prime\prime}(\rho)\right]\,. \tag{3.2}\] Inverting and tracing, yields: \[\left(M^{-1}\right)^{aa}=\frac{N-1}{1+2\Delta V_{\Lambda}^{\prime}(\rho)}+ \frac{1}{1+2\Delta V_{\Lambda}^{\prime}(\rho)+4\Delta\rho V_{\Lambda}^{\prime \prime}(\rho)}\,. \tag{3.3}\] In the limit of large \(\rho\), the right hand side of the flow equation (3.1) can be neglected at leading order. This implies that a FP solution \(V_{\Lambda}(\rho)=V(\rho)\) takes the following asymptotic form: \[V(\rho)=A\rho^{\frac{m}{2}}+O\left(\rho^{1-\frac{m}{2}}\right)\quad\text{as} \quad\rho\to\infty\,, \tag{3.4}\] where as before the subleading term has been calculated by iterating the leading contribution to next order. The RG eigenvalue equation follows by linearising (3.1) around the fixed point solution, \[V_{\Lambda}(\rho)=V(\rho)+\varepsilon\,v(\rho)\,\mathrm{e}^{\lambda t}\,, \tag{3.5}\] giving an equation for \(v(\rho)\) with the same structure as (2.4), _i.e._ \[-a_{2}(\rho)v^{\prime\prime}+a_{1}(\rho)v^{\prime}+a_{0}(\rho)v=(d-\lambda)v\,, \tag{3.6}\] the same value for \(a_{0}(\rho)=0\), but different expressions for \(a_{1}(\rho)\), \[a_{1}(\rho)=2d_{\varphi}\rho-\int\frac{d^{d}q}{(2\pi)^{d}}\dot{\Delta}\left[ \frac{1}{\left(1+2\Delta V^{\prime}+4\Delta\rho V^{\prime\prime}\right)^{2}}+ \frac{N-1}{\left(1+2\Delta V^{\prime}\right)^{2}}\right]\,, \tag{3.7}\] and \(a_{2}(\rho)\), which however is again always positive: \[a_{2}(\rho)=\int\frac{d^{d}q}{(2\pi)^{d}}\frac{2\dot{\Delta}\rho}{\left(1+2 \Delta V^{\prime}+4\Delta\rho V^{\prime\prime}\right)^{2}}\,. 
\tag{3.8}\] Using the asymptotic fixed point solution (3.4) (and assuming \(A\neq 0\)) we get that asymptotically \(a_{2}\) scales as follows: \[a_{2}(\rho)=4F\rho^{3-m}+O\left(\rho^{4-\frac{3m}{2}}\right)\quad\text{as} \quad\rho\to\infty\,, \tag{3.9}\] where \(F\) was already defined in (2.11). By similar arguments to before, we see that \(m>3\) in practice, so this implies \(a_{2}(\rho)\to 0\). We also find that \(a_{1}\) scales as follows: \[a_{1}(\rho)=2\,d_{\varphi}\rho+O\left(\rho^{2-m}\right)\quad\text{as}\quad \rho\to\infty\,. \tag{3.10}\] If we substitute \(\rho=\varphi^{2}\) into the above asymptotic expansions, they differ from the large \(\varphi\) behaviour (2.6) of \(a_{1}(\varphi)\) and (2.10) of \(a_{2}(\varphi)\). However they reproduce the previous results once we transform the ODE (3.6) by changing variables \(\rho=\varphi^{2}\). Thus by the same arguments as before, _cf._ (2.13), we also know that for \(\rho\to\infty\), we must have \[v(\rho)\propto\rho^{\frac{d-\lambda}{2d_{\varphi}}}+\cdots\,. \tag{3.11}\] However, this now imposes only one boundary condition on the linear ODE (3.6) since \(\rho\) is restricted to be non-negative. On the other hand we see from (3.8) that \(a_{2}(0)=0\), so the ODE has a so-called fixed singularity at \(\rho=0\). In order to ensure that \(v(\rho)\) remains non-singular at this point, an additional boundary condition is then required: \[a_{1}(0)v^{\prime}(0)=(d-\lambda)v(0)\,. \tag{3.12}\] Now we again have two boundary conditions, overconstraining the equation, and leading to quantisation of the RG eigenvalue \(\lambda\). ### SL analysis The last step is to perform the SL analysis, which also differs because of the \(\rho=0\) boundary. For small \(\rho\) we have \[a_{2}(\rho)=2G\rho+O(\rho^{2})\qquad\text{and}\qquad a_{1}(\rho)=-GN+O(\rho)\,, \tag{3.13}\] where we have set \[G=\int\frac{d^{d}q}{(2\pi)^{d}}\frac{\dot{\Delta}}{\left[1+2\Delta V^{\prime}( 0)\right]^{2}}\,. \tag{3.14}\] Note that \(G\) is of course positive. (By Taylor expanding (3.1) one sees that its convergence is guaranteed for any such solution to the flow equation.) The SL weight function now takes the form \[w(\rho)=\frac{1}{a_{2}(\rho)}\exp\left\{-\int_{\rho_{0}}^{\rho}\!\!\!d\rho^{ \prime}\,\frac{a_{1}(\rho^{\prime})}{a_{2}(\rho^{\prime})}\right\}\,, \tag{3.15}\] where by (3.13) a non-zero lower limit, \(\rho_{0}>0\), is required to avoid the integral diverging (when \(N\neq 0\)). Using \(w(\rho)\) we can now cast (3.6) in SL form (2.15). However, for the SL operator to be self-adjoint, we need the boundary contributions that appear on integration by parts, to vanish. This is still true for large field since as \(\rho\to\infty\), the eigenoperators diverge at worst as a power, whilst from (3.9) we have \(a_{2}(\rho)\to 0\), and thus \(w(\rho)\to 0\) exponentially fast. At the \(\rho=0\) boundary we require:3 Footnote 3: Using (3.12) and (3.13), this can be reduced to \(\lim_{\rho\to 0}a_{2}(\rho)w(\rho)(\lambda_{i}-\lambda_{j})\,v_{i}(\rho)v_{j}( \rho)=0\) (when \(N\neq 0\)). \[\lim_{\rho\to 0}a_{2}(\rho)w(\rho)\left(v_{i}(\rho)v_{j}^{\prime}(\rho)-v_{j}( \rho)v_{i}^{\prime}(\rho)\right)=0\,, \tag{3.16}\] for any two eigenfunctions \(v_{i}(\rho)\) and \(v_{j}(\rho)\). This is true for all \(N>0\) since by (3.13) and (3.15) we see that for small \(\rho\), \[a_{2}(\rho)w(\rho)\propto\rho^{N/2}\left[1+O(\rho)\right]\,. \tag{3.17}\] We have thus determined that the SL operator is self-adjoint for all \(N>0\). 
Actually, \(N=0\) is also interesting since it corresponds to the universality class of fluctuating long polymers [42]. In this case, the above analysis shows that \(a_{2}(0)w(0)>0\), which would appear to imply that (3.16) is no longer satisfied. However from (3.13) we see that \(a_{1}(0)=0\) now and thus, from (3.12), either \(\lambda_{i}=d\) or \(v_{i}(0)=0\)[31]. The first possibility corresponds to the uninteresting solution \(v(\rho)\equiv 1\), _i.e._ the unit operator, which we discard. All the other eigenoperators must thus satisfy \(v_{i}(0)=0\), and so (3.16) is satisfied in this reduced space. Therefore, with this one proviso, the SL operator is actually self-adjoint for all \(N\geq 0\). For general \(N<0\), the SL operator fails to be self-adjoint, and thus SL analysis is no longer applicable. However for \(N=-2k\), \(k\) a non-negative integer, something special happens. The first \(k+1\) eigenoperators with the lowest scaling dimension turn out to have exactly soluble scaling dimensions, in fact coinciding with the Gaussian ones [48, 49, 50]. (The case \(N=0\) above is the first example, the lowest dimension operator being the unit operator with scaling dimension zero.) Again, the SL operator is self-adjoint in the remainder of the space. For example for \(N=-2\), one knows from ref. [31] that the remaining eigenoperators satisfy \(v_{i}(0)=v_{i}^{\prime}(0)=0\), and thus \(v_{i}(\rho)\propto\rho^{2}\) for small \(\rho\), whilst for \(N=-4\) boundary conditions force the remaining eigenoperators to satisfy \(v_{i}(\rho)\propto\rho^{3}\) for small \(\rho\). From that analysis it is clear that in general at \(N=-2k\), we have that the remaining operators satisfy \[v_{i}(\rho)\propto\rho^{k+1}\qquad\text{as}\qquad\rho\to 0\,. \tag{3.18}\] Combining these observations with (3.16) and (3.17), we see that the SL operator is indeed self-adjoint in the reduced space defined by excluding the first \(k+1\) operators. The SL equation can now be recast in the same way as before, using (2.17) for \(x\) and (2.18) for \(\psi(x)\) (except for the obvious replacement of \(\varphi\) by \(\rho\)). The resulting Schrodinger equation is then precisely as before, _viz._ (2.19), and the potential \(U(x)\) also takes precisely the same form in terms of the \(a_{i}\), _viz._ (2.20). However the \(\rho=0\) boundary turns into an \(x=0\) boundary since, by (3.13) and (2.17), we have \[x=\sqrt{2\rho/G}+O\left(\rho^{\frac{3}{2}}\right)\quad\text{as}\quad\rho\to 0\,. \tag{3.19}\] Thus, using \(a_{2}\) from (3.13) and \(a_{2}w\) from (3.17), we see that \[\psi(x)\propto x^{\frac{N-1}{2}}v(x) \tag{3.20}\] for small \(x\). Hence for all \(N>1\), \(\psi(x)\) vanishes as \(x\to 0\). On taking into account the behaviour (3.18) we see that in the reduced space, \(\psi(x)\) also vanishes for the special cases \(N=-2k\). In this limit the leading contributions to the potential come from the first, third and fourth terms in (2.20), and thus we find: \[U(x)=\frac{(N-1)(N-3)}{4\,x^{2}}+O(1)\quad\text{as}\quad x\to 0\,. \tag{3.21}\] The cases \(N=1,3\) are exceptional since this leading behaviour then vanishes, whilst the range \(1<N<3\) will need a separate treatment because the potential is then unbounded from below. 
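The doubled spacing can be checked directly. Anticipating the result of the next paragraph, that the large-\(x\) behaviour of \(U(x)\) is again harmonic, a model potential gluing (3.21) to the harmonic tail (2.23) can be diagonalised by finite differences. A minimal sketch (the glued form is an assumption used only for this illustration):

```python
import numpy as np

# Finite-difference check of the level spacing on the half-line for the
# model potential  U(x) = (N-1)(N-3)/(4 x^2) + omega^2 x^2 / 4,
# i.e. the small-x form (3.21) glued to the harmonic large-x form (2.23);
# we solve -psi'' + U psi = E psi with psi(0) = psi(L) = 0.
omega, N = 2.5, 5.0                  # e.g. omega = d - d_phi at d = 3, eta = 0
L, npts = 12.0, 1500
x = np.linspace(L / npts, L, npts)   # grid excludes the singular point x = 0
h = x[1] - x[0]
U = (N - 1.0) * (N - 3.0) / (4.0 * x**2) + 0.25 * omega**2 * x**2

H = (np.diag(2.0 / h**2 + U)
     - np.diag(np.ones(npts - 1) / h**2, 1)
     - np.diag(np.ones(npts - 1) / h**2, -1))
E = np.linalg.eigvalsh(H)[:6]
print("E_n      :", np.round(E, 3))
print("spacings :", np.round(np.diff(E), 3), "  (2*omega =", 2 * omega, ")")
# The spacing is 2*omega, independently of N; for this model potential the
# exact levels are omega*(2n + N/2), so the doubling relative to the
# full-line oscillator (spacing omega) is manifest.
```

The \(N\)-independence of the spacing, with \(N\) entering only an additive offset, is precisely the pattern derived analytically below.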
At the other end of \(x\)'s range, we find that \[x=\int_{0}^{\rho}\!\!\!d\rho^{\prime}\left(\frac{(\rho^{\prime})^{\frac{1}{2}(m-3)}}{2\sqrt{F}}+O\left(\rho^{\prime-\frac{1}{2}}\right)\right)=\frac{\rho^{\frac{1}{2}(m-1)}}{(m-1)\sqrt{F}}+O\left(\rho^{\frac{1}{2}}\right)\quad\text{as}\quad\rho\to\infty\,. \tag{3.22}\] Identifying \(\rho=\varphi^{2}\), this is the same formula (2.21) as before. The potential \(U(x)\) is again dominated by the first term in (2.20), both at LO and NLO. Substituting the asymptotic expressions (3.10) and (3.9) for \(a_{1}\) and \(a_{2}\), we find exactly the same formula (2.23) for the large \(x\) behaviour of \(U(x)\). In particular the leading term is again that of a simple harmonic oscillator with angular frequency \(\omega=d-d_{\varphi}\). ### WKB analysis For the cases \(N>3\), \(0<N<1\) and \(N=-2k\), we can now proceed with the WKB analysis in the usual way. In this case we have for the total phase of the wave function: \[\int_{x_{n}^{-}}^{x_{n}^{+}}\!\!\!dx\,\sqrt{E_{n}-U(x)}=\left(n+\frac{1}{2}\right)\pi\,, \tag{3.23}\] where \(x_{n}^{-}\) and \(x_{n}^{+}\) are the classical turning points, _i.e._\(E_{n}=d-\lambda_{n}=U(x_{n}^{-})=U(x_{n}^{+})\). In contrast to the previous case, the potential is not symmetric and there is no simple relation between \(x_{n}^{-}\) and \(x_{n}^{+}\). In the large \(n\) limit, the contribution from the right hand boundary gives half of what we obtained before. To see this in detail, let \(x_{0}^{+}\) be some fixed finite value but sufficiently large to trust the asymptotic form (2.23) of the potential, then the contribution from the right hand boundary is \[\int_{x_{0}^{+}}^{x_{n}^{+}}\!\!\!dx\,\sqrt{E_{n}-U(x)}=\frac{\omega}{2}(x_{n}^{+})^{2}\int_{x_{0}^{+}/x_{n}^{+}}^{1}\!\!\!dy\,\sqrt{1-y^{2}+\gamma(x_{n}^{+})^{\frac{1}{m-1}-1}(1-\left|y\right|^{1+\frac{1}{m-1}})}\,. \tag{3.24}\] Taking into account the multiplying factor of \((x_{n}^{+})^{2}\) we see that the lower limit \(x_{0}^{+}/x_{n}^{+}\) of the integral can be set to zero, since the correction is of order \(O(x_{n}^{+})\) which is smaller than that given by the \(\gamma\) correction. Thus we get half the integral in (2.26) (with \(x_{n}\) replaced by \(x_{n}^{+}\)) giving half the left hand side of (2.27): \[\int_{x_{0}^{+}}^{x_{n}^{+}}\!\!\!\!dx\,\sqrt{E_{n}-U(x)}=\frac{\omega\pi}{8}(x_{n}^{+})^{2}+O\left((x_{n}^{+})^{1+\frac{1}{m-1}}\right)\,. \tag{3.25}\] Using the asymptotic form of the potential, we see that the leading term can be written as \(\pi E_{n}/(2\omega)\). In the large \(n\) limit, the left hand boundary makes a contribution that can be neglected in comparison. To see this let \(x_{0}^{-}\) be some fixed finite value but sufficiently small to use (3.21). Then the contribution from the left hand boundary is \[\int_{x_{n}^{-}}^{x_{0}^{-}}\!\!\!dx\,\sqrt{E_{n}-U(x)}=\frac{1}{2}\sqrt{(N-1)(N-3)}\int_{1}^{x_{0}^{-}/x_{n}^{-}}\!\!\!dy\,\left(\frac{\sqrt{y^{2}-1}}{y}+O(x_{n}^{-})\right)\,. \tag{3.26}\] Since \(x_{n}^{-}\) is vanishing for large \(E_{n}\), we see that this integral is \(O(1/x_{n}^{-})\) or, using again the relation (3.21), \(O(E_{n}^{1/2})\). That only leaves the portion of the integral that goes from \(x_{0}^{-}\) to \(x_{0}^{+}\), but since these boundaries are fixed and finite, we see that this part also grows as \(\sqrt{E_{n}}\) and thus it too can be neglected in comparison to (3.25). Therefore asymptotically the integral in (3.23) is given by (3.25).
Inverting the relation to find \((x_{n}^{+})^{2}\) asymptotically in terms of \(n\), we thus find \[d_{n}=E_{n}=d-\lambda_{n}=U(x_{n}^{+})=2n\omega+O\left(n^{\frac{m}{2(m-1)}}\right)=2n(d-d_{\varphi})+O\left(n^{\frac{m}{2(m-1)}}\right)\quad\text{as}\quad n\to\infty\,, \tag{3.27}\] _i.e._ precisely double the value we found for a single component field in (2.28) and independent of \(N\). We see that technically this arises because the WKB integral is precisely half as large in the \(O(N)\) case, the leading contribution coming from the \(x_{n}^{+}\) boundary only.

Recall that at \(N=1,3\), the leading behaviour (3.21) of \(U(x)\) is no longer applicable. Since the potential is now finite as \(x\to 0\), it is clear from the above analysis that the left hand boundary continues to contribute at most \(O(E_{n}^{1/2})\sim\sqrt{n}\) and so can be neglected. Thus we see that (3.27) applies also to these exceptional cases. In particular for \(N=1\) we find twice the previous scaling dimension as a function of large index \(n\). This is in agreement with that single field result however, because these eigenoperators are a function of \(\varphi^{2}\) only. Hence for a single component field, the current \(n\) indexes only the even eigenoperators (those symmetric under \(\varphi\leftrightarrow-\varphi\)).

Finally, let us show that our result (3.27) is also applicable to the range \(1<N<3\). Although in this case, from (3.21), the potential \(U(x)\to-\infty\) as \(x\to 0\), we know from (3.20) that the solutions we need have \(\psi(x)\) vanishing there. These solutions are consistent with the Schrodinger equation (2.19) because for small \(x\) we have, by (3.20), a diverging second derivative: \[-\frac{d^{2}\psi(x)}{dx^{2}}\propto-\frac{(N-1)(N-3)}{4\,x^{2}}\psi(x)\,, \tag{3.28}\] which is precisely the right behaviour to cancel the divergence in the Schrodinger equation coming from the \(U(x)\psi(x)\) term. Meanwhile the \(v(x)\) factor in (3.20) is well behaved at small \(x\), behaving similarly to the above cases. Therefore we are only neglecting a subleading contribution to the total phase if we work instead with a modified WKB integral where we replace the lower limit in (3.23) with some finite value \(x_{0}^{-}\). By the above analysis we then recover (3.27) again. In this way we have shown that the result (3.27) is actually applicable for all \(N\geq 0\) and to the special cases \(N=-2k\) (where \(k\) is a non-negative integer).

## 4 Summary and discussion

We have used SL theory and WKB methods to derive the scaling dimension \(d_{n}\) of highly irrelevant operators \({\cal O}_{n}\) around a non-trivial fixed point for scalar field theory, in the Local Potential Approximation (LPA). The scaling dimensions \(d_{n}\) are ordered so that they increase with increasing index \(n\). The \(d_{n}\) are derived following the methods developed in [34]. They are given to leading order in \(n\), together with the power-law dependence on \(n\) of the next-to-leading order. The results apply to all the non-trivial (multi)critical fixed points in \(2<d<4\), for single component scalar field theory and for \(O(N)\) invariant scalar field theory, and also to the unitary minimal models in \(d=2\) dimensions. The \(d_{n}\) are universal, independent of the choice of fixed point (except through the anomalous dimension \(\eta\)) and independent of the cutoff choice, which we have left general throughout, apart from the weak technical constraints discussed below eqn. (2.11).
In particular these constraints allow for the popular smooth cutoff choice (2.12). The crucial property leading to universality is that the results depend only on asymptotic solutions at large field, which can be derived analytically, and are also universal in the same sense. Although non-universal cutoff-dependent terms, in particular (2.11) and (3.14), enter into the calculation at intermediate stages, they drop out in the final stages.

For a single component real scalar field, \(d_{n}\) is given in (2.28). For \(O(N)\) scalar field theory, the \(d_{n}\) are just twice this, _cf._ (3.27), independent of \(N\). This is in agreement with the single field result because here \(n\) indexes the eigenoperators that are a function of \(\varphi^{2}\) only.

The first step in deriving these results is to recast the eigenoperator equation in SL form, and then establish that the SL operator is self-adjoint in the space spanned by the eigenoperators. For a single component scalar field this follows after demonstrating that the SL weight decays exponentially for large field, since the eigenoperators grow at most as a power of the field. For the \(O(N)\) case the analysis is more subtle because the relevant space is now the positive real line (parametrised by \(\rho=\varphi^{2}\geq 0\)) and thus the SL operator is self-adjoint only if the boundary terms at \(\rho=0\) also vanish. By analytically determining the small \(\rho\) dependence of the relevant quantities we see that the SL operator is self-adjoint when \(N>0\). For \(N\leq 0\), the SL operator is not self-adjoint and the analysis does not apply. Presumably in these cases one would find that the scaling dimensions \(d_{n}\) are no longer real. However for a sequence of special cases \(N=-2k\), \(k\) a non-negative integer, the SL operator is self-adjoint on a reduced space spanned by all eigenoperators apart from the first \(k+1\). The analysis can then proceed on this reduced space. As we already noted, while most of these special cases are presumably only of theoretical interest, the \(N=0\) case describes the statistical physics of long polymers.

The next step is to cast the SL equation in the form of a one-dimensional time-independent Schrodinger equation with energy levels \(E_{n}=d_{n}\) and potential \(U(x)\). For the single component field this potential is symmetric, and in order to determine the energy levels \(E_{n}\) asymptotically at large \(n\), using the WKB approximation, we need only the behaviour of \(U(x)\) at large \(x\). The latter follows from our asymptotic analysis. For \(O(N)\) scalar field theory, the space is the positive real line \(x\geq 0\), and thus for WKB analysis we need also the behaviour of the potential \(U(x)\) at small \(x\). Here we find that the range \(1\leq N\leq 3\) requires a separate treatment: at \(N=1,3\) the leading term in \(U(x)\) vanishes, whilst for \(1<N<3\) it turns negative, leaving a potential unbounded from below. Nevertheless we are able to treat this case and the end result for \(d_{n}\), (3.27), is the same, thus applying universally to all \(N\geq 0\) and the \(N=-2k\) special cases.

Although these results are universal, they are still derived within the LPA, which is an uncontrolled model approximation. One might reasonably hope however that the fact that these results are universal, in the sense of being independent of the detailed choice of cutoff, is an indication that they are nevertheless close to the truth.
On the other hand the LPA [22] of the Polchinski flow equation [9] is in fact completely cutoff independent, although this property arises rather trivially. It is actually equivalent under a Legendre transformation [51] to the flow equation (2.1) for the Legendre effective action in LPA, as we study here, but only for a special (and popular) choice of additive cutoff known as the optimised cutoff [52]. However the optimised cutoff does not satisfy our technical constraints given below (2.11), so our analysis is invalid for this case. Nor in fact does a sharp cutoff [53, 14, 20, 23] or power-law cutoff [29] satisfy the technical constraints. What this means is that these particular cutoffs fail to regularise completely the region of large fields, in the sense that \(a_{2}\), defined by (2.7) or (3.8), no longer has an asymptotic expansion given simply by integrating over the asymptotic expansion of its integrand. For these three particular cutoffs, regions of momenta far from \(\Lambda\) alter the asymptotic expansion of \(a_{2}\) so that it is no longer of the form (2.10), or (3.9), and for this reason these cutoffs are less satisfactory. Nevertheless, following our methods, it would be straightforward to derive the asymptotic scaling dimensions \(d_{n}\) in LPA for any or all of these three special choices of cutoff, by using the particular form of the LPA flow equation in these cases (which are known in closed form, since the momentum integrals can be calculated analytically). The results will differ from the \(d_{n}\) derived here and amongst themselves, but their investigation would improve insight into the accuracy of the LPA in this regime. Furthermore it would seem possible to generalise any of these special choices of cutoff to their own class of cutoffs with similar properties, and thus understand the extent to which the results could still be cutoff independent, up to some appropriate constraints, in these cases, and gain a more detailed understanding of why the \(d_{n}\) differ.

Unfortunately our \(d_{n}\) do not seem to match up in a useful way with existing results in the literature. The LPA restricts us to eigenoperators that contain no spacetime derivatives, and thus our index \(n\) counts only over these. In reality all eigenoperators (apart from the unit operator) contain spacetime derivatives, so in particular it is not clear how our index \(n\) would map into the exact sequence. However in some special limits the LPA is effectively exact. This is true for the Gaussian fixed point for example, where \(d_{n}=nd_{\varphi}\) (with \(\eta=0\)). Our scaling dimensions \(d_{n}\) differ from this, but the Gaussian fixed point is specifically excluded from our analysis since our results apply only to non-trivial fixed points, such that the asymptotic expansion of the fixed point potential takes the form (1.3) or (3.4) with \(A\neq 0\). The LPA also becomes effectively exact in the large \(N\) limit [47], and there the scaling dimensions are \(d_{n}=2n\) (with \(\eta=0\)), which again differs from our result (as well as differing from the Gaussian fixed point result). Furthermore they continue to disagree even if we now take a second limit such that both \(n\) and \(N\) are sent to infinity. However in this case we have an example where the order of the limits matters. The \(N\to\infty\) result is derived for \(d_{n}\) whilst first holding \(n\) fixed, while our result applies first for fixed \(N\) while \(n\to\infty\).
The difference can be seen at the technical level. The first term on the right hand side of the flow equation (3.1) is proportional to \(N\). In our analysis however it is the denominators that dominate. On the other hand in the large \(N\) analysis, only the first term survives, resulting in a first order ODE with no SL properties (or Schrodinger equation representation). The universal results fall out on the one hand in our analysis from the asymptotic behaviour at large field, but on the other hand in large \(N\) they fall out from a Taylor expansion around the minimum of the fixed point potential [47]. There seems unfortunately to be no way to bridge the gap between these two limiting regimes.

An even clearer example where the two limits do not commute is provided by the special cases \(N=-2k\). As we recalled in sec. 3, in these cases the first \(k+1\) eigenoperators degenerate, gaining Gaussian scaling dimensions. But our \(d_{n}\) apply to the highly irrelevant eigenoperators that are found in the reduced space, which excludes these first \(k+1\) operators, and hence have non-trivial scaling dimensions. However if instead we fix on the \(n^{\text{th}}\) eigenoperator and let \(N\to-\infty\) by sending \(k\to\infty\), we see that this \(n^{\text{th}}\) eigenoperator will fall into the excluded space and thus end up with Gaussian scaling dimensions. The disagreement between the two results will then remain even if we choose next to send \(n\to\infty\).

## 5 Acknowledgements

VMM and DS acknowledge support via STFC PhD studentships. TRM acknowledges support from STFC through Consolidated Grant ST/T000775/1.
2308.13480
Communicating on Security within Software Development Issue Tracking
During software development, balancing security and non-security issues is challenging. We focus on security awareness and approaches taken by non-security experts using software development issue trackers when considering security. We first analyse interfaces from prominent issue trackers to see how they support security communication and how they integrate security scoring. Then, we investigate through a small-scale user study what criteria developers take when prioritising issues, in particular observing their attitudes to security. We find projects make reference to CVSS summaries (Common Vulnerability Scoring System), often alongside CVE reports (Common Vulnerabilities and Exposures), but issue trackers do not often have interfaces designed for this. Users in our study were not comfortable with CVSS analysis, though were able to reason in a manner compatible with CVSS. Detailed explanations and advice were seen as helpful in making security decisions. This suggests that adding improvements to communication through CVSS-like questioning in issue tracking software can elicit better security interactions.
Léon McGregor, Manuel Maarek, Hans-Wolfgang Loidl
2023-08-25T16:38:27Z
http://arxiv.org/abs/2308.13480v1
# Communicating on Security within Software Development Issue Tracking ###### Abstract During software development, balancing security and non-security issues is challenging. We focus on security awareness and approaches taken by non-security experts using software development issue trackers when considering security. We first analyse interfaces from prominent issue trackers to see how they support security communication and how they integrate security scoring. Then, we investigate through a small-scale user study what criteria developers take when prioritising issues, in particular observing their attitudes to security. We find projects make reference to CVSS summaries (Common Vulnerability Scoring System), often alongside CVE reports (Common Vulnerabilities and Exposures), but issue trackers do not often have interfaces designed for this. Users in our study were not comfortable with CVSS analysis, though were able to reason in a manner compatible with CVSS. Detailed explanations and advice were seen as helpful in making security decisions. This suggests that adding improvements to communication through CVSS-like questioning in issue tracking software can elicit better security interactions.

## 1 Introduction

Discrepancies exist between what security experts desire and the guidance developers end up following [10], so building better security practices into software development is key for exposing security to non-experts. Modern Development Operations (DevOps) processes streamline tracking and implementing features, bug fixes, and management of issues arising in software development [17]. A key concern for security is whether, amongst the many DevOps security practices [16], processes include prioritising reported security issues.

Prioritisation of issues during development is an important step, as it dictates the approach that a whole project will take towards developing their product. Priority tensions in a project mean that many different aspects will compete for priority, and have an impact on developers' approaches to security and analytical thinking. "Consideration of revenue is rational" for developers to prioritise [4], so care is needed to balance security against functional requirements. In order for an application to be secure, security issues ought to get a high priority. Making this decision well requires project members to be informed on security risks and to be involved in the decision making process, beyond just the security experts.

Investigating whether tools used during DevOps do enable security is important as there are many factors that impact adoption [18]. DevOps must also enable security professionals to use "communication and methods by which [non-security developers] already share knowledge as part of their workflow" [2]. When considering security, communication and management during development are key aspects to improving security motivation [3]. There is a need to investigate if DevOps processes and tools encourage good security among non-security experts.

Our investigation is twofold. First, we survey the design and security approaches within large projects which track issues openly (Section 2). Then, we conduct a developer study observing non-security experts' approaches to security prioritisation of issues and perception of the role of CVSS (Common Vulnerability Scoring System) [5] as a security analysis system (Section 3). We frame our work around the following research questions.
**RQ1** How do software development issue tracking systems integrate security considerations?

**RQ2** Does prompting for CVSS during issue management have the potential to be a useful interaction?

**RQ3** How can non-security experts better engage with security during project issue management?

**Contributions** Our survey covers four large open software development issue trackers and their usage in openly tracked projects (Section 2 details our selection criteria). Then we run a developer study investigating how non-security experts make security decisions. Four participants engaged, making it a small-scale investigation, but we nevertheless draw the following preliminary findings which form the main contributions of this paper. 1) Existing software development issue tracking tools lack the design to fully convey security concerns. 2) Non-experts do not seem comfortable using CVSS analysis. 3) However, CVSS seems to be considered helpful by a non-security-expert, experienced project manager for prioritising security-impacting issues. 4) Security inquiry through questioning and sharing advice could make security more accessible to non-experts. 5) Security-related metadata could be integrated into issue trackers, elicited by text answers to security questions, or optionally CVSS scores.

**Security classifications and scoring** Numerous methods have been adopted across the security industry to help classify the impact of security flaws. CWE identifiers (Common Weakness Enumeration) [12] represent common classes of bug with similar behaviours. They can be assigned to a bug to better describe what the problem is in relation to others. CVE identifiers (Common Vulnerabilities and Exposures) [11] are assigned to specific cases of flaws and uniquely identify a single case of a security vulnerability or exposure of a sensitive system. CVSS scores are values given to a reported vulnerability, calculated by measuring the impact of a security flaw across several dimensions. CVE records contain CVSS analysis, and many CVE records reference CWE classifications. CVSS analysis can either be shared as a score between 0 and 10 (most severe) or as a string which encodes all of the individual components of the scoring metrics, allowing individuals to see precisely what the risks are. CVSS is a method of measuring the severity of a security issue, and is often attached to a specific CVE, though there is nothing that precludes it being used outside of CVEs. Some research suggests there is significant disagreement amongst security experts over the individual scores that might be generated through CVSS [7]; however, there are findings suggesting that in general the CVSS scores produced by databases are trustworthy [9]. This tension suggests that these scoring mechanisms are worth investigating further, particularly with respect to non-security experts. There exist other classification and scoring systems, such as the CWSS scoring system [13], similar to CVSS but not focused on specific security incidents. We focus on CVSS as it was specifically designed to emphasise and rank which bugs should be prioritised for patching [14].

## 2 Issue Tracker Survey

To answer our research questions we investigate the design characteristics and security approaches of platforms, before we conduct a developer study investigating developer approaches within issue trackers. In this section, we investigate issue tracking tools in order to see how well they integrate and promote security.
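Before examining the individual trackers, it may help to make the CVSS vector format described above concrete. The sketch below parses a CVSS v3.1 vector string and reproduces the base-score arithmetic and qualitative severity bands of the FIRST v3.1 specification. It is a self-contained illustration, not code taken from any of the platforms surveyed below:

```python
import math

# CVSS v3.1 base-metric weights (FIRST specification).
WEIGHTS = {
    "AV": {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2},
    "AC": {"L": 0.77, "H": 0.44},
    "PR": {"U": {"N": 0.85, "L": 0.62, "H": 0.27},   # Scope Unchanged
           "C": {"N": 0.85, "L": 0.68, "H": 0.5}},   # Scope Changed
    "UI": {"N": 0.85, "R": 0.62},
    "CIA": {"H": 0.56, "L": 0.22, "N": 0.0},
}

def parse(vector):
    """Split e.g. 'CVSS:3.1/AV:N/AC:L/...' into a dict of metrics."""
    parts = vector.split("/")
    assert parts[0].startswith("CVSS:3")
    return dict(p.split(":") for p in parts[1:])

def roundup(x):
    """CVSS v3.1 'Roundup': smallest number with one decimal place >= x."""
    i = int(round(x * 100000))
    return i / 100000.0 if i % 10000 == 0 else (i // 10000 + 1) / 10.0

def base_score(vector):
    m = parse(vector)
    iss = 1.0
    for metric in ("C", "I", "A"):
        iss *= 1.0 - WEIGHTS["CIA"][m[metric]]
    iss = 1.0 - iss
    if m["S"] == "U":
        impact = 6.42 * iss
    else:
        impact = 7.52 * (iss - 0.029) - 3.25 * (iss - 0.02) ** 15
    expl = (8.22 * WEIGHTS["AV"][m["AV"]] * WEIGHTS["AC"][m["AC"]]
            * WEIGHTS["PR"][m["S"]][m["PR"]] * WEIGHTS["UI"][m["UI"]])
    if impact <= 0:
        return 0.0
    raw = impact + expl if m["S"] == "U" else 1.08 * (impact + expl)
    return roundup(min(raw, 10.0))

def severity(score):
    """Qualitative rating bands from the CVSS v3.x specification."""
    for top, label in ((0.0, "None"), (3.9, "Low"), (6.9, "Medium"),
                       (8.9, "High"), (10.0, "Critical")):
        if score <= top:
            return label

s = base_score("CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H")
print(s, severity(s))  # -> 9.8 Critical
```

An interface that elicited these few categorical answers from a reporter, rather than asking for a bare number, could present both the score and the reasoning behind it alongside an issue.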
We have decided to analyse the public facing instances of issue trackers for projects run by the same groups that created the bug trackers, assuming that these developers would be the most likely to fully utilise the capabilities of their issue tracking tools. Note that we are not focused on performing in-depth ethnographic analysis of these projects, but simply interested to see how they publicly use their own issue tracking platforms in relation to security. Platforms referenced here were accessed in July 2022.

**Investigation steps** Our investigation followed steps for each issue tracker: 1) Explore and identify all the interface elements, 2) Find security issues reported within the project and observe interesting interactions, 3) Find mentions of CVE, CVSS, CWE, or discussion relating to security choices. Interface exploration involved identifying common and differing interface elements on the individual bug/issue pages and seeing whether the projects made use of these available elements. Security issues needed to be located by a broad search for "security" in body text or labelling and identifying how a particular project demarcates security issues, then doing a deeper investigation on terms such as "CVE" or "CVSS".

**Selection criteria** There are many DevOps, issue tracking, and project management tools, so we narrowed down to a specific subset with publicly visible trackers to allow for the focused analysis detailed above. We selected projects and tracking systems when the system would 1) Track issues in a software development project, 2) Not require a special plugin or extension, 3) Have a public tracker used by the manufacturers, 4) Serve large projects. We settled on the following 4 issue trackers: Bugzilla (Firefox Web Browser), Monorail (Chrome Web Browser), Jira (Atlassian DevOps Tools), GitLab. Among other major open systems, we discarded GitHub as it does not have one central bug tracking system, and Linux Kernel development as it relies on mailing lists, which are out of scope of this review. We note other platforms (Trac, Redmine) met the criteria but these were not selected for full investigation.

### Investigation

**Bugzilla** Bugzilla is a bug tracker developed by Mozilla for use with developing the Firefox browser. Within the Firefox project, the project uses all of the fields that are available in the interface. In addition, Bugzilla allows site administrators to add custom fields and properties if the project managers feel they would benefit from it. Bugzilla does not include any built-in support for tracking or measuring CVSS scores, but there are cases where bug report participants have included comments that reference CVSS scores. From searching the Firefox Bugzilla instance for references to CVSS, we find that most of the bugs that include a CVSS score are CVEs which have been copied to the Mozilla-run bug tracker from other sources. Bug 1246014 is a good example of this. Coming via email from an external security investigator, the CVSS score and related analysis need to be included in a comment to the bug, as there is no standard field in the UI to place them. We see that when a CVE is reported, it will include a CVSS score, describing the severity of the bug. In this case the bug was given a keyword, "sec-critical", as a way of tracking the severity. What is interesting in this case is that the dedicated fields for bug priority and severity were not changed. This behaviour is seen in other CVEs reported to the Mozilla tracker in this fashion, including Bug 814001 and Bug 805121.
Some CVEs reported as bugs do however get the relevant fields set in addition to a priority keyword, including Bug 1274637 and Bug 1631618. Outside of CVE and CVSS related bugs, keywords, severity and priorities are used to keep track of bugs, as can be seen in Bug 1538007. It is unclear just from these observations if there is a consistent or formal procedure that Mozilla has for assigning security priorities in their bug tracker, but it does indicate that a standardised interface that can keep track of CVSS scores in addition to or in place of many separate labels might be useful. By performing a broad search across the entire bug tracker for mentions of CVSS excepting when used to describe CVEs, we find that no bugs use CVSS scores, despite the intended use of CVSS as a means of analysing security bugs.

**Monorail** Monorail is a bug tracker developed by Google for use in developing Chrome, the world's most widely used web browser. One differentiating aspect from Bugzilla issue tracking is the use of labels to track different aspects, rather than using individual fields. Labels can be related using dashes, as an alternative to defining specific fields, such as Target-102, which indicates that the team aims to resolve an issue by the time version 102 is released. This scoped labelling approach might offer more autonomy for individual project members to categorise and prioritise issues than having to rely on a project administrator to add specific fields. This capability is used frequently within the Chrome project. Searching for CVSS scores given in bugs reported in the Chromium project tracker reveals a similar pattern to that found in the Mozilla tracker. Comments and descriptions of bugs only tend to mention CVSS scores when they were originally formed as CVEs, cross-posted to the public tracker. An illustrative example is found in Bug 1313172. The interface includes a priority field, but in contrast no severity field; instead severity labels are used. Monorail does not have a built-in CVSS field or method to track this, so when required the comments section has to be used to display CVSS scores. The Chrome project has over 1000 bugs referencing CVSS, but only 2 confirmed bugs that reference CVSS scores without also mentioning CVE. These are Bug 571480 and Bug 695474. Both have a priority, but only the first has a security severity label. This may indicate either limited utility or a lack of recognition of these metrics outside of the context of CVE reporting.

**Jira** Jira is a bug tracker created by Atlassian. It is used by many projects including MongoDB and Qt. Atlassian use Jira as their public facing issue tracker for their own products, which we analyse here. In a Jira issue many fields, properties, and links are present, similar to other issue trackers. Jira also allows Jira administrators to change the fields available. In the open source project Qt, we see that their issue tracker only has the default fields, and so is missing the "severity" fields present in Atlassian's Jira instance (seen in Bug QTBUG-105931). This approach means that different projects may approach security in different ways, depending on how they have configured their setup. Within Atlassian's public issue tracker there are references to CVSS scores within the comments of bugs. It is interesting that many of the CVSS scores are presented in the same way, through a table in comments, yet Jira's custom fields are not used to standardise this.
We see this presentation added by a 'bot' in Bug JRASERVER-71198, indicating desire for some kind of automation for collecting this data. All the comments in this fashion include a link to an Atlassian CVSS Calculator to explain the severity rating. The CVSS scores seem to come from CVE notifications which state it _"is an independent assessment and you should evaluate its applicability to your own IT environment."_ Whether this phrase is included as a way to instruct the community reading these bugs to be alert, or simply as a form of legal disclaimer, is unclear. CVSS scores are not limited to just CVE reports, but also appear in 122 issues unrelated to CVE, indicating that CVSS identifications bear some importance to the Jira project. Tracking security is an important part of Atlassian bugs: across their projects, 6646 bugs are related in some way to security, even excluding CVE. While researching which open source projects use Jira, we found MongoDB recommends voting as a way to gauge community priorities for fixing a given issue. This suggests that the addition of a "voting" property may incentivise projects to use it as a way to decide priorities collectively.

**GitLab** GitLab targets the whole DevOps process, including issue tracking. GitLab is open source and the development team use their own project to track issues. GitLab's tracker shows many bugs mentioning CVSS without an associated CVE report. There is a GitLab Bug 218601 which is a proposal to standardise the way GitLab tracks CVSS scores, notably suggesting _"scoring can then be exposed to the user in relevant parts of the UI"_. GitLab's handbook describes that priorities and severity can be assigned based on the CVSS scores that are generated. The handbook also suggests that issues with high CVSS scores ought to be labelled as high priority and be mitigated within 24 hours. For individual projects, GitLab's issue tracking interface does have a weight field. However, the concrete meaning of this field is not defined and could change project to project. From GitLab's documentation, it could refer to any of _"how much time, value, or complexity a given issue has or costs."_ This is still an important metric when it comes to prioritisation; however, it is not clear if it alone can be leveraged or relied on to handle the priorities of security issues. GitLab already has a CVSS calculator, but it is not used for generating scores to prioritise issues. Instead it is for defining the "bounty" to be rewarded for certain bugs. When issues are reported through the bounty program, the CVSS scores generated as part of the reporting are converted to priority and severity indicated through labelling. In some cases the CVSS scores, though used as part of reporting, are not linked on the issue page itself. Instead developers need to navigate to a separate page to view them. An example of this can be seen in Bug 336535: _Severity is set as per CVSS calculated on hackerone report._ Not presenting this inline could add to developer workload. Designing an interface that can present this information could improve issue prioritisation to draw developer attention to more severe issues. Other bugs, such as Bug 360986, include basic information of the final CVSS score, and link to more detailed reports again held elsewhere. GitLab's development team currently tracks CWEs, and other weaknesses and classifications, through labels. GitLab's team use scoped labels, which allows all CWEs of a specific type to be grouped together.
An open GitLab Bug 300978 is an issue discussing a proposal of whether to adopt CWEs as a means of tracking and codifying security issues in GitLab's Secure group. The evidence of interest in CWE and CVSS indicates intent from GitLab to include more support for these security vulnerability classification systems.

### Comparison and summary

Table 1 compares the available features, focusing on non-security-specific features, as none of the investigated issue trackers include dedicated space to discuss security-specific concerns. We see that many features are shared amongst the issue trackers. There are some outliers. GitLab seems to offer as many capabilities as is possible. Where certain features are not present, many projects using these systems utilise labels or keywords as a means of tracking certain properties of issues. Only GitLab has a 'weight' field; however, its use might be duplicated by severity or estimated completion times in other tracking systems. Most of the tracking systems make use of colour to draw attention to certain features, from Bugzilla, which highlights important issues in red when in list view, to GitLab, which uses very colourful labels, chosen by project organisers. Some bug tracking systems have built-in priority tracking fields for their bugs, and every bug tracking system has some form of keyword or labelling fields. Developers often seem to use labelling as a means of tracking severity, suggesting labels may be a more usable interface element.

The use of labelling varies between development projects. The contrast between Bugzilla and Monorail is interesting when considering severity labelling. Both Bugzilla and Jira include severity fields, while others use labels. But Firefox development includes a specific keyword despite Bugzilla having a severity field, and Chrome development uses a severity label as an alternative to a severity field. This indicates different approaches in how strictly severity is assigned, and that one single approach is perhaps not suitable for all projects. The GitLab project tracks certain weaknesses classified by CWE through scoped labelling. CVSS scores are mentioned within bug tracking for open source projects, often in relation to externally reported issues such as through CVEs or bug bounties. The CVSS scores appear within the text of the discussion of the bug, instead of alongside the labels or fields that describe severity and priority. This could indicate that though there is a desire for CVSS when discussing critical security bugs like CVEs, the bug tracker interfaces do not leverage CVSS at all for other kinds of reported security issues, and offer little support for CVSS when they are used. Note that CVSS targets reported vulnerabilities and might not suit all security planning discussions.

In summary, the selected DevOps issue trackers have many commonalities, but also a few differences. We see wide variation in the approaches taken to labelling, handling severity, and handling references to security issues and reports. External metrics and classifications are used, but interface support for referencing these is missing, and interfaces offer no automated support for making decisions based on such metrics.
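The missing automation just described could be as simple as a policy function that derives labels and a mitigation window from a CVSS score, in the spirit of GitLab's handbook guidance. In the sketch below, the thresholds, scoped-label names and time windows are illustrative assumptions; only the 24-hour window for high-scoring issues mirrors the handbook guidance quoted earlier:

```python
from datetime import datetime, timedelta, timezone

def triage(cvss_score):
    """Map a CVSS base score to scoped labels and a mitigation due date.

    Label names and thresholds here are assumptions for illustration;
    they are not GitLab's actual policy table.
    """
    if cvss_score >= 7.0:
        labels, window = ["severity::1", "priority::1"], timedelta(hours=24)
    elif cvss_score >= 4.0:
        labels, window = ["severity::2", "priority::2"], timedelta(days=30)
    else:
        labels, window = ["severity::3", "priority::3"], timedelta(days=90)
    return {"labels": labels, "due": datetime.now(timezone.utc) + window}

print(triage(9.8))  # -> severity::1 / priority::1, due within 24 hours
```

Surfacing the result of such a function directly on the issue page would address the cases noted above where scores live only on a separate bounty-report page.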
Table 1: Comparison of metadata fields across platforms. Fields compared: Priority, Severity, Weight, Votes, Scoped Labels, Milestones, Estimated Completion Times, Epics, Colours, Custom Fields, CWE; metadata support per platform is marked as native fields (\(\bullet\)) or possible with labels (\(\circ\)). Native to all: Title and Description, Timestamps, Component Hierarchy, Issue types, Labels, Attachments, Related Issues, Confirmation and Resolution, Project Member Links.

## 3 Developer Study

After investigating how trackers support security, we focus on how users themselves make security decisions through our developer study. The purpose of this study is to investigate approaches that non-security experts take when analysing and prioritising security issues. We examine whether techniques like CVSS analysis are useful, and what parts of an issue presentation impact most. This study is conducted online with non-security-expert developers and project managers.

### Protocol

The participants will be placed in a fictional setting, a software development company developing financial applications. When presented with issues, the participants are asked to consider each one, then assign a priority to it, relative to the others, and answer questions about CVSS scoring. All participants see the same issues, but in a random order. The prioritisation will be a scale where the top issue needs to be dealt with first, and both security and business-critical issues are rated together, to simulate the prioritisation stresses faced in a real environment. To incur some amount of time pressure to simulate a working environment, but allow for enough time to have a reasonable attempt at seeing and briefly investigating all of the issues, we allocate participants 30 minutes followed by however long is needed to answer a final questionnaire. Participants were awarded online vouchers for their time regardless of whether they completed the experiment. Our protocol was reviewed and approved by our university's ethical committee.

**Issues** We add 14 issues to a tracker: 7 focused on security and 7 on functionality. The security issues were created by looking at top Mitre issues, with a corresponding entry in the find-sec-bugs library. The functionality issues were created by considering what a financial app would need to offer, and potential requirements that may be faulty. For example, one issue concerned "Improper input validation", with consideration to security and CWE-20. During the experiment issues are not explicitly named as being security issues or not. For each of these issues, the following is given: a title, a description, and a code snippet. The issues were designed to include all the information relevant to the business and which would allow for full CVSS analysis. When prioritising these issues, there will be security and functionality considerations for all; however, some issues will be more or less _security critical_ or _business critical_.

**Experiment platform** For this experiment, we worked with a customised GitLab server. We chose GitLab as an easily self-hostable DevOps platform we could customise for our experiment. We added custom questionnaires to this server: an easy-to-use drag and drop interface for choosing relative priorities, and a questionnaire shown next to issues with CVSS and other questions (shown in Figure 1).
These are shown as an overlay so that the primary activity of issue analysis is always present. The drag and drop interface ensures that all of the prioritisation was conducted in a relative fashion, forcing a choice and preventing giving the same priority to issues. Although we build on the GitLab interface for our study, we are exploring the approaches developers take generally rather than specifically in a GitLab context.

Figure 1: Screenshot of the issue questionnaire interface

### Developer Study Outcomes

Here we present the outcomes from our developer study. We recruited 10 participants from computer science alumni, all of whom consented to take part. 4 started and completed the experiment, and all of these participants identified as male. We recruited a balance of project managers and software developers, 2 participants identifying expertise in each. None of the participants felt their expertise in security was strong. We name the less experienced participants SD1, SD2, PM1, and one project manager with more experience PME.

**Approaching prioritisation** We viewed participants' approaches to choosing priorities through logged behaviour. 3 participants looked at most of the issues when making their prioritisations, while PM1 only looked at the details of 3 issues. SD1 and SD2 changed their prioritisation as they read through the issues, and PME preferred to make multiple prioritisations at once. PM1, who did not look at many issues, did not change the priorities from the default random order, so we cannot draw concrete conclusions for that participant's priorities. SD1, SD2, and PME on average changed 13 issue priorities from the initial random assignment, which suggests they did engage well in this activity.

Looking at the final priorities chosen, we can see some trends. There was agreement that "CSRF or Referrer Missing", "SQL Injection", "Input Validation" were the most important as they appeared in the top 5 highest priorities; "Adding and remembering payees" and "Chequing" and "Currency converter" were all placed in the lowest 4 priorities; "Downloading PDF Summaries" was the lowest priority issue. This is interesting to see as it suggests, from our population sample, that they want to rate security issues higher than non-security ones. Participants never contacted each other yet chose similar priorities.

**Usefulness of CVSS** PME, SD2, and PM1 attempted to generate CVSS scores for some issues. We asked how comfortable participants felt using GitLab, choosing priorities, and completing the tasks. Responses were mixed, with no clear trends across participant demographics. Only PME felt comfortable with CVSS. The participants showed evidence of critical security thinking, identifying where there were some security issues, but suggested due to them not being readily exploitable, they could be de-prioritised. When asked about the SSL issue, PME stated "[it] is serious but is not obvious or easy to exploit so it is not as important," and on an SQL Injection issue, SD1 comments "[issues] that have the potential for data leaks, is given highest priority". Analyses like the ease of exploitation and impact on confidentiality are captured in CVSS, so standardised questioning like that used in CVSS could be a useful way to discuss issues amongst developers. PME's comment about risk mirrors the view that CVSS is concerned with severity over risk [15], and, in that sense, a framing of CVSS that better indicates context may be helpful.
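One way to quantify the agreement in final orderings noted above (an illustration, not the analysis performed in this study) is a rank correlation such as Kendall's \(\tau\) between participants' priority orderings. A sketch with made-up ranks for the 14 issues:

```python
from itertools import combinations
from scipy.stats import kendalltau

# Hypothetical priority ranks (1 = highest) that three participants might
# assign to the same 14 issues; the numbers are invented for illustration.
ranks = {
    "SD1": [1, 2, 3, 5, 4, 6, 7, 9, 8, 10, 11, 12, 13, 14],
    "SD2": [2, 1, 3, 4, 5, 7, 6, 8, 9, 11, 10, 12, 14, 13],
    "PME": [1, 3, 2, 4, 6, 5, 7, 8, 10, 9, 11, 13, 12, 14],
}

for a, b in combinations(ranks, 2):
    tau, p = kendalltau(ranks[a], ranks[b])
    print(f"{a} vs {b}: tau = {tau:.2f} (p = {p:.4f})")
```

Values of \(\tau\) near 1 indicate near-identical orderings, so high pairwise \(\tau\) among participants who never communicated would support the trend described above.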
**Engaging with security** Participants feel there should be collective responsibility for security. SD1 suggests "A sorted list of developer/security analysts etc... so i would know who to ping for" is useful for issue tracking, to expedite seeking advice or guidance. PM1 suggests employing "a dedicated cyber team to review and ensure best practice". During the per-issue questions, we asked participants what aspect of an issue was most influential to their choice of priority. PME and SD2 gave an explanation for every issue, and SD1 gave explanations for the 6 issues they prioritised highest. SD2 and PME, a software developer and a project manager, favoured 'advice given' and 'legal impact', respectively. The most common aspect impacting a decision was when a fellow staffer in the scenario gave advice explaining how an attack worked. This is also backed up by one comment that mentioned that "links" to standards are an important part of prioritising an issue. This suggests the importance of communication between a team and utilising relevant knowledge. The next biggest impact is when there may be a legal impact to the business. GDPR and "privacy laws" were specifically mentioned within this aspect. Potentially we may see more priority given to security if laws surrounding secure programming and secure delivery of services are known. The third biggest impact is when there is either an impact to customers, or a need to weigh between commercial and security critical interests. PME described this tension in a comment on an issue saying "this is a commercial issue but not a critical issue," and gave it a low priority, showing that even in the face of business, they felt security was more important. Encouraging more dialogue in the form of advice from relevant parties during issue analysis, taking into account tradeoffs required between commercial and security interests, could help less experienced developers better engage with security decisions.

## 4 Discussion and Conclusion

We discuss our findings according to our 3 research questions.

**RQ1: How do software development issue tracking systems integrate security considerations?** We see many large software projects reference security analyses in issues reported to their trackers. Vulnerabilities like CVEs are often included in the body or comment of a report, CVSS scores are likewise pasted into comments, and labels are used to track the classification according to external sources like CWE classes. Despite the apparent desire to reference such security measurements and analyses, basic issue tracking interfaces do not offer any built-in fields to support security metadata, with projects like Atlassian opting for a bot to add security information in Jira.

**RQ2: Does prompting for CVSS have the potential to be a useful interaction?** In the parts of the experiment where we analyse CVSS we find mixed evidence that it could be a useful addition. Participants were able to identify the tensions between prioritising security and functional interests. They gave comments that mirror the approach taken by CVSS for conducting analysis. When directly asked if they felt comfortable with CVSS, only 1 out of 4 participants felt comfortable, so this could suggest that CVSS alone may not be suitable or would require training to increase confidence before use.
Though our sample size is low, the fact that our more experienced project manager participant felt most comfortable with CVSS could suggest that more experienced roles are more suited to using CVSS for prioritisation, or that CVSS is more relevant to such roles by potentially helping them liaise with software developers who actually handle the issues. There is evidence from prior study that giving additional advice during CVSS analysis helps to provide more accurate scoring [1], so any future CVSS integration should come with guidance. Alternatives such as the Exploit Prediction Scoring System (EPSS) [8] are explored to better suit proactive security analysis (EPSS was in early release when our project started).

**RQ3: How can non-security experts better engage with security during project issue management?** Experts can find the current design of DevOps tools more useful than less experienced users. Explanations and colour are the most useful design considerations when displaying important information for security choices. Related studies into API design [6] find benefits in involving developers when choosing such design considerations and this may benefit issue management tools also. Developers should be supported with external knowledge where relevant. To best engage with security discussions, all experts should offer their advice and context, and if possible DevOps processes should guide those making priorities towards people best able to give this advice. Combined with security analysis from CVSS or otherwise, this could improve security dialogue between team members.
2301.11944
Phonon-induced localization of excitons in molecular crystals from first principles
The spatial extent of excitons in molecular systems underpins their photophysics and utility for optoelectronic applications. Phonons are reported to lead to both exciton localization and delocalization. However, a microscopic understanding of phonon-induced (de)localization is lacking, in particular how localized states form, the role of specific vibrations, and the relative importance of quantum and thermal nuclear fluctuations. Here we present a first-principles study of these phenomena in solid pentacene, a prototypical molecular crystal, capturing the formation of bound excitons, exciton-phonon coupling to all orders, and phonon anharmonicity, using density functional theory, the \emph{ab initio} $GW$-Bethe-Salpeter equation approach, finite difference, and path integral techniques. We find that for pentacene zero-point nuclear motion causes uniformly strong localization, with thermal motion providing additional localization only for Wannier-Mott-like excitons. Anharmonic effects drive temperature-dependent localization, and while such effects prevent the emergence of highly delocalized excitons, we explore the conditions under which these might be realized.
Antonios M. Alvertis, Jonah B. Haber, Edgar A. Engel, Sahar Sharifzadeh, Jeffrey B. Neaton
2023-01-27T19:00:05Z
http://arxiv.org/abs/2301.11944v1
# Phonon-induced localization of excitons in molecular crystals from first principles ###### Abstract The spatial extent of excitons in molecular systems underpins their photophysics and utility for optoelectronic applications. Phonons are reported to lead to both exciton localization and delocalization. However, a microscopic understanding of phonon-induced (de)localization is lacking, in particular how localized states form, the role of specific vibrations, and the relative importance of quantum and thermal nuclear fluctuations. Here we present a first-principles study of these phenomena in solid pentacene, a prototypical molecular crystal, capturing the formation of bound excitons, exciton-phonon coupling to all orders, and phonon anharmonicity, using density functional theory, the _ab initio_ \(GW\)-Bethe-Salpeter equation approach, finite difference, and path integral techniques. We find that for pentacene zero-point nuclear motion causes uniformly strong localization, with thermal motion providing additional localization only for Wannier-Mott-like excitons. Anharmonic effects drive temperature-dependent localization, and while such effects prevent the emergence of highly delocalized excitons, we explore the conditions under which these might be realized.

_Introduction.-_ Photoexcitation of organic molecular crystals leads to strongly bound electron-hole pairs, or excitons, due to the weak screening of the Coulomb interaction in these systems. Depending on factors such as the size of the molecular building blocks and the spin of the electron-hole pair, exciton radii can vary from those of localized Frenkel excitons [1, 2] to spatially extended excitons that approach the Wannier-Mott limit [3, 4, 5, 6, 7]. The spatial extent of these excited states is important to applications of organic semiconductors such as photovoltaics [8] and LEDs [9], since it affects properties including the nature of their interaction with phonons [10], their transport [11] and non-radiative recombination [12].

Critical in determining the spatial extent of excited states are lattice vibrations, which are generally thought to result in wavefunction localization [13]. Phonons can strongly renormalize one- and two-particle excitation energies of organic systems, influencing the optical gap and the charge carrier mobility [14, 15, 10]. Phonons in these systems have generally been thought to lead to localized excitons that diffuse via, _e.g._, a Förster or Dexter mechanism [16, 17]. However, it has recently been proposed that in certain well-ordered organic crystals atomic motion can give rise to configurations that favor strong transient exciton delocalization, having a beneficial effect on transport [18, 19, 20]. This transient exciton delocalization is similar to transient _charge_ delocalization [21, 22, 23], wherein phonons lead to configurations with large overlaps between neighboring molecular orbitals [24] and hence highly delocalized states [25]. Despite these insights, a rigorous microscopic understanding of phonon-induced modulations to exciton radii, one that accounts for electron-hole interactions, strong exciton-phonon coupling at finite temperatures [10, 26], and the anharmonicity of low-frequency motions in molecular crystals [27, 28, 29, 30], is still lacking. Here we elucidate the microscopic mechanism of exciton localization in extended molecular solids.
We employ a first-principles computational framework which captures all aforementioned effects, combining density functional theory (DFT), the Green's function-based _ab initio_ \(GW\)-Bethe-Salpeter equation (BSE) approach for accurately describing exciton effects [31], finite-difference methods for strong exciton-phonon interactions [32, 10], and path integral techniques for describing phonon anharmonicity [33, 34]. We apply this framework to the prototypical molecular crystal pentacene and show that zero-point nuclear motion leads to strong localization of singlet and triplet excitons, reducing their average electron-hole separation by more than a factor of two. Temperature increases further reduce the size of delocalized Wannier-Mott-like excitons, an effect driven by anharmonic phonons. The trends in exciton radii are reflected in the dispersion of their energies in reciprocal space. While highly delocalized excitons do appear at large phonon displacements, anharmonicity reduces the amplitude associated with these motions, suppressing transient delocalization for exciton transport.

_System and methods.-_ We focus on the widely studied molecular crystal pentacene [35], which hosts a delocalized Wannier-Mott-like singlet exciton (Fig. 1a) and a more localized Frenkel-like triplet exciton (Fig. 1b) [7; 10; 36], for which the effect of phonons is expected to be different. We compute excitons with principal quantum number \(S\) and center-of-mass momentum \(\mathbf{Q}\) using _ab initio_ DFT and \(GW\)-BSE calculations with the Quantum Espresso [37] and BerkeleyGW [38] codes. This involves constructing the electron-hole kernel \(K^{e-h}\) and solving the BSE [39; 31] in reciprocal space in the electron-hole basis, namely \[(E_{c\mathbf{k}+\mathbf{Q}}-E_{v\mathbf{k}})A^{S}_{cv\mathbf{k}\mathbf{Q}}+\sum_{c^{\prime}v^{\prime}\mathbf{k}^{\prime}}\left\langle c\mathbf{k}+\mathbf{Q},v\mathbf{k}\right|K^{e-h}\left|c^{\prime}\mathbf{k}^{\prime}+\mathbf{Q},v^{\prime}\mathbf{k}^{\prime}\right\rangle A^{S}_{c^{\prime}v^{\prime}\mathbf{k}^{\prime}\mathbf{Q}}=\Omega^{S}_{\mathbf{Q}}A^{S}_{cv\mathbf{k}\mathbf{Q}}\,, \tag{1}\] with input from prior DFT and \(GW\) calculations. In Eq. 1 the indices \(c,v\) define conduction and valence states respectively, \(\mathbf{k}\) is the crystal momentum, and \(A^{S}_{cv\mathbf{k}\mathbf{Q}}\) is the amplitude contributed by states \(c,v\) with momentum \(\mathbf{k}\) to the exciton with momentum \(\mathbf{Q}\). The exciton wavefunction can be written as \[\Psi^{\mathbf{Q}}_{S}(\mathbf{r}_{e},\mathbf{r}_{h})=\sum_{cv\mathbf{k}}A^{S}_{cv\mathbf{k}\mathbf{Q}}\psi_{c\mathbf{k}+\mathbf{Q}}(\mathbf{r}_{e})\psi^{*}_{v\mathbf{k}}(\mathbf{r}_{h}), \tag{2}\] where \(\psi_{n\mathbf{k}}\) are the Kohn-Sham wavefunctions. The kernel \(K^{e-h}\) consists only of an attractive 'direct' term between electrons and holes for triplets, while for singlets it also includes a repulsive 'exchange' term, giving singlets their greater spatial extent [31; 7]. The energies of the conduction and valence bands in Eq. 1 are obtained within the so-called \(GW\) approximation [40] from self-energy corrections to DFT Kohn-Sham eigenvalues. This approach has been shown to give highly accurate descriptions of excitons in molecular crystals [41; 42; 7; 36; 10]. The computational details for our DFT and \(GW\)-BSE calculations are given in Supplemental Material [43] Section S1.
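As a cartoon of what solving Eq. 1 involves, vastly simpler than the ab initio calculation, one can diagonalize a toy one-dimensional electron-hole Hamiltonian at \(\mathbf{Q}=0\) in the relative coordinate \(r=r_{e}-r_{h}\), with a screened contact attraction standing in for \(K^{e-h}\), and watch a bound exciton split off below the free electron-hole continuum. All parameters are invented for illustration and are not pentacene values:

```python
import numpy as np

# Toy 1D exciton: lattice of relative coordinates r = r_e - r_h, effective
# nearest-neighbour hopping t, gap Eg, and attraction -V/(|r|+1) standing in
# for the BSE kernel. Parameters are illustrative only.
L, t, Eg, V = 201, 0.5, 2.0, 1.0
r = np.arange(L) - L // 2

H = (np.diag(Eg - V / (np.abs(r) + 1.0))
     - t * (np.eye(L, k=1) + np.eye(L, k=-1)))

E, psi = np.linalg.eigh(H)
P = psi[:, 0] ** 2                          # |Psi(r)|^2 of the lowest exciton
print("binding energy:", (Eg - 2 * t) - E[0])       # below the continuum edge
print("mean e-h separation <|r|>:", P @ np.abs(r))  # a toy 'exciton radius'
```

Weaker attraction (smaller \(V\)) pushes the bound state toward the continuum and spreads \(|\Psi(r)|^{2}\), the same competition between kernel and kinetic terms that distinguishes Frenkel-like from Wannier-Mott-like excitons here.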
We treat the effect of phonons following Monserrat [44; 32; 45], and in a manner similar in spirit to Zacharias and Giustino [46; 47]. For an observable \(\mathcal{O}\) at a temperature \(T\), we compute the ensemble-average in the adiabatic approximation as \[\left\langle\mathcal{O}(T)\right\rangle_{\mathcal{H}}=\frac{1}{Z}\int dX\mathcal{O}(X)e^{-\beta\mathcal{H}}, \tag{3}\] where the canonical partition function \(Z=\int dXe^{-\beta\mathcal{H}}\) involves the configuration space integral \(\int dX\) [48]. Non-adiabatic effects in the electron-phonon interactions of organic systems such as pentacene are negligible [49]. The Hamiltonian \(\mathcal{H}\) of the system includes electronic and nuclear degrees of freedom in general, and may be approximated at different levels. One approach is to assume nuclear motion to be harmonic, reducing the phonon contribution to the Hamiltonian to the following form, \[\mathcal{H}^{\text{har}}\equiv\frac{1}{2}\sum_{n,\mathbf{q}}(-\nabla^{2}_{u_{n,\mathbf{q}}}+\omega^{2}_{n,\mathbf{q}}u^{2}_{n,\mathbf{q}}), \tag{4}\] in atomic units. Here, phonons of frequencies \(\omega\) are labeled by their branch index \(n\) and wavevector \(\mathbf{q}\). We compute the ensemble-average \(\left\langle\mathcal{O}^{\text{har}}\right\rangle\) in the Born-Oppenheimer approximation, tracing out all electronic degrees of freedom, using a finite-displacements approach [50; 51] to calculate phonon frequencies \(\{\omega_{n,\mathbf{q}}\}\) and eigendisplacements \(\{u_{n,\mathbf{q}}\}\), and then drawing \(N\) random samples \(\{X^{\text{har}}_{i}\}\) from the multivariate Gaussian phonon distribution and calculating the observables of interest \(\{\mathcal{O}(X^{\text{har}}_{i})\}\). \(\left\langle\mathcal{O}^{\text{har}}\right\rangle\) is then simply computed as the average of its value at the samples \[\left\langle\mathcal{O}^{\text{har}}\right\rangle=\lim_{N\to\infty}\frac{1}{N}\sum_{i=1}^{N}\mathcal{O}(X^{\text{har}}_{i}). \tag{5}\] Eqs. 4 and 5 are exact apart from the adiabatic and harmonic approximations, and the description of phonon effects on any observable \(\mathcal{O}\) in Eq. 5 is non-perturbative [26]. The use of the harmonic approximation in molecular crystals can lead to unphysical results, due to highly anharmonic behavior of low-frequency phonons [27; 29]. In this work, we account for this anharmonicity by employing path-integral molecular dynamics (PIMD), which is rendered computationally tractable using the surrogate machine-learning (ML) potential \(V^{\text{ML}}\) from Refs. [27; 52], constructed to reproduce the potential energy surface (PES) from first-principles density functional theory (DFT) calculations. The modified phonon Hamiltonian \[\mathcal{H}^{\text{anhar}}\equiv\sum_{i=1}^{N_{a}}\frac{\hat{\mathbf{p}}_{i}^{2}}{2m_{i}}+V^{\text{ML}}(\hat{\mathbf{r}}_{1},\ldots,\hat{\mathbf{r}}_{N_{a}}) \tag{6}\] is used to run PIMD simulations at reduced computational cost, for a cell of \(N_{a}\) atoms, with nucleus \(i\) having a mass \(m_{i}\), and \(\hat{\mathbf{p}}_{i}\), \(\hat{\mathbf{r}}_{i}\) its momentum and position operators respectively. We then draw random samples from the PIMD trajectories, and use these to compute vibrational averages of observables, analogously to Eq. 5, namely \[\left\langle\mathcal{O}^{\text{anhar}}\right\rangle=\lim_{N\to\infty}\frac{1}{N}\sum_{i=1}^{N}\mathcal{O}(X^{\text{anhar}}_{i}). \tag{7}\]
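Both Eq. 5 and Eq. 7 are Monte Carlo estimates of an observable over sampled nuclear configurations. For a single harmonic mode, the sampling distribution of Eq. 5 is a Gaussian whose quantum-thermal width \(\langle u^{2}\rangle=\coth(\omega/2k_{B}T)/(2\omega)\) (atomic units) already contains the zero-point contribution. A minimal sketch with a made-up quadratic observable, for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

def harmonic_average(observable, omega, kT, n_samples=200_000):
    """Monte Carlo estimate of Eq. 5 for one harmonic mode (a.u., hbar = 1).

    The Gaussian width coth(omega / 2 kT) / (2 omega) includes zero-point
    motion, so the average does not vanish even as kT -> 0.
    """
    var = 1.0 / (2.0 * omega * np.tanh(omega / (2.0 * kT)))
    u = rng.normal(0.0, np.sqrt(var), n_samples)
    return observable(u).mean()

# Illustrative observable: a quadratic 'gap renormalization' O(u) = g u^2,
# whose exact vibrational average is g <u^2>. Parameters are made up.
g, omega = 0.3, 0.01
for kT in (1e-4, 1e-3, 3e-3):   # roughly 30 K, 300 K and 950 K
    print(kT, harmonic_average(lambda u: g * u**2, omega, kT))
```

Eq. 7 has the same structure, with the Gaussian samples replaced by configurations drawn from PIMD trajectories so that the anharmonicity of the potential is retained.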
Our simulations use a \(2\times 1\times 1\) supercell of pentacene (\(N_{a}=144\) atoms), capturing the effect of phonons at \(\Gamma\) and at the band-edge \(X\) on observables. Phonons beyond \(\Gamma\) and \(X\) have a minor effect on pentacene optical properties, as discussed in Supplemental Material [43] Section S1.C. To quantify exciton localization, we study two observables \(\mathcal{O}\). The first is the set of exciton energies at finite center-of-mass momentum, \(\Omega^{S}_{\mathbf{Q}}\), obtained by solving the BSE (Eq. 1). The second is the average electron-hole separation for each excitation \(S\), which we refer to as the exciton radius \(r_{\mathrm{exc}}\). This is obtained by post-processing the BSE solution \(\Psi_{S}\), as discussed elsewhere [53] and in Supplemental Material [43] Section S1. To determine the exciton radius, we compute the electron-hole correlation function as defined in Ref. [53], namely \[F_{S}(\mathbf{r})=\int_{V}d\mathbf{r}_{h}|\Psi_{S}^{\mathbf{Q}=0}(\mathbf{r}_{e}=\mathbf{r}_{h}+\mathbf{r},\mathbf{r}_{h})|^{2}, \tag{8}\] where \(V\) is the volume of the primitive cell. \(F_{S}(\mathbf{r})\) describes the probability of finding the electron-hole pair at a separation \(\mathbf{r}=\mathbf{r}_{e}-\mathbf{r}_{h}\), and is computed as a discrete sum over hole positions. The average exciton radius for a given atomic configuration is then \[r_{\mathrm{exc}}=\int d|\mathbf{r}|\,F_{S}(|\mathbf{r}|)\,|\mathbf{r}|. \tag{9}\] Having described the main quantities in our computational framework, we may summarize it as follows. We generate displaced configurations \(X_{i}^{\mathrm{har}}\) within the harmonic approximation using a finite-differences approach, and \(X_{i}^{\mathrm{anhar}}\) within the anharmonic distribution through PIMD employing a previously developed ML potential. The _ab initio_ BSE, Eq. 1, is solved at these configurations, followed by a calculation of the exciton radius via Eq. 9. We then compute the vibrational averages using Eqs. 5 and 7. Details of the convergence of the vibrational averages, the ML potential, and the PIMD simulations are given in Supplemental Material [43] Section S1.
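As an illustration of Eqs. 8 and 9, the following sketch evaluates the electron-hole correlation function and the resulting radius for a placeholder exciton wavefunction on a one-dimensional periodic grid; the real calculation post-processes the full three-dimensional BSE solution.

```python
import numpy as np

# Placeholder exciton wavefunction Psi(r_e, r_h) on a 1D periodic grid:
# a Gaussian in the electron-hole separation (model units).
L, n = 60.0, 240
x = np.linspace(0.0, L, n, endpoint=False)
dx = x[1] - x[0]
re, rh = np.meshgrid(x, x, indexing="ij")
sep = np.minimum(np.abs(re - rh), L - np.abs(re - rh))  # minimum image
psi = np.exp(-sep**2 / (2 * 4.0**2))
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx * dx)        # normalize

# Electron-hole correlation function F(|r|): Eq. 8 as a discrete sum over
# hole positions, binned by the separation |r| = |r_e - r_h|
r_bins = np.arange(0.0, L / 2, dx)
F = np.zeros_like(r_bins)
for i, r in enumerate(r_bins):
    mask = np.abs(sep - r) < dx / 2
    F[i] = np.sum(np.abs(psi[mask])**2) * dx

F /= np.sum(F) * dx                 # normalize the distribution
r_exc = np.sum(r_bins * F) * dx     # Eq. 9: mean electron-hole separation
print(f"exciton radius ~ {r_exc:.2f} (model units)")
```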
_Results.-_ We first discuss exciton properties obtained from solving the BSE without consideration of phonons. We refer to these clamped-ion solutions as the 'static' case. Fig. 1 shows an isosurface of the electron density for the first singlet (S\({}_{1}\), blue, panel **a**) and triplet (T\({}_{1}\), green, panel **b**) exciton, for a hole fixed at the center of the visualized region. As shown previously [7, 10, 36], the singlet is significantly more delocalized than the triplet, which results in bands that are more dispersive in reciprocal space [7, 42], as shown in Fig. 1c. We plot the exciton energies along the path \(\Gamma\to X\) in the Brillouin zone, corresponding to the dominant packing direction of the pentacene crystal. Table 1 summarizes the bandwidth \(W=\Omega(X)-\Omega(\Gamma)\) of the two excitons, as well as the width \(\Delta=\Omega(\mathbf{Q}=0.4\,\mathrm{\AA}^{-1})-\Omega(\mathbf{Q}=0.1\,\mathrm{\AA}^{-1})\), with these values of the exciton momentum chosen to allow comparison to recent experiments [54]. We see from our static calculations that the singlet bandwidth is more than twice that of the triplet.

Figure 1: Isosurfaces of electron distributions of singlet (blue, panel **a**) and triplet (green, panel **b**) excitons for a hole fixed at the center of the plotted area, and corresponding dispersions (panel **c**, same color scheme) in molecular crystals. A typical low-frequency (top) and high-frequency (bottom) phonon of pentacene is shown in panel **d**.

We now include the effect of phonons on the exciton band structures along \(\Gamma\to X\) at \(100\,\mathrm{K}\) and \(300\,\mathrm{K}\), within the harmonic and anharmonic distributions, and visualize the results in Fig. 1c when including anharmonic effects. There are two broad categories of phonons in molecular crystals, corresponding to low-frequency intermolecular and high-frequency intramolecular motions, visualized in Fig. 1d. While the former are predominantly activated when going from \(100\,\mathrm{K}\) to \(300\,\mathrm{K}\), the latter have significant zero-point energies \(\hbar\omega/2\). Including \(100\,\mathrm{K}\) phonon effects red-shifts both singlet and triplet exciton energies and flattens their dispersions, as shown in Fig. 1c and Table 1. This effect is larger for the triplet, which is more localized and therefore more impacted by high-frequency intramolecular modes. However, increasing the temperature to \(300\,\mathrm{K}\) has no effect on the triplet, since there are negligible additional contributions from intramolecular modes at these temperatures, and the modulations of intermolecular distances by lower-frequency phonons hardly affect this localized state. In contrast, the delocalized singlet red-shifts further, and its dispersion flattens by an additional \(18\,\mathrm{meV}\). Our results for the singlet width \(\Delta\) at \(100\,\mathrm{K}\) are in excellent agreement with recent experiments [54], as summarized in Table 1. Our predicted decrease of the singlet width \(\Delta\) by \(13\,\mathrm{meV}\) when increasing the temperature from \(100\,\mathrm{K}\) to \(300\,\mathrm{K}\) underestimates the experimental decrease of 21 meV, largely due to ignoring thermal expansion in our calculation, which reduces \(\Delta\) by a further 6 meV within this temperature range, see Supplemental Material [43] Section S2. Interestingly, we see in Table 1 that the harmonic approximation predicts an _increase_ of the singlet bandwidth with increasing temperature, contrary to our calculations including anharmonic effects using PIMD and to experiment, a point that we return to below. The changes in the width of the exciton dispersions suggest phonon-induced modulations of real-space exciton properties, which are zero-point dominated for the triplet, and which have significant temperature dependence for the singlet. We highlight the connection between the dispersion modulations and real-space exciton properties by computing vibrational averages of the exciton radii at a range of temperatures. The results are presented in Fig. 2 for the singlet (blue) and triplet (green) within the harmonic approximation and including anharmonic effects. Let us first comment on the harmonic case. Compared to the static limit (circles), the radii in the presence of phonons at 0 K are renormalized by more than a factor of two. For the singlet, the static value of 11.2 Å for its radius reduces to 4.9 Å, while the static triplet radius of 2.7 Å reduces to 1.2 Å. To visualize this we present in Fig. 2b and Fig. 2c differential plots for isosurfaces of the electron density once a hole is placed at a high-probability position in the unit cell.
Specifically, we plot the difference between the electronic density of the case without phonons and that of a typical atomic configuration at 0 K. Red indicates amplitude vanishing due to phonons, while blue and green indicate areas where the singlet and triplet wavefunction respectively gain amplitude, demonstrating their tendency to localize. When increasing the temperature to 300 K within the harmonic approximation there is no change to the triplet exciton radius, in agreement with our expectation of the effect of phonons on the triplet exciton dispersion. The singlet however exhibits delocalization, with its radius increasing substantially to the average value of 6.96 Å, consistent with the increase of the singlet bandwidth with temperature in the harmonic case. Upon including anharmonic effects, triplet radii agree with the harmonic case; however, for the singlet the results are qualitatively different, and we recover the expected behavior of decreasing singlet radius with increasing temperature. All vibrational averages and errors for the exciton radii are given in Section S7 of the Supplemental Material [43].

Figure 2: Singlet (blue) and triplet (green) exciton radii within the different cases and temperatures (panel **a**). Representative configuration showing electronic isosurfaces for fixed hole positions, indicating localization of the singlet (triplet) at 0 K towards the region in blue (green), shown in panel **b** (panel **c**). Red represents electronic wavefunction amplitude that disappears in the presence of phonons.

The discrepancy between the harmonic and anharmonic cases is due to configurations with highly delocalized excitons within the harmonic approximation, with radii as large as 31 Å at 300 K. Such configurations are shown in Supplemental Material [43] Section S5, and their inclusion in the thermal averages of Eq. 5 for the radii leads to the observed temperature-induced increase of \(\langle r_{\mathrm{exc}}\rangle\) in Fig. 2a. To understand why such configurations are not present within the anharmonic case, we plot in Fig. 3a the difference between the phonon root mean squared displacement \(\sqrt{\langle u^{2}\rangle}\) of the two distributions at 300 K. We find that a low-frequency acoustic mode, corresponding to a sliding along the z-axis of adjacent pentacene molecules, is significantly over-displaced in the harmonic case at \(\mathbf{q}=X\). Anharmonic terms alter the PES associated with this phonon, limiting its average amplitude at room temperature, as shown in Supplemental Material [43] Fig. S3, in agreement with known cases where the harmonic approximation breaks down in molecular crystals [27, 29, 30]. We confirm that the overdisplacement of this phonon within the harmonic approximation leads to the temperature-induced singlet delocalization observed in Fig. 2a, by computing the singlet radius as a function of the amplitude of this mode, as visualized in Fig. 3b. The blue and red regions indicate the maximum range of displacements which are accessible within the anharmonic and harmonic distributions respectively, due to thermal excitation of phonons at \(300\,\mathrm{K}\). The harmonic approximation leads to configurations with highly delocalized excitons of radii as large as \(25\,\mathrm{\AA}\). The dependence of the exciton radius on the phonon displacement is non-monotonic due to the oscillating \(\pi\) orbital overlap between neighboring pentacene molecules [55].
While highly delocalized excitons may appear at certain nuclear configurations, anharmonicity prevents accessing these, as seen in Fig. 3b. However, such configurations could appear out of equilibrium, _e.g._ due to photoexcitation, upon relaxation to the excited-state PES minimum. For pentacene, the minimum of the singlet exciton PES along the anharmonic acoustic mode lies far from the 'delocalized' region of Fig. 3b (see Supplemental Material [43] Section S6); it is thus unlikely that for this and similar systems transiently delocalized excitons may be accessed, even outside equilibrium.

_Conclusions.-_ We have presented a first-principles study of the effect of phonons on the dispersion and radii of excitons in the prototypical molecular crystal pentacene. Zero-point nuclear motion uniformly causes substantial localization of excitons, manifesting as a flattening of the exciton dispersion in reciprocal space. Wannier-Mott-like singlet excitons also exhibit additional temperature-activated localization due to their stronger coupling to low-frequency phonons, with anharmonic effects being critical in capturing this effect and preventing transient exciton delocalization. Anharmonic low-frequency phonons are common in molecular materials [27] and can couple to singlets when these approach the Wannier-Mott limit, in a manner which is in turn determined by the size [10] and packing [56] of the molecular building blocks. Our work lays foundations for a deep understanding and controlled enhancement of exciton transport in molecular crystals, for example by suppressing anharmonicity through chemical modifications [57].

We thank Sivan Refaely-Abramson for useful discussions. This work was primarily supported by the Theory FWP, which provided \(GW\) and \(GW\)-BSE calculations and analysis of phonon effects, and the Center for Computational Study of Excited-State Phenomena in Energy Materials (C2SEPEM), which provided advanced codes, at the Lawrence Berkeley National Laboratory, funded by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, Materials Sciences and Engineering Division, under Contract No. DE-AC02-05CH11231. SS acknowledges funding from the U.S. National Science Foundation (NSF) under grant number DMR-1847774. Computational resources were provided by the National Energy Research Scientific Computing Center (NERSC).
2310.19481
Spatial information allows inference of the prevalence of direct cell-to-cell viral infection
The role of direct cell-to-cell spread in viral infections - where virions spread between host and susceptible cells without needing to be secreted into the extracellular environment - has come to be understood as essential to the dynamics of medically significant viruses like hepatitis C and influenza. Recent work in both the experimental and mathematical modelling literature has attempted to quantify the prevalence of cell-to-cell infection compared to the conventional free virus route using a variety of methods and experimental data. However, estimates are subject to significant uncertainty and moreover rely on data collected by inhibiting one mode of infection by either chemical or physical factors, which may influence the other mode of infection to an extent which is difficult to quantify. In this work, we conduct a simulation-estimation study to probe the practical identifiability of the proportion of cell-to-cell infection, using two standard mathematical models and synthetic data that would likely be realistic to obtain in the laboratory. We show that this quantity cannot be estimated using non-spatial data alone, and that the collection of data which describes the spatial structure of the infection is necessary to infer the proportion of cell-to-cell infection. Our results provide guidance for the design of relevant experiments and mathematical tools for accurately inferring the prevalence of cell-to-cell infection in $\textit{in vitro}$ and $\textit{in vivo}$ contexts.
Thomas Williams, James M. McCaw, James Osborne
2023-10-30T12:09:57Z
http://arxiv.org/abs/2310.19481v3
# Spatial information allows inference of the prevalence of direct cell-to-cell viral infection

###### Abstract

The role of direct cell-to-cell spread in viral infections -- where virions spread between host and susceptible cells without needing to be secreted into the extracellular environment -- has come to be understood as essential to the dynamics of medically significant viruses like hepatitis C and influenza. Recent work in both the experimental and mathematical modelling literature has attempted to quantify the prevalence of cell-to-cell infection compared to the conventional free virus route using a variety of methods and experimental data. However, estimates are subject to significant uncertainty and moreover rely on data collected by inhibiting one mode of infection by either chemical or physical factors. These methods assume that this inhibition process fully eliminates its target mode of infection while exactly preserving the dynamics of the other. In this work, we provide a framework for estimating the prevalence of cell-to-cell infection from data which is experimentally obtainable without the need for additional interventions, and two standard mathematical models for viral dynamics with the two modes of infection. We provide guidance for the design of relevant experiments and mathematical tools for accurately inferring the prevalence of cell-to-cell infection.

**Keywords:** Cell-to-cell viral infection, viral dynamics, Bayesian inference, multicellular model, agent-based model.

## Introduction

Classically, viral infections have been assumed to spread among host cells through a process of viral secretion, diffusion, and reabsorption via the extracellular environment [8, 10]. In reality, however, a huge variety of the most medically important viruses -- including influenza A, herpesviruses, hepatitis C, HIV and SARS-CoV-2 -- have all been observed to also spread between host cells using direct cell-to-cell mechanisms [12, 14, 24]. This mode of infection, which is mechanistically distinct from the conventional cell-free route, permits viruses or viral proteins to be trafficked directly between adjacent cells without ever leaving the cell membrane [12]. This is significant for multiple reasons. For one, the direct cell-to-cell route of infection is orders of magnitude more efficient than the cell-free route [9, 13], and moreover is far better protected from immune or drug defences [13, 19, 24]. Cell-to-cell infection is considered one of the essential strategies of chronic viral infections like hepatitis C and HIV, and elevated cell-to-cell spread has been associated with increased pathogenicity in influenza and SARS-CoV-2 infections [13, 27]. Understanding the role and prevalence of cell-to-cell infection in different viral species is therefore of profound importance in therapeutic applications. Recent works, both from the modelling and experimental literature, have made efforts to identify the prevalence of the cell-to-cell mode of viral infection. In hepatitis C, modelling efforts led by Graw and Durso-Cain suggested that cell-free infection events were rare, yet worked synergistically with the cell-to-cell infection strategy to rapidly accelerate the overall rate of infection spread [5, 9]. Blahut and coworkers used modelling to quantify the proportion of the two modes of spread using _in vitro_ experimental data, and claimed that as much as 99% of the infection events observed were due to cell-to-cell infection [2].
Experimental work by Kongsomros and colleagues suggested that the proportion of cell-to-cell infections in influenza was very low, but elevated in more pathogenic strains of the virus [13]. By contrast, experimental work in SARS-CoV-2 by Zeng and collaborators claimed that cell-to-cell infection represented around 90% of infections [27]. These estimates in the literature for cell-to-cell prevalence among different viruses are in sparse supply, and subject to significant uncertainty. Moreover, to our knowledge, all share a common limitation, being that they rely on experiments which knock out only one of the modes of viral spread, usually cell-free infection, and leave the other infection mechanism untouched [2, 13, 27]. This is usually implemented by conducting infection assays in the presence of an antiviral such as oseltamivir for influenza, or by imposing a physical barrier to viral diffusion, such as treating the cell sheet with methylcellulose [13, 27]. However, it is not known whether these additional experimental controls increase or decrease the productivity of the cell-to-cell infection route. To our knowledge, only one publication in the literature has attempted to infer the balance of the two modes of infection spread from data where both mechanisms are unimpeded [15]. This work, by Kumberger and colleagues, demonstrates modifications that can be made to a standard ordinary differential equation (ODE) model of viral dynamics in order to better describe cell-to-cell infection, but was nonetheless unable to satisfactorily infer the prevalence of cell-to-cell infection from synthetic data where the two modes of infection occurred simultaneously, and moreover did not examine whether estimates of this quantity were improved or weakened when the true balance of the two mechanisms was changed [15]. The limits of identifiability of the proportion of cell-to-cell infection -- under different conditions, using different models, and based on different sources of observational data -- have not been systematically studied. Here, we conduct simulation estimation studies using two mathematical models for viral infections with two modes of spread: one non-spatial ODE system and one spatially-explicit multicellular model. In both cases, we generate synthetic data using the model in combination with an observational model, and attempt to re-estimate the prevalence of cell-to-cell infection from the resulting observations. We repeat this process under a range of conditions and with different types of available data for fitting. Our results provide an important background for the identifiability of the cell-to-cell infection prevalence, and offer guidance for the design of models and experimental systems best equipped to learn this quantity. In this work we take particular inspiration from the work of Kongsomros and colleagues [13]. In their work, the authors conduct a series of experiments where "donor" cells infected with influenza are added to a well of "recipient" cells, labelled with a membrane dye, and infection is allowed to spread under a given set of experimental conditions. At various times, wells are harvested and fixed, then stained with a fluorescent anti-viral-NP antibody to identify the infected recipient cell population. In the present work, we will take the fluorescent cell proportion, following the construction given here, as our primary source of observational data. We provide further discussion of our choice of data source in Discussion.
## Results In the presence of observational noise, the prevalence of cell-to-cell infection spread cannot be determined from fluorescence time series data alone We sought to investigate whether an ODE model incorporating both cell-free viral infection and cell-to-cell infection, could be used to infer the balance of the two modes of spread, given a time series of observations of the fluorescent proportion of the cell population as in Kongosmros _et al._[13]. We exhibit the basic properties of the ODE model in Figure 1 (the model is fully described in Methods "An ODE model for dual-spread dynamics"). Figure 1a shows the basic structure of the model and the parameters governing the model. We apply a standard target cell-limited model framework with a latent compartment and two modes of infection. That is, initially susceptible cells may become infected either through cell-to-cell infection -- at a rate proportional to the infected proportion of the cell population -- or through infection by cell-free virus -- at a rate proportional to the quantity of extracellular virus in the system. Once initially infected, cells enter the first of \(K\) eclipse stages (such that the duration of the eclipse stage is gamma-distributed, instead of exponentially-distributed, see [6, 21]), before becoming productively infected, at which stage they begin producing extracellular virus. Productively infected cells then die. We assume that cells become detectably fluorescent once they become productively infected, but that they remain fluorescent after death over the time scale of simulations, as observed in Kongsomros _et al._[13]. Throughout this work we will take the majority of the model parameters to be fixed (which is discussed in Section Methods "An ODE model for dual-spread dynamics"), aside from the two parameters governing the rates of cell-to-cell and cell-free infection, \(\alpha\) and \(\beta\), respectively. Figure 0(b) shows the dynamics of the infected cell proportion over time using the ODE model with a range of \(\alpha\) and \(\beta\) values (throughout this work, \(\alpha\) and \(\beta\) have units of \(\mathrm{h}^{-1}\) and \(\mathrm{(TCID}_{50}/\mathrm{ml)}^{-1}\mathrm{h}^{-1}\), respectively). We can quantify and describe the overall rate of infection progression by the exponential growth rate \(r\) (units of \(\mathrm{h}^{-1}\)). This quantity, well established in the theory of both between-host and within-host infection dynamics, describes the initial rate of exponential expansion of the infected (or fluorescent) population [3, 17]. For further details refer to Methods "Exponential growth rate - \(r\)". We applied simulation estimation techniques to investigate whether \(\alpha\) and \(\beta\) could be inferred from the fluorescent cell time series of the model. We first selected three sets of \((\alpha,\beta)\) pairs resulting in different proportions of infections arising from each mechanism. Specifically, if we label the final fraction of infections arising from the cell-to-cell route as \(P_{\mathrm{CC}}\), we construct lookup tables on \(\alpha\)-\(\beta\) space for this quantity, and use this to compute \((\alpha,\beta)\) pairs corresponding to \(P_{\mathrm{CC}}\) values of approximately 0.1, 0.5, and 0.9, with a fixed exponential growth rate \(r\) of 0.52 in each case to ensure the overall Figure 1: (a) Schematic of the ODE model. 
We applied simulation estimation techniques to investigate whether \(\alpha\) and \(\beta\) could be inferred from the fluorescent cell time series of the model. We first selected three sets of \((\alpha,\beta)\) pairs resulting in different proportions of infections arising from each mechanism. Specifically, if we label the final fraction of infections arising from the cell-to-cell route as \(P_{\mathrm{CC}}\), we construct lookup tables on \(\alpha\)-\(\beta\) space for this quantity, and use this to compute \((\alpha,\beta)\) pairs corresponding to \(P_{\mathrm{CC}}\) values of approximately 0.1, 0.5, and 0.9, with a fixed exponential growth rate \(r\) of 0.52 in each case to ensure the overall dynamics progressed at a comparable rate.

Figure 1: (a) Schematic of the ODE model. (b) Proportion of infected cells over time as predicted by the ODE model for an array of values of \(\alpha\) and \(\beta\) between zero and 2.5 and \(2\times 10^{-6}\), respectively. The parameter values sampled to generate the plot are shown in the inset. (c) Calculation of \(P_{\mathrm{CC}}\). We keep track of the proportion of the cell population which has been infected by the cell-to-cell (CC) and cell-free (CF) infection over the course of infection. We define \(P_{\mathrm{CC}}\) as the proportion of infections arising from the CC route at long time. (d) \(P_{\mathrm{CC}}\) contour map on \(\alpha\)–\(\beta\) space for the dual-spread ODE model. \(\alpha\) and \(\beta\) have units of \(\mathrm{h}^{-1}\) and \((\mathrm{TCID}_{50}/\mathrm{ml})^{-1}\mathrm{h}^{-1}\), respectively.

We show a graphic of the computation of \(P_{\text{CC}}\) in Figure 1c, and a contour map on \(\alpha\)-\(\beta\) space for the ODE model in Figure 1d. For further details on \(P_{\text{CC}}\), refer to Methods "Proportion of infections from the cell-to-cell route - \(P_{\text{CC}}\)". For each of the specified values of \((\alpha,\beta)\), we simulated the ODE model and, following Kongsomros and colleagues, we computed the fluorescent cell proportion \(F(t)\) -- that is, the cumulative proportion of the initially susceptible population that has become infected -- at \(\mathbf{t}=\{3,6,9,...,30\}\)h [13]. We then applied an observational model to this data to simulate the experimental process, by assuming a cell population size \(N_{\text{sample}}\), and overdispersed noise modelled by a negative binomial distribution. We take \(N_{\text{sample}}=2\times 10^{5}\) as in [13] and set the dispersion parameter \(\phi=10^{2}\), selected to impose a modest amount of noise on our observations, leading to the observed data vector \(\mathbf{D}\). We specify the observation model in full in Methods "Simulation estimation", and explore the role of observational noise in more detail in Supplementary Section 1.
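The sketch below illustrates the observational model just described, assuming the common mean-dispersion parameterization of the negative binomial in which a count with mean \(m\) and dispersion \(\phi\) has variance \(m+m^{2}/\phi\); the exact parameterization used in our fitting is specified in Methods.

```python
import numpy as np

def observe_fluorescence(F_true, n_sample=2e5, phi=1e2, seed=0):
    """Apply observational noise to a noise-free fluorescent proportion
    time series F_true (values in [0, 1]). Counts of fluorescent cells
    out of n_sample cells are drawn from a negative binomial with mean
    m = n_sample * F and dispersion phi, i.e. variance m + m^2/phi.
    Illustrative sketch; exact parameterization is given in Methods."""
    rng = np.random.default_rng(seed)
    m = n_sample * np.asarray(F_true)
    p = phi / (phi + m)  # mapping to numpy's (n, p) parameterization
    counts = rng.negative_binomial(phi, p)
    return counts / n_sample  # observed fluorescent proportions D

# Example: noise applied to a toy logistic fluorescence curve
t = np.arange(3, 31, 3)
F_true = 1.0 / (1.0 + np.exp(-0.52 * (t - 15)))
print(observe_fluorescence(F_true))
```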
Having obtained our observed data \(\mathbf{D}\), we run a No U-Turn Sampling (NUTS) Markov Chain Monte Carlo (MCMC) algorithm [23] to obtain posterior density estimates for \(\alpha\) and \(\beta\). For each \((\alpha,\beta)\) pair to estimate, we run ten replicates -- that is, we repeat the process of applying observational noise to the true fluorescence data and re-estimating \(\alpha\) and \(\beta\) ten times -- and for each replicate we use four independent and randomly seeded chains. We assume uniform priors for \(\alpha\) and \(\beta\) on \([0,5]\,\text{h}^{-1}\) and \(\big{[}0,1\times 10^{-5}\big{]}\left(\text{TCID}_{50}/\text{ml}\right)^{-1}\text{h}^{-1}\) respectively, and assume a negative binomial likelihood. Further details of this simulation estimation process are specified in Methods "Simulation estimation". In Figure 2, we show the results of this fitting process. In Figures 2a-2c, we show posterior samples in \((\alpha,\beta)\) space from each chain of a single replicate fit. We do so for each of the three target parameter pairs. As a visual aid, we also plot the \((\alpha,\beta)\) contours corresponding to the true \(P_{\text{CC}}\) value and the true \(r\) value in each case. These plots confirm that the chains are indeed well-mixed; however, they also show that the posterior samples for each pair of target parameters are spread out along the true \(r\) contour. While some samples are close to the target parameter pair, the chains do not appear to converge at this point. This is confirmed in Figure 2d. In Figure 2d, for each target parameter pair, we show violin plots of the posterior distributions of \(P_{\text{CC}}\) and \(r\) for four replicate fits, along with a box plot of the posterior medians across all ten replicates. We also show the prior density of both of these quantities in grey. Figure 2d shows that while \(r\) is well estimated compared to its prior distribution -- regardless of the choice of target parameters -- \(P_{\text{CC}}\) cannot be identified. Whilst, at least for the case where the target \(P_{\text{CC}}=0.1\), the distribution of posterior medians can be somewhat accurate, posterior distributions from individual replicates are frequently far from the true value. Importantly, many of these posterior distributions are reasonably _confident_, yet also _wrong_, for instance, Replicate 4 for the case where the target \(P_{\text{CC}}=0.5\). Overall, this experiment indicates that, when even a modest degree of observational noise is applied to the fluorescence data, only the exponential growth rate \(r\) can be accurately estimated: the proportion of infections arising from each mode of spread is lost in the observational process. We investigated the role of the level of observational noise in determining the quality of estimates of \(P_{\text{CC}}\) and \(r\) using the ODE model (for full details, see Supplementary Section 1). We found that for higher values of the dispersion parameter \(\phi\) than we show here (that is, with less observational noise), estimates of \(P_{\text{CC}}\) were overall closer to the true value; however, the distribution of estimate medians still showed non-negligible variance, even when virtually all observational noise was removed. Subject to a higher level of observational noise, estimates of \(P_{\text{CC}}\) were almost entirely random. We show these results in full in Supplementary Figure SI 1.

Figure 2: (a)–(c) MCMC chains in \(\alpha\)–\(\beta\) space for a fit to fluorescence data with typical observational noise where the true \(P_{\rm CC}\approx 0.1\), \(0.5\), \(0.9\) and the value of \(r\) is held fixed. (d) Prior density and posterior densities from individual replicates for \(r\) and \(P_{\rm CC}\), both with typical observational noise. We repeat this for three sets of parameters resulting in \(P_{\rm CC}\) values of \(0.1\), \(0.5\) and \(0.9\) with a fixed \(r\) value. Dashed and solid horizontal lines mark the mean and median values respectively. We also show a box plot of the distribution of posterior medians across all replicates. There are ten replicates in total at each value of \(P_{\rm CC}\), of which we display four. \(\alpha\) and \(\beta\) have units of \(\rm h^{-1}\) and \(\rm(TCID_{50}/ml)^{-1}h^{-1}\), respectively.

### Using a spatial model with spatial data, the balance of the modes of infection spread can be accurately inferred

We sought to apply a similar simulation estimation procedure to a spatially-structured model of infection, to investigate whether a model capable of describing the actual structure of infection would provide better estimates of the proportion of each infection mechanism. We constructed an agent-based spatial model with an equivalent structure to the ODE model used in the previous result, where transitions between compartments of the model are replaced by probabilities of discrete cells, occupying specific positions in space, changing between states analogous to those in the ODE model. The notable difference in this construction is that while we still model cell-free infection based on a _global_ extracellular viral reservoir, we now model cell-to-cell infection as a spatially _local_ process. Specifically, we assume that the probability of cell-to-cell infection of a given cell is based on the infected proportion of its neighbours, instead of the global infected cell population as in the ODE model. This reflects the assumption -- based on current biological understanding -- that cell-free virions spread rapidly over the size of tissue we seek to model, whereas cell-to-cell infection is possible only between adjacent cells [14]. This process is illustrated in Figure 3a.
Figure 3a shows a schematic of the spatial model, and illustrates the alternate formulation of the cell-to-cell infection mode. Note that, as illustrated in the schematic, cells are packed in a hexagonal lattice, which reflects the biological reality of epithelial monolayers and moreover ensures that adjacency between cells is well-defined. Full details of the spatial model can be found in Methods "A multicellular spatial model for dual-spread dynamics". In addition to the fluorescent proportion metric we introduced in the previous result, we developed an additional metric for the spatial model to describe the extent to which infected cells were clustered together. This metric, which we term \(\kappa(t)\), describes the mean proportion of neighbours of the fluorescent cells which are also fluorescent at time \(t\). In Figure 3c we show a schematic which illustrates the computation of the fluorescent neighbour fraction at a number of fluorescent cells in a cell sheet. We define \(\kappa(t)\) explicitly in Methods "Clustering metric - \(\kappa(t)\)". \(\kappa(t)\) has the property that when it is large, fluorescent cells tend to be clustered together and the infection is highly localised, whereas if it is small, the infection is diffuse.
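A minimal sketch of computing \(\kappa(t)\) from a snapshot of the cell grid is shown below, assuming a periodic hexagonal lattice in axial coordinates; the optional `sample_size` argument anticipates the coarse approximation \(\kappa_{S}(t)\) used later, under one plausible reading of the subsampling scheme.

```python
import numpy as np

def hex_neighbours(rows, cols):
    """Neighbour index lists for an axial-coordinate hexagonal lattice.
    A simple periodic construction for illustration."""
    offsets = [(0, -1), (0, 1), (-1, 0), (1, 0), (-1, 1), (1, -1)]
    return [[((r + dr) % rows) * cols + (c + dc) % cols
             for dr, dc in offsets]
            for r in range(rows) for c in range(cols)]

def kappa(fluor, nbrs, sample_size=None, rng=None):
    """Clustering metric: mean proportion of neighbours of fluorescent
    cells which are also fluorescent. If sample_size S is given, compute
    an approximation kappa_S(t) from a random subset of cells."""
    fluor = np.asarray(fluor, dtype=bool)
    idx = np.flatnonzero(fluor)
    if sample_size is not None and rng is not None:
        pool = rng.choice(fluor.size, size=sample_size, replace=False)
        idx = pool[fluor[pool]]  # fluorescent cells within the sample
    if idx.size == 0:
        return np.nan
    return float(np.mean([np.mean(fluor[nbrs[i]]) for i in idx]))

# Example on a 50 x 50 grid with a single circular infected plaque
rows = cols = 50
nbrs = hex_neighbours(rows, cols)
r, c = np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij")
fluor = ((r - 25) ** 2 + (c - 25) ** 2 < 60).ravel()
print(kappa(fluor, nbrs))  # high for a tightly clustered infection
```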
In Figures 3d-3g, we demonstrate the behaviour of the spatial model under three \((\alpha,\beta)\) parameter pairs, chosen to result in a \(P_{\text{CC}}\) of approximately 0.1, 0.5, and 0.9, and to reach a peak infected cell fraction at approximately 18h. In Figure 3d, we visualise a section of the cell grid at a series of time points. We do so by assigning a unique index \(j=\{1,2,...,N_{\text{init}}\}\) to each of the \(N_{\text{init}}\) initially infected cells and the extracellular virus they produce. Then, every time a susceptible cell is marked for infection during a simulation, we compute the probability that it was caused by each of the \(N_{\text{init}}\) viral lineages, and determine the lineage assigned to that cell. Infected cells are then coloured by their lineage. Once a cell dies, we change its colour to black. This construction allows us to visualise the spread of infection in space. Figure 3d shows that when cell-to-cell infection dominates, infection plaques are tightly clustered and infected cells of the same lineage tend to be found closer together. When cell-free infection dominates, there is no particular structure to the colouring of the cell sheet. In Figures 3e-3g, we show time series for the spatial model under the same three parameter schemes as discussed above: the proportion of the cell population which is infected over time, the fluorescent cell curve as discussed in the previous section, and the clustering metric \(\kappa(t)\). These time series indicate that even though the different parameter regimes lead to vastly differently-structured infections -- as can be seen in Figure 3d -- their infected and fluorescent cell count dynamics as a time series are relatively similar, although there is some variation in the initial uptick of infection in the case where \(P_{\text{CC}}\) is large. By contrast, the time series for \(\kappa(t)\) shows substantial variation between the parameter values corresponding to low, roughly equal and high values of \(P_{\text{CC}}\). Since in the spatial model cell-to-cell infection is constrained to act locally, infections that spread mainly through cell-to-cell infection are forced to spread radially. The size of the resulting infected cell population, therefore, grows in a non-exponential manner. For this reason, the exponential growth rate \(r\) is not well-defined in the case of the spatial model. As an alternative metric of the rate of growth of the infected cell population, we simply use the time of the peak infected cell population, which we label as \(t_{\text{peak}}\). Since this, like \(P_{\text{CC}}\), cannot be well-estimated _a priori_, we again resort to computing a lookup table of mean \(t_{\text{peak}}\) values on \(\alpha\)-\(\beta\) space. For full details on the construction of these lookup tables and their corresponding surface plots, refer to Methods "Proportion of infections from the cell-to-cell route - \(P_{\text{CC}}\)" and Figures 6a-6b. We computed \((\alpha,\beta)\) pairs for the spatial model which result in \(P_{\text{CC}}\) values of approximately 0.1, 0.5, and 0.9 and a common value of \(t_{\text{peak}}\) of approximately 18h, analogous to the values selected for the ODE model in our previous fitting experiment. For each of these parameter pairs, we ran simulations of the spatial model and reported the fluorescent proportion of the susceptible cells as well as the clustering metric \(\kappa(t)\) at times \(\mathbf{t}=\{3,6,9,...,30\}\)h, one time point per simulation. This reflects the destructive experimental observation process. We provide full details of the observational model in Methods "Simulation estimation". The resulting observations collectively form our observed data vectors \(\mathbf{D}^{\text{\tiny{fluoro}}}\) and \(\mathbf{D}^{\text{\tiny{cluster}}}\). We then used Population Monte Carlo (PMC) methods to re-estimate \(\alpha\) and \(\beta\) (full details in Methods "Simulation estimation") given this synthetic observational data.
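The following sketch shows the generic shape of an ABC-flavoured population Monte Carlo loop of the kind commonly paired with stochastic simulators; the `simulate` and `distance` functions, the tolerance schedule, and the simplified (uniform) reweighting are placeholders, not a transcription of the algorithm specified in Methods.

```python
import numpy as np

def abc_pmc(simulate, distance, data, prior_bounds, n_pop=200,
            eps_schedule=(0.5, 0.3, 0.2), seed=0):
    """Generic ABC population Monte Carlo: propagate a particle population
    through a decreasing tolerance schedule. `simulate` maps a parameter
    vector theta = (alpha, beta) to synthetic observations."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(prior_bounds).T
    # Generation 0: rejection sampling from the uniform prior
    thetas = []
    while len(thetas) < n_pop:
        th = rng.uniform(lo, hi)
        if distance(simulate(th, rng), data) < eps_schedule[0]:
            thetas.append(th)
    thetas = np.array(thetas)
    weights = np.ones(n_pop) / n_pop
    for eps in eps_schedule[1:]:
        cov = 2.0 * np.cov(thetas.T, aweights=weights)  # perturbation kernel
        new_thetas = []
        while len(new_thetas) < n_pop:
            th = thetas[rng.choice(n_pop, p=weights)]
            th_new = rng.multivariate_normal(th, cov)
            if np.any(th_new < lo) or np.any(th_new > hi):
                continue
            if distance(simulate(th_new, rng), data) < eps:
                new_thetas.append(th_new)
        thetas = np.array(new_thetas)
        # Uniform weights for simplicity; a full PMC scheme reweights
        # each particle by the prior/kernel density ratio.
        weights = np.ones(n_pop) / n_pop
    return thetas
```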
For each of the three target \((\alpha,\beta)\) pairs, we ran four replicates of the data generation and fitting process. We show the results of this experiment in Figure 4. Figure 4, which follows a similar layout to Figure 2, shows that with the addition of clustering metric data, \(P_{\text{CC}}\) can now be robustly inferred using the spatial model. In Figures 4a-4c, we plot scatter plots of the final accepted posterior samples for \(\alpha\) and \(\beta\) in \(\alpha\)-\(\beta\) space for the three target parameter pairs, resulting in \(P_{\text{CC}}\approx 0.1,\ 0.5,\ 0.9\). These plots show posterior density distributed compactly around the true values of \((\alpha,\beta)\), instead of being dispersed along a \(t_{\text{peak}}\) contour as in the previous simulation estimation.

Figure 3: (a) Schematic of the spatial model. The model follows the same structure as the ODE model with the exception that cell-to-cell infection is based on the proportion of a cell's neighbours which are infected. (b) Cartoon of the calculation of infected neighbour proportion. (c) \(\kappa(t)\) is our clustering metric, computed as the mean proportion of neighbours of the fluorescent cells which are also fluorescent. (d) Typical time evolution of the cell grid using the spatial model under three \(\alpha\)–\(\beta\) combinations, resulting in \(P_{\text{CC}}\) values of approximately 0.1, 0.5, and 0.9. Parameters were chosen such that the peak infected cell population is reached at approximately the same time in each instance. Initially infected cells are flagged with a unique colour and infections resulting from that lineage of cells are assigned the same colour. Target cells are marked in grey and dead cells in black. (e), (f), (g) Proportion of cell sheet infected, proportion of susceptible cells which are fluorescent over time, and the clustering metric \(\kappa(t)\) respectively. We show eight simulations for each of the \(\alpha\)–\(\beta\) parameter pairs described above. \(\alpha\) and \(\beta\) have units of \(\text{h}^{-1}\) and \((\text{TCID}_{50}/\text{ml})^{-1}\text{h}^{-1}\), respectively.

In Figure 4d, we show the weighted posterior distributions of \(P_{\text{\tiny CC}}\) and \(t_{\text{\tiny peak}}\) for individual replicates along with the distribution of weighted posterior means across replicates. As before, \(t_{\text{\tiny peak}}\) is still extremely well estimated in each case; however, now the posterior distributions for \(P_{\text{\tiny CC}}\) are also very accurate to the true value. Moreover, the posterior distributions for individual replicates are concentrated on the true values of \(P_{\text{\tiny CC}}\) with only modest confidence intervals, and the distributions of weighted mean estimates across replicates are extremely precise to the true values, meaning that carrying out inference with only a single data stream (as opposed to aggregating across multiple observations) was sufficient to estimate both \(P_{\text{\tiny CC}}\) and \(t_{\text{\tiny peak}}\). This was not the case with the ODE model. We note that estimates for \(P_{\text{\tiny CC}}\) are especially sharp when the true value of \(P_{\text{\tiny CC}}\) is higher, suggesting that the dynamics in this high cell-to-cell scheme are particularly distinguishable. To test whether our results were dependent on the inclusion of the secondary data source, the clustering metric \(\kappa(t)\), we performed another set of simulation estimations using the same methods as above, this time using only the fluorescence data (full details in Supplementary Section 3). We show the results of this fitting experiment in Supplementary Figure 2. This figure shows that, without the use of the clustering metric, estimates for \(P_{\text{\tiny CC}}\) are again very poor, while estimates for \(t_{\text{\tiny peak}}\) remain reasonably precise. This result, which mirrors what we observed with the ODE model, suggests that fluorescence data alone is not sufficient to imply the balance of the two modes of viral spread even for the spatial model. We provide more discussion of this in Supplementary Section 3.
### Inference on the prevalence of cell-to-cell infection is robust to smaller samples of the cell sheet

The clustering metric \(\kappa(t)\), as we have defined it, relies on sampling every fluorescent cell in the tissue at each observation time and calculating the proportion of its neighbours which are also fluorescent. However, in an experimental setting, it may be impractical if not impossible to observe the fluorescent state of every cell in the target population, especially _in vivo_. We sought to investigate whether approximations of \(\kappa(t)\) generated by sampling from subsets of the cell population would be sufficient to allow \(\alpha\) and \(\beta\) -- and therefore \(P_{\text{\tiny CC}}\) -- to be inferred. We did so by carrying out simulation estimations as in the previous result, but where the clustering metric is now approximated by \(\kappa_{S}(t)\), which is computed by randomly sampling \(S\) cells instead of sampling the entire grid. Full details of this adjusted simulation estimation process are given in Methods "Clustering metric - \(\kappa(t)\)". To test the influence of the sample size \(S\) on estimation of \(P_{\text{\tiny CC}}\), we performed a series of simulation estimations on the spatial model using both fluorescence and approximate clustering data for varying sample sizes and target values of \(P_{\text{\tiny CC}}\). These simulation estimations were conducted using the same methods as in the previous result (summarised in Figure 4). We show the results of these simulation estimations in Figure 5. Here we plot, as in previous figures, weighted posterior distributions for \(P_{\text{\tiny CC}}\) for each combination of target parameters and sample size, as well as box plots of the posterior weighted means across replicates in each case. Estimates for \(t_{\text{\tiny peak}}\) are again very precise across all replicates, as is shown in Supplementary Section 4. Figure 5 shows that as the size of the sample becomes smaller and the approximation of \(\kappa(t)\) becomes coarser, posterior distributions for \(P_{\text{\tiny CC}}\) become wider and less confident; however, the centre of these distributions is still accurate, as can be seen in the box plots of posterior weighted means, which remain very compact and close to the true value of \(P_{\text{\tiny CC}}\). This is true even for the smallest sample sizes and for any target value of \(P_{\text{\tiny CC}}\). We see that increasing noise due to a reduction in sample size when approximating \(\kappa(t)\) does not result in biased estimates of \(P_{\text{\tiny CC}}\), but merely a reduction of confidence. By contrast, as we mentioned in the previous result and Supplementary Section 1, while an increase in observational noise did lead to an increase in posterior distribution width, it also resulted in individual replicates where \(P_{\text{\tiny CC}}\) estimates were found in reasonably tight, inaccurate distributions. Finally, we also note that, as seen in the previous result, estimation of \(P_{\text{\tiny CC}}\) is far more precise in the case where the target value is higher. Even with the coarsest approximation of the clustering metric, the algorithm correctly identified \(P_{\text{\tiny CC}}\) in this case with a high degree of precision.
This suggests both that high \(P_{\text{\tiny CC}}\) dynamics of the spatial model are particularly distinctive -- at least as far as the fluorescence and clustering time series are concerned -- but also that only tiny samples of the cell sheet need to be measured in order to precisely infer the value of \(P_{\text{\tiny CC}}\) in this case.

Figure 4: (a)–(c) Posterior density as a contour plot in \(\alpha\)–\(\beta\) space for a fit to fluorescence and clustering data where the true \(P_{\rm CC}\approx 0.1\), \(0.5\), \(0.9\) and the infected cell peak time is held fixed at approximately \(18\)h. We only show densities above a threshold value of \(10^{-4}\). (d) Prior density and posterior densities from individual replicates for infected peak time (\(t_{\rm peak}\)) and \(P_{\rm CC}\) with target parameters as specified in (a)–(c). Dashed and solid horizontal lines mark the weighted mean and median values respectively. We also show a box plot of the distribution of posterior weighted means across all four replicates in each case. The replicates in bold are those plotted in (a)–(c). \(\alpha\) and \(\beta\) have units of \(\rm h^{-1}\) and \(\rm(TCID_{50}/ml)^{-1}h^{-1}\), respectively.

## Discussion

In this work we have conducted a number of experiments to investigate the use of mathematical models in inferring the relative proportions of cell-to-cell and cell-free viral infection, which we summarised via the metric \(P_{\rm CC}\): the proportion of infections arising from the cell-to-cell route. We have applied simulation estimation techniques using Bayesian methods for inference on both an ODE model and a spatially-explicit multicellular model. As much as possible, we aimed to emulate the type and quality of data available experimentally. In particular, we extracted and attempted to fit time series data on the proportion of fluorescent susceptible cells (that is, initially susceptible cells which have reached, or passed, the productively infected state), following experimental work by Kongsomros and colleagues [13]. We found that this data source was insufficient for inferring \(P_{\mathrm{CC}}\) from simulation estimation after observational noise was applied, even when all model parameters aside from those governing the rates of cell-to-cell and cell-free infection were assumed known. This was true for both the ODE and spatial models. By contrast, from the same experiments, _global_ metrics of the infection dynamics were very robustly inferred (the exponential growth rate \(r\) for the ODE model, and the time of peak infected cell population \(t_{\mathrm{peak}}\) in the spatial case). This indicates that \(P_{\mathrm{CC}}\) values can be interchanged while preserving the fluorescent proportion curve -- at least as precisely as can be estimated once observational noise is applied -- provided \(r\) or \(t_{\mathrm{peak}}\) are held fixed. This suggests that for both the ODE and spatial models, \(P_{\mathrm{CC}}\) cannot be inferred based on fluorescence data alone. The slight caveat to this claim was our observation that \(P_{\mathrm{CC}}\) was somewhat well estimated by the spatial model when the true proportion of cell-to-cell infections was high. This was due to the fact that in the spatial model, cell-to-cell infection is forced to spread radially, while cell-free infection is free to spread globally (causing the infected population to grow asymptotically exponentially).
Therefore in instances where the global route of infection is almost entirely eliminated, the fluorescent population is forced to grow in a non-exponential manner, which was more easily detected by our inference methods. We were able to overcome the inability to infer \(P_{\mathrm{CC}}\) by adding a second set of observational data alongside the fluorescent proportion time series. We did so by introducing a clustering metric \(\kappa(t)\), which, given the state of the cell grid in a simulation of the spatial model, measures the mean fraction of fluorescent cells neighbouring each fluorescent cell. Note that since \(\kappa(t)\) relies on knowledge of the actual spatial configuration of infection, it is only possible to construct such a metric for a spatially-structured model.

Figure 5: Prior density and posterior densities from individual replicates for \(P_{\mathrm{CC}}\) for different values of \(S\), the number of cells sampled to calculate the approximation \(\kappa_{S}(t)\) in fitting. Dashed and solid horizontal lines mark the weighted mean and median values respectively. For each value of \(S\) we also show a boxplot of the distribution of posterior weighted means across all four replicates. We show results for the case where the target values of \(\alpha\) and \(\beta\) give rise to \(P_{\mathrm{CC}}\) values of approximately 0.1, 0.5, and 0.9 and \(t_{\mathrm{peak}}\) of approximately 18h. \(\alpha\) and \(\beta\) have units of \(\mathrm{h}^{-1}\) and \((\mathrm{TCID}_{50}/\mathrm{ml})^{-1}\mathrm{h}^{-1}\), respectively.

We re-ran simulation estimations on the spatial model, using time series for both the fluorescent cell proportion and \(\kappa(t)\) as the observational data, and found that \(P_{\mathrm{CC}}\) was very well estimated in this case regardless of the target value of \(P_{\mathrm{CC}}\); however, estimates were especially precise when \(P_{\mathrm{CC}}\) was high. We also found that \(P_{\mathrm{CC}}\) could still be reliably inferred using the spatial model when the clustering metric \(\kappa(t)\) was only coarsely approximated, using a random subset of the cell population. Even at the coarsest approximation we tested -- where \(\kappa(t)\) was approximated using a sample of only 50 cells -- inference of \(P_{\mathrm{CC}}\) was still reasonably robust, and dramatically improved compared to the case where \(\kappa(t)\) was not used at all. These results suggest that even a very rough measure of the spatial distribution of infection is sufficient to deduce the \(P_{\mathrm{CC}}\) of the underlying system. One of the limitations to the analysis which we have presented here is the fact that our simulation estimations have only attempted to fit the parameters governing the rates of infection (that is, \(\alpha\) and \(\beta\)), and assumed perfect prior knowledge of all other model parameters. This prior knowledge is not available when fitting to actual experimental data. There are additional identifiability concerns attached to estimating the other parameters -- the cell-free infection rate \(\beta\) and extracellular viral production rate \(p\), for instance, are well known to only be determined as a product [11, 20] -- and it is possible that estimating these additional parameters may introduce further complications in determining \(P_{\mathrm{CC}}\). For the sake of simplicity, as well as computational complexity, we have not carried out this analysis in this work.
Another important simplification in our approach was our implementation of a global extracellular virus population in the spatial model, rather than a spatially-explicit, diffusing viral population. This approach, which was also employed by Blahut and colleagues in their dual-spread model of hepatitis C [2], implicitly assumes that extracellular viral transport over the cell grid is fast relative to the length scale of the cell sheet. This is fairly easily justified here due to the small grid of cells used in our implementation of the spatial model, but is also a standard assumption about the rate of viral diffusion _in vivo_ or in permissive media [8, 10, 11, 15]. It is also worth briefly remarking on the computational costs associated with parameter estimation using these models. While the ODE model was very efficient to use, inference on the spatial model was extremely computationally intensive. The computation behind Figure 5, for instance, which comprises 60 individual simulation estimations, took approximately 13 weeks to complete, with a single typical replicate taking around 24 hours (running in parallel across eight CPUs (Intel Xeon CPU E5-2683 v4)), while our 150 ODE fits finished in ten days running on four CPUs (AMD EPYC 7702). This is despite using a small \(50\times 50\) grid of cells for the spatial model and only fitting two parameters. The extremely high computational cost associated with these parameter estimations is largely due to the stochastic nature of the spatial model, meaning that many candidate parameter samples which are very close to the true values are randomly rejected. This effect is exacerbated when the noise associated with the model is increased, specifically, when the approximation of \(\kappa(t)\) is especially coarse. While recent works in the literature have demonstrated rapid advancements in the speed of simulations by running on Graphics Processing Units [6] (our code, by contrast, is written in the comparatively slow MATLAB and run on CPUs), the computational costs associated with computing large-scale parameter estimations using the spatial model are not insignificant. We opted to use fluorescence data as the main data source for fitting, instead of extracellular viral titre data, which is more typically reported in the experimental virology literature. This is mainly because our work was guided by the results published by Kongsomros and colleagues [13], which reports fluorescent cell proportions as its main metric, but also since we were interested in analysing infection scenarios ranging from the extremes of purely cell-free to purely cell-to-cell, and cell fluorescence data is more relevant to predominantly cell-to-cell infections where cell-free virus has little influence on the dynamics. Furthermore, viral titre observations, as opposed to cell-based observations, do not permit the collection of spatial information. Our work is not the first in the literature to attempt to quantify the relative roles of cell-free and cell-to-cell infection routes. A number of mathematical modelling publications [2, 5, 9, 15], along with experimental works [13, 27], have applied varying models and methods to determine the prevalence of cell-to-cell infection.
A common theme among the majority of these works [2, 5, 10, 13, 27] is the use of data collected from infections where one mode of infection is inhibited, usually by administering an antiviral drug such as oseltamivir for influenza [13], or by imposing a physical constraint on viral diffusion such as methylcellulose [27]. This approach is of limited relevance outside _in vitro_ contexts and moreover relies on the extremely strong assumption that these interventions perfectly inhibit one infection mechanism while leaving the other unaltered. The alternative approach -- collecting data from experiments in which both modes of infection are unimpeded -- raises additional challenges, but is more robust and, since it requires less invasive experimental intervention, dramatically widens the scope of experiments able to be used for inference. Kumberger and collaborators provided one early attempt at inferring the prevalence of cell-to-cell infection from this type of data. They used a spatial model with two modes of infection to generate synthetic global observational data (similar to the fluorescence data we have used here) and attempted to fit it using ODE models [15]. As we have found here, their work suggested that models which (artificially) account for the spatial structure of infection provided better estimates of the prevalence of cell-to-cell spread \(P_{\text{\tiny CC}}\). However, even then, these estimates were still not especially accurate and were subject to systematic biases, even when fitting multiple observational datasets in a single fit. Our work provides context for these findings, offers novel insight into the identifiability of \(P_{\text{\tiny CC}}\), and suggests an improved method for determining this quantity. We showed that ODE systems were unable to identify \(P_{\text{\tiny CC}}\), even when fitting data generated by the system itself, and moreover showed that the collection of spatial information, in the form of the clustering metric \(\kappa(t)\), was necessary to learn \(P_{\text{\tiny CC}}\), even with a spatial model. Our hope is that this work provides the foundations for applying mathematical modelling and inference methods to real experimental data in order to accurately quantify the relative roles of cell-free and cell-to-cell spread in real viral infections. The obvious extension to our work here is to apply our methods to experimental data. The data sources we have used here -- the fluorescent cell proportion time series and the time series for the clustering metric \(\kappa(t)\) -- are readily obtainable (or at least estimable) from model cellular systems. This could be achieved _in vitro_ by following methods similar to those described in Kongsomros _et al._ [13], and would only require simple staining and imaging techniques. After harvesting and fixing the cell sheet at one of a specified set of observation times, fluorescent cells are easily identified by staining with fluorescent antibodies and imaging the cell sheet. The resulting image could then be processed to compute the fluorescent proportion of the cell population, and to compute or estimate the clustering metric \(\kappa(t)\). In brief, this work has explored the identifiability of the relative proportions of cell-free and cell-to-cell infection (the latter of these we termed \(P_{\text{\tiny CC}}\)) in two standard models of dual-spread viral dynamics: one ODE model and one spatially-explicit multicellular model.
We showed that \(P_{\text{\tiny CC}}\) could not be determined using either model when only the proportion of fluorescent cells was reported. We found that when an additional data source, describing the clustering structure of the infection, was also used for fitting, \(P_{\text{\tiny CC}}\) could be accurately determined using the spatial model. This was the case even when the clustering metric was only approximated using a small sample of the cell sheet. Our results imply that some degree of information about the spatial structure of infection is necessary to infer \(P_{\text{\tiny CC}}\). We have demonstrated practically obtainable data types which, combined with experimental collaboration, could lead to more precise and robust predictions of the role of the two modes of viral spread.

## Methods

### An ODE model for dual-spread dynamics

We employ an ODE model which is adapted from a typical model of viral dynamics with two modes of spread [15], which is in turn based on the standard model of viral dynamics [20]. We additionally include a latent phase of infection, based on observations from data published by Kongsomros and colleagues [13]: we noticed a delay in the initial uptick of the fluorescent cell time series curve, indicating that cells only become detectably fluorescent once they are productively infected, that is, following the eclipse phase of infection. We tested having both single and multiple latent stages in the model -- or equivalently, exponentially and gamma-distributed durations for the eclipse phase -- and obtained dramatically improved agreement with the data when we assumed multiple latent stages before cells become detectably fluorescent. This approach is common in representing the eclipse phase of infection in the literature [6, 21]. We arrived at the following form of the model:

\[\frac{dT}{dt} =-\alpha TI-\beta TV, \tag{1}\]
\[\frac{dE^{(1)}}{dt} =\alpha TI+\beta TV-K\gamma E^{(1)}, \tag{2}\]
\[\frac{dE^{(k)}}{dt} =K\gamma\left(E^{(k-1)}-E^{(k)}\right),\qquad\text{for $k=2,3,...,K$}, \tag{3}\]
\[\frac{dI}{dt} =K\gamma E^{(K)}-\delta I, \tag{4}\]
\[\frac{dV}{dt} =pI-cV, \tag{5}\]

where \(T\) is the fraction of cells susceptible to infection, \(\sum_{i=1}^{K}E^{(i)}\) is the fraction of cells in the eclipse phase of infection, \(I\) is the fraction of cells in the productively infected state, and \(V\) is the quantity of extracellular virus. Since we wish to keep track of whether infections come from the cell-to-cell or cell-free infection routes, we incorporate the following subsystem, which keeps track of the cumulative proportion of the target population which has become infected via the cell-to-cell mechanism (\(F_{\text{CC}}\)) or the cell-free mechanism (\(F_{\text{CF}}\)). We have

\[\frac{dE^{(1)}_{\text{CC}}}{dt} =\alpha TI-K\gamma E^{(1)}_{\text{CC}}, \tag{6}\]
\[\frac{dE^{(k)}_{\text{CC}}}{dt} =K\gamma\left(E^{(k-1)}_{\text{CC}}-E^{(k)}_{\text{CC}}\right),\qquad\text{for $k=2,3,...,K$}, \tag{7}\]
\[\frac{dE^{(1)}_{\text{CF}}}{dt} =\beta TV-K\gamma E^{(1)}_{\text{CF}}, \tag{8}\]
\[\frac{dE^{(k)}_{\text{CF}}}{dt} =K\gamma\left(E^{(k-1)}_{\text{CF}}-E^{(k)}_{\text{CF}}\right),\qquad\text{for $k=2,3,...,K$}, \tag{9}\]
\[\frac{dF_{\text{CC}}}{dt} =\frac{K\gamma}{T_{0}}E^{(K)}_{\text{CC}}, \tag{10}\]
\[\frac{dF_{\text{CF}}}{dt} =\frac{K\gamma}{T_{0}}E^{(K)}_{\text{CF}}, \tag{11}\]

where \(T_{0}=T(0)\) is the initial target cell proportion. The sum of these two quantities,
\[F(t)=F_{\text{CC}}(t)+F_{\text{CF}}(t), \tag{12}\]
is the cumulative proportion of the cell population which has become infected through either mechanism, which we take to be equivalent to the proportion of fluorescent cells as observed in Kongsomros _et al._ [13].
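For concreteness, a minimal, self-contained MATLAB sketch of Equations (1)-(11) is given below. The fixed parameter values follow Table 1 (given in the next subsection), while the values of \(\alpha\) and \(\beta\) are purely illustrative placeholders; this is a sketch for exposition, not our production code.

```matlab
function dual_spread_ode_demo
% Minimal sketch of the dual-spread ODE model, Equations (1)-(11).
% Fixed parameters follow Table 1; alpha and beta are illustrative only.
K = 3; gam = 3.366934e-1; delta = 8.256588e-2;
p = 1.321886e6; c = 4.313531e-1;
alpha = 0.5; beta = 1e-7;                      % hypothetical infection rates
T0 = 0.99;

% State: [T; E(1..K); I; V; Ecc(1..K); Ecf(1..K); Fcc; Fcf]
y0 = zeros(3*K + 5, 1);  y0(1) = T0;  y0(K+2) = 0.01;
[t, y] = ode45(@rhs, [0 200], y0);
plot(t, y(:, end-1) + y(:, end));              % fluorescent proportion F(t)

    function dy = rhs(~, s)
        T = s(1); E = s(2:K+1); I = s(K+2); V = s(K+3);
        Ecc = s(K+4:2*K+3); Ecf = s(2*K+4:3*K+3);
        icc = alpha*T*I;  icf = beta*T*V;      % cell-to-cell / cell-free fluxes
        dE   = [icc + icf - K*gam*E(1);   K*gam*(E(1:K-1)   - E(2:K))];
        dEcc = [icc       - K*gam*Ecc(1); K*gam*(Ecc(1:K-1) - Ecc(2:K))];
        dEcf = [icf       - K*gam*Ecf(1); K*gam*(Ecf(1:K-1) - Ecf(2:K))];
        dy = [-icc - icf;                      % dT/dt, Eq. (1)
              dE;                              % Eqs. (2)-(3)
              K*gam*E(K) - delta*I;            % dI/dt, Eq. (4)
              p*I - c*V;                       % dV/dt, Eq. (5)
              dEcc; dEcf;                      % Eqs. (6)-(9)
              K*gam*Ecc(K)/T0;                 % dFcc/dt, Eq. (10)
              K*gam*Ecf(K)/T0];                % dFcf/dt, Eq. (11)
    end
end
```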
The assumption that cells remain fluorescent even after they die (over the time scale of interest) is justified by the observation in Kongsomros _et al._ that fluorescent proportions saturate at 100% at later times in their experiments. Throughout this work, we will assume fixed values of the parameters \(K\), \(\gamma\), \(\delta\), \(p\), and \(c\), as specified in Table 1. These parameters were obtained by running a Bayesian parameter estimation for the ODE model as defined above against fluorescent cell time series data in [13], and selecting one particular posterior sample at random (results not shown for the sake of brevity). These values were selected simply to be indicative of the realistic range for these parameters, which is sufficient for the purposes of this work. In each case we initiate the infection by setting \(T(0)=0.99\), \(I(0)=0.01\) and the remaining compartments to zero.

### A multicellular spatial model for dual-spread dynamics

It is straightforward to adapt this system of ODEs into a spatially-structured multicellular model, that is, a model which tracks the dynamics of a finite number of discrete cells which each occupy some specified region of space and, at any given point in time, may be in one of a set of cell states [22, 26]. Suppose we model the dynamics of a population of \(N\) cells. We associate with each of these cells an index \(i\in\{1,2,...,N\}\), and a cell state at time \(t\) given by \(\sigma_{i}(t)\), where the possible cell states correspond to the compartments of the ODE system, including the implicit dead cell compartment. That is, for any cell \(i\), \(\sigma_{i}(t)\in\{T,E,I,D\}\), representing the target, eclipse, infected and dead states respectively.

For the spatial model, following Blahut and colleagues [2], we make the simplifying assumption that the dispersal of free virions over the computational domain is fast, and that the extracellular viral distribution can therefore be considered approximately uniform. As such, the equation for \(V\) in our spatial model changes only in notation from Equation (5):
\[\frac{dV}{dt}=p\sum_{i=1}^{N}\frac{\mathbb{1}_{\{\sigma_{i}(t)=I\}}}{N}-cV. \tag{13}\]
As such, cell-free infection is considered a spatially _global_ mode of spread in our spatial model. By contrast, following results from the biological literature, we assume that cell-to-cell spread is a spatially _local_ mechanism [13, 14]. Accordingly, we assume that the probability of cell-to-cell infection in the spatial model depends not on the global proportion of infected cells as in Equation (1), but rather on the proportion of a cell's neighbours which are infected. Specifically, if we denote by \(\nu(i)\) the set of indices of the cells neighbouring cell \(i\), the probability of cell \(i\) becoming infected by cell-to-cell infection over a given time period depends on the term \(\sum_{j\in\nu(i)}(\mathbb{1}_{\{\sigma(j)=I\}})/|\nu(i)|\).
Combining these two mechanisms, we obtain the following transition probability for target cell \(i\) to become (latently) infected over some time interval \(\Delta t\):
\[P(\sigma_{i}(t+\Delta t)=E|\sigma_{i}(t)=T)=1-\exp\left(-\left(\alpha\sum_{j\in\nu(i)}\frac{\mathbb{1}_{\{\sigma_{j}(t)=I\}}}{|\nu(i)|}+\beta V\right)\Delta t\right). \tag{14}\]
For the eclipse phase, instead of implementing transition probabilities for each \(E^{(k)}\), for computational simplicity we instead sample a latent phase duration from its probability distribution at the time a cell first enters the eclipse state. That is, if we write \(t_{i}^{E}=\min\{t:\sigma_{i}(t)=E\}\) for the time at which cell \(i\) enters the eclipse state, and \(t_{i}^{I}=\min\{t:\sigma_{i}(t)=I\}\) for the time at which cell \(i\) enters the productively infected state, we have
\[t_{i}^{I}=t_{i}^{E}+\tau_{i}, \tag{15}\]
where
\[\tau_{i}\sim\textit{Gamma}\left(K,\frac{1}{K\gamma}\right). \tag{16}\]
The remaining compartments are easily described by simple transition probabilities:
\[P(\sigma_{i}(t+\Delta t)=I|\sigma_{i}(t)=E)=1-\exp\left(-K\gamma\Delta t\right), \tag{17}\]
\[P(\sigma_{i}(t+\Delta t)=D|\sigma_{i}(t)=I)=1-\exp\left(-\delta\Delta t\right). \tag{18}\]

\begin{table}
\begin{tabular}{l|l|l} \hline **Description** & **Symbol** & **Value and Units** \\ \hline Number of delay compartments & \(K\) & 3 \\ Eclipse cell activation rate & \(\gamma\) & \(3.366934\times 10^{-1}\) h\({}^{-1}\) \\ Death rate of infected cells & \(\delta\) & \(8.256588\times 10^{-2}\) h\({}^{-1}\) \\ Extracellular virion production rate & \(p\) & \(1.321886\times 10^{6}\) (TCID\({}_{50}\)/ml) h\({}^{-1}\) \\ Extracellular virion clearance rate & \(c\) & \(4.313531\times 10^{-1}\) h\({}^{-1}\) \\ \hline \end{tabular}
\end{table}
Table 1: Fixed parameters used in our simulations.

Equations (13)-(18), together with appropriate initial and boundary conditions, define the spatial model. Note that the spatial structure of this model is not explicit, but rather is defined by the adjacency function \(\nu(\cdot)\). Nonetheless, we will take the spatial configuration of our model tissue to be as follows. We consider a two-dimensional sheet of cells with hexagonal packing and periodic boundary conditions in both the \(x\) and \(y\) directions, such that each cell has precisely six neighbours. This packing reflects the arrangement of cells in real epithelial monolayers and has the practical benefit that all adjacent cells are joined via a shared edge, avoiding any complications associated with corner-neighbours. Throughout this work we use a \(50\times 50\) grid of cells and, following initial conditions equivalent to those of the ODE model, we initiate infection by randomly selecting \(1\%\) of the cell sheet to be initially infected, and the remainder of the sheet to be susceptible to infection. In Figure 3a we show a schematic of the model as well as the layout of the cell grid.

This is not a novel model: this structure, or slight variations thereof, has been used in a number of recent publications describing infection dynamics with two modes of viral spread, and has become something of a standard approach in the field in recent years [2, 5, 6, 9].
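As an illustration, a single synchronous update step of these transition rules might be sketched in MATLAB as follows. This is a minimal sketch only: it uses the memoryless eclipse transition of Equation (17) rather than the sampled Gamma durations of Equations (15)-(16), and the state codes and the precomputed hexagonal neighbour list `nbrs` are illustrative assumptions, not part of our released code.

```matlab
% Sketch of one time step of the spatial model, Equations (14), (17), (18).
% States are coded as T=0, E=1, I=2, D=3 (an illustrative convention), and
% nbrs{i} holds the indices of the six neighbours of cell i.
function sigma = step_cells(sigma, V, nbrs, alpha, beta, K, gam, delta, dt)
signew = sigma;                               % synchronous update
for i = 1:numel(sigma)
    switch sigma(i)
        case 0                                % target cell, Eq. (14)
            fI = mean(sigma(nbrs{i}) == 2);   % infected neighbour fraction
            if rand < 1 - exp(-(alpha*fI + beta*V)*dt), signew(i) = 1; end
        case 1                                % eclipse cell, Eq. (17)
            if rand < 1 - exp(-K*gam*dt), signew(i) = 2; end
        case 2                                % infected cell, Eq. (18)
            if rand < 1 - exp(-delta*dt), signew(i) = 3; end
    end
end
sigma = signew;
end
```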
As with the ODE model, we can additionally keep track of the cumulative proportion of infections arising from each mode of infection individually in the spatial model. In addition to the overall probability of infection in Equation (14), we can compute a probability of infection by each mode of spread individually as follows. Using the same Poisson process argument as above, the probability of cell-to-cell infection of cell \(i\) _not_ taking place over the time interval \([t,t+\Delta t]\) is given by
\[P(\mathbf{E}_{i}^{\text{\tiny{CC}}}\notin[t,t+\Delta t])=\exp\left(-\alpha\sum_{j\in\nu(i)}\frac{\mathbb{1}_{\{\sigma_{j}(t)=I\}}}{|\nu(i)|}\Delta t\right), \tag{19}\]
and the probability of cell-free infection of cell \(i\) not occurring over the same time interval is given by
\[P(\mathbf{E}_{i}^{\text{\tiny{CF}}}\notin[t,t+\Delta t])=\exp\left(-\beta V\Delta t\right), \tag{20}\]
where \(\mathbf{E}_{i}^{CC}\) and \(\mathbf{E}_{i}^{CF}\) are the events of a cell-to-cell infection and a cell-free infection occurring at cell \(i\), respectively. Note that while, mathematically, both events may occur in the time interval \([t,t+\Delta t]\), we must assign a unique mode of transmission to each infection. We do so as follows (this calculation is also derived in work by Blahut and colleagues [2]). If we write \(m(i)\in\{CC,CF\}\) for the mode of infection of cell \(i\), then at the time of infection of cell \(i\) -- that is, when \(t=t_{i}^{E}\) -- we compute the probability of each individual mode of transmission as follows:
\[P(m(i)=CC)=\frac{1-P(\mathbf{E}_{i}^{\text{\tiny{CC}}}\notin[t,t+\Delta t])}{2-P(\mathbf{E}_{i}^{\text{\tiny{CC}}}\notin[t,t+\Delta t])-P(\mathbf{E}_{i}^{\text{\tiny{CF}}}\notin[t,t+\Delta t])} \tag{21}\]
and
\[P(m(i)=CF)=\frac{1-P(\mathbf{E}_{i}^{\text{\tiny{CF}}}\notin[t,t+\Delta t])}{2-P(\mathbf{E}_{i}^{\text{\tiny{CC}}}\notin[t,t+\Delta t])-P(\mathbf{E}_{i}^{\text{\tiny{CF}}}\notin[t,t+\Delta t])} \tag{22}\]
In our model, therefore, when an infection is detected, we draw a random number \(p\sim\text{\it Uniform}(0,1)\), and if \(p<P(m(i)=CC)\), we designate the infection a cell-to-cell infection; otherwise, it is considered a cell-free infection. We use a similar calculation to assign the viral lineage associated with an infection, which we used to construct the colouring of cells in Figure 3d, and which we set out in full in Supplementary Section 2.

The quantities \(F_{\text{CC}}\) and \(F_{\text{CF}}\) can easily be calculated for the spatial model as
\[F_{\text{CC}}(t)=\frac{\sum_{i=1}^{N}\mathbb{1}_{\{m(i)=CC\}}\mathbb{1}_{\{t_{i}^{I}\in[0,t]\}}\mathbb{1}_{\{\sigma_{i}(0)=T\}}}{\sum_{i=1}^{N}\mathbb{1}_{\{\sigma_{i}(0)=T\}}}, \tag{23}\]
\[F_{\text{CF}}(t)=\frac{\sum_{i=1}^{N}\mathbb{1}_{\{m(i)=CF\}}\mathbb{1}_{\{t_{i}^{I}\in[0,t]\}}\mathbb{1}_{\{\sigma_{i}(0)=T\}}}{\sum_{i=1}^{N}\mathbb{1}_{\{\sigma_{i}(0)=T\}}}, \tag{24}\]
which allow us to keep track of the count of each type of infection event throughout simulations of the spatial model. As before, we define
\[F(t)=F_{\text{CC}}(t)+F_{\text{CF}}(t). \tag{25}\]

### Metrics

#### Proportion of infections from the cell-to-cell route - \(P_{\text{CC}}\)

We introduce the quantity \(P_{\text{CC}}\) to denote the proportion of infections arising from the cell-to-cell route. This is calculated by keeping track of the cumulative proportion of the target cell population which becomes infected by either infection mechanism over time. At long times -- once the infection has essentially run its course -- we compute \(P_{\text{CC}}\) as the fraction of the total infections which occurred via cell-to-cell infection.
Using \(F_{\text{CC}}\) and \(F_{\text{CF}}\) as we have defined them, we have
\[P_{\text{CC}}=\lim_{t\to\infty}\frac{F_{\text{CC}}(t)}{F_{\text{CC}}(t)+F_{\text{CF}}(t)}. \tag{26}\]
In Figure 0(c) we show an illustration of this calculation more generally. \(P_{\text{CC}}\) quantifies the relative weight of the cell-to-cell route of infection and is therefore our target for estimation in this work. Its definition is general and is not specific to any particular model structure.

\(P_{\text{CC}}\) cannot be calculated in closed form directly from the model parameters. Instead, we repeatedly simulate our model using parameters sampled from \(\alpha\)-\(\beta\) space and compute \(P_{\text{CC}}\) in order to construct lookup tables. In Figure 0(d), we plot a contour map of \(P_{\text{CC}}\) values for the ODE model in \(\alpha\)-\(\beta\) space. In the case of the spatial model, we accounted for the inherent stochasticity of the model by running 20 simulations at each \((\alpha,\beta)\) pair in the lookup table and keeping track of mean \(P_{\text{CC}}\) values. The associated contour map for the spatial model is shown in Figure 6a. Contour plots were generated by computing contours over the lookup table using MATLAB's contourf function. We interpolate between values on the lookup table by constructing spline fits along \(\alpha\) and \(\beta\) contours.

#### Exponential growth rate - \(r\)

A second quantity, which describes the _overall_ rate of infection spread, is the exponential growth rate \(r\). This quantity, related to the basic reproduction number \(\mathcal{R}_{0}\), is well-established in the theory of epidemiological and virus dynamical models and has the property that, for small \(t\), we have \(I(t)\approx I_{0}e^{rt}\) [3, 4, 17, 18]. The exponential growth rate for the ODE model can be readily computed by linearising the ODE system about the infection-free steady state and finding the dominant eigenvalue of the resulting system [4, 17]. For our model, we obtain the following explicit definition:
\[r=\max\left\{\lambda\ :\ g(\lambda)=0\right\}, \tag{27}\]
where
\[g(\lambda):=\left(1+\frac{\lambda}{K\gamma}\right)^{K}\left(c+\lambda\right)\left(\delta+\lambda\right)-\left(\alpha\left(c+\lambda\right)+\beta p\right).\]
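The dominant root in Equation (27) is straightforward to obtain numerically: with \(K=3\), \(g\) is a degree-five polynomial. The following is a minimal MATLAB sketch, using the fixed parameters of Table 1 and purely illustrative values of \(\alpha\) and \(\beta\); it is an aside for exposition rather than our fitting code.

```matlab
% Sketch: exponential growth rate r as the dominant real root of g(lambda),
% Equation (27). Parameters follow Table 1; alpha, beta are illustrative.
K = 3; gam = 3.366934e-1; delta = 8.256588e-2;
p = 1.321886e6; c = 4.313531e-1;
alpha = 0.5; beta = 1e-7;

base = [1/(K*gam), 1];                        % lambda/(K*gamma) + 1
g = conv(conv(base, base), base);             % (1 + lambda/(K*gamma))^K, K = 3
g = conv(conv(g, [1, c]), [1, delta]);        % * (c + lambda)*(delta + lambda)
sub = [alpha, alpha*c + beta*p];              % alpha*(c + lambda) + beta*p
g(end-1:end) = g(end-1:end) - sub;            % coefficients of g(lambda)
lam = roots(g);
lam = lam(abs(imag(lam)) < 1e-9);             % keep the real roots
r = max(real(lam))                            % dominant real root
```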
#### Time to peak infected cell population - \(t_{\text{peak}}\)

The exponential growth rate \(r\) relies on asymptotically exponential behaviour of the infected proportion curve. However, for the spatial model, especially in instances where infections spread mainly locally -- that is, through the cell-to-cell route -- the infected proportion curve does not grow exponentially. For the spatial model, therefore, \(r\) is not well-defined. We instead use the time of the peak infected cell proportion, which we label \(t_{\text{peak}}\), as an alternative measure of the overall growth behaviour of the infected population. As with \(P_{\text{CC}}\), this quantity is not easily approximated _a priori_; we therefore also compute lookup tables in \(\alpha\)-\(\beta\) space for this quantity. We show the contour map of \(t_{\text{peak}}\) on \(\alpha\)-\(\beta\) space in Figure 6b. Contour plots were generated by computing contours over the lookup table using MATLAB's contourf function. It is worth mentioning that the time of the peak infected cell population is not typically experimentally observable, whereas a quantity like the time of peak viral load is comparatively much easier to measure in an experimental context. However, we opt to use \(t_{\text{peak}}\) nonetheless, since it is a meaningful metric of the model regardless of the mechanism of infection spread. Even in a scenario where all infections in the model arise from the cell-to-cell route (i.e. \(P_{\text{CC}}=1\)), the time of peak infected population remains relevant as a measure of the overall rate of infection progression, whereas the time of peak extracellular viral load is far less meaningful. In any event, for our purposes in this work, \(t_{\text{peak}}\) is used simply as a quantity which represents the overall rate of infection spread in a model simulation, and one which we observe to be preserved between accepted samples of our simulation estimations (at least when clustering data are not used). This choice of metric does not diminish the relevance of our analysis to experimental application.

#### Clustering metric - \(\kappa(t)\) (and approximation - \(\kappa_{S}(t)\))

Given a cell grid where we denote by \(\mathcal{F}(t)\) the set of cells which are fluorescent, we compute for each fluorescent cell \(i\in\mathcal{F}(t)\) the quantity \(k_{i}(t)\), which is the proportion of the neighbours of cell \(i\) which are also fluorescent. We then define \(\kappa(t)\) as the mean of the \(k_{i}(t)\)s. We compute \(\kappa(t)\) over time \(t\) to form a time series. In Figures (a) and (g), we show an example of computing fluorescent neighbour proportions, and plot example \(\kappa(t)\) time series for three parameter pairs, corresponding to \(P_{\text{CC}}\) values of 0.1, 0.5, and 0.9. Figure (g) shows that, unlike with the fluorescent cell time series, there is substantial variation in the \(\kappa(t)\) curves with changing \(P_{\text{CC}}\). \(\kappa(t)\) has the property that when it is near zero, fluorescent cells are mostly isolated and the infection is very diffuse, and when it is near one, fluorescent cells are generally found in clusters, indicating that the infection is very compact. In principle, \(\kappa(t)\) could be computed or estimated in experimental settings with the use of fluorescence imaging of the cell sheet, samples of which can be found in works by Kongsomros _et al._ and Fukuyama _et al._ [7, 13].

Figure 6: \(P_{\text{CC}}\) and \(t_{\text{peak}}\) contour maps respectively on \(\alpha\)-\(\beta\) space for the spatial model. \(\alpha\) and \(\beta\) have units of \(\text{h}^{-1}\) and \((\text{TCID}_{50}/\text{ml})^{-1}\text{h}^{-1}\), respectively.

We modify the definition of \(\kappa(t)\) to define the approximation \(\kappa_{S}(t)\) as follows. Given a grid of \(N\) cells at time \(t\), of which the fluorescent population is given by \(\mathcal{F}(t)\) as before, we draw \(S\leq N\) cells without replacement and call the set of sampled cells \(\mathcal{S}(t)\). For each sampled cell \(i\), if \(i\in\mathcal{F}(t)\), we compute \(k_{i}(t)\), and then compute the approximate clustering metric \(\kappa_{S}(t)\) as the mean of the computed \(k_{i}(t)\)s; that is, if \(|\mathcal{S}(t)\cap\mathcal{F}(t)|\neq 0\), then
\[\kappa_{S}(t)=\sum_{i\in\mathcal{S}(t)\cap\mathcal{F}(t)}\frac{k_{i}(t)}{|\mathcal{S}(t)\cap\mathcal{F}(t)|}.\]
In the event that no fluorescent cells are sampled (that is, \(|\mathcal{S}(t)\cap\mathcal{F}(t)|=0\)), we define \(\kappa_{S}(t)=0\).
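As a concrete illustration of the two clustering quantities just defined, a minimal MATLAB sketch for a single snapshot of the cell sheet is given below. The variable names `fluor` and `nbrs` are illustrative; the hexagonal neighbour list is assumed precomputed, and `randsample` requires the Statistics and Machine Learning Toolbox.

```matlab
% Sketch of the clustering metric kappa(t) and its subsampled approximation
% kappa_S(t) for one snapshot. fluor is a logical vector (true where a cell
% is fluorescent); nbrs{i} lists the neighbours of cell i; S is the number
% of cells sampled without replacement.
function [kappa, kappaS] = clustering_metric(fluor, nbrs, S)
flIdx = find(fluor);
k = zeros(numel(flIdx), 1);
for j = 1:numel(flIdx)
    k(j) = mean(fluor(nbrs{flIdx(j)}));       % fluorescent neighbour fraction
end
kappa = mean(k);                              % exact metric (NaN if no fluorescent cells)

sampled = randsample(numel(fluor), S);        % draw S cells without replacement
inSample = ismember(flIdx, sampled);
if any(inSample)
    kappaS = mean(k(inSample));               % approximation kappa_S(t)
else
    kappaS = 0;                               % convention when no fluorescent cells sampled
end
end
```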
### Simulation estimation

Throughout this work we conduct a series of simulation estimation experiments to explore what can be learned about the roles of the two modes of viral spread based on observed model outputs. We outline here the general framework of this process. For both the ODE and the spatial model, we begin by drawing a set of target values for the infection parameters \(\alpha\) and \(\beta\). As mentioned above, the values of the other model parameters are considered fixed and known. We then simulate the chosen model using these parameter values, and apply an observational model \(f(\cdot)\) to its output to generate a set of observed data \(\mathbf{D}\). The observational model \(f\) is designed to simulate the noise incurred in actual experiments. Throughout this work, we focus especially on the observed _fluorescent cell proportion_ over time, since this is the main source of data reported by Kongsomros and colleagues [13].

For the ODE model, we obtain the observed fluorescent cell proportion \(\mathbf{D}\) by computing the true fluorescent cell time series \(F(t)\) (defined in Equation (12)) at each of a series of observation times, converting this proportion to a count of fluorescent cells, and applying negative binomial noise. The negative binomial distribution reflects the observed error structure in [13], which arises from overdispersed count data. We assume some vector of observation times \(\mathbf{t}=\{t_{1},t_{2},...t_{m}\}\) and define
\[\mathbf{D}_{\text{ODE}}=f_{\text{ODE}}(F(t);\mathbf{t},\phi,N_{\text{sample}})=\left(\frac{1}{N_{\text{sample}}}\right)\cdot\left\{D_{1},D_{2},...,D_{m}\right\}, \tag{28}\]
where
\[D_{i}\sim\text{{Negative Binomial}}\left(N_{\text{sample}}F(t_{i}),\phi\right)\]
for \(i=1,2,...,m\), where \(N_{\text{sample}}F(t_{i})\) and \(\phi\) are the mean and dispersion parameter respectively of \(D_{i}\). \(N_{\text{sample}}\) is the number of cells measured for fluorescence -- in a sense, the size of the cell population. Under this parameterisation, \(\text{Var}\left[D_{i}\right]=N_{\text{sample}}F(t_{i})+\left[N_{\text{sample}}F(t_{i})\right]^{2}/\phi\) for all \(i=1,2,...m\). In Figure 7b, we show an illustration of this observation process. The curve shown in blue is the true fluorescent proportion curve \(F(t)\). At each of the observation times, indicated with dots, we apply noise about the true value.
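A minimal sketch of this observation model follows. MATLAB's nbinrnd (Statistics and Machine Learning Toolbox) uses a count-probability parameterisation, so the mean-dispersion form above must be converted; the function name and interface here are illustrative.

```matlab
% Sketch of the observation model in Equation (28). nbinrnd(R, P) has mean
% R*(1-P)/P, so choosing R = phi and P = phi/(phi + mu) yields mean mu and
% variance mu + mu^2/phi, matching the mean-dispersion form in the text.
function D = observe_ode(F_obs, N_sample, phi)
mu = N_sample * F_obs;                  % expected fluorescent counts at t_1..t_m
D  = nbinrnd(phi, phi ./ (phi + mu));   % overdispersed noisy counts
D  = D / N_sample;                      % observed proportions, D_ODE
end
```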
After obtaining observed data \(\mathbf{D}_{\text{ODE}}\), we then re-estimate \(\alpha\) and \(\beta\) using Bayesian methods. We assume uniform prior distributions
\[\pi_{\alpha}(\alpha)=\begin{cases}1/\alpha_{\text{max}},\ \alpha\in[0,\alpha_{\text{max}}]\\ 0,\ \text{otherwise}\end{cases}, \tag{29}\]
\[\pi_{\beta}(\beta)=\begin{cases}1/\beta_{\text{max}},\ \beta\in[0,\beta_{\text{max}}]\\ 0,\ \text{otherwise}\end{cases}, \tag{30}\]
with \(\alpha_{\text{max}}=2.5\,\text{h}^{-1}\), \(\beta_{\text{max}}=2\times 10^{-6}\,(\text{TCID}_{50}/\text{ml})^{-1}\text{h}^{-1}\). We re-estimate \(\alpha\) and \(\beta\) using No-U-Turn Sampler (NUTS) Markov Chain Monte Carlo (MCMC) methods with a negative binomial likelihood
\[D_{i}\sim\text{{Negative Binomial}}\left(N_{\text{sample}}\hat{F}(t_{i}),\phi\right) \tag{31}\]
for \(i=1,2,...,m\), where \(\hat{F}(t)\) is the fluorescent proportion time series estimated by simulating the ODE model using samples \(\hat{\alpha}\) and \(\hat{\beta}\). For each estimation we use four chains seeded with random initial values and draw 2000 samples for each, including 200 burn-in samples. We assume \(N_{\text{\tiny sample}}=200{,}000\), which was the number of cells used in the experiments in [13].

For the spatial model, we have two sources of observational data: the fluorescent proportion time series and the clustering metric \(\kappa(t)\). Since the spatial model is inherently stochastic, we do not add additional external noise; instead, we aim to emulate the experimental process whereby the fluorescent proportion of a cell population (and consequently the clustering metric) cannot be observed without destroying, or at least disrupting, the cell sheet. We implement this by sampling our observations from \(m\) independent simulations of the model. That is, if we have observation times \(\mathbf{t}=\{t_{1},t_{2},...,t_{m}\}\) and \(m\) true fluorescence and clustering time series from independent simulations of the spatial model, \(\mathbf{F}(t)=\{F_{1}(t_{1}),F_{2}(t_{2}),...,F_{m}(t_{m})\}\) and \(\mathbf{K}(t)=\{\kappa_{1}(t_{1}),\kappa_{2}(t_{2}),...,\kappa_{m}(t_{m})\}\) respectively, we generate the two sets of observational data
\[\mathbf{D}_{\text{spatial}}^{\text{fluoro}}=f_{\text{spatial}}(\mathbf{F}(t);\mathbf{t})=\left\{F_{1}(t_{1}),F_{2}(t_{2}),...,F_{m}(t_{m})\right\}, \tag{32}\]
\[\mathbf{D}_{\text{spatial}}^{\text{cluster}}=f_{\text{spatial}}(\mathbf{K}(t);\mathbf{t})=\left\{\kappa_{1}(t_{1}),\kappa_{2}(t_{2}),...,\kappa_{m}(t_{m})\right\}. \tag{33}\]
We show a demonstration of this observation process in Figure 7c.

Due to the stochasticity of the system, we use Approximate Bayesian Computation (ABC) to re-estimate \(\alpha\) and \(\beta\) for the spatial model. In particular, we adapt the Population Monte Carlo (PMC) method introduced by Toni and collaborators [25] and revised by others [1, 16]. We sketch this method in pseudocode in Algorithm 1. In our case, the model \(\mathcal{M}(\hat{\alpha},\hat{\beta})\) is simply the time series \(F(t)\) obtained by a single simulation of the spatial model with parameters \(\alpha=\hat{\alpha}\) and \(\beta=\hat{\beta}\), evaluated at the time points \(\mathbf{t}\) at which the reference data \(\mathbf{D}\) is obtained. We again use the uniform prior distributions in Equations (29) and (30), although now with \(\alpha_{\text{max}}=10\,\text{h}^{-1}\), \(\beta_{\text{max}}=1.5\times 10^{-6}\,(\text{TCID}_{50}/\text{ml})^{-1}\text{h}^{-1}\). For the perturbation kernel, we use the following definition proposed by Beaumont and colleagues:
\[K(\mathcal{P}_{k}^{*}|\mathcal{P}_{i})=\Phi(\mathcal{P}_{k}^{*};\mathcal{P}_{i},2\Sigma), \tag{34}\]
where \(\Phi(\mathbf{x};\mu,\sigma^{2})\) is a multivariate normal density and \(\Sigma\) is the empirical covariance matrix of the particle population \(\{\mathcal{P}_{1},\mathcal{P}_{2},...,\mathcal{P}_{N_{P}}\}\), computed using their weights \(\{w_{1},w_{2},...,w_{N_{P}}\}\). For the other parameters of the algorithm, we set \(N_{P}=500\), \(G=5\), \(p_{\text{\tiny{0,accept}}}=0.3\), and \(q=0.5\). We use Euclidean distance for the distance metric \(d\).

For the case where we estimate \(\alpha\) and \(\beta\) using the spatial model and fluorescence data only, we slightly simplify the fitting process. We apply the same observational model, outlined in Equation (32), to the fluorescence data, and use a slightly simplified version of the PMC method to refit \(\alpha\) and \(\beta\). We provide full details in Supplementary Section 5.

## Code Availability

Our ODE simulation estimation code is written in R. All other code, including visualisations, is written in MATLAB. Our code is available at [https://github.com/thomaswilliams23/dual_spread_viral_dynamics_fitting](https://github.com/thomaswilliams23/dual_spread_viral_dynamics_fitting).
## Acknowledgements

We are very grateful to Pengxing Cao and Ke Li for their insight and guidance in the initial stages of approaching this project, and for valuable discussions about applying Bayesian methods in our work.

```
Input: Model \(\mathcal{M}(\alpha,\beta)\), prior distributions for target parameters \(\pi_{\alpha}(\alpha)\) and \(\pi_{\beta}(\beta)\), target number of particles \(N_{P}\), number of generations \(G\), reference data \(\mathbf{D}^{\text{fluoro}}\) and \(\mathbf{D}^{\text{cluster}}\), distance metrics \(d^{\text{fluoro}}(\cdot,\cdot)\) and \(d^{\text{cluster}}(\cdot,\cdot)\), perturbation kernel \(K(\cdot|\cdot)\), initial acceptance proportion \(p_{\text{0,accept}}\), threshold tightening parameter \(q\).
Output: Weighted samples from the posterior distributions \(\hat{\pi}_{\alpha}(\alpha|\mathbf{D}^{\text{fluoro}},\mathbf{D}^{\text{cluster}})\), \(\hat{\pi}_{\beta}(\beta|\mathbf{D}^{\text{fluoro}},\mathbf{D}^{\text{cluster}})\).
```

_Rejection sampling_

```
for \(i=1,2,...,\lceil N_{P}/p_{\text{0,accept}}\rceil\) do
    Randomly draw \(\hat{\alpha}_{i}\) and \(\hat{\beta}_{i}\) from \(\pi_{\alpha}(\alpha)\) and \(\pi_{\beta}(\beta)\), respectively.
    Obtain the model output using these parameters, \(\{\hat{\mathbf{D}}_{i}^{\text{fluoro}},\hat{\mathbf{D}}_{i}^{\text{cluster}}\}=\mathcal{M}(\hat{\alpha}_{i},\hat{\beta}_{i})\).
    Compute the distances between model output and reference data, \(\epsilon_{i}^{\text{fluoro}}=d^{\text{fluoro}}(\hat{\mathbf{D}}_{i}^{\text{fluoro}},\mathbf{D}^{\text{fluoro}})\) and \(\epsilon_{i}^{\text{cluster}}=d^{\text{cluster}}(\hat{\mathbf{D}}_{i}^{\text{cluster}},\mathbf{D}^{\text{cluster}})\).
endfor
\(n_{\text{opt found}}\gets 0\), \(T\gets 0\)
while \(n_{\text{opt found}}<N_{P}\) do
    \(T\gets T+1\); define \(\mathcal{I}_{1},\mathcal{I}_{2},...,\mathcal{I}_{n_{\text{opt found}}}\) as the indices \(i\) appearing among both the smallest \(T\) values of the \(\epsilon_{i}^{\text{fluoro}}\)s and the smallest \(T\) values of the \(\epsilon_{i}^{\text{cluster}}\)s.
endwhile
for \(j=1,2,...,N_{P}\) do
    Set \(\mathcal{P}_{j}=(\hat{\alpha}_{\mathcal{I}_{j}},\hat{\beta}_{\mathcal{I}_{j}})\). Set \(w_{j}=1/N_{P}\).
endfor
\(\mathcal{P}=\{\mathcal{P}_{1},\mathcal{P}_{2},...,\mathcal{P}_{N_{P}}\}\) is the initial particle population; \(w=\{w_{1},w_{2},...,w_{N_{P}}\}\) is the initial weight vector.
Set the distance thresholds \(\epsilon_{D}^{\text{fluoro}}\) and \(\epsilon_{D}^{\text{cluster}}\) as the \(q\)th quantile of the \(\epsilon_{i}^{\text{fluoro}}\)s and \(\epsilon_{i}^{\text{cluster}}\)s respectively.
```

_Importance sampling_

```
for \(g=1,2,...,G\) do
    Set number of accepted particles \(N_{\text{accepted}}\gets 0\)
    while \(N_{\text{accepted}}<N_{P}\) do
        Randomly draw a particle \(\mathcal{P}_{j}\) with probability \(w_{j}\).
        Perturb the particle by the kernel \(K(\cdot|\mathcal{P}_{j})\) to obtain a new sample \((\hat{\alpha},\hat{\beta})\).
        Obtain the model output using these parameters, \(\{\hat{\mathbf{D}}^{\text{fluoro}},\hat{\mathbf{D}}^{\text{cluster}}\}=\mathcal{M}(\hat{\alpha},\hat{\beta})\).
        Compute the distances between model output and reference data, \(\epsilon^{\text{fluoro}}=d^{\text{fluoro}}(\hat{\mathbf{D}}^{\text{fluoro}},\mathbf{D}^{\text{fluoro}})\) and \(\epsilon^{\text{cluster}}=d^{\text{cluster}}(\hat{\mathbf{D}}^{\text{cluster}},\mathbf{D}^{\text{cluster}})\).
        if \(\epsilon^{\text{fluoro}}<\epsilon_{D}^{\text{fluoro}}\) and \(\epsilon^{\text{cluster}}<\epsilon_{D}^{\text{cluster}}\) then
            Set \(N_{\text{accepted}}\gets N_{\text{accepted}}+1\) and \(\mathcal{P}_{N_{\text{accepted}}}^{\text{next}}=(\hat{\alpha},\hat{\beta})\).
        else
            Return to start of while.
        endif
    endwhile
    for \(i=1,2,...,N_{P}\) do
        Set \(w_{i}^{*,\text{next}}=w_{i}/\sum_{j=1}^{N_{P}}K(\mathcal{P}_{i}^{\text{next}}|\mathcal{P}_{j})w_{j}\)
    endfor
    Set \(\mathcal{P}\leftarrow\{\mathcal{P}_{1}^{\text{next}},\mathcal{P}_{2}^{\text{next}},...,\mathcal{P}_{N_{P}}^{\text{next}}\}\), \(w\leftarrow(1/\sum_{i=1}^{N_{P}}w_{i}^{*,\text{next}})\cdot\{w_{1}^{*,\text{next}},w_{2}^{*,\text{next}},...,w_{N_{P}}^{*,\text{next}}\}\)
    Set the distance thresholds \(\epsilon_{D}^{\text{fluoro}}\) and \(\epsilon_{D}^{\text{cluster}}\) as the \(q\)th quantile of the accepted \(\epsilon^{\text{fluoro}}\)s and \(\epsilon^{\text{cluster}}\)s respectively.
endfor
```

**Algorithm 1** PMC algorithm for parameter estimation using the spatial model -- fluorescence and clustering data

Figure 7: (a) Schematic of the simulation estimation process. (b)–(c) Observation model for fluorescent proportion of susceptible cells for (b) the ODE model and (c) the spatial model (first five points shown). In the spatial case we also show the observation model for the clustering metric \(\kappa(t)\). For the ODE model, we sample the true fluorescent proportion curve at a series of time points (shown in blue), then observe a value based on a negative binomial distribution centred on the true value (box plot of the distribution shown in orange). Here, \(\phi=10^{2}\). For the spatial model, we run independent iterations of the stochastic model and observe one point from each.
2308.03693
The Bornological Dual of the Structure Sheaves of Complex Manifolds
An alternative proof of bornological Verdier duality for complex manifolds, as proven initially by Prosmans & Schneiders, is given, using Schneiders' theory of quasi-abelian homological algebra and the theory of residues and duality.
Christopher Burns
2023-08-07T16:11:37Z
http://arxiv.org/abs/2308.03693v1
# The Bornological Verdier Dual of the Structure Sheaves of Complex Manifolds

###### Abstract

Quasi-abelian categories and their homological properties are recalled from [12] in the first section, and the following section recalls the details of bornological sheaves from [9] and [10]. The third section provides an alternative proof of the bornological Verdier duality of [10], using the theory of residues and duality utilised in [1] and [2] in the rigid analytic context.

## 1 Quasi-abelian Categories and Sheaves

**Quasi-abelian categories**

Let \(\mathcal{C}\) be an additive category with kernels and cokernels.

**Definition** - A morphism \(f:X\to Y\) is _strict_ if it induces an isomorphism \(\tilde{f}:\operatorname{coim}(f)\to\operatorname{im}(f)\). The category \(\mathcal{C}\) is abelian if all maps are strict. Remark also that kernels and cokernels are always strict, and so any strict morphism admits a decomposition as a strict epimorphism, an isomorphism (which can simply be ignored by absorbing it into one of the factors), and a strict monomorphism. Quasi-abelian categories instead satisfy a stability condition on strict morphisms, and so they generalise abelian categories:

**Definition** - An additive category \(\mathcal{C}\) with kernels and cokernels is quasi-abelian if:

1. The pullback of a strict epimorphism along any morphism is a strict epimorphism: in a pullback square in which \(f^{\prime}\) is a strict epimorphism, the parallel morphism \(f\) is also a strict epimorphism.
2. Dually, the pushout of a strict monomorphism along any morphism is a strict monomorphism.

**Examples** - Functional analysis is a rich source of quasi-abelian categories, such as topological abelian groups, locally convex topological vector spaces and, in particular, Frechet spaces. Of particular interest to us are complete convex bornological spaces - vector spaces with a distinguished class of _bounded_ subsets. This category will be denoted CBorn.

Quasi-abelian categories possess familiar homological properties, which in turn also grant sheaves valued in them such familiar properties, while also being much more rigid in structure than, say, Quillen exact categories. We summarise those properties proven in [12] - let \(\mathcal{E}\) be a quasi-abelian category:

1. We may place \(t\)-structures on the homotopy category \(\mathcal{K}(\mathcal{E})\), but because the image and coimage of a map are generally distinct, there is no distinguished candidate to describe left truncation. This leads to two \(t\)-structures, the _left and right_ \(t\)-structures. All concepts dependent on the choice of \(t\)-structure, such as the hearts, will be called left or right accordingly.
2. The _derived category_ \(\mathcal{D}(\mathcal{E})\) is the localisation of \(\mathcal{K}(\mathcal{E})\) with respect to the saturated null system of _strictly exact complexes_. A complex is strictly exact in degree \(n\) if \(d^{n-1}\) is strict and \(\ker(d^{n})\cong\operatorname{im}(d^{n-1})\), and a complex is strictly exact if it is strictly exact in all degrees.
3. Because homotopic complexes are strictly exact in the same degrees, and strict exactness in a given degree is inherited by mapping cones, it follows that the \(t\)-structures on \(\mathcal{K}(\mathcal{E})\) localise to define \(t\)-structures on \(\mathcal{D}(\mathcal{E})\).
4. The left heart of \(\mathcal{D}(\mathcal{E})\) is denoted \(\mathcal{LH}(\mathcal{E})\), and is equivalent to the full subcategory of \(\mathcal{K}(\mathcal{E})\) of complexes with a single nonzero monomorphic map between the only two nonzero terms. Taking the cokernels of such maps (and noting that the cokernels of parallel arrows of a cocartesian diagram are isomorphic) enables us to embed \(\mathcal{E}\) in \(\mathcal{LH}(\mathcal{E})\), and this embedding has a left adjoint. This embedding induces a derived equivalence
\[\mathcal{D}(\mathcal{E})\cong\mathcal{D}(\mathcal{LH}(\mathcal{E})) \tag{1}\]
Therefore, at the derived level, quasi-abelian categories can be understood in terms of the conceptually easier abelian heart (recall that the heart of a \(t\)-structure is always an abelian category - see [3]).
5. Derived functors can be defined similarly to the classical case as in [3], with only minor technical adjustments.
6. The category of sheaves on a topological space \(X\) valued in a quasi-abelian category \(\mathcal{C}\), \(\operatorname{Sh}(X;\mathcal{C})\), is quasi-abelian. Strict exactness can be measured stalkwise, and the category admits strong properties with respect to limits, generators and exactness, so that the operations used later in the category of complete convex bornological sheaves will be valid.
7. \(\mathcal{LH}(\operatorname{Sh}(X;\mathcal{C}))\cong\operatorname{Sh}(X;\mathcal{LH}(\mathcal{C}))\), and combining this with the previous derived equivalence, we see that sheaves valued in quasi-abelian categories may also be understood in purely abelian terms at the derived level:
\[\mathcal{D}(\operatorname{Sh}(X;\mathcal{LH}(\mathcal{C})))\simeq\mathcal{D}(\operatorname{Sh}(X;\mathcal{C})) \tag{2}\]
From this one can derive theorems for derived categories of sheaves valued in a quasi-abelian category from analogous results for abelian categories.

## 2 Ind(Ban) and Bornological Sheaves

We are interested in convex bornological sheaves for two reasons. Firstly, most of the sheaves of interest in algebraic analysis are valued in some functional-analytic category which embeds into CBorn. Secondly, CBorn is a particularly well behaved quasi-abelian category, with favourable closed structure. In [10], results are proven for the category \(\mathrm{Ind}(\mathrm{Ban})\), but in fact the subcategory of reduced inductive systems is equivalent to CBorn [9].

**Definition** - The functor \(\mathrm{IB:Tc\to Ind(Ban)}\) is defined on objects by
\[\mathrm{IB}(V)=``\varinjlim_{B\in\mathcal{B}_{V}}"\hat{V}_{B} \tag{3}\]
where \(\mathcal{B}_{V}\) is the category of absolutely convex bounded subsets \(B\subseteq V\), and \(\hat{V}_{B}\) is the completion of the linear span of \(B\). A continuous map of locally convex topological vector spaces induces maps on all these linear spans, and hence on their completions. These maps extend to the formal inductive limits, thereby defining a functor \(\mathrm{IB:Tc\to Ind(Ban)}\). This may then be applied to the structure sheaf of a complex manifold, viewed as a sheaf valued in Tc.

Let us now discuss how we can adapt this functor into one which maps to the conceptually simpler category of complete bornological vector spaces:

**Definitions** - A _disc_, or a _balanced or circled set_, in a real or complex vector space is convex, absolutely convex, and internally closed. Intuitively, a disc is a subset which remains invariant under multiplication by the unit disc in \(\mathbb{R}\) or \(\mathbb{C}\).
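For reference (a standard construction, not spelled out explicitly in [9]), the seminorm attached to a disc \(B\) is its gauge, or Minkowski functional,
\[p_{B}(v)=\inf\{r>0:v\in rB\},\qquad v\in\operatorname{span}(B);\]
when \(V\) is a normed space and \(B\) its closed unit ball, \(p_{B}\) recovers the norm.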
To any disc we may thus associate the gauge seminorm and the associated seminormed space \(V_{B}\) on its linear span. The disc is _complete_ if \(V_{B}\) is complete. A bornological space is then complete if every bounded set is contained in a complete bounded disc.

Define an order on the discs of \(V\): \(B\leq B^{\prime}\) if \(B^{\prime}\) _absorbs_ \(B\) - that is, \(B\subseteq rB^{\prime}\) for some positive real number \(r\). If \(B\leq B^{\prime}\), there is an injective bounded linear map \(V_{B}\to V_{B^{\prime}}\). Given a bounded linear map \(V\to W\), by definition this induces a map of the directed sets of bounded discs, which defines a morphism of inductive systems. This functor is called _dissection_ and denoted diss. On an ordinary bornological space, the target of diss is \(\mathrm{Ind}(\mathrm{Sns})\), the \(\mathrm{Ind}\)-completion of the category of seminormed spaces. If \(V\) is complete then, by assumption, the spaces \(V_{B}\) are complete, and so diss may be regarded as a functor \(\mathrm{CBorn\to Ind(Ban)}\). The main theorem of relevance to us is the following:

**Theorem** [9] - \(\mathrm{diss:CBorn\to Ind(Ban)}\) is fully faithful, with essential image the _reduced_ systems - those with injective transition functions. We therefore have the fundamental equivalence:
\[\mathrm{CBorn}\simeq\mathrm{Ind}(\mathrm{Ban})_{\mathrm{red}} \tag{4}\]

In [10], the functor \(\mathrm{IB}\) associates to a locally convex topological vector space a _reduced_ inductive system of Banach spaces. We may therefore restrict the codomain of \(\mathrm{IB}\) and regard it as a functor valued in complete bornological vector spaces. By abuse of notation, we shall denote the associated functor by \(\mathrm{IB}:\mathrm{Tc}\rightarrow\mathrm{CBorn}\). We summarise the main facts about \(\mathrm{IB}\) proven in [10]:

1. \(\mathrm{IB}\) commutes with _reduced_ inductive limits of systems of Frechet spaces over \(\mathbb{N}\) - those with all transition maps injective:
\[\varinjlim_{n\in\mathbb{N}}\mathrm{IB}(F_{n})\cong\mathrm{IB}(\varinjlim_{n\in\mathbb{N}}(F_{n})) \tag{5}\]
2. For \(V,W\in\mathrm{Tc}\) we can endow the space \(L_{b}(V,W)=\mathrm{Hom}_{\mathrm{Tc}}(V,W)\) with the structure of a locally convex topological vector space, with seminorms
\[\{p_{B}:B\subseteq V\ \mathrm{bounded},\ p\ \mathrm{a\ continuous\ seminorm\ on}\ W\}\]
and \(p_{B}(f)=\sup_{v\in B}p(f(v))\). So \(p_{B}\) evaluates on \(f\) the supremal value that \(p\circ f\) takes on \(B\). The tensor product of two locally convex topological vector spaces \(V,W\) inherits a canonical family of seminorms \(\{p\otimes q\}\) for \(p,q\) seminorms on \(V,W\) respectively. For \(x\) an element of \(V\otimes W\),
\[(p\otimes q)(x)=\inf_{x=\underset{i\in I}{\Sigma}v_{i}\otimes w_{i}}\Sigma p(v_{i})q(w_{i}) \tag{6}\]
where the infimum runs over all possible representations of \(x\) as an element of the tensor product. We record the isomorphisms of note - firstly, for \(V\) bornological and \(W\) complete:
\[\mathrm{IB}(L_{b}(V,W))\cong\mathrm{Hom}_{\mathrm{Ind}(\mathrm{Ban})}(\mathrm{IB}(V),\mathrm{IB}(W)) \tag{7}\]
For arbitrary locally convex \(V,W\):
\[\mathrm{IB}(V)\hat{\otimes}\mathrm{IB}(W)\cong\mathrm{IB}(V\otimes W) \tag{8}\]
where \(V\otimes W\) has the locally convex structure from the system of seminorms defined above, and the internal tensor product on \(\mathrm{Ind}(\mathrm{Ban})\) objects is obtained by the unique extension of the closed structure to \(\mathrm{Ind}\) objects.
3. Whenever \(E\) is a DFN space and \(F\) is an FN space:
\[\text{Hom}(\text{IB}(E),\text{IB}(F))\cong\mathbb{R}\text{Hom}(\text{IB}(E),\text{IB}(F)) \tag{9}\]
\[L(\text{IB}(E),\text{IB}(F))\cong\mathbb{R}L(\text{IB}(E),\text{IB}(F)) \tag{10}\]
and whenever \(X,Y\) are objects of \(\text{Ind}(\text{Ban})\) with \(X\) nuclear (meaning that for any index \(i\), there is a \(j\geq i\) with \(X(i)\to X(j)\) nuclear), then
\[X\hat{\otimes}^{\mathbb{L}}Y\cong X\hat{\otimes}Y \tag{11}\]
4. IB respects sheaves in the following sense: if \(F\) is a presheaf of Frechet spaces which is a sheaf when viewed as a presheaf of vector spaces on a second countable space \(X\) (such as a complex manifold), then the presheaf characterised by
\[\text{IB}(F)(U)=\text{IB}(F(U)) \tag{12}\]
is a sheaf.
5. Cartan's Theorem B holds for \(\text{IB}(\mathcal{O}_{X})\), which by (12) is a sheaf:
\[\mathbb{R}\Gamma(U,\text{IB}(\mathcal{O}_{X}))\cong\text{IB}(\mathcal{O}_{X}(U)) \tag{13}\]
whenever \(U\) has vanishing higher cohomology, \(H^{k}(U,\mathcal{O}_{X})=0,\ k>0\).

## 3 Verdier Dual of \(\mathrm{IB}(\mathcal{O}_{X})\)

In this section we present an alternative calculation of the Verdier dual of \(\mathrm{IB}(\mathcal{O}_{X})\) and \(\mathrm{IB}(\Omega_{X}^{n-p})\), inspired by the argument from [2]. By the equivalence between reduced inductive systems of Banach spaces and complete bornological vector spaces, we shall henceforth consider complete bornological sheaves on an \(n\)-dimensional complex manifold \(X\). All derived functors are in the sense of Schneiders [12].

**Definition** - The _Verdier dual_ of a bornological sheaf \(F\) on \(X\), \(D(F)\), is characterised by the formula, for all open \(U\subseteq X\):
\[D(F)(U)=\mathbb{R}L(\mathbb{R}\Gamma_{c}(U;F),\mathrm{IB}(\mathbb{C})) \tag{14}\]

**Theorem** (Verdier Dual of \(\mathrm{IB}(\mathcal{O}_{X})\)) - There are isomorphisms for all positive integers \(p\leq n\):
\[\mathrm{IB}(\Omega_{X}^{n-p})\simeq D(\mathrm{IB}(\Omega_{X}^{p})) \tag{15}\]

In [10] this result is proven by constructing a perfect pairing between \(\mathrm{IB}(\Omega_{X}^{n-p})\) and \(\mathrm{IB}(\Omega_{X}^{p})\), generalising the cup product and fibre integration to sheaves valued in \(\mathrm{Ind}(\mathrm{Ban})\). Their argument thereafter is a double induction on the dimensions of the spaces considered and the number of irreducible components involved. After a long series of reductions, it is eventually shown to be a corollary of Stokes' theorem. We propose a different proof, adapting the descent approach to the calculation of the Verdier dual used in the non-Archimedean context, detailed in [2]. Rather than making the long series of reductions as in the above proof, the idea of this proof is as follows:

1. Carry out the proof in the special case of complex affine space, \(\mathbb{C}^{n}\). This is done by the construction of a _residue map_ on compactly supported cohomology, \(H_{c}^{n}(X,\omega)\rightarrow\mathbb{C}\), which, composed with a natural pairing on \(H^{0}(\mathcal{O}_{X})\times H_{c}^{n}(X,\omega)\), establishes duality.
2. Given a closed immersion \(i:X\rightarrow\mathbb{C}^{n}\) of a complex submanifold, we can express cohomology on \(X\) in terms of cohomology of \(\mathbb{C}^{n}\), using the argument given in [1]. Moreover, a residue map on \(X\) can be defined in terms of the residue on the ambient affine space, independent of the choice of Stein neighbourhood and of the choice of closed immersion \(i:X\rightarrow\mathbb{C}^{n}\).
3. Any _Stein manifold_ admits a closed immersion into some complex affine space; for details see [5] or [8]. Therefore the duality theorem can be deduced for Stein domains from the affine case.
4. Any point on a complex manifold admits a Stein neighbourhood, which may be assumed to be irreducible (and of course of constant dimension), and such that the intersections of these spaces are also irreducible Stein spaces. This is because holomorphic separability and convexity are preserved by the intersection of such domains, so one only has to take irreducible components.
5. We therefore have unique canonical local duality isomorphisms, induced by unique canonical residue maps on these Stein neighbourhoods, and also on the overlaps of these domains. We conclude that all the duality isomorphisms glue to a global duality isomorphism for any complex manifold.

(4) and (5) require no further explanation, so below we give a proof of (1), and briefly explain the pulling back of cohomology in (2) for Stein domains embedded as closed submanifolds. Before delving into the proof, we recall the necessary facts about Stein domains:

**Definition** [8] - A _Stein space_ \((X,\mathcal{O}_{X})\) is a complex space, whose connected components are second countable as subspaces, which satisfies the following:
1. \(X\) is _holomorphically separable_ - for any distinct points \(x,y\) of \(X\), there is a global section \(f\in\mathcal{O}_{X}\) with \(f(x)\neq f(y)\).
2. \(X\) is _holomorphically convex_ - for any compact subset \(K\subseteq X\), its _holomorphically convex hull_
\[\hat{K}_{\mathcal{O}_{X}}=\{x\in X:\,\forall f\in\mathcal{O}_{X},\,|f(x)|\leq\sup_{k\in K}|f(k)|\} \tag{16}\]
is compact.

Stein spaces are those complex spaces which possess very nice function theory. The criteria above guarantee the existence of sufficiently many global sections. Indeed, holomorphically convex subspaces of \(\mathbb{C}^{n}\) are precisely the _regions of holomorphy_: those spaces on which global holomorphic functions exist and cannot be analytically continued at any point. Holomorphic separability guarantees that there is no redundancy in the function theory. Perhaps the most important theorem pertaining to Stein spaces is Cartan's Theorem B:

**Theorem** [8] - The cohomology of coherent analytic sheaves on a complex space \(X\) vanishes in positive degrees if and only if \(X\) is a Stein space.

Stein domains are therefore well suited for cohomological considerations. The following theorems demonstrate the flexibility of Stein manifolds for the local study of complex manifolds:

**Theorem** [11] - A Stein manifold \(X\) of dimension \(n\) admits a proper embedding into \(\mathbb{C}^{2n+1}\).

**Theorem** [8] - The topology of every complex space has a basis of open Stein neighbourhoods. In particular, any point of a complex manifold possesses a neighbourhood basis of Stein manifolds.

Therefore, to locally study a complex manifold, we may take a Stein neighbourhood, which has nice cohomological properties, and exploit the cohomological relationship between the Stein manifold and a complex space into which it embeds. This is the approach we take in our alternative calculation of the Verdier dual. We need only a few additional facts:

**Theorem** (Extension Principle) [4] - For a closed complex subspace \(X\subseteq Y\), an analytic sheaf \(F\) on \(X\) is coherent if and only if its pushforward \(i_{*}F\) is coherent.

This will apply in particular to the embeddings of Stein manifolds into affine space.
**Theorem** (Conormal Exact Sequence) [6] - For an embedding of smooth complex manifolds \(Y\hookrightarrow X\), if \(\mathcal{I}\) is the associated ideal sheaf, there is a canonical exact sequence
\[0\to\mathcal{I}/\mathcal{I}^{2}\to i^{*}\Omega_{X}\to\Omega_{Y}\to 0 \tag{17}\]
Moreover, this is an exact sequence of Frechet spaces, and therefore an exact sequence of bornological sheaves for the canonically induced bornologies.

Through the use of Koszul resolutions, we can understand the cohomology of an embedded Stein manifold, or indeed any closed complex subspace, in terms of its local equations. First we need a definition:

**Definition** [8] [14] - Let \(R\) be a ring. A finite sequence of elements \((f_{1},\cdots,f_{n})\) of \(R\) is a _regular sequence_ if each \(f_{i}\) is a nonzerodivisor modulo \(f_{1},\cdots,f_{i-1}\).

We apply this definition to the local ring of a point (without loss of generality the origin) in complex affine space in the image of an embedded smooth Stein manifold \(X\). We remark that the local equations \(f_{i}\) for \(X\) can be chosen such that the associated stalk elements \((f_{i})_{0}\) form a regular sequence, by smoothness - the Stein manifold arises locally as an embedded subspace, and thus as the zero locus of some of the coordinate functions. This could in principle fail for other embeddings, such as the embedding of a union of two spaces which meet tangentially at a point.

**Theorem** (Koszul Resolution) [8] - For any open region \(U\subseteq\mathbb{C}^{n}\), and \(i:X\hookrightarrow U\), \(i(x)=0\), the embedding of an analytic subset defined by equations \(f_{1},\cdots,f_{p}\) such that \((f_{1})_{0},\cdots,(f_{p})_{0}\) defines a regular \((\mathcal{O}_{U})_{0}\)-sequence, the pushforward \(i_{*}\mathcal{O}_{X}\) admits a free resolution locally at \(x\), of length at most \(p\). That is, on some open neighbourhood \(0\in V\subseteq U\), there is an exact sequence:
\[0\to\Lambda^{p}\mathcal{O}_{V}^{p}\to\cdots\to\Lambda^{1}\mathcal{O}_{V}^{p}\to\mathcal{O}_{V}\to i_{*}\mathcal{O}_{X}|_{V}\to 0 \tag{18}\]
Of course, the assertion that \(i(x)=0\) is redundant by translation, so we may conclude that for any embedded Stein manifold in affine space, the structure sheaf admits a free resolution locally.

**Proof of Verdier Duality in the Affine Case, \(U=\mathbb{C}^{n}\)**

1. As in the proof in [10], all the cases for various \(p\) are equivalent, so we shall consider only the case of top differential forms.
2. \(\mathbb{C}^{n}\) possesses an exhaustion by compact polydiscs \(P_{m}\) of polyradius \((m,\cdots,m)\), for \(m\) running over all positive integers. These polydiscs are cofinal in the system of compact subsets under inclusions, and so a compactly supported section is supported on one of these polydiscs.
3. By excision and Cartan's Theorem B for coherent sheaves, we have an isomorphism
\[H^{n}_{P_{m}}(U,\Omega_{U})\cong H^{n-1}(U\backslash P_{m},\Omega_{U}) \tag{19}\]
4. \(U\backslash P_{m}\) comes equipped with an acyclic cover by those subspaces, for varying \(i\), with \(i^{th}\) coordinate of modulus greater than \(m\):
\[V_{i}=\{(z_{1},...,z_{n}):|z_{i}|>m\} \tag{20}\]
These are Stein domains, and therefore define an acyclic cover by Cartan's Theorem B. Therefore the associated Cech complex is adequate to calculate cohomology [6].
5. Cech cohomology with respect to this cover yields acyclicity in all degrees below \(n\), the dimension of \(U\).
The \(n^{\rm th}\) cohomology can be identified with the Frechet space of Laurent series converging on the intersection of all of the acyclic domains,
\[V=\{(z_{1},...,z_{n}):|z_{i}|>m\ \forall i\} \tag{21}\]
of the form
\[\mathop{\Sigma}_{m_{i}<0}a_{m}z^{m}\,dz/z \tag{22}\]
where \(m\) is the multi-index \((m_{1},\cdots,m_{n})\), \(z^{m}=\prod_{i}z_{i}^{m_{i}}\), and \(dz/z=\bigwedge_{i}dz_{i}/z_{i}\).
6. As compactly supported cohomology is the colimit of cohomology supported on these polydiscs, we find that \(H^{n}_{c}(U,\Omega_{U})\) can be identified with the space of such Laurent series, convergent outside _some_ polydisc.
7. Bearing in mind the statement of Verdier duality, we would like to relate this cohomology space to the space of bounded linear maps from \(H^{0}(\mathcal{O}_{X})\) to \(\mathbb{C}\). Since all of the spaces involved are Frechet spaces, boundedness and continuity are equivalent. Therefore, following the argument of [2], we associate such a continuous linear map to the Laurent series (22) as follows: a global section in \(H^{0}(\mathcal{O}_{X})\) can be identified with a convergent power series, and we define the map by
\[f=\Sigma c_{m}z^{m}\mapsto\Sigma a_{m}c_{m} \tag{23}\]
and since the projections onto the values of the coefficients are continuous and linear, every continuous linear map has this form. This proves the affine case:
\[H^{n}_{c}(\omega)\cong\operatorname{Ext}^{n}_{c}(\mathcal{O}_{X},\omega) \tag{24}\]
and the other Exts are zero, so by the above the dual is the translated structure sheaf.
8. We may extend this result. Firstly, we have established a bilinear map, from which two isomorphisms can be deduced by dualising. Secondly, we can consider more general scalars than those in \(\mathbb{C}\), and consider scalars in an arbitrary \(\mathbb{C}\)-vector space \(V\). We define the _residue_ \(\operatorname{Res}:H^{n}_{c}(X,\omega)\to\mathbb{C}\) to be the map sending the class associated to a convergent sum (22) to its lowest degree coefficient \(a_{0}\). The _trace_ on \(H^{n}_{c}(X,\omega\otimes V)\cong H^{n}_{c}(X,\omega)\otimes V\) is simply \(\operatorname{Res}\otimes 1_{V}\). Then we deduce the duality results from the bilinear form
\[H^{n}_{c}(X,\omega\otimes V)\times H^{0}(\mathcal{O}_{X})\to V \tag{25}\]
\[\left(\mathop{\Sigma}_{m_{i}<0}a_{m}z^{m}\,dz_{1}\wedge dz_{2}\wedge\cdots\wedge dz_{n},\ \Sigma c_{m}z^{m}\right)\mapsto\Sigma a_{m}c_{m} \tag{26}\]
We deduce the isomorphisms
\[H^{n}_{c}(\omega\otimes V)\cong\operatorname{Ext}^{n}_{c}(\mathcal{O}_{X},\omega\otimes V) \tag{27}\]
\[H^{0}(\omega\otimes V)\cong\operatorname{Ext}^{0}(\mathcal{O}_{X},\omega\otimes V) \tag{28}\]
9. Finally, these arguments can be generalised to a coherent \(\mathcal{O}_{X}\)-module \(F\) in place of \(\mathcal{O}_{X}\) by taking a local presentation.
Therefore, we have the following isomorphisms for any coherent sheaf \(F\) on \(\mathbb{C}^{n}\): \[\mathrm{Ext}_{c}^{p}(F,\omega)=0,\ p<n\] (29) \[\mathrm{Ext}_{c}^{n}(F,\omega)\cong\mathrm{Hom}\,\mathrm{cont}(H^{0}(F),\mathbb{C})\] (30) \[\mathrm{Hom}(F,\omega)\cong\mathrm{Hom}\,\mathrm{cont}(H_{c}^{n}(F),\mathbb{C})\] (31) where \(\mathrm{Hom}\,\mathrm{cont}\) denotes the space of continuous, or equivalently bounded, linear maps between these spaces. **Remark** - Van der Put [2] instead considers affine space over a rigid space \(Y\), and so \(V\) is replaced by a finitely generated \(\mathcal{O}_{Y}\)-module. For our purposes complex affine space is sufficient, and so \(Y\) is taken to be a point, and \(V\) is just a complex vector space. **Pulling back Cohomology to an embedded smooth Stein manifold** We follow the argument given in [1]. The argument is given in the rigid analytic context also, where Steins have a similar definition, and have similar properties to complex Stein domains: 1. Let \(i:X\hookrightarrow\mathbb{C}^{n}\) be a closed embedding, and cover the image of \(X\) by open subsets \(U_{i}\), such that on \(U_{i}\), \(X\) has local equations \(f_{1}^{i},\cdots,f_{n-q}^{i}\). These open sets may be chosen such that the stalks of the equations \(f_{j}^{i}\) form a regular sequence in the local ring at any point \(x\in X\). Writing \(V_{i}=i^{-1}(U_{i})\), we can describe the structure sheaf on \(X\) explicitly: \[\mathcal{O}_{X}(V_{i})\cong\mathcal{O}_{\mathbb{C}^{n}}(U_{i})/(f_{1}^{i},\cdots,f_{n-q}^{i})\] (32) 2. Because \(i\) is a closed immersion there is an _underived_ exceptional pullback \(i^{!}\) - that is, a right adjoint to the pushforward \(i_{*}\) on the category of sheaves. 
One finds \(i^{!}\) by the following series of isomorphisms: \[\mathrm{Hom}_{\mathcal{O}_{\mathbb{C}^{n}}}(i_{*}\mathcal{F},\mathcal{G})=\mathrm{Hom}_{i^{-1}\mathcal{O}_{\mathbb{C}^{n}}}(\mathcal{F},i^{!}\mathcal{G})\] (33) \[=\mathrm{Hom}_{i^{-1}\mathcal{O}_{\mathbb{C}^{n}}}(\mathcal{F},\mathcal{H}\mathrm{om}_{i^{-1}\mathcal{O}_{\mathbb{C}^{n}}}(\mathcal{O}_{X},i^{!}\mathcal{G}))\] (34) \[=\mathrm{Hom}_{i^{-1}\mathcal{O}_{\mathbb{C}^{n}}}(\mathcal{F},i^{*}\mathcal{H}\mathrm{om}_{\mathcal{O}_{\mathbb{C}^{n}}}(i_{*}\mathcal{O}_{X},\mathcal{G}))\] (35) \[=\mathrm{Hom}_{\mathcal{O}_{X}}(\mathcal{F},i^{*}\mathcal{H}\mathrm{om}_{\mathcal{O}_{\mathbb{C}^{n}}}(i_{*}\mathcal{O}_{X},\mathcal{G}))\] (36) Therefore by uniqueness of representatives we have a description of the functor \(i^{!}\): \[i^{!}\mathcal{G}\cong i^{*}\mathcal{H}\mathrm{om}_{\mathcal{O}_{\mathbb{C}^{n}}}(i_{*}\mathcal{O}_{X},\mathcal{G})\] (37) 3. By taking the \(n^{\text{th}}\) exterior power of the conormal exact sequence (17), we get an isomorphism \[i^{*}\Omega^{n}_{\mathbb{C}^{n}}\otimes\mathcal{O}_{X}\cong\bigwedge^{n-q}\mathcal{I}/\mathcal{I}^{2}\otimes\Omega^{q}_{X}\] (38) This isomorphism is defined in the obvious way, as described in [7]. That is, we can choose local generators \(e_{1},...,e_{n}\) of \(\Omega^{1}_{\mathbb{C}^{n}}\otimes\mathcal{O}_{X}\), such that \(e_{1},...,e_{n-q}\) generate \(\mathcal{I}/\mathcal{I}^{2}\), and the images of the remaining generators generate \(\Omega^{1}_{X}\). This choice determines the isomorphism. The wedge product on the right is invertible, yielding the isomorphism: \[\Omega^{q}_{X}\cong i^{*}\Omega^{n}_{\mathbb{C}^{n}}\otimes\left(\bigwedge^{n-q}\mathcal{I}/\mathcal{I}^{2}\right)^{*}\] (39) 4. To analyse \(\mathbb{R}i^{!}\) one considers the restriction to open affine neighbourhoods as in (32). Then, one can analyse the local Ext sheaves \(\mathcal{E}\text{xt}^{j}_{\mathcal{O}_{U_{i}}}(i_{*}\mathcal{O}_{X}|_{U_{i}},\Omega^{n}_{\mathbb{C}^{n}}|_{U_{i}})\) using Koszul resolutions. 1. Consider a neighbourhood \(U_{i}\) as in (32), so that \(X\) is locally defined by the equations \(f^{i}_{1}=\cdots=f^{i}_{n-q}=0\). 2. Denote the Koszul resolution of \(i_{*}\mathcal{O}_{X}|_{U_{i}}\) by \(K_{\bullet}(f^{i}_{1},\cdots,f^{i}_{n-q})\), and for any sheaf \(F\) of \(\mathcal{O}_{U_{i}}\)-modules we define the _dual complex_: \[K^{\bullet}(f^{i}_{1},\cdots,f^{i}_{n-q};F)=\mathcal{H}\text{om}^{\bullet}_{\mathcal{O}_{U_{i}}}(K_{\bullet}(f^{i}_{1},\cdots,f^{i}_{n-q}),F)\] (40) We see that because the Koszul complex is a locally free resolution, this calculates the sheaf Ext: \[H^{j}K^{\bullet}(f^{i}_{1},\cdots,f^{i}_{n-q};F)\cong\mathcal{E}\text{xt}^{j}_{\mathcal{O}_{U_{i}}}(K_{\bullet}(f^{i}_{1},\cdots,f^{i}_{n-q}),F)\] (41) \[\cong\mathcal{E}\text{xt}^{j}_{\mathcal{O}_{U_{i}}}(\mathcal{O}_{U_{i}}/(f^{i}_{1},\cdots,f^{i}_{n-q}),F)\] (42) \[\cong\mathcal{E}\text{xt}^{j}_{\mathcal{O}_{U_{i}}}(i_{*}\mathcal{O}_{X}|_{U_{i}},F)\] (43) We deduce an isomorphism on global sections \[\text{Ext}^{n-q}(i_{*}\mathcal{O}_{X}|_{U_{i}},F)\to H^{n-q}K^{\bullet}(f^{i}_{1},\cdots,f^{i}_{n-q};F)\to F/(f^{i}_{1},\cdots,f^{i}_{n-q})F\] (44) The first map is the induced map on \((n-q)^{\text{th}}\) cohomology. The second morphism \(H^{n-q}K^{\bullet}(f^{i}_{1},\cdots,f^{i}_{n-q};F)\to F/(f^{i}_{1},\cdots,f^{i}_{n-q})F\) sends \(\alpha\in K^{n-q}(f^{i}_{1},\cdots,f^{i}_{n-q};F)\) to \(\alpha_{1,\cdots,n-q}\). 3. 
The argument of Proposition 7.2 from [7] adapts to our context, and allows us to conclude that \[\mathcal{E}\mathrm{xt}^{j}_{\mathcal{O}_{U_{i}}}(i_{*}\mathcal{O}_{X}|_{U_{i}},\Omega^{n}_{\mathbb{C}^{n}}|_{U_{i}})=0,\ j\neq n-q\] (45) and therefore these Ext sheaves vanish globally also. 4. By (44) there are isomorphisms depending on the chosen equations for \(X\) \[\phi_{i}:\mathcal{E}\mathrm{xt}^{n-q}_{\mathcal{O}_{U_{i}}}(i_{*}\mathcal{O}_{X}|_{U_{i}},\Omega^{n}_{\mathbb{C}^{n}}|_{U_{i}})\cong\frac{\Omega^{n}_{\mathbb{C}^{n}}|_{U_{i}}}{\mathcal{I}|_{U_{i}}\Omega^{n}_{\mathbb{C}^{n}}|_{U_{i}}}\] (46) and these isomorphisms glue via the maps relating the systems of equations on overlaps. That is, given two systems of equations for \(X\) on \(U_{i}\) and \(U_{j}\), \((f^{i}_{1},\cdots,f^{i}_{n-q})\) and \((g^{j}_{1},\cdots,g^{j}_{n-q})\) respectively, then on the overlap \(U_{i}\cap U_{j}\), these equations can be written in terms of each other. Equivalently, there is a matrix \(c_{kl}\) with \(g_{k}=\Sigma c_{kl}f_{l}\). By lemma 7.1 of [7] these isomorphisms differ exactly by multiplication by \(\det(c_{kl})\). The morphisms \(\phi_{i}\) can therefore be glued along these maps, yielding a global isomorphism: \[\phi:\mathcal{E}\mathrm{xt}^{n-q}_{\mathcal{O}_{\mathbb{C}^{n}}}(i_{*}\mathcal{O}_{X},\Omega^{n}_{\mathbb{C}^{n}})\cong\mathcal{H}\mathrm{om}_{\mathcal{O}_{X}}\left(\bigwedge^{n-q}\frac{\mathcal{I}}{\mathcal{I}^{2}},\frac{\Omega^{n}_{\mathbb{C}^{n}}}{\mathcal{I}\,\Omega^{n}_{\mathbb{C}^{n}}}\right)\] (47) as the gluing data \(\{\det(c_{kl})\}\) is exactly described by the automorphisms of \(\bigwedge^{n-q}\frac{\mathcal{I}}{\mathcal{I}^{2}}\). To be precise, this map \(\phi\) evaluated on \(f^{i}_{1},\cdots,f^{i}_{n-q}\) is exactly \(\phi_{i}\). 5. Applying \(i^{*}\) and the isomorphism (39), we deduce \[i^{*}\mathcal{E}\mathrm{xt}^{n-q}_{\mathcal{O}_{\mathbb{C}^{n}}}(i_{*}\mathcal{O}_{X},\Omega^{n}_{\mathbb{C}^{n}})\cong i^{*}\Omega^{n}_{\mathbb{C}^{n}}\otimes\left(\bigwedge^{n-q}\mathcal{I}/\mathcal{I}^{2}\right)^{*}\cong\Omega^{q}_{X}\] (48) We therefore finally arrive at a neat description of the derived exceptional pullback, according to (37) \[\mathbb{R}i^{!}(\Omega^{n}_{\mathbb{C}^{n}})\cong\Omega^{q}_{X}[q-n]\] (49) 6. Recall that \(i_{*}\) preserves coherence. To prove coherent duality we take an injective resolution of top forms \[0\to\Omega^{n}_{\mathbb{C}^{n}}\to I^{\bullet}\] and take cohomology of the adjunction isomorphism between \(i_{*}\) and \(i^{!}\). This yields the duality isomorphisms, for any coherent \(\mathcal{O}_{X}\)-module \(M\) \[\operatorname{Ext}^{i}_{\mathcal{O}_{\mathbb{C}^{n}}}(i_{*}M,\Omega^{n}_{\mathbb{C}^{n}})\cong\operatorname{Ext}^{i-(n-q)}_{\mathcal{O}_{X}}(M,\Omega^{q}_{X})\] (50) \[\mathcal{E}\mathrm{xt}^{i}_{\mathcal{O}_{\mathbb{C}^{n}}}(i_{*}M,\Omega^{n}_{\mathbb{C}^{n}})\cong i_{*}\mathcal{E}\mathrm{xt}^{i-(n-q)}_{\mathcal{O}_{X}}(M,\Omega^{q}_{X})\] (51) 7. Because \(i\) is proper, the correspondence between compact subsets yields the isomorphism on compactly supported cohomology, for any sheaf of \(\mathcal{O}_{X}\)-modules \(M\): \[H^{*}_{c}(\mathbb{C}^{n},i_{*}M)\cong H^{*}_{c}(X,M)\] (52) These isomorphisms are already enough to pull back cohomology from affine space to the closed immersed Stein. From these we deduce the necessary isomorphisms. However, the isomorphisms need to be sufficiently canonical in order for us to deduce a global isomorphism. 
In particular, the isomorphism must not depend on the choice of embedding into affine space, and two different Stein covers should yield the same isomorphism. This is done by constructing an invariant trace map for Steins, based on the affine definition. 8. We can define, for an embedding \(\phi:X\to\mathbb{C}^{n}\) of a Stein manifold of dimension \(q\), a residue map on \(X\): \[H^{q}_{c}(X,\Omega_{X})\xrightarrow{\sim}H^{q}_{c}(\mathbb{C}^{n},\mathcal{E}\mathrm{xt}^{n-q}(\phi_{*}\mathcal{O}_{X},\Omega^{n}_{\mathbb{C}^{n}}))\xrightarrow{\alpha}\operatorname{Ext}^{n}_{c}(\phi_{*}\mathcal{O}_{X},\Omega^{n}_{\mathbb{C}^{n}})\xrightarrow{\beta}\operatorname{Hom}\operatorname{cont}(H^{0}(\mathcal{O}_{X}),\mathbb{C})\xrightarrow{\gamma}\mathbb{C}\] (53) The map \(\alpha\) is an isomorphism, coming from the spectral sequence for \(\operatorname{Ext}^{n}_{c}\), which degenerates because by (45) only the sheaf Ext in degree \(n-q\) is nonzero. The isomorphism \(\beta\) comes from the affine duality isomorphism (30) applied to the coherent sheaf \(\phi_{*}\mathcal{O}_{X}\). The final map \(\gamma\) is the evaluation on the constant function \(1\) on \(X\). The trace map is defined by taking the same sequence, tensoring the sheaves of differential forms (including the trivial one \(\mathbb{C}\) on the point) with a complex vector space \(V\). 9. It remains to show the following, which we explain below, following the argument of [2]: 1. Given two distinct immersions into distinct affine spaces, the residues associated to both equal the residue associated to the sum of the maps \((i_{1},i_{2}):X\to\mathbb{C}^{n_{1}+n_{2}}\) 2. Given an open immersion \(X_{1}\to X_{2}\) of Steins, the composition \[H^{n}_{c}(X_{1},\Omega_{X_{1}})\to H^{n}_{c}(X_{2},\Omega_{X_{2}})\xrightarrow{\operatorname{Res}_{X_{2}}}\mathbb{C}\] (54) is \(\operatorname{Res}_{X_{1}}\) ### Invariance of Trace and Residue **Lemma** - The residue map on affine space can be written as a composition \[H^{n}_{c}(\mathbb{C}^{n},\omega)\xrightarrow{\phi}H^{n}_{c}(\mathbb{P}^{n},\omega)\xrightarrow{\sim}H^{n}(\mathbb{P}^{n},\omega)\xrightarrow{\psi}\mathbb{C}\] for some final isomorphism \(\psi:H^{n}(\mathbb{P}^{n},\omega)\to\mathbb{C}\). **Proof** - By GAGA [13] the cohomology group of \(\mathbb{P}^{n}\) can be calculated algebraically. To prove the statement we show that \(\phi\) and \(\operatorname{Res}_{n}\) have the same kernel. Then, \(\psi\) can be chosen such that \(\psi\circ\phi=\operatorname{Res}_{n}\). We consider the action of \((S^{1})^{n}\) on \(\mathbb{C}^{n}\). It respects the polydiscs \(P_{m}\), as well as the open cover of \(\mathbb{C}^{n}\setminus P_{m}\), and therefore defines an action on \(H^{n}_{c}(\mathbb{C}^{n},\omega)\). The extension of this action to \(H^{n}(\mathbb{P}^{n})\) is trivial. Therefore for any cohomology class \(\xi\), and any operator \(\sigma=(\lambda_{1},\cdots,\lambda_{n})\in(S^{1})^{n}\) we have \(\phi(\sigma\xi-\xi)=0\). Representing \(\xi\) as a convergent sum \(\sum_{m_{i}\leq 0}a_{m}z^{m}\,dz/z\), we see that \[\sigma\xi-\xi=\sum_{m_{i}\leq 0}a_{m}z^{m}(\lambda^{m}-1)\,dz/z \tag{55}\] Choosing \(\lambda\) such that the terms \(\lambda^{m}-1\) do not vanish for \(m\neq 0\), we see that the classes killed by \(\phi\) are exactly those with \(a_{0}=0\) - that is, \(\ker\operatorname{Res}_{n}=\ker\phi\). This proves the factorisation, and then because \(H^{n}(\mathbb{P}^{n},\omega)\) is invariant under the projective general linear group, so is the residue. 
**Lemma** - For any automorphism \(\sigma\) of \(\mathbb{C}^{n}\), there exists a unique invertible \(f\in H^{0}(\mathcal{O}_{\mathbb{C}^{n}})\) such that \(\operatorname{Res}_{n}\circ\sigma=f\cdot\operatorname{Res}_{n}\) as maps \(H^{n}_{c}(\mathbb{C}^{n},\omega)\to\mathbb{C}\). **Proof** - The map \(\operatorname{Res}_{n}\circ\sigma\) is a continuous linear map \(H^{n}_{c}(\omega)\to\mathbb{C}\). Because \(\omega\) is coherent, the duality statement implies that \(\operatorname{Res}_{n}\circ\sigma(\xi)=\operatorname{Res}_{n}(f\xi)\) for a unique \(f\in H^{0}(\mathcal{O}_{\mathbb{C}^{n}})\). We may apply the same reasoning to \(\sigma^{-1}\), and so we conclude that \(f\) is invertible. With this, we may conclude invariance of \(\operatorname{Res}_{n}\) with respect to certain automorphisms of \(\mathbb{C}^{n}\): **Lemma** - Automorphisms of the form \[\sigma(z_{1},\cdots,z_{n})=(z_{1},\cdots,z_{m},z_{m+1}+k_{m+1},\cdots,z_{n}+k_{n}) \tag{56}\] where the \(k_{j}\) are holomorphic functions of \(z_{1},\cdots,z_{m}\) only, are \(\mathrm{Res}_{n}\)-invariant: \(\mathrm{Res}_{n}\circ\sigma=\mathrm{Res}_{n}\). **Proof** - Such automorphisms can be written as a commutator composed with a translation. Commutators are \(\mathrm{Res}_{n}\)-invariant because of the last lemma. Translations are \(\mathrm{Res}_{n}\)-invariant from the power series formulation. These lemmas are enough to prove the invariance properties of the generalised residue map (53). **Theorem** - The residue (53) does not depend on the choice of embedding \(\phi\). **Proof** - A closed immersion of Steins \(\phi:X_{1}\to X_{2}\), of dimensions \(m_{1},m_{2}\) respectively, induces a map \(\tilde{\phi}\) functorially on cohomology, via the coherent duality isomorphisms for the pushforward of differential forms: \[H_{c}^{m_{1}}(X_{1},\Omega_{X_{1}})\xrightarrow{\sim}H_{c}^{m_{1}}(X_{2},\mathcal{E}\mathrm{xt}^{m_{2}-m_{1}}(\phi_{*}\mathcal{O}_{X_{1}},\Omega_{X_{2}}))\xrightarrow{\sim}\mathrm{Ext}_{c}^{m_{2}}(\phi_{*}\mathcal{O}_{X_{1}},\Omega_{X_{2}})\to H_{c}^{m_{2}}(X_{2},\Omega_{X_{2}})\] where the last map is induced by the canonical morphism \(\mathcal{O}_{X_{2}}\to\phi_{*}\mathcal{O}_{X_{1}}\). Now given two closed immersions \(X\hookrightarrow\mathbb{C}^{n_{1}}\), \(X\hookrightarrow\mathbb{C}^{n_{2}}\), we show that the residue associated to these embeddings is equal to the residue associated to the coproduct of the embeddings. Consider the commutative diagram in which \(k_{1}\) is the holomorphic extension of \(\phi_{2}\circ\phi_{1}^{-1}\) on \(\phi_{1}(X)\), \(l_{1}\) is the embedding \(z\mapsto(z,0)\), and this determines a unique map \(\tau_{1}\). The other maps are defined similarly. For the map \(l_{1}\), the induced map \(\tilde{l_{1}}\) on cohomology induces the canonical map on power series rings. Therefore, it respects the coefficient \(a_{0}\), which is exactly the residue. So, the residue associated with \(\phi_{1}\) is the same as the residue associated with \(l_{1}\circ\phi_{1}\). As \(\tau_{1}\) is an automorphism of the form considered before, it respects residues; and since \(\tau_{1}\circ l_{1}\circ\phi_{1}=k_{1}\circ\phi_{1}\), the associated residues are equal. The same reasoning applies to the bottom half of the diagram, concluding the proof. With this proven, we are entitled to unambiguously denote the residue on a Stein manifold \(X\) by \(\operatorname{Res}_{X}\), without reference to an ambient affine space into which it embeds. 
It remains only to prove the following: **Proposition** - For an open immersion of Stein manifolds \(\phi:X_{1}\to X_{2}\), the composition \[H^{n}_{c}(X_{1},\Omega_{X_{1}})\xrightarrow{H^{n}_{c}(\phi)}H^{n}_{c}(X_{2},\Omega_{X_{2}})\xrightarrow{\operatorname{Res}_{X_{2}}}\mathbb{C}\] is \(\operatorname{Res}_{X_{1}}\). Therefore, the residue map and the coherent duality isomorphisms do not depend on the Stein neighbourhood chosen. **Proof** - \(\operatorname{Res}_{X_{2}}\circ H^{n}_{c}(\phi)\) is continuous and linear, and so by coherent duality, it corresponds to a unique element \(a(X_{1},X_{2})\in H^{0}(X_{1},\mathcal{O}_{X_{1}})\): \[\operatorname{Res}_{X_{2}}\circ H^{n}_{c}(\phi)(\xi)=\operatorname{Res}_{X_{1}}(a(X_{1},X_{2})\xi) \tag{57}\] Our task is to show that \(a(X_{1},X_{2})=1\). First this is proven for the prototypical embedding of a polydisc into affine space: \(P^{n}(R)\subseteq\mathbb{C}^{n}\). The action of \((S^{1})^{n}\) respects both the polydisc and affine space, and the residue maps are invariant under the action of each. From this symmetry, we see that the element \(a(X_{1},X_{2})\) must be a scalar, because it is holomorphic and constant on the torus orbits. Next we reduce to the one-dimensional case. We consider the commutative diagram in which the vertical maps are the closed immersions onto the first coordinate. Because these are closed immersions, the maps on cohomology are canonical, and the scalars \(a(P^{n}(R),\mathbb{C}^{n})\) and \(a(P^{1}(R),\mathbb{C})\) must agree, reducing to dimension 1. Van der Put very elegantly proves that \(a^{2}=a\) by considering a diagram of polydiscs and affine spaces in which every map is the evident embedding. On one hand, \(a\) is determined by the composition of the top two horizontal maps, since this is the embedding of a polydisc. On the other hand, \(a\) is also determined by both the lower horizontal maps, and since the upwards maps are all closed immersions, we conclude that \(a^{2}=a\); and \(a\neq 0\) because \(\operatorname{Res}_{n}\circ H^{n}_{c}(\phi)\neq 0\). It remains to prove the result for any open immersion of Stein manifolds \(\phi:X_{1}\to X_{2}\) - that is, we would like to show that \(a(X_{1},X_{2})(p)=1\) for any point \(p\in X_{1}\). Embed each \(X_{i}\) into affine space, \(\phi_{i}:X_{i}\to\mathbb{C}^{n_{i}}\), such that \(p\mapsto 0\) under both embeddings. Choose small polydiscs \(B_{i}\subseteq\mathbb{C}^{n_{i}}\) whose preimages in \(X_{1}\) and \(X_{2}\) are the same open neighbourhood \(U\) of \(p\) in \(X_{1}\): \(U=\phi_{1}^{-1}(B_{1})=\phi_{2}^{-1}(B_{2})\). The commutative diagrams relating \(U\), \(B_{i}\), and \(X_{i}\) then show that \(a(X_{1},X_{2})(p)=1\), as required. 
2310.13812
Yet Another Model for Arabic Dialect Identification
In this paper, we describe a spoken Arabic dialect identification (ADI) model for Arabic that consistently outperforms previously published results on two benchmark datasets: ADI-5 and ADI-17. We explore two architectural variations: ResNet and ECAPA-TDNN, coupled with two types of acoustic features: MFCCs and features extracted from the pre-trained self-supervised model UniSpeech-SAT Large, as well as a fusion of all four variants. We find that individually, the ECAPA-TDNN network outperforms ResNet, and models with UniSpeech-SAT features outperform models with MFCCs by a large margin. Furthermore, a fusion of all four variants consistently outperforms individual models. Our best models outperform previously reported results on both datasets, with accuracies of 84.7% and 96.9% on ADI-5 and ADI-17, respectively.
Ajinkya Kulkarni, Hanan Aldarmaki
2023-10-20T20:58:45Z
http://arxiv.org/abs/2310.13812v1
# Yet Another Model for Arabic Dialect Identification ###### Abstract In this paper, we describe a spoken Arabic dialect identification (ADI) model for Arabic that consistently outperforms previously published results on two benchmark datasets: ADI-5 and ADI-17. We explore two architectural variations: ResNet and ECAPA-TDNN, coupled with two types of acoustic features: MFCCs and features extracted from the pre-trained self-supervised model UniSpeech-SAT Large, as well as a fusion of all four variants. We find that individually, the ECAPA-TDNN network outperforms ResNet, and models with UniSpeech-SAT features outperform models with MFCCs by a large margin. Furthermore, a fusion of all four variants consistently outperforms individual models. Our best models outperform previously reported results on both datasets, with accuracies of 84.7% and 96.9% on ADI-5 and ADI-17, respectively. ## 1 Introduction Dialect identification can be viewed as a special case of language recognition Tong et al. (2006); Vijayan et al. (2018). Both tasks suffer from similar performance issues in the presence of background noise, channel mismatch, prosodic fluctuations, and so on. However, since closely related dialects differ only slightly in both acoustic and linguistic feature space, dialect identification is substantially more difficult Zaidan and Callison-Burch (2014). The Arabic language is spoken in various dialects across the Arab world, in addition to Modern Standard Arabic (MSA), which is used in official and educational settings. Speech recognition systems trained on MSA data generally do not generalize well to dialectal Arabic, and specialized dialectal models may be needed to improve automatic speech recognition (ASR) performance in systems developed for specific populations. Dialect identification could facilitate the development of dialectal speech recognition systems in various ways, such as identifying dialectal utterances in large multi-dialectal corpora, or routing utterances online to dialect-specific ASR modules. To enable the development of spoken Arabic dialect identification systems, two benchmark datasets have been developed: ADI-5, which was deployed as part of the MGB-3 challenge Ali et al. (2017), and ADI-17, deployed as part of the MGB-5 challenge Ali et al. (2019). For both benchmarks, the top systems submitted to the original challenges remain the best performing systems reported in the research literature. The ADI-5 training set consists of 10 hours of dialectal speech from broadcast news, covering five dialects: Egyptian (EGY), Levantine (LAV), Gulf (GLF), North African (NOR), and Modern Standard Arabic (MSA), in addition to two hours each for development and test sets. The ADI-17 data set consists of 17 dialectal classes for a total of 3K hours extracted automatically from YouTube. Roughly 58 hours of data were manually verified for the development and test sets. In this paper, we describe spoken dialect identification models we developed and tested on these benchmarks, and we report results exceeding the best performing models submitted to both challenges. We experimented with the Residual Network (ResNet) He et al. (2015) and Emphasized Channel Attention, Propagation and Aggregation (ECAPA-TDNN) Desplanques et al. (2020) architectures. Both architectures have been successfully employed for speaker verification tasks. 
In addition, ResNet was used in the best performing dialect identification system in the MGB-5 challenge, and ECAPA-TDNN has recently been explored for dialect classification, as in Lonergan et al. (2023) for Irish dialects. In addition, we explored the use of acoustic features extracted from the UniSpeech-SAT Chen et al. (2021) model, which have been shown to provide improvements in various tasks in the SUPERB benchmark Yang et al. (2021). We observe large improvements in accuracy by incorporating these features into our models. We also employ data augmentation via additive noise and speed perturbation, which generally help improve the generalization of speech classification models. Our best model achieves 84.7% accuracy on the ADI-5 test set, compared to 75% previously reported as the best result in Ali et al. (2017). In ADI-17, our best model achieves 96.9% accuracy compared to 94.9% previously reported as the best result in Ali et al. (2019). ## 2 Related Work In this section, we describe the approaches proposed for ADI tasks in the MGB-3 and MGB-5 challenges, which are used as baseline systems in this work. We first describe the top two performing systems for the MGB-3 challenge (ADI-5) (Ali et al., 2017), followed by the top two systems in the MGB-5 challenge (ADI-17) (Ali et al., 2019). The MIT-QCRI ADI system (Shon et al., 2017; Khurana et al., 2017) combines acoustic and linguistic features within a Siamese neural network framework for dimensionality reduction of i-vector representations. They used loss functions involving both Euclidean and cosine distances and employed support vector machines as the backend classifier. In contrast, the University of Texas at Dallas (UTD) submission (Bulut et al., 2017) to the MGB-3 challenge fused five systems, incorporating acoustic and lexical information through various techniques, including i-vectors, Generative Adversarial Networks (GANs), Gaussian Back-end (GB), and BNF i-vector features. The UTD system obtained the second-best performance with an overall accuracy of 70.38% (Ali et al., 2017). Duke Kunshan University (DKU) submitted four variants of ResNets with different block sizes and datasets, which were fused to achieve the best performing system in the MGB-5 challenge (Ali et al., 2019). The DKU system employed a ResNet with global statistics pooling and a fully connected layer. They used the Kaldi toolkit for data augmentation, including speed perturbation and additive noise from datasets such as MUSAN and RIR. The ResNet system was trained using cross-entropy loss with a softmax layer, taking 64-dimensional mel-filterbank energy features as input. On the other hand, the University of Kent (UKent) MGB-5 system (Miao and Mcloughlin, 2019) used a neural network architecture combining Convolutional Neural Networks (CNN) and Long Short-Term Memory (LSTM) networks with Time-Scale Modification (TSM). The UKent system reported an accuracy of 93.1% on the test set. While the best performing models reported in the original MGB-3 and MGB-5 challenges have not been outperformed in later publications (to the best of our knowledge), several other studies proposed model variants and analyzed the performance in various ways. Regarding the use of pre-trained self-supervised acoustic models, Sullivan et al. (2023) recently utilized the XLS-R model (Babu et al., 2022), which is a multi-lingual pre-trained acoustic model that includes Arabic as one of the languages used in pre-training, and HuBERT (Hsu et al., 2021), which was pre-trained solely on English. 
They fine-tuned dialect classification models on the ADI-17 dataset, and interestingly, the model based on HuBERT outperformed the XLS-R-based model, in spite of the multi-lingual pre-training of the latter. This indicates that the quality of the features extracted from pre-trained acoustic models may depend more on the self-supervised training details than on linguistic coverage. A model outperforming HuBERT on several benchmark tasks is the UniSpeech-SAT acoustic model (Chen et al., 2021), which adds objectives on top of the HuBERT model to facilitate speaker-aware representations; such representations also generally embody non-linguistic characteristics of utterances, such as tone and emotion. ## 3 Proposed Model As the space of possible architectural or feature variations increases with the increasing volume of developments in the ML field, exhaustively searching all possible architectures is infeasible. Therefore, we draw inspiration from the best performing models in related literature to reduce the search space and increase the likelihood of finding a well-performing model. We selected two neural network architectures, ResNet and ECAPA-TDNN, for their potential in speech classification tasks. For feature extraction, we compare classical MFCC features with the pre-trained UniSpeech-SAT large acoustic model (Chen et al., 2021), which has been shown to provide consistent improvements in various speech classification benchmarks. Finally, as the best models in previous work typically include a form of ensemble, we experimented with fusing all model variants to further improve performance. We describe the details of these parts in this section. ### Feature extraction We experimented with two types of features: classical acoustic features, namely MFCCs, and modern acoustic features extracted from a large pre-trained acoustic model, namely the Universal Speech representation learning with speaker-aware pre-training (UniSpeech-SAT) [3]. The large variant of this model demonstrated outstanding performance in various tasks in the SUPERB benchmark [21], including linguistic and non-linguistic tasks, such as speaker diarization and emotion recognition. The UniSpeech-SAT model is built on the HuBERT model [16] with additional self-supervised objectives involving utterance-wise contrastive learning and utterance mixing augmentation. The speaker-aware pre-training enabled the model to improve the discriminating capabilities of embeddings learned under self-supervised learning. In total, the large variant of UniSpeech-SAT was trained on 94K hours of English speech data from various sources, including Audiobooks and YouTube. We extracted 1024-dimensional features from the pre-trained UniSpeech-SAT1 model and kept model parameters frozen. For MFCCs, we extract 80-dimensional features using a window length of 25 ms with a sliding window of 10 ms and frame-level instance normalization. Footnote 1: [https://github.com/microsoft/UniSpeech](https://github.com/microsoft/UniSpeech) ### Network architectures We experimented with two network architectures that have been shown to work well in speech classification tasks: ResNet and ECAPA-TDNN, which we describe below. #### 3.2.1 ResNet We use the ResNet architecture [14] as our first model. Our model is composed of four residual networks, each consisting of two convolutional layers in addition to a skip connection. We utilize batch normalization and ReLU activation functions. 
Statistical pooling is implemented to map the variable length feature frames to a time-invariant representation by aggregating frame-level mean and variance as statistical parameters. The output of statistical pooling is followed by two feed-forward layers. We employ the original ResNet34 set-up as described in the original paper [14], which has 34 2D-convolutional layers organized into 4 residual network blocks containing [3, 4, 6, 3] layers with [64, 128, 256, 512] convolutional filters, respectively. The output dimension of the last feed-forward layer equals the number of dialect classes to identify, with an Additive Angular Margin (AAM) softmax layer [10] with a scale of 30.0 and a margin of 0.4, trained with the cross-entropy loss function. #### 3.2.2 ECAPA-TDNN The ECAPA-TDNN architecture [3], based on the x-vector architecture [20], utilizes a Squeeze-excitation (SE)-Res2Net module in each block. These modules consist of 1-dimensional convolutional layers, ReLU activation, batch normalization, and 1-dimensional Res2Net modules with skip connections and SE blocks. This design allows the model to extract hierarchical and global information from the input features. Additionally, the architecture incorporates attentive statistical pooling by calculating channel-dependent frame attention-weighted statistics (mean and variance). This process transforms variable-length hidden outputs into a time-invariant representation. The representation is further processed through feed-forward layers. Similar to the ResNet architecture, we use the AAM-softmax as the final layer and train it with the cross-entropy loss criterion. The model uses 512 channels in 1-dimensional convolutional layers, 128 dimensions for SE-Block and attention, and a scaling factor of 8 for each Res2Block. The output dimension for feed-forward layers is set to 192, and the last feed-forward layer's dimension corresponds to the number of dialect classes. ### Inference Scheme In our model, we integrate a similarity measure with our learned classifiers to enhance classification performance [3, 21, 22]. ResNet and ECAPA-TDNN are optimized for dialect identification via softmax, which we augment with a similarity-based measure based on the final embeddings produced by the network. For each dialect class, we randomly extract a cohort of 500 samples from the training set, and we calculate the average cosine similarity score between the test utterance and the cohort representing each class. After normalizing the scores, we combine them with the softmax scores by averaging them with equal weight (0.5) and selecting the class with the maximum score (a minimal sketch of this feature-extraction and scoring pipeline is given at the end of the paper). ## 4 Experimental setup ### Datasets We evaluate the dialect identification model on two Arabic dialect identification tasks: the MGB-3 ADI-5 dataset [1], and the fine-grained MGB-5 ADI-17 dataset [1]. The ADI-5 training set consists of 13,825 utterances (53.6 hours), and the test and development sets consist of 1,524 (10 hours) and 1,492 (10 hours) utterances, respectively, with each set having approximately 2 hours of data per dialect class: Egyptian (EGY), Levantine (LAV), Gulf (GLF), North African (NOR), and Modern Standard Arabic (MSA). In ADI-17, approximately 3,000 hours of training data were labeled via distant supervision into 17 dialect classes using the origin country of the YouTube videos from which they were extracted. 
The testing and development sets contain \(\sim\)25 and \(\sim\)33 hours of speech, respectively, manually verified by human annotators. ### Data Augmentation For data augmentation, we apply additive noise drawn from the Music, Speech, and Noise corpus (MUSAN) [10] and the QMUL impulse response dataset [10]. We also apply speed perturbation, where the tempo is modified by factors of 0.9 and 1.1. All noise augmentation was implemented using the Kaldi toolkit [11]. ### Training settings During the training phase, each model was initially trained with randomly selected 5-second segments from training utterances for the first 50 epochs. Subsequently, the duration of the training segments was reduced to 4 seconds for a total of 100 epochs to enable the model to generalize to short-duration utterances. All systems were trained using the Adam optimizer with a triangular learning rate policy and a batch size of 256. ## 5 Results Tables 1 and 2 show the performance of our model variants on the ADI-5 and ADI-17 test sets, respectively. _Fusion_ refers to an ensemble model where scores from all four variants are combined, each with an equal weight of 0.25. We also show the performance of the best performing models from the original challenges, which have not been previously outperformed to the best of our knowledge. We observe consistent results on both datasets: the ECAPA-TDNN network consistently outperforms ResNet, and the models using UniSpeech-SAT features consistently outperform those using MFCC features. Incorporating these pre-trained features results in a 4% to 5% absolute improvement in accuracy for both models. We observe additional gains of 0.8% to 2% in absolute accuracy by fusing all four model/feature combinations. The highest performance gain is observed by using UniSpeech-SAT features as input, which leads to outperforming all previous baselines. ## 6 Conclusions This paper described variations of model architectures, namely ResNet and ECAPA-TDNN, employing two acoustic features: classical MFCCs and self-supervised UniSpeech-SAT, leading to state-of-the-art performance in two spoken Arabic dialect identification benchmarks: ADI-5 and ADI-17. UniSpeech-SAT features, which are extracted from a large pre-trained model optimized for acoustic and speaker variability, consistently demonstrated superior performance compared to MFCC features. Despite being pre-trained solely on English speech, UniSpeech-SAT illustrates transfer learning capability \begin{table} \begin{tabular}{|l|c|c|c|c|} \hline **System** & **Features** & **Accuracy** & **Precision** & **Recall** \\ \hline \multicolumn{5}{|l|}{Best systems from [1]} \\ \hline MIT-QCRI & — & 75.0 & 75.1 & 75.5 \\ UTD & — & 70.4 & 70.8 & 71.7 \\ \hline ResNet & MFCC & 74.2 & 74.1 & 74.4 \\ ECAPA & MFCC & 75.3 & 75.1 & 75.3 \\ ResNet & UniS & 80.4 & 80.4 & 80.5 \\ ECAPA & UniS & 82.5 & 82.6 & 82.7 \\ Fusion & — & **84.7** & **84.8** & **84.9** \\ \hline \end{tabular} \end{table} Table 1: Performance evaluation on MGB-3 ADI-5 test set (in %) with baseline systems submitted to MGB-3 challenge. UniS denotes the UniSpeech-SAT feature extraction. 
\begin{table} \begin{tabular}{|l|c|c|c|c|} \hline **System** & **Features** & **Accuracy** & **Precision** & **Recall** \\ \hline \multicolumn{5}{|l|}{Best systems from [1]} \\ \hline DKU & — & 94.9 & 94.9 & 94.9 \\ UKent & — & 91.1 & 91.1 & 91.1 \\ \hline ResNet & MFCC & 90.1 & 90.1 & 90.1 \\ ECAPA & MFCC & 92.2 & 92.2 & 92.2 \\ ResNet & UniS & 95.7 & 95.7 & 95.7 \\ ECAPA & UniS & 96.1 & 96.1 & 96.2 \\ Fusion & — & **96.9** & **96.9** & **96.9** \\ \hline \end{tabular} \end{table} Table 2: Performance evaluation on MGB-5 ADI-17 test set (in %) with baseline systems submitted to MGB-5 challenge. UniS denotes the UniSpeech-SAT feature extraction. by extracting suitable feature representations for this discriminative task in the Arabic language. This may also indicate that non-linguistic acoustic variability (such as speaking tone, for example) could play a role in dialect identification. Consistent with previous models from the MGB-3 and MGB-5 challenges, fusing multiple models results in consistent improvements of overall performance. ## 7 Limitations In this work, we limited our analysis and exploration to two network architectures and two types of acoustic features. We based our choice on observations from the current literature on dialect identification, speech classification, and self-supervised acoustic models. However, many additional features and architectural variations could have been explored, with additional detailed analysis of the different combinations. Furthermore, we did not analyze the acoustic features that are most discriminative in these datasets, which is a complex analysis that eludes us at this stage, but future work could explore which aspects of an utterance (linguistic, tonal, other) are most useful for dialect identification.
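For concreteness, the following is the minimal sketch referenced in Sections 3.1 and 3.3 (our illustration, not the authors' released code): frozen UniSpeech-SAT feature extraction, here assumed to use the HuggingFace transformers checkpoint microsoft/unispeech-sat-large, followed by the fusion of softmax posteriors with cohort cosine similarities. The checkpoint name and the softmax normalization of the similarity scores are assumptions on our part, since the paper does not specify them.

```python
# Sketch of Sections 3.1 and 3.3 (illustrative; not the authors' code).
import torch
import torch.nn.functional as F
from transformers import UniSpeechSatModel, Wav2Vec2FeatureExtractor

CKPT = "microsoft/unispeech-sat-large"  # assumed checkpoint
extractor = Wav2Vec2FeatureExtractor.from_pretrained(CKPT)
encoder = UniSpeechSatModel.from_pretrained(CKPT).eval()  # parameters kept frozen

@torch.no_grad()
def unispeech_features(wave_16k: torch.Tensor) -> torch.Tensor:
    """Frame-level 1024-dim features for a mono 16 kHz waveform (Section 3.1)."""
    inputs = extractor(wave_16k.numpy(), sampling_rate=16_000, return_tensors="pt")
    return encoder(**inputs).last_hidden_state.squeeze(0)  # (frames, 1024)

def fused_scores(embedding: torch.Tensor,
                 softmax_probs: torch.Tensor,
                 cohorts: list) -> torch.Tensor:
    """Section 3.3: average cosine similarity of the test embedding to a
    500-sample cohort per dialect, normalized (a softmax here, one plausible
    choice), then averaged with the classifier posteriors with weight 0.5."""
    sims = torch.stack([F.cosine_similarity(embedding.unsqueeze(0), c).mean()
                        for c in cohorts])          # one score per dialect class
    return 0.5 * softmax_probs + 0.5 * torch.softmax(sims, dim=0)

# Usage: dialect = fused_scores(emb, probs, cohorts).argmax().item()
```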
2305.06717
On a generalization of Jacobi's elegantissima
We establish a generalization of Jacobi's elegantissima, which solves the pendulum equation. This amazing formula appears in lectures by the famous cosmologist Georges Lema\^itre, during the academic years 1955-1956 and 1956-1957. Our approach uses the full power of Jacobi's elliptic functions, in particular imaginary time is crucial for obtaining the result.
Luc Haine
2023-05-11T10:54:07Z
http://arxiv.org/abs/2305.06717v2
# On a generalization of Jacobi's elegantissima ###### Abstract We establish a generalization of Jacobi's elegantissima, which solves the pendulum equation. This amazing formula appears in lectures by the famous cosmologist Georges Lemaitre, during the academic years 1955-1956 and 1956-1957. Our approach uses the full power of Jacobi's elliptic functions, in particular imaginary time is crucial for obtaining the result. Key words and phrases: Elliptic functions, Landen's transformation 2020 Mathematics Subject Classification: Primary: 33E05; Secondary: 97I80 ###### Contents * 1 Introduction * 2 Jacobi's elliptic functions and the pendulum * 3 Lemaitre's generalization of Jacobi's elegantissima * 4 The circulatory motion * 5 A sketch of Lemaitre's proof ## 1. Introduction In _"Fundamenta Nova Theoriae Functionum Ellipticarum"_, Jacobi obtained the following formula for the "modular angle" \[\frac{arcsin\ k}{4}=arctan\sqrt{q}-arctan\sqrt{q^{3}}+arctan\sqrt{q^{5}}-\dots, \tag{1.1}\] which is an inversion formula for \[K =\int_{0}^{1}\frac{dx}{\sqrt{(1-x^{2})(1-k^{2}x^{2})}},\;0<k<1, \tag{1.2}\] \[K^{\prime} =\int_{0}^{1}\frac{dx}{\sqrt{(1-x^{2})(1-k^{\prime 2}x^{2})}},\; k^{\prime}=\sqrt{1-k^{2}}, \tag{1.3}\] with \[q=e^{-\pi\frac{K^{\prime}}{K}}.\] Immediately following his formula (see [5] 40, formula (47)), Jacobi wrote _"quae inter formulas elegantissima censeri debet"_ ("which must be reckoned the most elegant among formulas"). The only modern reference mentioning (1.1) seems to be [11] 22.5, Example 2. During the two academic years 1955-1956 and 1956-1957, Georges Lemaitre, the famous Belgian cosmologist, professor at the University of Louvain and father of the big bang theory, taught his _"Lecons de mecanique. Le pendule"_ [8], [9]. For a beautiful account of Georges Lemaitre's life and work, we refer the reader to [6]. In the context of the oscillatory motion of a pendulum of length \(l\) under the influence of the force of gravity, as explained in Section 2, Jacobi's formula can be seen as a formula giving the maximum angle of oscillation in terms of the period \(4K\sqrt{l/g}\) of the oscillation and the _imaginary period_ \(4K^{\prime}\sqrt{l/g}\) of the _complementary motion_ obtained by inverting the direction of gravity, when the pendulum reaches the maximum angle. In his lectures [8], [9], Georges Lemaitre derived a formula of a similar type, giving the angle of the oscillation at _any_ time, of which Jacobi's formula (1.1) becomes a particular case. According to his own words, _"La theorie que nous avons exposee peut etre consideree comme une generalisation de l'elegantissime de Jacobi"_ ("The theory we have presented can be considered as a generalization of Jacobi's elegantissima"). Lemaitre's derivation of his generalization of Jacobi's elegantissima is based on a very ingenious infinite iteration of Landen's transformation, a well known modular transformation in the theory of elliptic functions. He started his lectures with elementary mechanical and geometric proofs of Jacobi's interpretation [4] of Poncelet's theorem and Landen's transformation [7]. Though not mentioned in his lectures, some of the material can also be found in Greenhill's book "The Applications of Elliptic Functions"; the French translation [3] we refer to, with a preface by Paul Appell, is a revised version of the original work. To the best of our knowledge, the infinite iteration of Landen's transformation to obtain the solution of the pendulum does not seem to appear anywhere else in the literature. Though very intuitive, the passage to the limit is not carefully justified in [8], [9]. 
The purpose of this note is to give a direct proof of Lemaitre's formula, based on Jacobi's theory of elliptic functions, emphasizing _the role played by imaginary time_. This is the content of Sections 3 and 4. In Section 5, we provide a sketch of Lemaitre's proof. We do not follow his geometric arguments, using instead a standard version of Landen's transformation, to be found in [11]. Our hope is to attract attention to Lemaitre's elementary and geometric approach to elliptic functions. ## 2. Jacobi's elliptic functions and the pendulum The motion of a pendulum of mass \(m\) and length \(l\) under the influence of the force of gravity is given by \[ml\ \ddot{\varphi}(t)=-mg\ sin\ \varphi(t)\Leftrightarrow\ddot{\varphi}(t)=-\frac{ sin\ \varphi(t)}{R},\ R=\frac{l}{g}, \tag{2.1}\] with \(g\) the acceleration of gravity, and \(\varphi(t)\) the angle made with the descending vertical. Equation (2.1) has the following elementary solutions, the two equilibria given by \[\varphi(t)=0,\ \forall\ t\in\mathbb{R},\quad\varphi(t)=\pi,\ \forall\ t\in \mathbb{R},\] and, assuming \(\varphi(0)=0\), the two doubly asymptotic solutions \[\varphi(t)=4\ arctan\ e^{\pm\frac{t}{\sqrt{R}}}-\pi. \tag{2.2}\] All other solutions can be expressed in terms of Jacobi elliptic functions. Assuming \(\varphi(0)=0\) and \(\dot{\varphi}(0)>0\), as long as \(\dot{\varphi}(t)>0\), one has \[\sqrt{\frac{m}{2}}l\ \int_{0}^{\varphi(t)}\frac{d\varphi}{\sqrt{E-2mgl\ sin^{2} \frac{\varphi}{2}}}=t, \tag{2.3}\] which follows from the conservation of energy \[\frac{ml^{2}\dot{\varphi}^{2}}{2}+2mgl\ sin^{2}\frac{\varphi}{2}=E.\] _Oscillatory motions_ occur when \[0<E<2mgl.\] By the change of variable \[sin\ \frac{\varphi}{2}=kx,\] with \[0<k=sin\ \frac{\alpha}{2}<1, \tag{2.4}\] where \(\alpha\), \(0<\alpha<\pi\), is the maximum angle of oscillation, i.e. \(\dot{\varphi}=0\) when \(\varphi=\alpha\), equation (2.3) becomes \[\int_{0}^{x(t)}\frac{dx}{\sqrt{(1-x^{2})(1-k^{2}x^{2})}}=\frac{t}{\sqrt{R}}.\] Jacobi's elliptic sine function "sn" solves this inversion problem. Thus \[sin\ \frac{\varphi(t)}{2}=k\ sn\Big{(}\frac{t}{\sqrt{R}},k\Big{)},\ cos\ \frac{\varphi(t)}{2}=dn\Big{(}\frac{t}{\sqrt{R}},k\Big{)},\ \forall\ t\in\mathbb{R}, \tag{2.5}\] where the second formula follows from the identity \(dn^{2}(z,k)=1-k^{2}sn^{2}(z,k)\), satisfied by Jacobi's elliptic functions "sn" and "dn". Denoting by \(T\) the period of the motion, \[\frac{T}{4\sqrt{R}}=\int_{0}^{1}\frac{dx}{\sqrt{(1-x^{2})(1-k^{2}x^{2})}} \Leftrightarrow T=4K\sqrt{R}, \tag{2.6}\] with \(K\) defined as in (1.2). For an _oscillatory motion_, the _complementary motion_ is obtained by changing the sign of gravity, when the angle of oscillation reaches its maximum value \(\alpha\). This motion follows the arc of circle complementary to the arc followed during the oscillation. It is characterized by \[\ddot{\phi}(t)=-\frac{sin\ \phi(t)}{R},\ \phi(0)=\alpha-\pi,\ \dot{\phi}(0)=0.\] One immediately checks that the solution is given by \[\phi(t)=\varphi(it+K\sqrt{R})-\pi.\] Since \[sin^{2}\ \frac{\phi(0)}{2}=cos^{2}\ \frac{\alpha}{2}=1-k^{2},\] from (2.6), it follows that the period of the complementary motion is \[T^{\prime}=4K^{\prime}\sqrt{R},\] with \(K^{\prime}\) as in (1.3). The mechanical interpretation of \(T^{\prime}\), using _imaginary time_, is due to Paul Appell, see [1]. 
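As a quick numerical sanity check of (2.5) - our addition, not part of [8], [9] - one can compare the elliptic-function solution with a direct integration of (2.1); note that scipy's ellipj takes the parameter \(m=k^{2}\).

```python
# Check of (2.5): direct integration of (2.1) vs. the sn-based solution.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import ellipj

R, alpha = 1.0, 2.0                 # R = l/g; alpha < pi: oscillatory motion
k = np.sin(alpha / 2)

# phi(0) = 0 and, by energy conservation, phi'(0) = 2k/sqrt(R).
sol = solve_ivp(lambda t, y: [y[1], -np.sin(y[0]) / R],
                (0.0, 10.0), [0.0, 2 * k / np.sqrt(R)],
                dense_output=True, rtol=1e-10, atol=1e-12)

t = np.linspace(0.0, 10.0, 200)
sn, _, _, _ = ellipj(t / np.sqrt(R), k**2)   # scipy uses m = k**2
print(np.max(np.abs(sol.sol(t)[0] - 2 * np.arcsin(k * sn))))  # ~1e-8
```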
_Circulatory motions_ occur when \[E>2mgl\Leftrightarrow\frac{\dot{\varphi}^{2}(0)}{4}>\frac{1}{R}.\] By the change of variable \(x=sin\frac{\varphi}{2}\), equation (2.3) becomes \[\int_{0}^{x(t)}\frac{dx}{\sqrt{(1-x^{2})(1-k^{2}x^{2})}}=\frac{t}{k\sqrt{R}}, \quad 0<k=\frac{2}{\sqrt{R}\ \dot{\varphi}(0)}<1,\] leading to \[sin\ \frac{\varphi(t)}{2}=sn\Big{(}\frac{t}{k\sqrt{R}},k\Big{)},\ cos\ \frac{\varphi(t)}{2}=cn\Big{(}\frac{t}{k\sqrt{R}},k\Big{)},\ \forall\ t\in\mathbb{R}, \tag{2.7}\] where the second formula follows from the identity \(cn^{2}(z,k)+sn^{2}(z,k)=1\), satisfied by Jacobi's elliptic sine and cosine functions denoted "sn" and "cn". Using that \(sn(z+2K,k)=-sn(z,k)\), one finds the period \(T\) of the circulatory motion \[T=2kK\sqrt{R}. \tag{2.8}\] ## 3. Lemaitre's generalization of Jacobi's elegantissima For the oscillatory motion, defining \[\theta(t)=\pi+\varphi(t),\] i.e. measuring the angle of oscillation from the ascending vertical, from (2.5) one gets \[cos\ \frac{\theta(t)}{2}=-sin\ \frac{\varphi(t)}{2}=-k\ sn\Big{(} \frac{t}{\sqrt{R}},k\Big{)}, \tag{3.1}\] \[sin\ \frac{\theta(t)}{2}=cos\ \frac{\varphi(t)}{2}=dn\Big{(} \frac{t}{\sqrt{R}},k\Big{)}. \tag{3.2}\] To derive from (3.1), (3.2) a nice formula for \(e^{i\frac{\theta(t)}{2}}\), one is tempted to use the following infinite product formula (see [11] 22.5, for a proof) \[dn(z,k)+iksn(z,k)=\\ \prod_{n=1}^{\infty}\frac{\big{(}1+(-1)^{n}q^{n-\frac{1}{2}}e^{- \frac{i\pi z}{2K}}\big{)}\big{(}1-(-1)^{n}q^{n-\frac{1}{2}}e^{\frac{i\pi z}{2K }}\big{)}}{\big{(}1-(-1)^{n}q^{n-\frac{1}{2}}e^{-\frac{i\pi z}{2K}}\big{)} \big{(}1+(-1)^{n}q^{n-\frac{1}{2}}e^{\frac{i\pi z}{2K}}\big{)}}, \tag{3.3}\] with \[q=e^{-\pi\frac{K^{\prime}}{K}}, \tag{3.4}\] then to take the logarithm to get \(\theta(t)\). However, it is not immediately appropriate because of the imaginary exponentials which would appear in the infinite product. Thus we first rewrite the solution using Jacobi's imaginary transformation (see [11] 22.41) combined with standard formulas, which can all be obtained from the addition theorems for "sn", "cn" and "dn". Since \[sn(z-K,k)=-\frac{cn(z,k)}{dn(z,k)},\ dn(iz+K^{\prime},k^{\prime})=\frac{k}{dn( iz,k^{\prime})}=k\ \frac{cn(z,k)}{dn(z,k)},\] we deduce that \[sn(z-K,k)=-\frac{1}{k}\ dn(iz+K^{\prime},k^{\prime}). \tag{3.5}\] Similarly, since \[dn(z-K,k)=\frac{k^{\prime}}{dn(z,k)},\ sn(iz+K^{\prime},k^{\prime})=\frac{cn( iz,k^{\prime})}{dn(iz,k^{\prime})}=\frac{1}{dn(z,k)},\] one has \[dn(z-K,k)=k^{\prime}\ sn(iz+K^{\prime},k^{\prime}). \tag{3.6}\] Combining (3.1), (3.2), (3.5) and (3.6), from (3.3), with \(k^{\prime}\) instead of \(k\), we obtain \[e^{i\frac{\theta(t-K\sqrt{R})}{2}} =dn\Big{(}i\ \frac{t}{\sqrt{R}}+K^{\prime},k^{\prime}\Big{)}+i\ k^{ \prime}\ sn\Big{(}i\ \frac{t}{\sqrt{R}}+K^{\prime},k^{\prime}\Big{)},\] \[=\prod_{n=1}^{\infty}\frac{\Big{(}1-i(-1)^{n}q^{\prime n-\frac{1} {2}}e^{\frac{\pi t}{2K^{\prime}\sqrt{R}}}\Big{)}\Big{(}1-i(-1)^{n}q^{\prime n- \frac{1}{2}}e^{-\frac{\pi t}{2K^{\prime}\sqrt{R}}}\Big{)}}{\Big{(}1+i(-1)^{n} q^{\prime n-\frac{1}{2}}e^{\frac{\pi t}{2K^{\prime}\sqrt{R}}}\Big{)}\Big{(}1+i(-1)^{n}q^{ \prime n-\frac{1}{2}}e^{-\frac{\pi t}{2K^{\prime}\sqrt{R}}}\Big{)}},\] with \[q^{\prime}=e^{-\pi\frac{K}{K^{\prime}}}. 
\tag{3.7}\] Since for \(a\in\mathbb{R}\), \[e^{(-1)^{n}2i\arctan\ a}=\frac{1+(-1)^{n}ia}{1-(-1)^{n}ia},\] taking the logarithm of the last formula gives \[\theta\Big{(}t-\frac{T}{4}\Big{)}=\sum_{n=1}^{\infty}(-1)^{n-1}\ 4\ \Big{\{}arctan\Big{(}q^{\prime n-\frac{1}{2}}\ e^{\frac{\pi t}{2K^{\prime}\sqrt{R}}}\Big{)}+arctan\Big{(}q^{\prime n-\frac{1}{2}}\ e^{-\frac{\pi t}{2K^{\prime}\sqrt{R}}}\Big{)}\Big{\}}, \tag{3.8}\] with \(T=4K\sqrt{R}\) as in (2.6), the period of the oscillatory motion. Evaluated at \(t=0\), remembering that \(\varphi(0)=0\), \[\theta\Big{(}-\frac{T}{4}\Big{)}=\pi+\varphi\Big{(}-\frac{T}{4}\Big{)}=\pi-\alpha,\] with \(\alpha\) the maximum angle of the oscillation, (3.8) reduces to \[\pi-\alpha=8\ \sum_{n=1}^{\infty}(-1)^{n-1}\ arctan\ \sqrt{q^{\prime 2n-1}}.\] Since the maximum angle of oscillation of the complementary motion is \(\pi-\alpha\), permuting \(K\) and \(K^{\prime}\) and remembering (2.4) and (3.7), we obtain \[2\ arcsin\ k=\alpha=\pi-(\pi-\alpha)=8\ \sum_{n=1}^{\infty}(-1)^{n-1}\ arctan\ \sqrt{q^{2n-1}},\] with \(q\) as in (3.4). This is exactly _Jacobi's elegantissima_ (1.1). Thus, formula (3.8), to be found in Lemaitre [8], [9], deserves to be called _the generalized elegantissima_. ## 4. The circulatory motion Putting again \[\theta(t)=\pi+\varphi(t),\] from (2.7), we now obtain \[cos\ \frac{\theta(t)}{2}=-sin\ \frac{\varphi(t)}{2}=-sn\Big{(} \frac{t}{k\sqrt{R}},k\Big{)}, \tag{4.1}\] \[sin\ \frac{\theta(t)}{2}=cos\ \frac{\varphi(t)}{2}=cn\Big{(} \frac{t}{k\sqrt{R}},k\Big{)}. \tag{4.2}\] Once again, we need to pass to imaginary time. Now we have \[cn(z-K,k)=k^{\prime}\ \frac{sn(z,k)}{dn(z,k)},\] \[cn(iz+K^{\prime},k^{\prime})=-k\ \frac{sn(iz,k^{\prime})}{dn( iz,k^{\prime})}=-ik\ \frac{sn(z,k)}{dn(z,k)},\] and thus \[cn(z-K,k)=i\ \frac{k^{\prime}}{k}\ cn(iz+K^{\prime},k^{\prime}). \tag{4.3}\] Combining (3.5), (4.1), (4.2) and (4.3), we obtain \[e^{i\frac{\theta(t-kK\sqrt{R})}{2}}=\frac{dn\Big{(}i\frac{t}{k\sqrt{R}}+K^{ \prime},k^{\prime}\Big{)}-k^{\prime}\ cn\Big{(}i\frac{t}{k\sqrt{R}}+K^{\prime },k^{\prime}\Big{)}}{k}.\] Therefore, we are led to use the following infinite product formula (see [3] 266, formula (39), for a proof) \[\frac{dn(z,k^{\prime})-k^{\prime}\ cn(z,k^{\prime})}{k}\quad=\quad\prod_{n=1} ^{\infty}\frac{\big{(}1-q^{\prime n-\frac{1}{2}}e^{-\frac{i\pi z}{2K^{\prime} }}\big{)}\big{(}1-q^{\prime n-\frac{1}{2}}e^{\frac{i\pi z}{2K^{\prime}}}\big{)} }{\big{(}1+q^{\prime n-\frac{1}{2}}e^{-\frac{i\pi z}{2K^{\prime}}}\big{)} \big{(}1+q^{\prime n-\frac{1}{2}}e^{\frac{i\pi z}{2K^{\prime}}}\big{)}},\] with \(q^{\prime}=e^{-\pi\frac{K}{K^{\prime}}}\). It gives \[e^{i\frac{\theta(t-kK\sqrt{R})}{2}}=\prod_{n=1}^{\infty}\frac{\Big{(}1+iq^{ \prime n-\frac{1}{2}}e^{\frac{\pi t}{2kK^{\prime}\sqrt{R}}}\Big{)}\Big{(}1-iq^ {\prime n-\frac{1}{2}}e^{-\frac{\pi t}{2kK^{\prime}\sqrt{R}}}\Big{)}}{\Big{(}1 -iq^{\prime n-\frac{1}{2}}e^{\frac{\pi t}{2kK^{\prime}\sqrt{R}}}\Big{)}\Big{(} 1+iq^{\prime n-\frac{1}{2}}e^{-\frac{\pi t}{2kK^{\prime}\sqrt{R}}}\Big{)}}.\] Taking the logarithm, we now obtain the following formula \[\theta\Big{(}t-\frac{T}{2}\Big{)}=\sum_{n=1}^{\infty}4\Big{\{}arctan\Big{(}q^{\prime n-\frac{1}{2}} \ e^{\frac{\pi t}{2kK^{\prime}\sqrt{R}}}\Big{)}-arctan\Big{(}q^{\prime n-\frac {1}{2}}\ e^{-\frac{\pi t}{2kK^{\prime}\sqrt{R}}}\Big{)}\Big{\}}, \tag{4.4}\] with \(T=2kK\sqrt{R}\) as in (2.8), the period of the circulatory motion. 
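Both (1.1) and the value of (3.8) at \(t=0\) are easy to confirm numerically (our addition; scipy's ellipk takes the parameter \(m=k^{2}\)):

```python
# Check of the elegantissima (1.1) and of (3.8) at t = 0.
import numpy as np
from scipy.special import ellipk

k = 0.3
K, Kp = ellipk(k**2), ellipk(1 - k**2)
q, qp = np.exp(-np.pi * Kp / K), np.exp(-np.pi * K / Kp)

# (1.1): arcsin(k)/4 = arctan(sqrt(q)) - arctan(sqrt(q^3)) + ...
lhs = np.arcsin(k) / 4
rhs = sum((-1)**(n - 1) * np.arctan(q**(n - 0.5)) for n in range(1, 60))
print(abs(lhs - rhs))                                 # ~1e-17

# (3.8) at t = 0: theta(-T/4) = pi - alpha, with alpha = 2*arcsin(k).
series = 8 * sum((-1)**(n - 1) * np.arctan(qp**(n - 0.5)) for n in range(1, 60))
print(abs(series - (np.pi - 2 * np.arcsin(k))))       # ~1e-16
```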
## 5. A sketch of Lemaitre's proof In this Section, we do not follow Lemaitre's arguments in detail, for which we refer the reader to [9], the second version of his lectures during the academic year 1956-1957. Our aim is to sketch the spirit of his approach, but we freely use a standard form of Landen's transformation to be found in [11]. We present his arguments in four steps. At each step, we refer to the sections in the table of contents of [9], where elementary proofs using plane geometry are given. Step 1. The basic idea (see [9] 10, 11) is to use Landen's transformation to express the circulatory motion as a sum of two _new_ circulatory motions _identical up to a shift by half a period_. Precisely, if \(\theta(t)\) is a circulatory motion with period \(T=2kK\sqrt{R}\) as in (2.8), there exists a circulatory motion \(\theta_{1}(t)\) with period \(T_{1}=2k_{1}K_{1}\sqrt{R_{1}}\) such that \[\theta\Big{(}\frac{R}{R_{1}}t\Big{)}=\theta_{1}(t)+\theta_{1}\Big{(}t-\frac{T_ {1}}{2}\Big{)}, \tag{5.1}\] and the period \(T_{1}\) of \(\theta_{1}(t)\) is related to the period \(T\) of \(\theta(t)\) as follows \[T_{1}=2\frac{R_{1}}{R}T. \tag{5.2}\] To establish this result, using (4.1), (4.2) and the addition theorem for "sn" and "cn", we rewrite (5.1) as follows \[cos\ \frac{\theta\Big{(}\frac{R}{R_{1}}t\Big{)}}{2} =cos\frac{\theta_{1}(t)}{2}cos\frac{\theta_{1}(t-k_{1}K_{1}\sqrt{ R_{1}})}{2}\] \[\quad-sin\frac{\theta_{1}(t)}{2}sin\frac{\theta_{1}(t-k_{1}K_{1} \sqrt{R_{1}})}{2},\] \[=sn\Big{(}\frac{t}{k_{1}\sqrt{R_{1}}},k_{1}\Big{)}sn\Big{(}\frac {t}{k_{1}\sqrt{R_{1}}}-K_{1},k_{1}\Big{)}\] \[\quad-cn\Big{(}\frac{t}{k_{1}\sqrt{R_{1}}},k_{1}\Big{)}cn\Big{(} \frac{t}{k_{1}\sqrt{R_{1}}}-K_{1},k_{1}\Big{)},\] \[=-(1+k_{1}^{\prime})\frac{sn\Big{(}\frac{t}{k_{1}\sqrt{R_{1}}},k _{1}\Big{)}cn\Big{(}\frac{t}{k_{1}\sqrt{R_{1}}},k_{1}\Big{)}}{dn\Big{(}\frac {t}{k_{1}\sqrt{R_{1}}},k_{1}\Big{)}}. \tag{5.3}\] Now Landen's transformation (see [11] 22.42) tells us that (5.3) can be written as follows \[cos\ \frac{\theta\Big{(}\frac{R}{R_{1}}t\Big{)}}{2}=-sn\Big{(}\frac{(1+k_{1}^{ \prime})t}{k_{1}\sqrt{R_{1}}},k\Big{)}, \tag{5.4}\] with \[k=\frac{1-k_{1}^{\prime}}{1+k_{1}^{\prime}}=\Big{(}\frac{k_{1}}{1+k_{1}^{\prime}} \Big{)}^{2}. \tag{5.5}\] Thus assuming, as prescribed by (4.1), that \[cos\frac{\theta(t)}{2}=-sn\Big{(}\frac{t}{k\sqrt{R}},k\Big{)},\] to satisfy (5.4), we must pick \[R_{1}=\frac{R}{k}, \tag{5.6}\] and choose \(k_{1}\) as function of \(k\) as in (5.5). It is a standard result (see [11] 22.42) that \[K=\frac{(1+k_{1}^{\prime})K_{1}}{2}, \tag{5.7}\] with \(K_{1}\) as in (1.2) by replacing \(k\) with \(k_{1}\), hence \[T_{1}=2k_{1}K_{1}\sqrt{R_{1}}=4K\sqrt{R}=2\frac{R_{1}}{R}T,\] which establishes (5.2). Iterating \(j\) times the previous construction, we obtain a circulatory motion \(\theta_{j}(t)\) with period \[T_{j}=2k_{j}K_{j}\sqrt{R_{j}}=2^{j}\frac{R_{j}}{R}T, \tag{5.8}\] such that the circulatory motion \(\theta(t)\) with period \(T=2kK\sqrt{R}\) we started with, can be expressed as follows \[\theta(t)=\sum_{n=-2^{j-1}}^{2^{j-1}-1}\theta_{j}\Big{(}\frac{R_{j}}{R}(t+nT) \Big{)}, \tag{5.9}\] with \(R_{j},k_{j}\) inductively defined by \[R_{j}=\frac{R_{j-1}}{k_{j-1}},\;k_{j}^{\prime}=\sqrt{1-k_{j}^{2}}=\frac{1-k_{ j-1}}{1+k_{j-1}},\ j\geq 1, \tag{5.10}\] with \(R_{0}=R,k_{0}=k\). Lemaitre also observes that connecting the \(2^{j}\) successive circulatory motions in (5.9) by segments, one gets a closed Poncelet polygon (see [9] 9, 15, 22). 
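The Landen identity invoked in passing from (5.3) to (5.4), namely \((1+k_{1}^{\prime})\,sn(v,k_{1})cn(v,k_{1})/dn(v,k_{1})=sn((1+k_{1}^{\prime})v,k)\) with \(k\) as in (5.5), is easy to confirm numerically (our addition; scipy's ellipj takes the parameter \(m=k^{2}\)):

```python
# Numerical check of the Landen step behind (5.3)-(5.4).
import numpy as np
from scipy.special import ellipj

k1 = 0.8
k1p = np.sqrt(1 - k1**2)
k = (1 - k1p) / (1 + k1p)

v = np.linspace(0.0, 5.0, 11)
sn1, cn1, dn1, _ = ellipj(v, k1**2)
snk, _, _, _ = ellipj((1 + k1p) * v, k**2)
print(np.max(np.abs((1 + k1p) * sn1 * cn1 / dn1 - snk)))   # ~1e-16
```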
The next step is to establish that \(\lim_{j\to\infty}R_{j}=R_{\infty}<\infty\). Step 2. We follow [9] (13, 14, 15). From (5.10), one easily establishes that \[\frac{R_{n}}{R_{n+1}}=\frac{\sqrt{R_{n-1}R_{n}}}{\frac{R_{n-1}+R_{n}}{2}},\;n \geq 1.\] Putting \[b_{0}=R<a_{0}=R_{1},\] as shown by Gauss [2], the sequence \[a_{n}=\frac{a_{n-1}+b_{n-1}}{2},\;b_{n}=\sqrt{a_{n-1}b_{n-1}},\;n\geq 1,\] converges to the arithmetic-geometric mean \[a_{\infty}=\lim_{n\to\infty}a_{n}=\lim_{n\to\infty}b_{n}=b_{\infty},\] with \[\frac{\pi}{2b_{\infty}}=\int_{0}^{\frac{\pi}{2}}\frac{d\phi}{\sqrt{a_{0}^{2} cos^{2}\phi+b_{0}^{2}sin^{2}\phi}}, \tag{5.11}\] (see [10] 2.3, for a beautiful account of the arithmetic-geometric mean). We have \[\frac{R_{1}}{R_{2}}=\frac{\sqrt{RR_{1}}}{\frac{R+R_{1}}{2}}=\frac{b_{1}}{a_{1}},\] and, by induction \[\frac{R_{n}}{R_{n+1}}=\frac{\sqrt{\frac{R_{n-1}}{R_{n}}}}{\frac{1}{2}(1+\frac {R_{n-1}}{R_{n}})}=\frac{\sqrt{a_{n-1}b_{n-1}}}{\frac{a_{n-1}+b_{n-1}}{2}}= \frac{b_{n}}{a_{n}}.\] Hence, using that \[b_{n}^{2}b_{n+1}^{2}\ldots b_{n+j}^{2}=a_{n-1}a_{n}\ldots a_{n+j-1}b_{n-1}b_{n }\ldots b_{n+j-1},\] we get \[\frac{R_{n}}{R_{n+j}} =\frac{R_{n}}{R_{n+1}}\frac{R_{n+1}}{R_{n+2}}\ldots\frac{R_{n+j- 1}}{R_{n+j}}=\frac{b_{n}b_{n+1}\ldots b_{n+j-1}}{a_{n}a_{n+1}\ldots a_{n+j-1}}\] \[=\frac{a_{n-1}b_{n-1}}{b_{n+j}^{2}}=\frac{b_{n}^{2}}{b_{n+j}^{2}}.\] In particular, for \(n=0\), we have \[\frac{R}{R_{j}}=\frac{b_{0}^{2}}{b_{j}^{2}},\] which gives \[R_{\infty}=\lim_{j\to\infty}R_{j}=\frac{R}{b_{0}^{2}}\lim_{j\to\infty}b_{j}^{ 2}=R\Big{(}\frac{b_{\infty}}{b_{0}}\Big{)}^{2}.\] Remembering (5.6), \(\frac{R}{R_{1}}=k\), we compute from (5.11) that \[\frac{\pi}{2b_{\infty}}=\frac{k}{R}\int_{0}^{\frac{\pi}{2}}\frac{d\phi}{\sqrt {1-k^{\prime 2}sin^{2}\phi}}=\frac{k}{R}K^{\prime},\] with \(K^{\prime}\) as in (1.3), leading to \[R_{\infty}=\Big{(}\frac{\pi}{2kK^{\prime}}\Big{)}^{2}R. \tag{5.12}\] The next step, the passage to the limit in (5.9), is not carefully justified in Lemaitre's syllabi [8], [9]. We won't attempt to justify it, since we have obtained a different proof of his result in Section 4. Step 3. This is developed in [9] (16, 17). Remembering (5.8), during the iteration, the period essentially doubles at each step. Since the limit radius \(R_{\infty}\) is finite, at the limit we obtain the doubly asymptotic motion (2.2) of the pendulum \[\lim_{j\to\infty}\theta_{j}(t)=\theta_{\infty}(t)=4\;arctan\;e^{\frac{t}{ \sqrt{R_{\infty}}}},\] with \(\theta_{\infty}(0)=\pi,\dot{\theta}_{\infty}(0)>0\). Taking _formally_ the limit of (5.9), we get \[\theta(t)=\sum_{n=-\infty}^{\infty}\theta_{\infty}\Big{(}\frac{R_{\infty}}{R} (t+nT)\Big{)},\] i.e., using (5.12), we obtain \[\theta(t)=\sum_{n=-\infty}^{\infty}4\;arctan\;e^{\frac{\pi}{2kK^{\prime}\sqrt {R}}(t+nT)}. \tag{5.13}\] However, this series is not convergent. 
Putting \[q^{\prime}=e^{-\pi\frac{K}{K^{\prime}}},\] and remembering (2.8), we can rewrite (5.13) as follows \[\theta\Big{(}t-\frac{T}{2}\Big{)}=\sum_{n=1}^{\infty}4\;arctan\Big{(}q^{\prime n-\frac{1}{2}}\;e^{\frac{\pi t}{2kK^{\prime}\sqrt{R}}}\Big{)}\] \[+\sum_{n=1}^{\infty}4\;arctan\Big{(}q^{\prime-(n-\frac{1}{2})}\;e^{\frac{\pi t}{2kK^{\prime}\sqrt{R}}}\Big{)},\] \[=\sum_{n=1}^{\infty}4\Big{\{}arctan\Big{(}q^{\prime n-\frac{1}{2}}\;e^{\frac{\pi t}{2kK^{\prime}\sqrt{R}}}\Big{)}-arctan\Big{(}q^{\prime n-\frac{1}{2}}\;e^{-\frac{\pi t}{2kK^{\prime}\sqrt{R}}}\Big{)}\Big{\}},\] where each term in the second sum is recomputed modulo \(2\pi\) using \[4\;arctan\;\frac{1}{x}=2\pi-4\;arctan\;x,\] which agrees with formula (4.4) established in Section 4. Finally, in the last step, Lemaitre obtains his generalization of Jacobi's elegantissima by another application of Landen's transformation. Step 4. Following [9] (12, 18), we consider _two identical circulatory motions_ \(\theta_{1}(t)\), with period \(T_{1}=2k_{1}K_{1}\sqrt{R_{1}}\), up to a shift by half a period, going now in _opposite_ directions, i.e. we look at \[\theta(t)=\theta_{1}(t)-\theta_{1}(t-k_{1}K_{1}\sqrt{R_{1}}).\] By a similar computation as in (5.3) and (5.4), using (4.1) and (4.2), we get \[cos\ \theta(t)=-\frac{1-k_{1}^{\prime}}{1+k_{1}^{\prime}}\ sn\Big{(}\frac{(1+k_{1}^{\prime})t}{k_{1}\sqrt{R_{1}}},\frac{1-k_{1}^{\prime}}{1+k_{1}^{\prime}}\Big{)},\] i.e. defining \[k=\frac{1-k_{1}^{\prime}}{1+k_{1}^{\prime}}=\Big{(}\frac{k_{1}}{1+k_{1}^{\prime}}\Big{)}^{2}\quad\text{and}\quad R=kR_{1}, \tag{5.14}\] we obtain \[cos\ \theta(t)=-k\ sn\Big{(}\frac{t}{\sqrt{R}},k\Big{)}.\] Remembering (3.1), this shows that \(\theta(t)\) is now an oscillatory motion with period \(T=4K\sqrt{R}\) as in (2.6). Using (5.7) and (5.14), one finds \[T_{1}=2k_{1}K_{1}\sqrt{R_{1}}=\frac{k_{1}}{1+k_{1}^{\prime}}4K\sqrt{R_{1}}=4K\sqrt{R}=T, \tag{5.15}\] i.e. the oscillatory motion \(\theta(t)\) and the circulatory motion \(\theta_{1}(t)\) now have the same period. It is a standard result (see [11] 22.42, Example 1) that \[K^{\prime}=(1+k_{1}^{\prime})K_{1}^{\prime}, \tag{5.16}\] with \(K_{1}^{\prime}\) as in (1.3) by replacing \(k^{\prime}\) with \(k_{1}^{\prime}\). Hence, using (5.14), we deduce \[2k_{1}K_{1}^{\prime}\sqrt{R_{1}}=\frac{k_{1}}{1+k_{1}^{\prime}}2K^{\prime}\sqrt{R_{1}}=2K^{\prime}\sqrt{R}. \tag{5.17}\] Using (5.7) and (5.16), we also have \[q_{1}^{\prime}=e^{-\pi\frac{K_{1}}{K_{1}^{\prime}}}=e^{-2\pi\frac{K}{K^{\prime}}}=(q^{\prime})^{2}. \tag{5.18}\] From (4.4), using (5.15), (5.17) and (5.18), we obtain \[\theta_{1}\Big{(}t-\frac{T_{1}}{2}\Big{)}=\\ \sum_{n=1}^{\infty}\ 4\ \Big{\{}arctan\ \Big{(}q^{\prime 2n-1}\ e^{\frac{\pi t}{2K^{\prime}\sqrt{R}}}\Big{)}-arctan\ \Big{(}q^{\prime 2n-1}\ e^{-\frac{\pi t}{2K^{\prime}\sqrt{R}}}\Big{)}\Big{\}},\] hence, by an easy computation, we get \[\theta\Big{(}t-\frac{T}{4}\Big{)}=\theta_{1}\Big{(}t-\frac{T_{1}}{2}+\frac{T_{1}}{4}\Big{)}-\theta_{1}\Big{(}t-\frac{T_{1}}{2}-\frac{T_{1}}{4}\Big{)}\] \[=\sum_{n=1}^{\infty}(-1)^{n-1}\ 4\ \Big{\{}arctan\Big{(}q^{\prime n-\frac{1}{2}}\ e^{\frac{\pi t}{2K^{\prime}\sqrt{R}}}\Big{)}+arctan\Big{(}q^{\prime n-\frac{1}{2}}\ e^{-\frac{\pi t}{2K^{\prime}\sqrt{R}}}\Big{)}\Big{\}},\] which is _the generalized elegantissima_ as in (3.8).
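As a numerical sanity check of the limit (5.12) used in Step 2, the recursion (5.10) can be iterated directly; a minimal Python sketch, again assuming SciPy's `ellipk(m)` convention with \(m=k^{2}\):

```python
import numpy as np
from scipy.special import ellipk

k, R = 0.8, 1.0                    # modulus and radius of the initial motion
Rj, kj = R, k
for _ in range(12):                # iterate the recursion (5.10)
    Rj = Rj / kj                   # R_j = R_{j-1} / k_{j-1}
    kpj = (1.0 - kj) / (1.0 + kj)  # k'_j
    kj = np.sqrt(1.0 - kpj**2)     # k_j -> 1 quadratically, so R_j converges

Kp = ellipk(1.0 - k**2)            # K' = K(k')
print(Rj, (np.pi / (2.0 * k * Kp))**2 * R)   # both ~ 1.2578, as in (5.12)
```

Since \(k_{j}\to 1\) quadratically, a dozen iterations already reproduce the closed form \(R_{\infty}=\big{(}\frac{\pi}{2kK^{\prime}}\big{)}^{2}R\) to machine precision.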
2305.16504
On the Tool Manipulation Capability of Open-source Large Language Models
Recent studies on software tool manipulation with large language models (LLMs) mostly rely on closed model APIs. The industrial adoption of these models is substantially constrained due to the security and robustness risks in exposing information to closed LLM API services. In this paper, we ask can we enhance open-source LLMs to be competitive to leading closed LLM APIs in tool manipulation, with practical amount of human supervision. By analyzing common tool manipulation failures, we first demonstrate that open-source LLMs may require training with usage examples, in-context demonstration and generation style regulation to resolve failures. These insights motivate us to revisit classical methods in LLM literature, and demonstrate that we can adapt them as model alignment with programmatic data generation, system prompts and in-context demonstration retrievers to enhance open-source LLMs for tool manipulation. To evaluate these techniques, we create the ToolBench, a tool manipulation benchmark consisting of diverse software tools for real-world tasks. We demonstrate that our techniques can boost leading open-source LLMs by up to 90% success rate, showing capabilities competitive to OpenAI GPT-4 in 4 out of 8 ToolBench tasks. We show that such enhancement typically requires about one developer day to curate data for each tool, rendering a recipe with practical amount of human supervision.
Qiantong Xu, Fenglu Hong, Bo Li, Changran Hu, Zhengyu Chen, Jian Zhang
2023-05-25T22:10:20Z
http://arxiv.org/abs/2305.16504v1
# On the Tool Manipulation Capability of Open-source Large Language Models ###### Abstract Recent studies on software tool manipulation with large language models (LLMs) mostly rely on closed model APIs. The industrial adoption of these models is substantially constrained due to the security and robustness risks in exposing information to closed LLM API services. In this paper, we ask _can we enhance open-source LLMs to be competitive to leading closed LLM APIs in tool manipulation, with practical amount of human supervision_. By analyzing common tool manipulation failures, we first demonstrate that open-source LLMs may require training with usage examples, in-context demonstration and generation style regulation to resolve failures. These insights motivate us to revisit classical methods in LLM literature, and demonstrate that we can adapt them as model alignment with programmatic data generation, system prompts and in-context demonstration retrievers to enhance open-source LLMs for tool manipulation. To evaluate these techniques, we create the ToolBench, a tool manipulation benchmark consisting of diverse software tools for real-world tasks. We demonstrate that our techniques can boost leading open-source LLMs by up to 90% success rate, showing capabilities competitive to OpenAI GPT-4 in 4 out of 8 ToolBench tasks. We show that such enhancement typically requires about one developer day to curate data for each tool, rendering a recipe with practical amount of human supervision. Footnote 1: Available at [https://github.com/sambanova/toolbench](https://github.com/sambanova/toolbench)

## 1 Introduction

Tool-augmented large language models (LLMs) recently emerge as a research frontier. Such augmented LLMs demonstrate tool manipulation capabilities which automate software operations through natural language instructions [1; 2; 3; 4; 5]. Despite the fact that open-source LLMs substantially shrink the quality gap towards proprietary closed LLMs in tasks such as chatbot [6; 7; 8; 9], recent tool-augmented LLMs still mostly rely on closed LLM APIs [1; 2; 3; 4]. This leads to a fundamental barrier for the industrial adoption of these augmented LLMs due to security and robustness risks associated with exposing enterprise-internal workflows and information to closed LLM APIs [10; 11]. To maximize the industrial impact, there is a substantial need for tool manipulation capabilities founded on open-source LLMs. To this end, we ask _can we build on open-source LLMs with a practical amount of human supervision and achieve tool manipulation capabilities competitive to closed LLMs._ In this paper, we first demystify key challenges for tool manipulation using open-source LLMs; we then leverage the insights to suggest practical recipes for enhancement. Concretely, we study the setting shown in Figure 1 where LLMs take in a natural language instruction as the goal and generate API calls to accomplish the goal. Although we expect a quality gap between the open-source and closed LLMs [12], what we observe is a far more severe disparity. Specifically, for an on-sale house searching tool, a leading open LLM for code generation fails every test case while OpenAI GPT-4 [13] attains \(77\%\) success rate across the same one hundred examples. This observation motivates us to study the challenges for open-source LLMs to attain strong tool manipulation capability. During our investigation, we identify three key challenges that impede the performance of open-source LLMs in tool manipulation.
Firstly, open-source models often struggle to accurately identify API names, whereas closed LLMs demonstrate the capability to invoke the correct APIs without explicit usage examples or documentation during inference. This suggests that closed LLMs hypothetically internalize knowledge of API usage during training. Secondly, we show that without demonstration examples, open-source LLMs often fail to populate the appropriate values for API arguments. Thirdly, we demonstrate that open-source LLMs tend to produce non-executable generation, such as natural language beyond the desired code. These insights lead us to revisit three _simple_ techniques from LLMs for conventional NLP tasks. In the context of tool manipulation, we adapt them with a practical amount of supervision and use them to enhance open-source LLMs. _Model alignment:_ To first internalize API usage knowledge, we perform instruction tuning [14; 15] with programmatically generated data. Specifically, we first write a few dozen templates on goals and corresponding API calls. We then programmatically bootstrap the data volume by instantiating templates with concrete keyword values. _In-context demonstration retriever:_ Inspired by retrieval-augmented generation [16; 17; 18], we additionally enhance the LLMs with a _retriever_ to leverage in-context demonstrations during inference. This module selects demonstration examples with the most semantically similar goals from a human-curated pool of examples. Given \(n\) API functions, the retriever only requires \(\mathcal{O}\left(n\right)\) examples where every API function appears in at least one example. We then leverage LLMs to generalize to goals achieved by unseen API combinations. _System prompt:_ Finally we embed goal descriptions into a pre-defined system prompt which provides inference-time guidelines to generate executable API calls; such system prompts were shown to regulate language style in chatbots [19]. These techniques only require a small amount of human supervision. Thus they render a potentially practical recipe for building on top of open-source LLMs. To extensively evaluate the inspired techniques, we present _ToolBench_, a benchmark suite on eight diverse tools ranging from Google Sheets manipulation to controlling robots [20]. It enables the first publicly-available quantitative evaluation test bench among the ones brought up in the tool-augmented LLM literature [2; 3]. For the software tools in our benchmark, LLMs need to accomplish a variety of goals by selecting and combining API functions from up to a hundred candidates. Using the tools in the ToolBench suite, we first empirically show that leading open-source LLMs can demonstrate up to \(78\%\) lower success rate when compared to the OpenAI GPT-4 APIs. We then demonstrate that these simple techniques can substantially improve the success rate of open-source LLMs by up to \(90\%\), attaining results competitive with or better than OpenAI GPT-4 models in \(4\) out of the \(8\) tools in our benchmark2. To reveal the impact of different techniques, we provide evidence that aligning the model with synthetic data primarily contributes to the significant improvement of open-source LLMs. The system prompt and the in-context demonstration retriever further enhance the performance. During the enhancement process, we observe that, on average, it takes just one day for a developer to craft the in-context demonstrations and curate the templates for generating model alignment data.
This implies that the recipe requires a practical level of human supervision. Footnote 2: We apply the same system prompt and in-context example retriever for GPT-4. Model alignment is not applicable to GPT-4 as there are no publicly available tuning APIs for it during our experiments. Figure 1: Tool manipulation setup. We augment LLMs as action generators with access to API documentations. In a single-step scenario, an action generator directly generates API calls to accomplish the goal. A multi-step action generator iterates with an environment using API calls and generates the next-step calls based on the information from the environment until an exit state. Our contributions and the structure of this paper are as follows. * In Section 3, we reveal challenges in API selection, argument populating and non-executable generation which hinder open-source LLMs on tool manipulation. * To alleviate the challenges, we revisit simple techniques for conventional NLP tasks. We adapt them for tool manipulation to boost open-source LLMs with minimal human supervision in Section 4. * In Section 5, we introduce the ToolBench, the first open-sourced benchmark with pre-defined test cases for quantitative evaluation compared to the ones in the recent tool-augmented LLM literature. * We demonstrate in Section 6 that our adapted techniques boost open-source LLMs by up to \(90\%\) success rate, showing competitiveness with GPT-4 APIs in \(4\) out of \(8\) ToolBench tasks.

## 2 Background

To establish background, we first concretize the software tool manipulation setup. We then present a preliminary observation on the capability of open-source LLMs. This observation motivates our study on the challenges in Section 3 which inspire simple techniques for enhancements in Section 4.

### 2.1 Tool manipulation setup

In this paper, we study the scenario where software users intend to translate a natural language goal description \(g\) into a sequence of application programming interface (API) calls \(C_{g}=\left\{c_{0},c_{1},\cdots,c_{n_{g}}\right\}\) to accomplish the goal. We study tool manipulation with open-source LLMs in this specific setting, because APIs serve as the prevalent abstraction for developers and users in modern software systems.

**Large language model** Autoregressive language models encode probabilities of the next word \(x_{N+1}\) given \(x_{0},x_{1},\cdots,x_{N}\) as the context sequence [21]. By sampling from this conditional probability \(p\left(x_{N+1}|x_{0},x_{1},\cdots,x_{N}\right)\) iteratively, it generates language continuations from given contexts. In the recent wave of scaling up model size and training data volume, transformer-based language models show unprecedented capability in instruction following for text and code generation [22; 23; 24]. In the context of tool manipulation, we cast goal descriptions and optional information as an instruction in the context and task the LLMs to generate code for API calls as the continuation.

**Action generator** A key implementation for tool manipulation is an action generator \(\mathcal{A}\) which maps a goal \(g\) to API calls \(C_{g}\). As open-source LLMs likely have not seen the information regarding the relevant APIs, we augment an LLM \(\mathcal{M}\) into an action generator by providing access to a pool of \(m\) candidate API functions \(\mathcal{D}=\left\{d_{0},d_{1},\cdots,d_{m}\right\}\).
Due to the input sequence length limit of LLMs, we provide an optional retriever \(\mathcal{R}\) to retain a relevant subset of API documents \(\mathcal{D}_{g}=\mathcal{R}\left(g,\mathcal{D}\right)\subseteq\mathcal{D}\). Thus, the action generator produces the sequence of API calls \(C_{g}=\mathcal{A}\left(g,\mathcal{D}_{g},O\right)\), where \(O\) represents the optional information that can be included in the prompt. This is a naive way of retrieval augmented generation [18; 25; 26] and we employ an off-the-shelf retriever implementation [27] for our study, but we also highly encourage the community to explore algorithms tailored for the action generator.

**Single and multi-step tool manipulation** As shown in Figure 1, an action generator may interact with software in either a single-step or a multi-step scenario. In a single-step scenario, the action generator directly produces an API call sequence \(C_{g}=\mathcal{A}\left(g,\mathcal{D}_{g},\emptyset\right)\). In a multi-step scenario, the action generator produces a series of API call sequences \(C_{g}=\cup_{i}C_{g,i}\) where each segment \(C_{g,i}\) is used to interact with a predefined environment \(\mathcal{E}\) and generates the observation \(O_{i}=\mathcal{E}(C_{g,i})\). The observation is then used to generate a new segment \(C_{g,i+1}=\mathcal{A}\left(g,\mathcal{D}_{g},O_{i}\right)\). The process stops at an exit state. Throughout the remainder of this paper, we use the single-step setup for illustration clarity unless stated otherwise. Our experiments in Section 6 cover both single and multi-step cases.

To assess the tool manipulation capability of open-source LLMs, we compare them to the OpenAI GPT-4 API using the setup discussed in Section 2.1. In this preliminary comparison, we initially anticipate the closed LLMs to exhibit an advantage in tool manipulation, as observed in traditional NLP tasks [12]. However, we observe a significantly larger gap than expected. For instance, in a home search task, open-source LLMs have a hard time generating correct API calls, resulting in a \(70\%\) success rate gap compared to the zero-shot GPT-4 APIs as shown in Table 1. Such a gap motivates us to study what impedes open-source LLMs' performance.

## 3 Challenges for open-source LLMs

To demystify key challenges, we study the behaviors of open-source LLMs in tool manipulation. By analyzing common mistakes in a weather query task, we discover three challenges to attain strong tool manipulation capabilities. As shown in Table 2, we observe that open-source LLMs often face difficulty in (1) API selection, (2) API argument population, and (3) generating legitimate and executable code 3. These insights are described in detail in this section and inspire the techniques to alleviate the challenges in Section 4. Footnote 3: If a failure case has multiple errors, we categorize it by the first triggered category in the following order: non-executable generation, wrong API selection, wrong argument populating

**Difficulty in API selection** We observe that API selection failures often involve using incorrect APIs and even hallucinating non-existent API names. To quantitatively understand the intrinsic capability in API selection, we compare open-source LLMs to GPT-4 without providing any documentation or in-context demonstrations during inference. The results, as shown in Figure 2 for the weather query tool OpenWeather, reveal that GPT-4 can choose the right API without additional information beyond the goal, while open-source models struggle.
Such capability disparity entails that _closed LLMs potentially internalize knowledge of API usage during training_.

**Difficulty in argument populating** Open-source LLMs also frequently fail to populate API arguments with the appropriate values, as shown in Table 3. In an attempt to mitigate this issue, we provide the LLMs with a hand-picked oracle in-context demonstration which achieves the same goal with different argument values. We show in Figure 2 that the hand-picked oracle examples improve success rates by up to \(45\%\). It is important to note that oracle examples are not intended as a solution for argument populating confusion, as they are hand-picked on a per-test-case basis. Nonetheless, these observations suggest that _in-context demonstrations can substantially enhance open-source LLMs for tool manipulation_.

**Non-executable generation** The third common failure of open-source LLMs is non-executable generation. Such failures encompass issues such as language verbosity around API calls and adherence to natural language based guidelines, as shown in Table 2. Open-source models sometimes exhibit such errors in \(23\%\) of one hundred weather query cases. These observations underscore _the necessity of regulating open-source LLMs to exclusively generate code._

## 4 Boosting Open-source LLMs for Tool Manipulation

The insights from Section 3 emphasize the importance of tuning with API usage examples, in-context demonstration and generation regulation in the domain of tool manipulation. In this section, _we revisit three techniques from the LLM literature and adapt them to address the aforementioned challenges, using a practical amount of human supervision_. We first introduce model alignment with programmatically curated data to internalize API usage knowledge in Section 4.1. We then discuss augmenting open-source LLMs with an in-context demonstration retriever in Section 4.2. Lastly, we apply a system prompt to regulate generation in Section 4.3. These techniques collectively serve as a strong baseline for alleviating the challenges presented in Section 3 and inspiring further innovations.

### 4.1 Multi-tool model alignment with programmatic data curation

Model alignment, through tuning LLMs with usage examples, plays a vital role in improving LLMs for capabilities such as instruction following and conversation [14; 19; 28]. In light of our insights from Section 3, we recognize the potential of model alignment with API usage examples to improve API selection capability. To practically leverage such alignment for tool manipulation, it requires a data curation strategy without massive manual example writing. Towards this end, we prototype a method which generates usage examples from human-curated templates. Figure 3 depicts our flow to generate alignment data. We create a handful of templates consisting of goal descriptions and corresponding API calls. These templates contain one or more placeholder pairs. Each of these pairs maps to a keyword in the goal and an argument in the corresponding API calls. We also provide a pool of candidate values for each keyword and randomly choose values to fill in the placeholders within the template. Given a tool with \(n\) candidate APIs, we only require \(\mathcal{O}(n)\) human-curated templates to ensure practical human supervision. Specifically we use a principle where each of the \(n\) APIs is encouraged to appear in at least one template. In practice, we find it takes on average one day for one developer to curate the data for one software tool in our benchmark; this includes writing the goal templates, providing the pool of argument values and generating the data.
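As a concrete illustration of this flow, here is a minimal Python sketch of the template instantiation; the `get_weather` template and the value pools are hypothetical stand-ins, not the actual templates from Appendix C:

```python
import random

# One hypothetical goal/API-call template pair; "{city}" and "{units}" are
# placeholder pairs mapping a keyword in the goal to an API argument.
TEMPLATE = {
    "goal": "What is the current temperature in {city}, in {units}?",
    "api_call": 'get_weather(city="{city}", units="{units}")',
}
VALUE_POOLS = {
    "city": ["Berlin", "Tokyo", "Palo Alto"],
    "units": ["celsius", "fahrenheit"],
}

def instantiate(template, pools, n):
    """Bootstrap n alignment examples by filling placeholders with random values."""
    examples = []
    for _ in range(n):
        values = {key: random.choice(pool) for key, pool in pools.items()}
        examples.append({
            "goal": template["goal"].format(**values),
            "api_call": template["api_call"].format(**values),
        })
    return examples

for ex in instantiate(TEMPLATE, VALUE_POOLS, 3):
    print(ex["goal"], "->", ex["api_call"])
```

With \(\mathcal{O}(n)\) such templates per tool, the data volume is then scaled up purely by sampling, which is what keeps the human supervision to roughly one developer day.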
We provide example templates we use for different tools in Appendix C. With data curated for all the tools, we perform model alignment tuning _jointly for all tools and produce a single model_.

### 4.2 Demonstration retrieval

In Section 3, we demonstrate the efficacy of hand-picked oracle examples in improving argument populating. However, extending from oracles to practical in-context demonstration poses two challenges. First, given \(n\) API function candidates, there are exponentially many combinations of API calls associated with different goals. Thus, LLMs should be capable of generalizing to a wide variety of goals based on a limited number of examples. Second, to ensure effective demonstration, it is important to provide LLMs with only the relevant examples without human interventions. To fulfill the above two desiderata, we augment open-source LLMs with a demonstration retriever module. This module revolves around a repository where every API is required to appear in only one human-curated demonstration. This implies that only \(\mathcal{O}(n)\) examples are needed. Among these demonstration examples, the retriever selects the most semantically similar examples to the goal descriptions.

Figure 3: Programmatic training data generation using templates and random values

**Validation** To verify the effectiveness of demonstration examples in practice, we empirically show that the retrieved demonstrations can improve the success rate on goals requiring API combinations unseen in the example repository. In particular, we evaluate this approach on the home search task which exposes \(15\) API functions and requires multiple functions to accomplish each goal. With only \(10\) human-curated demonstrations that do not precisely match any of the \(100\) test cases in terms of API combinations, the retrieved demonstrations can boost the success rate by up to \(79\%\) across open-source LLMs and make GPT-4 nearly perfect, as shown in Figure 4. This shows that the demonstration examples can improve tool manipulation for unseen types of goals with a repository of size \(\mathcal{O}(n)\) only.

### 4.3 Generation regulation with system prompts

The use of system prompts is a well-established technique in chatbots powered by LLMs [19]. By incorporating human-chatbot conversations, system prompts can effectively control the natural language style of the generated responses. In the context of tool manipulation, we regularize open-source LLMs to exclusively generate API calls with a system prompt in Figure 5, where the black part is the template shared across all tasks and the red rows are instantiated during inference for a certain goal. Our system prompt first defines a format that combines text sections containing goals, demonstrations, and generations. It then provides explicit guidelines in natural language, instructing the LLMs to generate code exclusively. The system prompt incorporates the goal description and the retrieved API functions directly for each request, reducing the human development effort to a one-time task.

## 5 ToolBench: A New Tool Manipulation Benchmark

To evaluate open-source LLMs in the domain of tool manipulation, we curate a benchmark suite from both existing datasets and newly collected ones. This benchmark stands out as the first open-source test bench with predefined test cases for quantitative evaluation, distinguishing it from recent tool manipulation research using closed LLMs [2; 3]. In this section, we introduce the software tools and the evaluation infrastructure.

Figure 4: In-context demonstration can improve both closed and open-source models on Home Search, a tool for browsing houses on sale.

Figure 5: System prompt with guidelines to only generate code in a desired format. Red parts are populated with real data for each test case during inference.
We also demonstrate the level of challenges posed by each tool, in terms of the ability to generalize to unseen API combinations and the requirement for advanced reasoning.

### 5.1 Software tools and evaluation infrastructure

As shown in Table 4, our benchmark consists of five tasks we collected and three tasks derived from existing datasets, including VirtualHome [29; 30], WebShop [31] and Tabletop [20]. They cover both single-step and multiple-step action generation, which requires selecting and combining from \(2\) to \(108\) API functions to accomplish the goals. Each task consists of approximately \(100\) test cases, including goal descriptions and the ground truth API calls. We also provide a limited number of demonstration examples to aid model predictions4. We include a comprehensive introduction and analysis of each task within the benchmark in Appendix A. Footnote 4: For WebShop, we find that more than \(\mathcal{O}(n)\) demonstration examples can improve the success rate. Nonetheless, these examples can be acquired from programmatic software operations without heavy human curation. We use _success rate_ as the primary evaluation metric for most tasks, except for WebShop where we report rewards, as well as for VirtualHome where we use executability and Longest Common Subsequence (LCS), following the original metrics proposed by the respective authors. To facilitate evaluation, we build an infrastructure that executes the API calls generated by the action generators and assesses the final outcome. This process enables reliable evaluation of tool manipulation capabilities without restricting the action generators to perfectly match the ground truth API calls.

### 5.2 Level of challenges

To assess the level of challenge, we examine ToolBench tasks based on their API complexity and the requirement for advanced reasoning. Intuitively, API complexity indicates the challenges in generalizing to unseen API combinations and non-default argument values. Challenges beyond API complexity then involve advanced reasoning.

**API Complexity** To quantify the challenge in generalizing to unseen API combinations, we develop a task-agnostic complexity score \(S\in\mathbb{R}_{0}^{+}\), where \[S(\mathcal{T},\mathcal{X},\mathcal{D})=\mathbb{E}_{t\in\mathcal{T}}\min_{e\in\mathcal{X}}d(t,e). \tag{1}\] It averages, over all the test samples \(t\) in the test set \(\mathcal{T}\), the minimum distance between \(t\) and any demonstration example \(e\) from the example pool \(\mathcal{X}\). In particular, the distance \(d(t,e)\) between each test sample \(t\) and a demonstration example \(e\) is negatively proportional to the probability of transforming the API combination of \(e\) to match that of \(t\), by randomly dropping the API functions irrelevant to \(t\) and inserting the uncovered API functions required by \(t\) from the API pool \(\mathcal{D}\). We refer to Appendix D for the details of the complexity score and list the values in Table 4. The score is non-negative and the higher the score is, the more complex a task is. Despite the fact that this complexity score reflects the challenge level of API selection, it does not capture all the difficulties of a task. A task with a low complexity score can still be very challenging as it might require advanced reasoning. For instance, even though WebShop is challenging, its API selection complexity is zero. This is because there are only two API functions requiring only one argument each in WebShop, and they are both covered by the examples, so there is no API selection complexity.
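To make Eq. (1) concrete, a minimal sketch of the computation follows; the paper's actual transformation-probability distance \(d(t,e)\) is specified in its Appendix D, so the `api_edit_distance` below is a simplified, hypothetical stand-in based on API-set insertions and deletions:

```python
def api_edit_distance(test_apis, demo_apis):
    """Simplified stand-in for d(t, e): counts the API insertions and
    deletions needed to turn the demonstration's API set into the test
    case's set. The actual distance (Appendix D) is probability-based."""
    test_apis, demo_apis = set(test_apis), set(demo_apis)
    missing = len(test_apis - demo_apis)   # APIs to insert from the pool
    extra = len(demo_apis - test_apis)     # irrelevant APIs to drop
    return missing + extra

def complexity_score(test_cases, demo_pool):
    """S(T, X, D) as in Eq. (1): mean over test cases of the minimum
    distance to any demonstration example."""
    return sum(
        min(api_edit_distance(t, e) for e in demo_pool)
        for t in test_cases
    ) / len(test_cases)

# Toy usage: test cases and demonstrations described by the APIs they need.
demos = [["search", "filter"], ["book_flight"]]
tests = [["search"], ["search", "filter", "sort"], ["book_flight", "book_hotel"]]
print(complexity_score(tests, demos))  # prints 1.0 for this toy setup
```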
| Task | Open Weather | The Cat API | Home Search | Trip Booking | Google Sheets | VirtualHome | WebShop (Long / Short) | Tabletop |
|---|---|---|---|---|---|---|---|---|
| _Data_ | | | | | | | | |
| API functions | 9 | 6 | 15 | 20 | 108 | 40 | 2 | 32 |
| Demonstration examples | 18 | 12 | 10 | 11 | 10 | 83 | 1533 / 200 | 74 |
| Test cases | 100 | 100 | 100 | 120 | 70 | 100 | 100 | 105 |
| _Level of challenges_ | | | | | | | | |
| API complexity | 2.2 | 1.4 | 7.3 | 11.1 | 8.4 | 12.3 | 0.0 | 4.6 |
| Advanced reasoning | | | | | ✓ | | ✓ | ✓ |

Table 4: Tasks in the ToolBench. We provide demonstration examples for few-shot in-context learning while test cases are for quantitative evaluation. We develop API complexity, a metric to quantify the challenge level in generalizing to unseen API combinations; higher complexity indicates more challenging tasks. We package the challenges beyond API complexity as advanced reasoning. We refer to Appendix A for more details on these tasks.

| Product | Cost | Price |
|---|---|---|
| beef | 1 | 3 |
| pork | 5 | 4 |
| chicken | 10 | 11 |

Task: Update beef's price to 10. Action: `worksheet.update("C2", 10)`

Table 5: A typical task of Google Sheets manipulation. It requires both selecting the correct API function and reasoning on the arguments.

**Advanced reasoning** Within our benchmark, advanced reasoning encompasses challenges beyond generalizing to unseen API combinations. These challenges include non-API-based coding for tasks such as Google Sheets and Tabletop, as well as decision-making based on observations returned from the WebShop environment. For instance, in the Google Sheets example shown in Table 5, the coordinate of the beef price's cell ("C2") cannot be easily derived from either the goal or the table itself. The action generator needs to understand the content or write additional Python code to derive this coordinate before calling the API function. In a similar scenario, the WebShop task requires the action generator to extract the exact button ID to click on the webpage given the description.
These challenges, categorized as advanced reasoning, complement the API complexity category.

## 6 Experiment

In this section, we leverage the ToolBench to empirically validate the techniques introduced in Section 4. First, to concretize the capability gap between open-source and closed LLMs, we demonstrate that the OpenAI GPT-4 API can have substantially higher success rate than representative open-source LLMs in Section 6.2. We then show in Section 6.3 that the simple techniques in Section 4 can boost open-source LLMs to achieve success rates competitive to in-context-learning with GPT-4 APIs5 in four out of the eight tasks. Through ablation studies in Section 6.4, we additionally show that model alignment does the heavy lifting for boosting open-source LLMs, while the system prompt and in-context learning robustify LLMs for further improvement. Footnote 5: GPT-4 tuning APIs were not released by the time this work was done.

### 6.1 Experiment Setup

To establish strong baselines, we use the GPT-4 API as the representative closed LLM in our study because it attains the leading accuracy in mainstream NLP tasks. In our study, we compare LLAMA-30B [32], StarCoder [33] and CodeGen-16B-mono [34] to GPT-4. LLAMA represents open research models, while StarCoder and CodeGen are publicly available for both research and commercial purposes. We choose these three models due to their superior performance on ToolBench among open-source models as shown in Table 9. Footnote 6: Surprisingly, we observe that for tool manipulations, open-source LLMs instruction-tuned for conventional NLP tasks do not outperform their base models before tuning. In our experiments, we consider the zero-shot setting as the out-of-the-box configuration where only API documentation is provided without any demonstration examples. We use this configuration to understand the initial gap in capabilities among models. We then incorporate all available techniques on top of this initial configuration to assess their benefits. For the original Tabletop dataset [20], which includes examples in a few-shot setting without explicit API definitions, we only evaluate settings with in-context demonstrations. More detailed setup information is included in Appendix C. We run each job 3 times with different random seeds and report average accuracy. The variation is minimal, so we omit it in the main paper and report it in the appendix.

### 6.2 Capability Gap

Table 6 exhibits significant disparities in tool manipulation between the closed GPT-4 API and open-source models in the out-of-the-box zero-shot setting. For simpler tasks, namely Open Weather and the Cat API, which require only one API call for each goal, the open-source models exhibit success rates up to \(74\%\) lower than GPT-4. Furthermore, on all the remaining tasks other than WebShop, none of the LLAMA, StarCoder and CodeGen models can reach meaningful accuracy or compare with GPT-4. These results highlight an opportunity to enhance open-source LLMs.

### 6.3 Boosting open-source LLMs

To boost the open-source LLMs, we first perform model alignment using programmatically generated data. We then apply a system prompt and a 3-shot demonstration retriever during inference. Given GPT-4 does not provide tuning APIs, we enhance the out-of-the-box GPT-4 with the same system prompt and demonstration retriever as the baseline. The improvements from the combined enhancement techniques are shown in Table 6, where the success rates of the open-source LLMs can improve up to \(90\%\).
As a result, the open-source models achieve competitive or better success rates on 4 out of 8 tasks, including Open Weather, the Cat API, VirtualHome and WebShop. Moreover, on Home Search and Trip Booking, the gap between the LLAMA model and the GPT-4 API is reduced to \(11\%\) and \(13.4\%\) respectively, compared to the initial gap of up to \(91\%\). Despite the fact that open-source models are still lagging behind on Google Sheets and Tabletop, these observations show that _our recipe can significantly improve the performance of open-source LLMs and attain success rates comparable to the GPT-4 API on many of the ToolBench tasks_.

**Human supervision** To identify the practicality of an enhancement recipe, the amount of required human supervision is a crucial factor. In our approach, human supervision is primarily in the form of in-context demonstration examples and alignment data templates. Regarding the demonstration examples, we provide \(10\) to \(83\) examples for each task as shown in Table 4, except for WebShop given its difficulty in advanced reasoning. As shown in Table 10, the number of templates for alignment data is typically less than \(100\) for each task. We observe that providing these supervisions takes one developer day on average, making the recipe practical in terms of the human supervision cost.

**Remaining challenges** In our experiments, we observe that the boosted open-source LLMs still have relatively low success rates on tasks that require advanced reasoning, such as the Google Sheets, WebShop and Tabletop tasks. This implies the need to further enhance the reasoning capabilities of open-source models. We are excited about the prospect of more exploration from the community to address the challenges for tool manipulation on these complex tasks.

### 6.4 Ablation Study

We break down the contribution of the techniques in two ways. First, we apply each technique individually on top of the out-of-the-box zero-shot configuration and evaluate its impact. As shown in Table 7, both the 3-shot in-context demonstration and model alignment techniques bump up the success rates across all tasks, while the system prompt only benefits simple tasks that involve relatively fewer API calls for each goal. Next, we consider the combination of all techniques and remove them one at a time to evaluate their relative contributions within the full system. As shown in Table 7, solely removing model alignment triggers success rate degradation in up to 7 tasks, while removing in-context demonstration hurts up to 5 tasks and dropping the system prompt up to 3. We notice that the tasks that are not significantly impacted when removing techniques are typically the ones with relatively low success rates (usually <20% even in the full system). Thus, those accuracy changes are hypothetically subject to high variance and fluctuation. The full results from the experiments in this section can be found in Table 12.
| Task | Open Weather | The Cat API | Home Search | Trip Booking | Google Sheets | VirtualHome | WebShop (Long / Short) | Tabletop |
|---|---|---|---|---|---|---|---|---|
| _Zero-shot baseline_ | | | | | | | | |
| GPT-4 | 81.3 | 97.4 | 76.6 | 91.5 | 5.7 | 40.8 / 8.0 | 0.0 | - |
| LLaMA-30b | 39.0 | 49.0 | 0.0 | 0.0 | 0.0 | 78.0 / 0.3 | 0.0 | - |
| StarCoder | 32.0 | 71.0 | 7.0 | 13.3 | 5.9 | 22.0 / 3.7 | 0.0 | - |
| CodeGen-16B-mono | 7.0 | 78.0 | 0.0 | 0.0 | 1.4 | 4.0 / 1.0 | 0.0 | - |
| _Enhanced with techniques_ | | | | | | | | |
| GPT-4 | 99.0 | 98.0 | 98.0 | 99.2 | 68.6 | 29.0 / 21.7 | 0.0 / 0.0 | 83.8 |
| LLaMA-30b | 100.0 | 94.0 | 87.0 | 85.8 | 2.9 | 16.0 / 24.3 | 0.0 / 0.0 | 7.5 |
| StarCoder | 99.0 | 97.0 | 83.0 | 80.8 | 21.2 | 31.0 / 18.4 | 0.0 / 0.0 | 13.9 |
| CodeGen-16B-mono | 97.7 | 99.0 | 82.0 | 77.5 | 19.8 | 29.0 / 17.2 | 0.0 / 3.5 | 16.2 |

Table 6: Capability gap in tool manipulation is substantial between closed API and open-source LLMs in the out-of-the-box zero-shot setting. Using model alignment, the in-context demonstration retriever and the system prompt, open-source LLMs attain a significant boost in success rate. GPT-4 is enhanced with the retriever and system prompt. Tabletop is only evaluated in the few-shot fashion.

| | LLaMA | StarCoder | CodeGen |
|---|---|---|---|
| **Zero-shot** | - | - | - |
| + Sys. Prompt | +4 | +4 | +4 |
| + 3-shot | +8 | +8 | +8 |
| + Alignment | +7 | +7 | +7 |
| **Full system** | - | - | - |
| - Sys. Prompt | -0 | -2 | -3 |
| - 3-shot | -3 | -4 | -5 |
| - Alignment | -5 | -5 | -7 |

Table 7: The number of ToolBench tasks improved (+N) or hurt (-N) over the baselines when adding or dropping techniques.

## 7 Related work

Our work establishes a strong connection to LLM-driven program synthesis. In contrast to the conventional rule-based code generation in popular compilation frameworks [35], recent auto-regressive LLMs such as CodeGen [34], SantaCoder [36] and StarCoder [33] treat the problem as a sequence generation task and demonstrate superior capabilities in emitting semantically correct computer programs. We use CodeGen as a representative from these models in our study for API call generation. Tool manipulation is also known as tool-augmented learning [3, 37]. Some of the works seek to augment generations with the execution results from various tools [1, 38, 39, 26, 40, 41, 42], while another line of work focuses on executing the tools themselves, including embodied robotic learning [20, 30, 43, 44, 45], and automation for other tools [31, 46, 47, 48]. We focus on the study of the second stream with different models and techniques. Recent works in tool manipulation with LLMs mostly study techniques to enhance in-context-learning with closed LLM APIs [1, 2, 3, 4, 5]. In contrast, we study simple techniques that allow developers to practically build on top of open-source LLMs.
The three techniques we mention in this paper [19, 22, 26, 49] are well studied in conventional NLP tasks. We revisit and adapt them in the context of tool manipulation on open-source models with a practical amount of human supervision. In the recent LLM literature, there are several works presenting tool manipulation benchmarks [2, 3]. Compared to these benchmarks, the ToolBench is the first one providing predefined test cases for evaluation on real execution results.

## 8 Conclusion

In this paper, we answer the question _can we enhance open-source LLMs to compete with leading closed LLM APIs in tool manipulation, with a practical amount of human supervision_. Drawing from our observations of the common tool manipulation failures and insights from the literature on conventional NLP tasks with LLMs, we propose to instantiate model alignment with programmatic data generation, system prompts, and in-context demonstration retrievers to improve the tool manipulation capability of open-source models. To comprehensively evaluate the impact of these techniques, we create the _ToolBench_, a benchmark consisting of diverse software tools for real-world tasks. Our results demonstrate that these techniques can make the leading open-source LLMs competitive with OpenAI GPT-4 in \(4\) out of \(8\) ToolBench tasks, all achieved with a practical amount of human labeling effort.

## Acknowledgments and Disclosure of Funding

We sincerely appreciate the helpful discussion with Urmish Thakker, Tian Zhao, Raghu Prabhakar, Kaizhao Liang, Petro Junior Milan, Bowen Yang, Qinghua Li and Yaqi Zhang.
2307.01601
Prototypes as Explanation for Time Series Anomaly Detection
Detecting abnormal patterns that deviate from a certain regular repeating pattern in time series is essential in many big data applications. However, the lack of labels, the dynamic nature of time series data, and unforeseeable abnormal behaviors make the detection process challenging. Despite the success of recent deep anomaly detection approaches, the mystical mechanisms in such black-box models have become a new challenge in safety-critical applications. The lack of model transparency and prediction reliability hinders further breakthroughs in such domains. This paper proposes ProtoAD, using prototypes as the example-based explanation for the state of regular patterns during anomaly detection. Without significant impact on the detection performance, prototypes shed light on the deep black-box models and provide intuitive understanding for domain experts and stakeholders. We extend the widely used prototype learning in classification problems into anomaly detection. By visualizing both the latent space and input space prototypes, we intuitively demonstrate how regular data are modeled and why specific patterns are considered abnormal.
Bin Li, Carsten Jentsch, Emmanuel Müller
2023-07-04T09:40:30Z
http://arxiv.org/abs/2307.01601v1
# Prototypes as Explanation for Time Series Anomaly Detection ###### Abstract. Detecting abnormal patterns that deviate from a certain regular repeating pattern in time series is essential in many big data applications. However, the lack of labels, the dynamic nature of time series data, and unforeseeable abnormal behaviors make the detection process challenging. Despite the success of recent deep anomaly detection approaches, the mystical mechanisms in such black-box models have become a new challenge in safety-critical applications. The lack of model transparency and prediction reliability hinders further breakthroughs in such domains. This paper proposes ProtoAD, using prototypes as the example-based explanation for the state of regular patterns during anomaly detection. Without significant impact on the detection performance, prototypes shed light on the deep black-box models and provide intuitive understanding for domain experts and stakeholders. We extend the widely used prototype learning in classification problems into anomaly detection. By visualizing both the latent space and input space prototypes, we intuitively demonstrate how regular data are modeled and why specific patterns are considered abnormal. Anomaly detection, Anomaly explanation, Prototypes, Time series

In this paper, we propose to use prototypes to interpret the regular data during anomaly detection using autoencoders. In this context, data showing a certain repeating regular pattern is generated by several latent distributions, while an anomaly is any data point or period that deviates from this regular pattern. We model the regular patterns of the time series data with prototypes in the latent space of the autoencoder and learn multiple prototypes to discover the latent components of the regular data distribution. Anomaly patterns that lie distantly from the regular data in the latent space can then be explained by comparing them with the constructed prototypes. To our knowledge, this is the first prototype-based explanation application in the time series anomaly detection domain. Our contribution to the paper can be summarized as follows: 1. we propose ProtoAD, an end-to-end LSTM-Autoencoder for anomaly detection with prototype learning 2.
we develop latent space prototype-based explanations for the understanding of the regular state of the studied data 3. we evaluate our method with synthetic and real-world time series data. Moreover, we visually demonstrate examples of prototypes to show the benefit of our model qualitatively ## 2. Related Works This section briefly reviews the existing work in autoencoder-based anomaly detection models and prototype-based explanations methods. ### Reconstruction-based anomaly detection Autoencoders have been used as an unsupervised anomaly detection approach for years. Feed-forward autoencoders (Zhou et al., 2017) and Variational Autoencoders (Zhou et al., 2017) are used for time-independent data. In contrast, RNN-based autoencoders (Zhou et al., 2017) show their strength in detecting contextual anomalies in time series data. Based on the reconstruction error, a standard approach for estimating anomaly likelihood is to assume the reconstruction error following a normal distribution and measure the Mahalanobis distance between the reconstruction error of unknown data and the estimated distribution (Zhou et al., 2017). In addition to reconstruction error, the hidden representation in the latent space can also be used for likelihood estimation (Zhou et al., 2017). Gaussian Mixture Model (GMM) (Zhou et al., 2017) and energy-based model (Zhou et al., 2017) are also used for the likelihood estimation. Common thresholding techniques over the anomaly likelihood are based on maximizing the performance on a validation set, which requires labels in advance (Zhou et al., 2017). Other approaches, including the hierarchical temporal memory (HTM) (Bengio et al., 2016) and temporal convolutional network (TCN) (Chen et al., 2017) are also adopted in time series anomaly detection concerning different use cases and data properties. However, they are not directly relevant to the reconstruction-based models. ### Explanation with prototypes Due to the complex properties of both feature and time dimensions of time series data, prototypes are considered an intuitive explanation. Common prototype learning approaches for neural networks follow a three-step paradigm. 1) Representation learning, 2) prototype learning in the latent space, and 3) class prediction. The objective commonly includes 1) minimizing classification error, 2) minimizing the distances between each hidden representation and one of the prototypes, and 3) maximizing the distances between prototypes. In the existing prototype learning literature, (Zhou et al., 2017) employs a multi-layer convolutional neural network to construct the autoencoder, which learns hidden representations for image data. They rely on the decoder to project the learned prototypes in the human-understandable space, sometimes producing unrealistic reconstructions. Using a single encoder to replace the autoencoder is considered as a reduction of training effort in (Chen et al., 2017; Chen et al., 2017), and they use the nearest neighborhood of each prototype in the latent space as the corresponding realistic patterns in the input space. Under different problem settings, (Chen et al., 2017) and (Chen et al., 2017) build up the encoder with convolutional neural networks to encode image data, (Chen et al., 2017) uses RNNs for sequential data, (Chen et al., 2017) use a convolutional layer to learn time series representations, (Chen et al., 2017) employs graph neural networks for the encoder. 
In our work, we use a single LSTM-Autoencoder for both reconstruction-based time series anomaly detection and hidden space representation learning. The standard objective functions of existing prototype learning approaches consist of multiple regularisation terms that are trained jointly. To ensure the representation ability of the prototypes, (Chen et al., 2017; Chen et al., 2017; Chen et al., 2017; Chen et al., 2017) all minimize the distance between each prototype and every nearby hidden representation as well as every hidden representation to each prototype. Furthermore, the learned prototypes are supposed to be diverse from each other (Chen et al., 2017; Chen et al., 2017; Chen et al., 2017). In the objective, we will follow the standard design of the regularization terms above. However, different from most existing works, which use cross-entropy for their classification task to minimize the classification error (Chen et al., 2017; Chen et al., 2017; Chen et al., 2017), in our unsupervised setting, we use the reconstruction error to regularize the reconstruction process of regular data. Besides prototypes, other techniques are also used for explaining time series data. The representative subsequences, Shapelets (Sandar et al., 2016; Chen et al., 2017), can be similarly used for explanation. Instead of finding the representative pattern as prototypes, counterfactuals (Chen et al., 2017) explain the instance towards the opposite class. Recently, the attention mechanism is also used for explaining time series data (Chen et al., 2017; Chen et al., 2017).

## 3. Preliminaries

### Terminology

Let \(X=\{X_{t}\}_{t\in\mathbb{Z}}\) be a \(d\)-dimensional time series process that shows a regularly repeating pattern over time periods of some length \(L\). These repeating patterns are contained in sliding windows \(W_{t}=\{X_{t+1},\dots,X_{t+L}\}\) of \(L\) consecutive elements of the time series. Often, the window size \(L\) can be selected based on prior knowledge on the dataset, which is known to show seasonality over e.g., one day or one week. Anomalies in time series data are commonly divided into three categories (Chen et al., 2017): point anomaly, contextual anomaly, and collective anomaly. In this work, we consider point anomalies (e.g., abrupt peaks in the data window) and contextual anomalies (e.g., the appearance of one or more points makes a temporal context unusual). Formally, we assume that the data points are generated by a periodically stationary time series process \(X\) with periodicity \(L\) (Kang et al., 2019; Wang et al., 2020). That is, the time series consists of regularly repeating patterns of length \(L\) which evolve over time without distributional changes, i.e., we do not consider concept drifts (Beng et al., 2019). Let \((W_{t},y_{t})_{t\in\mathbb{Z}}\) be the dataset after applying the sliding window, where \(y_{t}\in\{0,1\}\) is the label of the window \(W_{t}\) (0 for regular data and 1 for anomaly). The anomaly detection is conducted on the window level. A window is considered abnormal if it contains at least one point, or a sub-window of multiple points, that shows significantly different behavior from the regular pattern. The significance is determined by comparing the window anomaly score predicted by the model with a user-defined threshold.

### Problem definition

Given the multi-dimensional time series data with applied sliding window, the target is to train an autoencoder-based end-to-end anomaly detector that
detects anomaly windows in an unsupervised manner, 2. delivers representative prototypes of regular data in the latent space, and 3. supports the interpretation of anomalies based on the prototypes of regular data.

## 4. Methodology

In this section, we propose ProtoAD, an LSTM-Autoencoder with an additional prototype layer, which can be trained end-to-end in an unsupervised manner.

### ProtoAD architecture

The architecture of ProtoAD is in line with existing prototype neural networks (Wang et al., 2020; Wang et al., 2020). We use an LSTM-Autoencoder to learn time series hidden representations in the latent space and feed the representations to the prototype layer for similarity-based prototype comparison. Specifically, we designed the architecture and training procedure for unsupervised anomaly detection, where only data consisting of regularly repeating patterns is used for training and prototype learning. An overview of the ProtoAD architecture is shown in Figure 1. We construct the LSTM-Autoencoder in the fashion of (Wang et al., 2020). More specifically, the \(d\)-dimensional input window \(W_{t}=\{X_{t+1},\dots,X_{t+L}\}\) is fed into the encoder

\[\mathbf{f}:\mathbb{R}^{L\times d}\rightarrow\mathbb{R}^{m}.\]

The last hidden state of the encoder LSTM unit, \(h_{t}=\mathbf{f}(W_{t})\) with \(h_{t}\in\mathbb{R}^{m}\), is used as the hidden representation of the input window in the latent space. A same-structured decoder

\[\mathbf{g}:\mathbb{R}^{m}\rightarrow\mathbb{R}^{L\times d}\]

aims at reconstructing the window \(W_{t}^{\prime}=\{X_{t+1}^{\prime},\dots,X_{t+L}^{\prime}\}\) from the hidden representation. The decoder LSTM unit takes \(h_{t}\) as the initial hidden state while taking the real data from the previous timestamp as input. We train the autoencoder to minimize the reconstruction error of regular windows, i.e., no anomalous data is used during training. The reconstruction error at timestamp \(t\) is defined as

\[e_{t}=|X_{t}-X_{t}^{\prime}|.\]

The training set is used to estimate a normal distribution \(\mathcal{N}(\mu,\Sigma)\) of the reconstruction error for multivariate input data (\(\mathcal{N}(\mu,\sigma)\) for univariate data). The likelihood of a data point being abnormal is then quantified by the anomaly score

\[a_{t}=\begin{cases}\frac{1}{\sigma\sqrt{2\pi}}e^{-\left(e_{t}-\mu\right)^{2}/(2\sigma^{2})}&d=1\\ \left(e_{t}-\mu\right)^{T}\Sigma^{-1}\left(e_{t}-\mu\right)&d>1.\end{cases}\]

The largest anomaly score is picked to represent the window anomaly score

\[a_{t+1}^{t+L}=\max_{i=1,\dots,L}\left(a_{t+i}\right).\]

In our work, we do not specify a threshold over the window anomaly scores to get a binary prediction. Instead, we directly evaluate the AUC score based on the real-valued anomaly scores. Different existing thresholding techniques can be applied to obtain a binary prediction in such a situation (Wang et al., 2020; Wang et al., 2020; Wang et al., 2020). Based on the anomaly detection model above, we introduce a prototype layer between the encoder and decoder, which yields interpretable prototypes of the regular data during the end-to-end training process. The prototype layer does not influence the information flow from the encoder to the decoder, i.e., the only information the decoder gets from the encoder is the last encoder hidden state. The prototypes are learned by regularizing the objective function.
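A minimal sketch of this architecture, assuming a PyTorch implementation (the framework, the layer names, and the zero-padded teacher-forcing shift are our illustration, not the authors' code):

```python
import torch
import torch.nn as nn


class ProtoAD(nn.Module):
    """Minimal sketch: LSTM-Autoencoder with a k-prototype latent layer."""

    def __init__(self, d, m, k):
        super().__init__()
        self.encoder = nn.LSTM(input_size=d, hidden_size=m, batch_first=True)
        self.decoder = nn.LSTM(input_size=d, hidden_size=m, batch_first=True)
        self.readout = nn.Linear(m, d)  # decoder hidden state -> input space
        # k prototypes in the m-dimensional latent space, initialized in [-1, 1]
        self.prototypes = nn.Parameter(torch.empty(k, m).uniform_(-1.0, 1.0))

    def forward(self, w):
        # w: (batch, L, d); the last encoder hidden state is the latent code h_t
        _, (h, c) = self.encoder(w)
        # teacher forcing: the decoder receives the real data from the previous
        # timestamp as input, with the encoder state as its initial state
        shifted = torch.cat([torch.zeros_like(w[:, :1, :]), w[:, :-1, :]], dim=1)
        dec_out, _ = self.decoder(shifted, (h, c))
        w_rec = self.readout(dec_out)  # (batch, L, d) reconstruction
        # the prototypes are not on the encoder-decoder path; they enter only
        # through the regularized objective function
        return w_rec, h.squeeze(0)
```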
Figure 1. ProtoAD overview: ProtoAD consists of an encoder and a decoder, which reconstruct the input data window \(W_{t}\) to \(W_{t}^{\prime}\). The anomaly score is calculated based on the reconstruction error. In the latent space, prototypes (stars) are learned as representative points of the encoded normal windows (white dots). In the test phase, latent representations of the abnormal data windows (red dots) can also be explained by comparison with the learned prototypes.

The prototype layer contains \(k\) prototypes \(p_{i}\in\mathbb{R}^{m}\) (\(i=1,\dots,k\)) to be learned, where \(k\) is a user-defined parameter and the \(k\) vectors are randomly initialized within the range \([-1,1]\). As introduced in subsection 4.2, several regularization terms are employed in the objective function to obtain the expected prototypes. In most existing prototype-based models for classification tasks (Kang et al., 2017; Li et al., 2018; Li et al., 2019), the prototype layer is followed by some linear layers and a Softmax layer for producing predictions, which increases the complexity of the network and requires additional regularization to enforce interpretability. As an anomaly detection model, our outputs are derived directly from the autoencoder reconstruction errors. Therefore, we omit the other common output layers after the prototype layer to simplify the network structure.

### Objective function

The objective in the training phase is to train the autoencoder with regular windows such that the reconstruction error is minimized and to learn a set of prototypes from the regular data. The reconstruction error loss of the autoencoder is given by

\[\mathcal{L}_{e}=\frac{1}{n}\sum_{i=1}^{n}\sum_{l=1}^{L}e_{t+l},\]

where \(n\) is the number of sliding windows. To ensure that the learned prototypes are informative and sufficiently diverse from each other, we use the diversity loss designed by (Li et al., 2018),

\[\mathcal{L}_{d}=\sum_{i=1}^{k}\sum_{j=i+1}^{k}\max(0,d_{min}-||p_{i}-p_{j}||_{2}^{2})^{2},\]

where the threshold \(d_{min}\) ensures that the penalty applies only to nearby prototype pairs. Finally, to ensure that the prototypes are representative of the local hidden representations, we define the following representation regularization term

\[\mathcal{L}_{r}=\frac{1}{k}\sum_{j=1}^{k}\min_{i\in[1,n]}||p_{j}-h_{i}||^{2}+\frac{1}{n}\sum_{i=1}^{n}\min_{j\in[1,k]}||h_{i}-p_{j}||^{2}.\]

The first term ensures that each prototype is close to at least one hidden representation, while the second term ensures that each hidden representation has at least one prototype representing it. The overall objective function is

\[\mathcal{L}=\lambda_{e}\mathcal{L}_{e}+\lambda_{d}\mathcal{L}_{d}+\lambda_{r}\mathcal{L}_{r},\]

where \(\lambda_{e}\), \(\lambda_{d}\) and \(\lambda_{r}\) are weighting hyperparameters.
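The three terms translate directly into code. A minimal PyTorch sketch (the \(\lambda\) defaults follow the experimental settings reported later in Sec. 5.1.4; the value of \(d_{min}\) is not given in the text, so its default here is a placeholder):

```python
import torch

def protoad_loss(w, w_rec, h, prototypes, d_min=1.0,
                 lam_e=0.025, lam_d=0.2, lam_r=0.5):
    """Overall objective L = lam_e*L_e + lam_d*L_d + lam_r*L_r.

    w, w_rec: (n, L, d) input windows and reconstructions;
    h: (n, m) latent codes; prototypes: (k, m).
    """
    # L_e: absolute reconstruction error, summed over timestamps (and input
    # dimensions for multivariate data), averaged over the n windows
    loss_e = (w - w_rec).abs().sum(dim=(1, 2)).mean()

    # squared Euclidean distances in the latent space
    d_hp = torch.cdist(h, prototypes) ** 2           # (n, k)
    d_pp = torch.cdist(prototypes, prototypes) ** 2  # (k, k)

    # L_d: penalize prototype pairs closer than d_min (each pair counted once)
    k = prototypes.shape[0]
    iu = torch.triu_indices(k, k, offset=1)
    loss_d = torch.clamp(d_min - d_pp[iu[0], iu[1]], min=0).pow(2).sum()

    # L_r: every prototype near some h_i, and every h_i near some prototype
    loss_r = d_hp.min(dim=0).values.mean() + d_hp.min(dim=1).values.mean()

    return lam_e * loss_e + lam_d * loss_d + lam_r * loss_r
```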
## 5. Experiments

In this section, we introduce the experiments on ProtoAD under different settings. We experiment over different real-world datasets with a variety of anomaly types. In addition, to evaluate the model performance on specific data characteristics, we also introduce a synthetic dataset with artificial anomalies. Finally, we demonstrate the prototypes visually and analyze the prototype properties w.r.t. a variety of parameter settings.

### Experiment setup

#### 5.1.1. Datasets

We experiment on one synthetic dataset and four common real-world benchmark datasets in the time series anomaly detection domain. The dataset properties are summarized in Table 1.

\begin{table} \begin{tabular}{|c|c|c|c|} \hline & Length & Dimensionality & Anomaly rate (\%) \\ \hline Synthetic & 20000 & 1 & 1.0 \\ Taxi & 17520 & 1 & 4.4 \\ SMAP & 562800 & 25 & 13.1 \\ MSL & 132046 & 55 & 10.7 \\ SMD & 56958 & 38 & 9.5 \\ \hline \end{tabular} \end{table} Table 1. Datasets

To understand the anomaly detection process and the learned prototypes, we introduce a one-dimensional synthetic dataset sampled from a sine wave with amplitude 1 and period 100 timestamps. A random noise \(\epsilon\in[0,0.1]\) is added to every timestamp. In addition, we add a random factor \(\alpha\in[0,1]\) every 100 timestamps in the test set to simulate point anomalies. We define a half period of the sine wave as the window length (i.e., \(L=50\)), such that the model is supposed to learn the crests and troughs as two types of prototypes.

The New York City Taxi (Taxi) dataset is a one-dimensional real-world dataset with a clear periodic pattern. It records the passenger counts over days in 2014. Extreme passenger counts on public holidays are considered anomalies. Following (Bang et al., 2017), we aggregate the count numbers into 30-minute intervals. We take one day (i.e., \(L=48\)) as the window length.

SMAP (Soil Moisture Active Passive satellite) and MSL (Mars Science Laboratory rover) are two multivariate telemetry datasets from NASA (Kang et al., 2017). The datasets contain both point and contextual anomalies. Domain experts labeled the test sets; however, there is also anomalous data in the training sets. The polluted training sets can impact the purity of the prototypes. There is no common repeating pattern in these datasets. We set the window length \(L\) to 100 for both datasets.

SMD (Server Machine Dataset) (Krizhevsky et al., 2012) is a multivariate dataset collected from servers of an Internet company. The dataset is already divided into two equal-sized training and test sets by the provider, with anomalies labeled by domain experts based on incident reports. We only use the data from one machine (machine-1-1) in our experiments. We set \(L=100\) for SMD.

#### 5.1.2. Evaluation metrics

We adopt the AUC score as the evaluation metric. Considering the essential requirement of detecting both point and contextual anomalies, we evaluate only at the window level. A data window is abnormal if it contains one or multiple abnormal instances.

#### 5.1.3. Competitors

To the best of our knowledge, this is the first work that combines time series anomaly detection and prototype learning. The existing prototype learning networks (Kang et al., 2017; Li et al., 2019; Li et al., 2019) commonly work in a supervised manner, which requires labeled data for the training phase. Therefore, they are not directly relevant to our setting. We mainly compare our method with unsupervised anomaly detection approaches. Firstly, we compare with EncDecAD (Hidskar et al., 2017), which has a similar setting to ours, but without the prototype layer. Thereby, we can determine whether the prototype learning damages the original reconstruction-based anomaly detection. Furthermore, we compare with one of the state-of-the-art unsupervised time series anomaly detection methods, OmniAnomaly (Meng et al., 2017). We follow most of the default hyperparameter settings in (Meng et al., 2017) but use the same window length as in our work for the sliding windows.

#### 5.1.4. Experimental details

In all experiments, we set \(\lambda_{e}=0.025\), \(\lambda_{d}=0.2\) and \(\lambda_{r}=0.5\). During training, \(25\%\) of the data is used for learning the parameters \(\mu\) and \(\Sigma\) (\(\sigma\) for univariate data).
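These two steps, fitting the error distribution on held-out regular data and scoring test windows, can be sketched in NumPy (our illustration; the function names are hypothetical, and the univariate branch reproduces the density-based score as defined in Sec. 4.1):

```python
import numpy as np

def fit_error_distribution(errors):
    """Fit N(mu, Sigma) to reconstruction errors of held-out regular data.

    errors: (n,) for univariate data or (n, d) for multivariate data.
    For d == 1, the returned sigma is the variance sigma^2.
    """
    if errors.ndim == 1:
        return errors.mean(), errors.var()
    return errors.mean(axis=0), np.cov(errors, rowvar=False)

def window_anomaly_score(window_errors, mu, sigma):
    """Window score = max over per-timestamp scores (Mahalanobis for d > 1)."""
    if window_errors.ndim == 1:
        scores = np.exp(-(window_errors - mu) ** 2 / (2 * sigma)) \
                 / np.sqrt(2 * np.pi * sigma)
    else:
        diff = window_errors - mu
        scores = np.einsum("ti,ij,tj->t", diff, np.linalg.inv(sigma), diff)
    return scores.max()
```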
All models are trained for \(100\) epochs with batch size \(20\), learning rate \(0.0001\), and dropout rate \(0.2\). We use the Adam optimizer (Kingma et al., 2014). All experiments are conducted on an NVIDIA Quadro RTX 6000 24GB GPU. The experimental results are averaged over three runs.

### Evaluation results

#### 5.2.1. Anomaly detection performance

Firstly, we report the AUC scores of the different models in Table 2. For ProtoAD, we take the number of prototypes \(k=10\). There is no significant difference between EncDecAD and ProtoAD, which indicates that the additional prototype layer and the corresponding learning process do not directly impact the anomaly detection performance. ProtoAD even benefits from the prototype learning on the Synthetic and Taxi datasets. OmniAnomaly shows worse AUC scores on most datasets in comparison with the other two models. Different from (Meng et al., 2017), where all possible thresholds over the predicted anomaly scores are traversed and only the threshold with the best \(F1\) score is reported, the AUC score reflects the more general quality of the anomaly scores over multiple thresholds.

#### 5.2.2. Parameter sensitivity

The hidden layer size \(m\) and the number of prototypes \(k\) are the two major hyperparameters in ProtoAD. In this section, we examine the performance sensitivity to those two parameters. For each dataset, we try \(m\in[10,50,100,200,400,600,800]\) and \(k\in[0,5,10,20,30,50]\), where \(k=0\) reduces ProtoAD to EncDecAD. Heatmaps of the AUC scores on the real-world datasets are shown in Figure 2. The results indicate that the model is sensitive to neither the hidden size nor the number of prototypes, as long as the hidden size is large enough to capture all information in the data windows. However, the number of prototypes \(k\) should not be set too large; otherwise, the model tends to learn redundant prototypes (see Figure 4).

#### 5.2.3. Latent space visualization

We investigate a visualization of the autoencoder hidden space in this section to understand how time series data windows are embedded and how prototypes of regular data are learned. We use UMAP (Hidskar et al., 2017) to reduce the high-dimensional latent representations to two dimensions. The result is visualized in Figure 4. Here, we set \(k=5\) for all datasets. The prototypes shown in the plots are learned during the training phase. The plotted regular and anomaly points are embedded from the test data. In the synthetic data, the regular data lie in two regions. Four prototypes are learned from the trough half (lower left) and one from the crest half (upper right). Since the anomalies are always generated at the beginning of the crest half, the anomalies lie nearer to the embedding of the regular crest data. In the real-world datasets, especially SMAP and MSL with their polluted training data, regular and abnormal data do not show clearly separated clusters, though the learned prototypes represent the major blocks of dense regions showing regular patterns. Specifically, the prototypes gather at the bottom right corner for SMD, while no prototype is in the larger upper cluster. A possible reason is that the high-dimensional server data contain many zero values. The model cannot summarize informative patterns from the training set, and slightly different regular patterns in the test data are mapped into a different region.

#### 5.2.4. Prototype-based explanation

Finally, we map the prototypes learned in the latent space back to the human-interpretable input space. Similar to (Hidskar et al., 2017; Krizhevsky et al., 2014), we map the prototypes back to the input space using the nearest training data embedding in the latent space, to prevent unrealistic reconstructions produced by the decoder. Moreover, each neighbor can only be used once, so every prototype is unique.
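A minimal NumPy sketch of this nearest-neighbor mapping (the names are ours, and the greedy assignment order, handling the prototype with the closest code first, is our assumption; the text only specifies that each neighbor is used once):

```python
import numpy as np

def prototypes_to_input_space(prototypes, latent_codes, windows):
    """Map latent prototypes (k, m) to real training windows (n, L, d).

    Each training window is used at most once, so every prototype maps to a
    unique window. Assumes k <= n.
    """
    # squared distances between every prototype and every latent code: (k, n)
    dists = ((prototypes[:, None, :] - latent_codes[None, :, :]) ** 2).sum(-1)
    mapping, used = {}, set()
    for j in np.argsort(dists.min(axis=1)):  # prototypes with closest codes first
        for i in np.argsort(dists[j]):       # nearest unused training window
            if i not in used:
                used.add(i)
                mapping[int(j)] = windows[i]
                break
    return mapping
```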
We visualize the prototypes learned in the one-dimensional datasets Taxi and Synthetic in Figure 3, with five prototypes (\(P1\) to \(P5\)) for each dataset. In Figure 3(a) and Figure 3(b), four similar prototypes (\(P1\), \(P2\), \(P4\), \(P5\)) show an increasing taxi usage pattern in the morning and a decline at night. \(P3\) can be seen as a delayed version of the other four, which is a weekend pattern. The light lines in the background are the regular (grey) and anomaly (red) sequences with the smallest distance to the corresponding prototypes in the latent space. Most of the regular patterns fit the assigned prototypes. A considerable number of both regular and anomaly sequences have the smallest latent space distance to \(P3\). Some of them visually fit better with other prototypes, since the distance comparison and prototype assignment do not take place directly in the input space but in the latent space. However, operating in the latent space is effective for long and high-dimensional sequences. Figure 3(b) depicts the explanation of anomaly patterns, namely how different the anomaly sequences are from their nearest prototype. Figure 3(c) shows the three regular and anomaly sequences (if available) assigned to each prototype. Since the point anomalies are always generated at the beginning of the crest half, all anomalies are assigned to the crest prototype \(P4\). For the high-dimensional datasets, we keep observing the prototypes in the latent space and leave the search for informative sub-input-space prototypes as future work. Similarly, we also plan to investigate the reduction of redundant prototypes (e.g., \(P1\), \(P2\), \(P3\), \(P5\) in Figure 3(c)).

\begin{table} \begin{tabular}{|c|c|c|c|} \hline & EncDecAD & ProtoAD & OmniAnomaly \\ \hline Synthetic & 0.50 & 0.54 & 0.95 \\ Taxi & 0.53 & 0.63 & 0.52 \\ SMAP & 0.41 & 0.40 & 0.49 \\ MSL & 0.73 & 0.73 & 0.50 \\ SMD & 0.95 & 0.95 & 0.51 \\ \hline \end{tabular} \end{table} Table 2. Anomaly detection performance

Figure 2. AUC scores under different parameter settings

#### 5.2.5. Efficiency comparison

Training the autoencoder with an extra prototype layer does not add much training expense. We compare the epoch training time between EncDecAD (\(k=0\)) and ProtoAD (\(k\in[5,10,20,30,50]\)) in Figure 5. As shown in the figure, there is no significant increase in training time for ProtoAD. On the contrary, due to its complex model structure, the epoch training time for OmniAnomaly is: Synthetic 32s, Taxi 39s, SMD 225s, MSL 888s and SMAP 3627s.

## 6. Conclusion and discussion

In this paper, we explored using prototypes to explain the reconstruction-based anomaly detection process. Specifically, we integrate recent end-to-end prototype learning into the LSTM-Autoencoder. We use the latent space representations in the autoencoder, which are not directly used in conventional reconstruction-based anomaly detection models. In our empirical evaluation, we found that adding a prototype learning step during the training of the autoencoder does not damage the performance of the autoencoder. The prototypes contribute to an intuitive understanding of the regular pattern state.
Although the prototypes learned on the two one-dimensional datasets are realistic and interpretable for humans, there are still two major problems to be solved. Firstly, the selection of the parameter \(k\) is tricky; pruning techniques could be applied to reduce the redundancy in the prototypes. Secondly, the prototypes of high-dimensional data can only be shown as full regular-state patterns, which makes it hard for humans to directly identify a small subset of dimensions of interest. In future work, we plan to investigate learning prototypes in subspaces.

Figure 3. Prototype visualization (blue) with assigned regular (grey) and anomaly (red) sequences

Figure 4. The UMAP visualization of the ProtoAD latent space

Figure 5. Efficiency analysis
2306.01528
Does it pay to optimize AUC?
The Area Under the ROC Curve (AUC) is an important model metric for evaluating binary classifiers, and many algorithms have been proposed to optimize AUC approximately. It raises the question of whether the generally insignificant gains observed by previous studies are due to inherent limitations of the metric or the inadequate quality of optimization. To better understand the value of optimizing for AUC, we present an efficient algorithm, namely AUC-opt, to find the provably optimal AUC linear classifier in $\mathbb{R}^2$, which runs in $\mathcal{O}(n_+ n_- \log (n_+ n_-))$ where $n_+$ and $n_-$ are the number of positive and negative samples respectively. Furthermore, it can be naturally extended to $\mathbb{R}^d$ in $\mathcal{O}((n_+n_-)^{d-1}\log (n_+n_-))$ by calling AUC-opt in lower-dimensional spaces recursively. We prove the problem is NP-complete when $d$ is not fixed, reducing from the \textit{open hemisphere problem}. Experiments show that compared with other methods, AUC-opt achieves statistically significant improvements on between 17 and 40 of 50 t-SNE training datasets in $\mathbb{R}^2$ and between 4 and 42 in $\mathbb{R}^3$. However, the gain generally proves insignificant on most testing datasets compared to the best standard classifiers. Similar observations are found for nonlinear AUC methods under real-world datasets.
Baojian Zhou, Steven Skiena
2023-06-02T13:28:53Z
http://arxiv.org/abs/2306.01528v1
# Does it pay to optimize AUC?

###### Abstract

The Area Under the ROC Curve (AUC) is an important model metric for evaluating binary classifiers, and many algorithms have been proposed to optimize AUC approximately. This raises the question of whether the generally insignificant gains observed by previous studies are due to inherent limitations of the metric or to the inadequate quality of optimization. To better understand the value of optimizing for AUC, we present an efficient algorithm, namely AUC-opt, to find the provably optimal AUC linear classifier in \(\mathbb{R}^{2}\), which runs in \(\mathcal{O}(n_{+}n_{-}\log(n_{+}n_{-}))\), where \(n_{+}\) and \(n_{-}\) are the numbers of positive and negative samples, respectively. Furthermore, it can be naturally extended to \(\mathbb{R}^{d}\) in \(\mathcal{O}((n_{+}n_{-})^{d-1}\log(n_{+}n_{-}))\) by calling AUC-opt in lower-dimensional spaces recursively. We prove the problem is NP-complete when \(d\) is not fixed, reducing from the _open hemisphere problem_. Experiments show that, compared with other methods, AUC-opt achieves statistically significant improvements on between 17 and 40 of 50 t-SNE training datasets in \(\mathbb{R}^{2}\) and between 4 and 42 in \(\mathbb{R}^{3}\). However, the gain generally proves insignificant on most testing datasets compared to the best standard classifiers. Similar observations are found for nonlinear AUC methods under real-world datasets.

## 1 Introduction

The Area Under the ROC Curve (AUC) [1] is an important model evaluation metric that can be applied to a wide range of learning tasks such as binary classification [1], bipartite ranking [13], and, recently, fairness learning [16, 17]. It is generally a more reliable quality measure than accuracy when the dataset is highly imbalanced, which is often the case in real-world problems. Multiple studies [15, 1] argue that optimizing classifiers for AUC may result in better classifiers than minimizing error rates. A wide variety of algorithms [14, 18, 16, 17, 18] have been proposed to optimize AUC approximately under different learning settings. Typically, these methods relax the original _nonconvex nondifferentiable_ objective to a convex or differentiable surrogate. Despite these advances, there exists no strong evidence that these algorithms generally perform better than standard classifiers. Empirical observations [1, 1] from previous work indicate generally minor and statistically insignificant gains on particular datasets. This vagueness leaves open the question of whether the observed results are due to inherent limitations of the metric or to the inadequate quality of optimization. To better understand the virtues of optimizing for AUC, we investigate it from both _computational_ and _algorithmic_ viewpoints. Although AUC optimization is often reported to be NP-hard for linear hypothesis classes [1, 12, 13], we show that it is polynomial-time solvable if the data dimension \(d\) is fixed. We also prove that it is NP-complete if \(d\) is not fixed in advance. The key idea of our proof is a reduction from the _open hemisphere problem_ [15]. Since the problem is polynomial-time solvable for fixed dimension, we present an efficient algorithm, namely AUC-opt, that provably optimizes AUC in \(\mathbb{R}^{2}\). A key observation is that given any \(n\) training samples on the plane, the number of "interesting" classifiers is at most \(\mathcal{O}\left(n_{+}n_{-}\right)\), where \(n_{+}\) and \(n_{-}\) are the numbers of positive and negative samples, respectively.
Inspired by the idea of the topological sweep [1], we calculate the AUC for the "minimal" slope once and then iterate through all other slopes in ascending order using only constant update time per iteration, yielding an \(\mathcal{O}(n_{+}n_{-}\log n)\) algorithm. Furthermore, our method can be naturally extended to \(\mathbb{R}^{d}\) in \(\mathcal{O}((n_{+}n_{-})^{d-1}\log(n))\) by calling AUC-opt in lower-dimensional spaces recursively. This algorithm depends exponentially on \(d\) and hence is impractical on large real-world datasets, where samples are usually high-dimensional. **Our goal here is not a general-purpose algorithm but to address whether the observed limitations of previous AUC optimizers result from inadequate optimization via convex surrogates or are an inherent result of the AUC objective criterion.** Doing such experiments requires exact optimization, as provided by AUC-opt, even if we are limited to small datasets in low dimensions. Fig. 1 presents a toy example as an illustration where there are significant improvements. To further validate AUC-opt, we conduct experiments on 50 real-world datasets projected onto both \(\mathbb{R}^{2}\) and \(\mathbb{R}^{3}\) using t-SNE [10]. Experiments comparing AUC-opt against seven linear classifiers show that AUC-opt achieves statistically significant improvements on between 17 and 40 of 50 t-SNE datasets at the training stage. To summarize, our main contributions are:

* For the first time, we prove the linear AUC optimization problem is NP-complete when \(n\) and \(d\) are arbitrary but polynomial-time solvable for fixed \(d\). A key to our proof is a reduction from the open hemisphere problem. Although the NP-hardness of the problem has often been reported, we have not identified a proof in the literature.
* We then present AUC-opt to find the provably optimal AUC linear classifier in \(\mathbb{R}^{2}\). It runs in \(\mathcal{O}(n_{+}n_{-}\log(n_{+}n_{-}))\), which is optimal under the _algebraic computation tree model_ [1]. It can be naturally extended to \(\mathbb{R}^{d}\) in \(\mathcal{O}((n_{+}n_{-})^{d-1}\log(n_{+}n_{-}))\) by decomposing the original problem into subproblems of the same type in lower-dimensional spaces and calling AUC-opt recursively.
* Experiments comparing AUC-opt against seven other classification methods show that AUC-opt achieves significant improvements on between 17 and 40 of 50 t-SNE training datasets in \(\mathbb{R}^{2}\) and on between 4 and 42 in \(\mathbb{R}^{3}\). However, the gain generally proves insignificant on most testing datasets compared to the best standard classifiers. These empirical results suggest that approximate AUC classifiers have room to improve.
* Similarly, empirical findings on nonlinear classifiers further suggest that the partial loss of significance of approximate AUC optimizers may be due to imperfect approximation, leaving room to improve current approximate algorithms.

### Related Work

**AUC optimization.** In seminal work, Cortes and Mohri [1] show that AUC is monotonically increasing with respect to accuracy and is equal to the accuracy when \(n_{+}=n_{-}\). Yet, because the variance is not zero, optimizing directly for AUC may still yield better AUC values than those of standard classifiers. Many AUC optimization methods have been proposed over the past years [23, 2, 1, 13, 14, 15, 16, 17, 18]. These approaches all focus on approximation due to the nonconvexity and nondifferentiability of the AUC objective. To avoid this, Yan et al.
[1] propose to replace the 0-1 objective by a sum of differentiable sigmoids so that a gradient descent-based method can be applied. Joachims [1] relaxes the problem to a convex one so that an SVM can be used (see also [1, 13]). A study by Rakotomamonjy [1] indicates that optimizing the SVM objective also tends to optimize AUC [1, 1]. More recently, methods for optimizing AUC have been studied under the online learning setting [13, 14, 15]. However, the performance gains found in these studies are insignificant, and there is a lack of comparison between AUC optimizers and standard methods. Different from these previous works, our goal is to optimize the AUC score without approximation.

**Bipartite ranking.** AUC optimization is closely related to the bipartite ranking problem [13, 14, 15, 16], where minimizing the pairwise misranking error is equivalent to maximizing the AUC score. For example, RankBoost [13], a popular ranking algorithm, implicitly optimizes AUC as proved in [1]. Kotlowski, Dembczynski, and Hullermeier [1] consider maximizing AUC as a minimization of the rank loss. Recently, Rudin and Wang [1] propose to directly optimize rank statistics by using mixed-integer programming. More works can be found in Menon and Williamson [1] and references therein.

**Computational complexity results.** Although the NP-hardness of the problem is often cited as folklore [14, 15], we have not identified a proof in the literature. Several NP-hardness results have previously been shown for both 0-1 loss classification and ranking [13, 14, 15, 16]. Cohen, Schapire, and Singer [17] show that finding the optimal ranking is NP-complete. Although Joachims [1] shows that AUC optimization can be reformulated as a classification problem, a proof of NP-hardness does not seem to follow naturally from it. Instead, we prove the NP-hardness by a reduction from the open hemisphere problem.

### Paper outline and notations

The remainder of this paper is organized as follows. We first present preliminaries in §2. The proof of NP-hardness of AUC optimization is given in §3. AUC-opt and its generalization to high-dimensional spaces are given in §4. We empirically evaluate AUC-opt and then conclude in §5 and §6, respectively.

Figure 1: The popular AUC classifier SVM-Perf [1] fails to find a decent AUC separator on an adversarial example (left), performing similarly to Logistic Regression (LR). The corresponding ROC curves and AUC scores (right) are shown for these and our AUC-opt, which beats SVM-Perf and LR by a large margin.

Throughout this paper, we restrict our attention to optimizing AUC in a linear hypothesis class \(\mathcal{H}\), i.e., \(\mathbf{f}\in\mathcal{H}:=\{\mathbf{w}:\mathbf{w}\in\mathbb{R}^{d}\}\). Given a set of \(n\) training examples \(\mathcal{D}:=\{(\mathbf{x}_{i},y_{i}):i\in\{1,2,\ldots,n\}\}\) where \(\mathbf{x}_{i}\in\mathbb{R}^{d}\) and \(y_{i}\in\{\pm 1\}\), we write \(\mathcal{D}\) as a union of \(\mathcal{D}_{+}\) and \(\mathcal{D}_{-}\), where \(\mathcal{D}_{+}\) is the set of positive samples written as \(\{(\mathbf{x}_{1}^{+},y_{1}^{+}),\ldots,(\mathbf{x}_{n_{+}}^{+},y_{n_{+}}^{+})\}\) and \(\mathcal{D}_{-}\) is the set of negative samples written as \(\{(\mathbf{x}_{1}^{-},y_{1}^{-}),\ldots,(\mathbf{x}_{n_{-}}^{-},y_{n_{-}}^{-})\}\). Clearly, \(n=n_{+}+n_{-}\) and \(\mathcal{D}=\mathcal{D}_{+}\cup\mathcal{D}_{-}\). \(x_{ij}\) denotes the \(j\)-th entry of the vector \(\mathbf{x}_{i}\), i.e., \(\mathbf{x}_{i}=[x_{i1},x_{i2},\ldots,x_{id}]^{\top}\).
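With this notation, the empirical AUC of a fixed linear scorer \(\mathbf{x}\mapsto\langle\mathbf{w},\mathbf{x}\rangle\) (formally defined in the next section) can be computed in \(\mathcal{O}(n\log n)\) via the equivalent Wilcoxon rank-sum statistic rather than by enumerating all \(n_{+}n_{-}\) pairs. A minimal NumPy sketch (our illustration, assuming no tied scores):

```python
import numpy as np

def empirical_auc(w, X, y):
    """Empirical AUC of the linear scorer x -> <w, x> in O(n log n).

    X: (n, d) samples; y: (n,) labels in {+1, -1}. Assumes no tied scores.
    """
    scores = X @ w
    ranks = np.empty(len(scores))
    ranks[np.argsort(scores)] = np.arange(1, len(scores) + 1)
    pos = y == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    # Wilcoxon rank-sum: sum of positive ranks minus its minimum possible value
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```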
## 2 Preliminaries

We first review the definition of the AUC statistic and give the problem formulation under the linear hypothesis class \(\mathcal{H}\). We then discuss why the problem is efficiently solvable when \(\mathcal{D}\) is separable.

**Definition 1** (AUC Statistic [1]).: _Let \((X,Y)\) and \((X^{\prime},Y^{\prime})\) be two pairs of random variables in \(\mathbb{R}^{d}\times\{\pm 1\}\) following the same unknown distribution. Denote the probability of an event \(A\) conditioned on \(\{Y=1,Y^{\prime}=-1\}\) as \(\mathbb{P}\{A|Y=1,Y^{\prime}=-1\}\). Given a score function \(\mathbf{f}:\mathbb{R}^{d}\mapsto\mathbb{R}\), the AUC statistic is_

\[\mathrm{AUC}(\mathbf{f}):=\mathbb{P}\{\mathbf{f}(X)>\mathbf{f}(X^{\prime})|Y=1,Y^{\prime}=-1\}.\]

Statistically, \(\mathrm{AUC}(\mathbf{f})\) is the probability that a randomly chosen positive sample is ranked by \(\mathbf{f}\) higher than a randomly chosen negative one (ties are broken lexicographically). It is equivalent to the Wilcoxon statistic [12]. Given \(\mathcal{D}\), our linear AUC optimization problem is empirically defined as follows.

**Problem 1** (Linear AUC Optimization (LAO)).: _Given the dataset \(\mathcal{D}\) and the hypothesis class \(\mathcal{H}:=\{\mathbf{w}:\mathbf{w}\in\mathbb{R}^{d}\}\), the LAO problem is to find a \(\mathbf{w}\in\mathcal{H}\) such that the empirical AUC score is maximized, that is_

\[(\text{LAO})\quad\mathbf{w}^{*}\in\operatorname*{arg\,max}_{\mathbf{w}\in\mathcal{H}}\sum_{i=1}^{n_{+}}\sum_{j=1}^{n_{-}}\frac{\mathbf{1}[\mathbf{w}^{\top}\mathbf{x}_{i}^{+}>\mathbf{w}^{\top}\mathbf{x}_{j}^{-}]}{n_{+}n_{-}}, \tag{1}\]

_where the indicator \(\mathbf{1}[A]=1\) if \(A\) is true and 0 otherwise._

Due to the non-convexity of the 0-1 loss in LAO, directly optimizing (1) is challenging. Notice that \(\mathbf{w}^{*}\) is not unique, as \(\alpha\mathbf{w}^{*}\) with \(\alpha>0\) is always an optimizer. Although (1) is hard to optimize, if \(\mathcal{D}\) is _linearly separable_, that is, there exists a \(\mathbf{w}\) such that for any \((\mathbf{x}_{i}^{+},y_{i}^{+})\in\mathcal{D}_{+}\), \(\left\langle\mathbf{w},\mathbf{x}_{i}^{+}\right\rangle\geq 0\), and for any \((\mathbf{x}_{j}^{-},y_{j}^{-})\in\mathcal{D}_{-}\), \(\left\langle\mathbf{w},\mathbf{x}_{j}^{-}\right\rangle<0\), then one can always find a \(\mathbf{w}\) such that \(\mathrm{AUC}(\mathbf{w})=1\) in polynomial time [10, 11] by using the Perceptron [10] or linear programming techniques. For example, the worst-case time complexity of an iterative reduction algorithm is \(\mathcal{O}(nr^{3})\), where \(r\leq\min(n,d+1)\) [10]. In the rest of this paper, we assume \(\mathcal{D}\) is not linearly separable. Although previous studies have claimed the NP-hardness of LAO [10, 11, 12], no previous literature proves it, even for the linear classifier case. This motivates us to prove the NP-hardness under the linear hypothesis.

## 3 NP-hardness of LAO

This section proves that the LAO problem under the linear hypothesis is NP-complete if \(n\) and \(d\) are not fixed, but polynomial-time solvable when \(d\) is fixed. We first introduce the open hemisphere problem and then prove NP-completeness by a reduction from it.
**Definition 2** (Open hemisphere).: _Given the unit sphere \(\mathcal{S}^{d-1}:=\{\mathbf{s}\in\mathbb{R}^{d}:\|\mathbf{s}\|_{2}=1\}\), the open hemisphere of \(\mathbf{w}\) is defined as the set \(\{\mathbf{s}\in\mathcal{S}^{d-1}:\left\langle\mathbf{w},\mathbf{s}\right\rangle>0\}\)._

**Problem 2** (Open hemisphere problem [13]).: _Let \(\mathcal{K}:=\{\mathbf{s}_{1},\mathbf{s}_{2},\ldots,\mathbf{s}_{t}\}\) be a subset of \(\mathbb{Q}^{d}\cap\mathcal{S}^{d-1}\), where \(\mathbb{Q}\) is the set of rationals. The open hemisphere problem is to find an open hemisphere that contains a largest subset of \(\mathcal{K}\), that is,_

\[\operatorname*{arg\,max}_{\mathbf{w}\in\mathbb{R}^{d}}|\{\mathbf{s}_{i}\in\mathcal{K}:\left\langle\mathbf{w},\mathbf{s}_{i}\right\rangle>0\}|.\]

To ease our analysis, we formulate the open hemisphere problem as a feasibility problem: given positive integers \(d\), \(m\) and a set \(\mathcal{K}\), does there exist a hyperplane \(\mathbf{w}\) such that at least \(m\) inequalities are satisfied, that is,

\[(\text{OH})\qquad|\{\mathbf{s}_{i}\in\mathcal{K}:\left\langle\mathbf{w},\mathbf{s}_{i}\right\rangle>0\}|\geq m? \tag{2}\]

**Lemma 1** (NP-completeness of OH [13]).: _Given positive \(d\), \(n\), \(m\), and \(\mathcal{K}\subseteq\mathbb{Q}^{d}\cap\mathcal{S}^{d-1}\), the feasibility of the OH problem defined in (2) is NP-complete when both \(n\) and \(d\) are not fixed._

The above lemma establishes that OH is NP-complete. Based on this lemma, we have the following main theorem.

**Theorem 1** (NP-completeness of LAO).: _Consider the linear AUC optimization problem defined in Problem 1. If \(\mathcal{D}\) is not linearly separable, LAO is NP-complete when both \(n\) and \(d\) are arbitrary._

Before proving Thm. 1, we first reformulate the LAO problem as a feasibility problem and then show that this feasibility problem is NP-complete.

**Problem 3** (The feasibility problem of LAO).: _Given a finite dataset \(\mathcal{D}\), a set of linear classifiers \(\mathcal{H}\), and positive integers \(d\), \(t\) as input, the feasibility problem of LAO asks: does there exist \(\mathbf{w}\in\mathcal{H}\) such that_

\[\sum_{i=1}^{n_{+}}\sum_{j=1}^{n_{-}}\mathbf{1}[\mathbf{w}^{\top}\mathbf{x}_{i}^{+}>\mathbf{w}^{\top}\mathbf{x}_{j}^{-}]\geq t? \tag{3}\]

_Proof Sketch._1 Without loss of generality, let us assume the problem is posed over \(\mathbb{Q}^{d}\). To prove NP-completeness, we only need to show that the feasibility of LAO defined in Problem 3 is both in NP and NP-hard, by a reduction from OH. First of all, Problem 3 is in NP: given any \(\mathbf{w}\in\mathcal{H}\), one can find a polynomial-time verifier that finishes in \(\mathcal{O}(n_{+}n_{-}d)\) time, and each certificate has polynomial length in the input.

Footnote 1: A detailed proof is in the supplementary.

To show Problem 3 is NP-hard, given any instance of the OH problem, the goal is to prove that an instance of (3) can solve it. To do this, we construct the training dataset \(\mathcal{D}\) so that an instance of Problem 3 can be defined. Notice that one can rewrite the vectors in \(\mathcal{K}\) and construct new vectors \(\mathbf{x}_{i}^{+}\) and \(\mathbf{x}_{1}^{-}\) as follows:

\[\mathbf{s}_{1}=\underbrace{\left(\mathbf{s}_{1}+\mathbf{x}_{1}^{-}\right)}_{\mathbf{x}_{1}^{+}}-\mathbf{x}_{1}^{-},\ldots,\mathbf{s}_{t}=\underbrace{\left(\mathbf{s}_{t}+\mathbf{x}_{1}^{-}\right)}_{\mathbf{x}_{t}^{+}}-\mathbf{x}_{1}^{-}. \tag{4}\]
The set of training labels is constructed such that \(y_{1}^{+},y_{2}^{+},\ldots,y_{t}^{+}\) are all ones and \(y_{1}^{-}=-1\). Combining this with the equations in (4) provides a dataset \(\mathcal{D}=\{(\mathbf{x}_{1}^{+},y_{1}^{+}),\ldots,(\mathbf{x}_{t}^{+},y_{t}^{+}),(\mathbf{x}_{1}^{-},y_{1}^{-})\}\). The left two figures illustrate this reduction, where the top figure is a sphere and each positive sign represents an \(\mathbf{s}_{i}\), while the negative sign is \(\mathbf{x}_{1}^{-}=\mathbf{0}\). The sphere contains 14 points, which correspond to 14 inequalities on the left-hand side of (3). The normal \(\mathbf{w}\) defines a hyperplane which corresponds to \(t=8\) on the right-hand side of (3). The bottom figure shows an AUC curve, where TPR and FPR are the true positive rate and false positive rate, respectively. It indicates that there exists \(\mathbf{w}\) such that the number of inequalities that can be satisfied is at least \(m\). This transformation and checking procedure can be done in polynomial time. Therefore, the answer to the instance of LAO is affirmative if and only if the answer to the instance of OH is affirmative; hence, the problem is NP-hard.

## 4 Proposed methods for LAO

In this section, we first present a trivial method in \(\mathbb{R}^{2}\) and then propose AUC-opt, inspired by _topological sweeping_. We then extend AUC-opt to \(\mathbb{R}^{d}\) by projecting high-dimensional problems onto low-dimensional ones.

### A trivial method

Notice that every pair of training samples defines a supporting line that separates the rest of the training samples, and the number of these interesting lines is at most \(\mathcal{O}(n^{2})\). Given any two samples \(\mathbf{x}_{i}:=[x_{i1},x_{i2}]^{\top}\) and \(\mathbf{x}_{j}:=[x_{j1},x_{j2}]^{\top}\), the slope of the line defined by \(\mathbf{x}_{i}\) and \(\mathbf{x}_{j}\) is \(m=-(x_{i2}-x_{j2})/(x_{i1}-x_{j1})\). Hence, one can obtain an algorithm that runs in \(\mathcal{O}(n^{3}\log n)\) time and takes \(\mathcal{O}(n^{2})\) space by using the following two steps: 1) identify all \(n(n-1)\) slopes \(m\); and 2) for each slope \(m\), let \(\mathbf{w}:=[m,1]^{\top}\) and then calculate \(\mathrm{AUC}(\mathbf{w})\) by using an \(\mathcal{O}(n\log n)\) algorithm [11]. \(\mathbf{w}^{\star}\) is the one that gives the highest \(\mathrm{AUC}(\mathbf{w})\) value.

### AUC-opt

However, the above trivial method can be significantly improved via two successive algorithmic improvements. The first is that the number of interesting slopes is at most \(\mathcal{O}(n_{+}n_{-})\), by noticing that training pairs with the same labels are not interesting. The second improvement is inspired by the _topological sweep_ [1]: we can save a factor of \(n\) in running time by sorting all slopes once; that is, _we only need to calculate the AUC score once for the minimal slope and then sweep over all the rest of the slopes in ascending order._ The description of AUC-opt is presented in Algo. 1. There are three main steps: 1) obtain all possible slopes and store them in \(S\) (L1 to L5); 2) calculate the AUC score of the minimal slope (L7 to L9); and 3) update \(\mathbf{w}\) and its AUC score (L10 to L21). The critical part is the update rule for sweeping \(\mathbf{w}\). More specifically, at each iteration (from L10 to L21), the change of the AUC score lies in \(\{0,\pm 1/(n_{+}n_{-})\}\) when "sweeping" \(\mathbf{w}\) (L10 - L13). We illustrate this procedure in the accompanying figure, where red plus signs are positives while blue bar signs are negatives. Let \(\mathbf{w}=[s,1]^{\top}\), \(\mathbf{w}_{-}=[s-\epsilon,1]^{\top}\), and \(\mathbf{w}_{+}=[s+\epsilon,1]^{\top}\). The increase of the slope from \(\mathbf{w}_{-}\) to \(\mathbf{w}_{+}\) only changes the ordering of two samples, which corresponds to a change of magnitude \(|1/(n_{+}n_{-})|\) in \(\mathrm{AUC}(\mathbf{w})\).
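The slope-enumeration idea can be sketched compactly. For clarity, the following NumPy sketch evaluates one representative slope per cell with an \(\mathcal{O}(n\log n)\) AUC routine instead of the constant-time sweep update of Algo. 1, so it runs in \(\mathcal{O}(n_{+}n_{-}\cdot n\log n)\) overall; the names are ours, and tied scores and all-vertical difference vectors are ignored:

```python
import numpy as np

def auc_of_slope(s, X, y):
    # AUC of the scorer x -> s * x1 + x2, i.e., w = [s, 1]; assumes no ties
    scores = s * X[:, 0] + X[:, 1]
    ranks = np.empty(len(scores))
    ranks[np.argsort(scores)] = np.arange(1, len(scores) + 1)
    pos = y == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def auc_opt_2d(X, y):
    """Exact AUC maximization over linear classifiers in the plane."""
    P, N = X[y == 1], X[y == -1]
    dx = (P[:, 0][:, None] - N[:, 0][None, :]).ravel()
    dy = (P[:, 1][:, None] - N[:, 1][None, :]).ravel()
    # sorted critical slopes; assumes at least one non-vertical pair difference
    slopes = np.unique(-dy[dx != 0] / dx[dx != 0])
    # one candidate strictly inside each cell between consecutive slopes,
    # plus one below the smallest and one above the largest slope
    cand = np.concatenate(([slopes[0] - 1.0],
                           (slopes[:-1] + slopes[1:]) / 2.0,
                           [slopes[-1] + 1.0]))
    best_auc, best_w = -1.0, None
    for s in cand:
        a = auc_of_slope(s, X, y)
        # w = [s, 1] attains a; the reversed direction w = [-s, -1] attains 1 - a
        for auc, w in ((a, np.array([s, 1.0])), (1.0 - a, np.array([-s, -1.0]))):
            if auc > best_auc:
                best_auc, best_w = auc, w
    return best_auc, best_w
```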
The following theorem shows that the \(\mathbf{w}\) returned by Algo. 1 is optimal.

**Theorem 2**.: _Given the dataset \(\mathcal{D}:=\{(\mathbf{x}_{i},y_{i}):i\in\{1,2,\ldots,n\}\}\) where \(\mathbf{x}_{i}\in\mathbb{R}^{2}\) and \(y_{i}\in\{\pm 1\}\), and a collection of linear separators \(\mathcal{H}:=\{\mathbf{w}:\mathbf{w}\in\mathbb{R}^{2}\}\), the proposed AUC-opt solves the LAO problem (1) in \(\mathcal{O}(n_{+}n_{-}\log(n_{+}n_{-}))\). This time complexity is tight under the algebraic computation tree model._

Proof.: We first show that the \(\mathbf{w}\) returned by AUC-opt is optimal. The key of our proof is to show that Algo. 1 iterates over all possible AUC scores, by noticing that all lines with slopes between two consecutive critical slopes give the same AUC score. Let \(\mathbf{w}=[w_{1},w_{2}]^{\top}\) be any line in \(\mathbb{R}^{2}\). We assume that \(\mathbf{w}\) is a normal vector of two training samples \(\mathbf{x}_{i}^{+},\mathbf{x}_{j}^{-}\), that is, \(\mathbf{w}\) is given by \(\langle\mathbf{w},\mathbf{x}_{i}^{+}-\mathbf{x}_{j}^{-}\rangle=0\). The slopes of these normal vectors in \(\mathbb{R}^{2}\) can be calculated. Let the collection of all such slopes be \(\mathcal{S}:=\left\{-(x_{i2}^{+}-x_{j2}^{-})/(x_{i1}^{+}-x_{j1}^{-}):\mathbf{x}_{i}^{+},\mathbf{x}_{j}^{-}\in\mathcal{D}\right\}\). Sorting the collection of slopes \(\mathcal{S}\) partitions the space of slopes into \(n_{+}n_{-}+1\) parts. We consider two consecutive slopes in the sorted \(\mathcal{S}\) more carefully. Let us denote two consecutive sorted slopes as \(s_{1}\) and \(s_{2}\), which are associated with the two pairs \((\mathbf{x}_{i}^{+},\mathbf{x}_{j}^{-})\) and \((\mathbf{x}_{i^{\prime}}^{+},\mathbf{x}_{j^{\prime}}^{-})\), respectively. We only need to show that for all \(s\in(s_{1},s_{2})\), \([s,1]^{\top}\) yields an identical AUC score. To do so, all we need to show is that, given any \(\mathbf{x}_{i}^{+},\mathbf{x}_{j}^{-}\), every such \(\mathbf{w}\) scores the pair \(\mathbf{x}_{i}^{+},\mathbf{x}_{j}^{-}\) in the same order. In other words, given any \(\mathbf{w}\in\left\{[s,1]^{\top}:s\in(s_{1},s_{2})\right\}\), the quantity \(\mathbf{x}_{i}^{\top}\mathbf{w}-\mathbf{x}_{j}^{\top}\mathbf{w}\) always has the same sign. In the rest of the proof, we show this by carefully constructing a quantity function \(s(\lambda)\) as follows:

\[s(\lambda):=-\frac{x_{i2}^{+}-x_{j2}^{-}}{x_{i1}^{+}-x_{j1}^{-}}\lambda-\frac{x_{i^{\prime}2}^{+}-x_{j^{\prime}2}^{-}}{x_{i^{\prime}1}^{+}-x_{j^{\prime}1}^{-}}(1-\lambda),\]

where \(\lambda\in(0,1)\). To study the monotonicity of \(s(\lambda)\), we define another function \(h(\lambda):=s(\lambda)\left(x_{i1}-x_{j1}\right)+x_{i2}-x_{j2}\). Clearly, \(h(\lambda)\) is a non-decreasing function, noticing that \(h^{\prime}(\lambda)=s_{2}-s_{1}\geq 0\). We just need to show that \(h(\lambda)\) never vanishes for \(\lambda\in(0,1)\). Assume \(h(\lambda)=0\); then \(s(\lambda)=-(x_{i2}-x_{j2})/(x_{i1}-x_{j1})\). This is a contradiction, since there is no critical slope between \(s_{1}\) and \(s_{2}\).
Similarly, one can show that for any two consecutive slopes \(s_{1},s_{2}\) and all \(s\in(s_{1},s_{2})\), \(\mathbf{w}:=\left[-s,-1\right]^{\top}\) also defines the same AUC score. Since Algo. 1 iterates over all such lines, the best AUC score of the \(\mathbf{w}\) returned by AUC-opt is indeed optimal. AUC-opt finishes in \(\mathcal{O}(n_{+}n_{-}\log(n_{+}n_{-}))\) since the time complexity is dominated by sorting all \(n_{+}n_{-}\) slopes (L10). The tightness of the time complexity follows from Lemma 3.6.16 of Lee and Preparata (1984).

### Generalization to \(\mathbb{R}^{d}\)

When the problem dimension is \(d\geq 3\), inspired by Johnson and Preparata (1978), the general idea for solving the high-dimensional LAO problem is to decompose the original \(d\)-dimensional problem into several \((d-1)\)-dimensional subproblems. Notice that each hyperplane \(H(\mathbf{u})\) uniquely defines an interesting subspace, and there are at most \(n_{+}n_{-}\) such subspaces. We project the points onto \(H(\mathbf{u})\) and then solve the problem in the \((d-1)\)-dimensional subspace (changing the number of coordinates from \(d\) to \(d-1\)), recursively. Specifically, let the projection be defined as \(\mathbf{P}(\mathbf{x}):=\mathbf{x}-(\mathbf{x}^{\top}\cdot\mathbf{u}/\|\mathbf{u}\|^{2})\cdot\mathbf{u}\), where \(\mathbf{u}\) is the normal vector of \(H(\mathbf{u})\). The critical property of \(\mathbf{P}\) is that for any \(\mathbf{x}\in H(\mathbf{u})\), \(\mathbf{x}^{\top}\mathbf{P}(\mathbf{x}_{i}^{+}-\mathbf{x}_{j}^{-})=\mathbf{x}^{\top}(\mathbf{x}_{i}^{+}-\mathbf{x}_{j}^{-})\). Therefore, the inner products of the training samples with \(\mathbf{w}\) are the same as those of the projected training samples. Due to this inevitable recursion, the number of interesting hyperplanes depends exponentially on \(d\); hence, the time complexity of AUC-opt in \(\mathbb{R}^{d}\) is \(\tilde{\mathcal{O}}\left((n_{+}n_{-})^{d-1}\log(n_{+}n_{-})\right)\). We summarize this recursive procedure in Algo. 2. Due to space limitations, a detailed algorithm description is in the supplementary.

```
1: \(\mathcal{K}=\{(\mathbf{x}_{i}^{+}-\mathbf{x}_{j}^{-}):i\in\{1,\dots,n_{+}\},j\in\{1,\dots,n_{-}\}\}\)
2: if \(d=2\) then
3:   return \(\mathrm{AUC}_{cur},\mathbf{w}^{\prime}=\mathrm{AUC}\text{-}\mathrm{opt}(\mathcal{D})\)  \(\triangleright\) call Algo. 1
4: end if
5: for \(\mathbf{u}\in\mathcal{K}\) do
6:   \(\mathcal{P}=\left\{\left(\mathbf{x}-\frac{\mathbf{x}^{\top}\cdot\mathbf{u}}{\|\mathbf{u}\|^{2}}\cdot\mathbf{u},\,y\right):(\mathbf{x},y)\in\mathcal{D}\right\}\)  \(\triangleright\) project points of \(\mathcal{D}\) onto the hyperplane defined by \(\mathbf{u}\)
7:   \(\mathcal{P}^{\prime}=\) change_coordinates\((\mathcal{P})\)  \(\triangleright\) represent the projected points in \(d-1\) coordinates
8:   \(\mathrm{AUC}_{cur},\mathbf{w}^{\prime}=\mathrm{AUC}\text{-}\mathrm{opt}(\mathcal{P}^{\prime},d-1)\)
9:   \(\mathbf{w}=\) change_coordinates\((\mathbf{w}^{\prime})\)  \(\triangleright\) map the \(d-1\) coordinates of \(\mathbf{w}^{\prime}\) back to \(d\) coordinates
10:  if \(\mathrm{AUC}_{\mathrm{opt}}<\mathrm{AUC}_{\mathrm{cur}}\) then
11:    \(\mathbf{w}^{\prime}=\mathbf{w}\), \(\mathrm{AUC}_{\mathrm{opt}}=\mathrm{AUC}_{\mathrm{cur}}\)
12:  end if
13: end for
```

**Algorithm 2** \([\mathrm{AUC}_{\mathrm{opt}},\mathbf{w}]=\) AUC-opt\((\mathcal{D},d)\)
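A sketch of this recursion in NumPy, reusing `auc_opt_2d` from the previous sketch as the base case (our illustration; degenerate difference vectors are skipped, ties are ignored, and the orthonormal-basis construction via SVD stands in for the `change_coordinates` step):

```python
import numpy as np

def auc_opt_nd(X, y, d):
    """Recursive reduction of LAO in R^d to R^(d-1) subproblems (cf. Algo. 2)."""
    if d == 2:
        return auc_opt_2d(X, y)                     # base case (Algo. 1 sketch)
    P, N = X[y == 1], X[y == -1]
    best_auc, best_w = -1.0, None
    for u in (P[:, None, :] - N[None, :, :]).reshape(-1, d):
        if not np.any(u):
            continue                                 # coincident pair, skip
        nu = u / np.linalg.norm(u)
        Xp = X - (X @ nu)[:, None] * nu[None, :]     # project onto H(u)
        # orthonormal basis of H(u): the last d-1 right-singular vectors of nu^T
        B = np.linalg.svd(nu[None, :])[2][1:]        # shape (d-1, d)
        auc, w_low = auc_opt_nd(Xp @ B.T, y, d - 1)  # solve in d-1 coordinates
        if auc > best_auc:
            best_auc, best_w = auc, B.T @ w_low      # lift w back to R^d
    return best_auc, best_w
```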
**Theorem 3**.: _Given the dataset \(\mathcal{D}:=\{(\mathbf{x}_{i},y_{i}):i\in\{1,2,\dots,n\}\}\) where \(\mathbf{x}_{i}\in\mathbb{R}^{d}\) and \(y_{i}\in\{\pm 1\}\), and a collection of linear separators \(\mathcal{H}:=\{\mathbf{w}:\mathbf{w}\in\mathbb{R}^{d}\}\), there exists an algorithm that solves the LAO problem (1) exactly in \(\mathcal{O}\left((n_{+}n_{-})^{d-1}\log(n_{+}n_{-})\right)\)._

Proof.: The proof is in the supplementary.

## 5 Experiments

We evaluate AUC-opt on both \(\mathbb{R}^{2}\) and \(\mathbb{R}^{3}\) using t-SNE datasets. To confirm that AUC-opt produces the best possible linear AUC classifiers, we compare it with 7 other classifiers on the binary classification task at the training and testing stages. Furthermore, we compare the approximate AUC methods with other standard methods for nonlinear classifiers. _More results and experimental details, including data collection, baseline descriptions, and parameter tuning, are in the supplementary._2

Footnote 2: Our code can be found in [https://github.com/baojian/auc-opt](https://github.com/baojian/auc-opt)

### Datasets and experimental setup

**Datasets.** We collect 50 real-world datasets where the positive ratio (\(n_{+}/n\)) of most datasets is \(\leq 0.1\). These highly imbalanced datasets make the AUC optimization problem meaningful. To generate 2- and 3-dimensional samples, we project the samples of these datasets onto \(\mathbb{R}^{2}\) and \(\mathbb{R}^{3}\), respectively, using t-SNE [11] so that class patterns are conserved. Models use the projected points as training samples while keeping the labels unchanged.

**Experimental setup.** For each dataset, 50% of the samples are used for training and the rest for testing. All parameters are tuned by 5-fold cross-validation. Each dataset is randomly shuffled 200 times, and the reported results are averaged over the 200 trials. All methods have been tested on servers with Intel(R) Xeon(R) CPUs (2.30GHz), 64 cores, and 187G memory. For all methods that involve randomness, the random state has been fixed for the purpose of reproducibility.

**Baselines.** We consider the following baseline classifiers: 1) Logistic Regression (LR); 2) Balanced Logistic Regression (B-LR), where sample weights are adjusted inversely proportional to class frequencies for better performance on imbalanced datasets; 3) Support Vector Machine (SVM); 4) Balanced SVM (B-SVM), which uses the same strategy as B-LR; 5) SVM-Perf, proposed in Joachims (2005), whose goal is to minimize the AUC loss using an SVM-based method; 6) SPAUC, the Stochastic Proximal AUC maximization algorithm proposed in Lei and Ying (2019); and 7) SPAM, the Stochastic Proximal AUC Maximization algorithm proposed in Natole, Ying, and Lyu (2018).

### Results on t-SNE datasets

**Comparison of AUC scores.** Table 1 presents the comparison of AUC scores calculated on both training and testing datasets in \(\mathbb{R}^{2}\) and \(\mathbb{R}^{3}\). To compare AUC scores from different methods, for each pair of methods I and J on a specific dataset, we test whether I is significantly better than J in a statistical sense.
Important observations are: 1) AUC-opt achieves more significant gains over standard classifiers than the approximate AUC optimizers at the training stage. This confirms that the gaps between AUC-opt and the approximate AUC optimizers are significant on some datasets; 2) compared with SVM and LR, the balanced versions B-SVM and B-LR have better performance; the simple weighting strategy improves performance by adjusting the weights of the training samples;3 and 3) compared with the best standard classifier B-LR, AUC-opt produces significant gains on 17 training datasets, which reduces to 11 on the testing datasets. It is worse in \(\mathbb{R}^{3}\). This degradation happens mainly because the results of B-LR are tuned by adding regularization and hence generalize better, while regularization is not taken into consideration in our method. In \(\mathbb{R}^{3}\), AUC-opt beats B-LR, the best standard classifier, on only 4 datasets, meaning that the gain proves insignificant on most datasets.

Footnote 3: In the balanced version of a classifier, a class weight is added for each sample. Specifically, weight \(n/(2n_{+})\) is used for each positive sample, while weight \(n/(2n_{-})\) is used for each negative sample.

We also report the mean with the Friedman Rank and the variance of the AUC scores in Fig. 2. The Friedman Ranks and averaged AUC scores shown in Fig. 2 (a) and (b) present the superiority of AUC-opt in optimizing the AUC score. The variances of AUC shown in (c) and (d) indicate that the 0-1 objective optimization (AUC-opt) is more robust, while the SVMs are not. Yet, the performance of SVM-Perf is better than SVM but not B-SVM.

\begin{table} \begin{tabular}{l l c c c c c c c c c c c c c c c c} \hline \hline & & \multicolumn{8}{c}{Significance t-test (\(\alpha=0.05\)) on _training_} & \multicolumn{8}{c}{Significance t-test (\(\alpha=0.05\)) on _testing_} \\ & & SVM & B-SVM & LR & B-LR & SVM-Perf & SPAUC & SPAM & AUC-opt & SVM & B-SVM & LR & B-LR & SVM-Perf & SPAUC & SPAM & AUC-opt \\ \hline \multirow{8}{*}{\(d=2\)} & SVM & - & 0 & 1 & 1 & 3 & 0 & 1 & 0 & - & 0 & 0 & 0 & 3 & 0 & 1 & 1 \\ & B-SVM & 30 & - & 7 & 4 & 27 & 3 & 3 & 0 & 31 & - & 7 & 4 & 23 & 3 & 3 & 1 \\ & LR & 31 & 11 & - & 2 & 27 & 2 & 5 & 0 & 32 & 14 & - & 2 & 26 & 2 & 5 & 2 \\ & B-LR & 32 & 15 & 6 & - & 30 & 2 & 4 & 0 & 33 & 16 & 6 & - & 29 & 3 & 4 & 2 \\ & SVM-Perf & 16 & 2 & 0 & 0 & - & 0 & 1 & 0 & 16 & 2 & 0 & 1 & - & 1 & 2 & 2 \\ & SPAUC & 31 & 14 & 8 & 2 & 28 & - & 2 & 0 & 29 & 12 & 7 & 3 & 26 & - & 4 & 2 \\ & SPAM & 33 & 10 & 9 & 3 & 29 & 1 & - & 0 & 32 & 11 & 6 & 3 & 26 & 1 & - & 2 \\ & AUC-opt & **40** & **29** & **20** & **17** & **40** & **21** & **26** & - & **34** & **18** & **15** & **11** & **31** & **13** & **17** & - \\ \hline \multirow{8}{*}{\(d=3\)} & SVM & - & 0 & 0 & 0 & **0** & **0** & **0** & 0 & - & 0 & 0 & 0 & 1 & 1 & 0 & 1 \\ & B-SVM & 25 & - & 0 & 0 & 24 & 0 & **0** & 0 & 24 & - & 1 & 1 & 20 & 4 & 3 & 3 \\ & LR & 34 & 6 & - & 0 & 32 & 5 & 7 & 0 & 38 & 14 & - & 1 & 34 & 10 & 12 & **8** \\ & B-LR & 34 & 6 & 0 & - & 32 & 7 & 9 & 0 & 36 & 12 & 0 & - & 33 & 9 & 11 & **6** \\ & SVM-Perf & 1 & 0 & 0 & 0 & - & 0 & 0 & 0 & 1 & 0 & 0 & 0 & - & 1 & 2 & 3 \\ & SPAUC & 24 & 0 & 0 & 0 & 27 & - & 0 & 0 & 24 & 3 & 0 & 1 & 22 & - & 1 & 3 \\ & SPAM & 26 & 0 & 0 & 0 & 23 & 0 & - & 0 & 25 & 2 & 0 & 0 & 21 & 2 & - & 2 \\ & AUC-opt & **42** & **31** & **6** & **4** & **41** & **27** & **33** & - & **32** & **15** & 2 & 2 & **32** & **9** & **10** & - \\ \hline \hline \end{tabular} \end{table} Table 1: The comparison of AUC scores over 200 trials of 50 t-SNE datasets on both \(\mathbb{R}^{2}\) and \(\mathbb{R}^{3}\). Each cell (I, J) is the number of datasets where method I is significantly better than method J by a t-test with a significance level of 5%. Numbers in the red region are results where AUC optimizers beat standard classifiers, while numbers in the blue region are the reverse.

Figure 2: Mean and variance of AUC scores of eight classifiers over 50 t-SNE datasets in \(\mathbb{R}^{2}\).

**Run time.** The table below shows the comparison of the averaged run time (in seconds) over all datasets. The run time of the AUC optimizers is slower than that of the standard classifiers. This is because the objective of the AUC optimizers is a sum over sample pairs, which makes their cost \(\mathcal{O}(n^{2})\) in the number of samples.

\begin{table} \begin{tabular}{c|c} \hline Method & Run Time (s) \\ \hline SVM & \(0.1918\pm 0.4667\) \\ B-SVM & \(0.2862\pm 0.5205\) \\ LR & \(0.0785\pm 0.0489\) \\ B-LR & \(0.0781\pm 0.0491\) \\ SPAM & \(3.3675\pm 6.6170\) \\ SPAUC & \(3.3826\pm 6.7024\) \\ SVM-Perf & \(8.9185\pm 14.691\) \\ AUC-opt & \(3.7073\pm 7.4934\) \\ \hline \end{tabular} \end{table}

**Comparison of other metrics.** As presented in our supplementary, an interesting observation is that the AUC optimizers obtain better performance on balanced accuracy and F1-score than the more traditional classifiers. The results are consistent with the findings in Narasimhan and Agarwal (2013), where optimizing AUC is shown to be a good way to construct a balanced classifier. The performance of LR is good on AUC, but not on balanced accuracy and F1-score.

## Results on real-world datasets

We test both the approximate AUC optimizers and the standard classification methods on 50 real-world datasets. The AUC scores on testing are reported in Table 2.

Table 2: The comparison of testing AUC scores of 50 real-world datasets. The settings are the same as in Table 1.

First, Balanced Random Forest (B-RF) and Balanced SVM-RBF (B-SVM-RBF) prove the best overall on testing AUC scores, winning 31.92 and 31.15 datasets on average, respectively. This performance is consistent with the findings shown in both Fernandez-Delgado et al. (2014) and Couronne, Probst, and Boulesteix (2018). Furthermore, the relationship between RF and LR has been studied in Couronne, Probst, and Boulesteix (2018), where it is shown that RF can obtain much higher AUC scores than LR. Boosting-based methods such as AdaBoost4 and Gradient Boost (GB) also work well.

Footnote 4: We treat AdaBoost as an AUC-based method because theoretical findings indicate that AdaBoost is equivalent to RankBoost.

RankBoost is inferior to RF and B-RF, winning on only 9 and 8 datasets, respectively. The performances of AdaBoost and RankBoost prove competitive with each other, which has been theoretically justified in Rudin and Schapire (2009). Interestingly, however, RankBoost still outperforms AdaBoost on 35 datasets at the testing stage. Gradient Boost (GB) wins more datasets than RankBoost. Generally speaking, the two nonlinear AUC optimizers are inferior to these popular nonlinear standard classifiers. This clearly suggests that approximate AUC optimizers may not lead to the best AUC performance and hence have room to improve. All linear methods lose on average to the non-linear ones.

## 6 Conclusion and future work

Our complexity results show that linear AUC optimization is NP-complete, via a reduction from the open hemisphere problem. It remains interesting to prove hardness results for the other hypothesis classes mentioned in Ben-David, Eiron, and Long (2003). We then present an optimal method, AUC-opt, that is both time- and space-efficient for optimizing AUC in \(\mathbb{R}^{2}\). We demonstrate that it can be naturally extended to \(\mathbb{R}^{d}\) with a total cost of \(\mathcal{O}((n_{+}n_{-})^{d-1}\log(n_{+}n_{-}))\).
Our empirical results suggest that, to justify the objective of optimizing AUC, more effort may be needed to improve the optimization quality of AUC optimizers. AUC-opt is impractical for real-world datasets since its time complexity grows exponentially with the dimension \(d\). However, it remains interesting to see whether more efficient algorithms exist for higher but still moderate dimensionality. One potential direction is to use branch and bound as explored in Nguyen and Sanner (2013). It is also interesting to compare our method with Rudin and Wang (2018), a recently proposed method that directly optimizes a rerank statistic. \begin{table} \begin{tabular}{c|c} \hline Method & Run Time \\ \hline SVM & \(0.1918\pm 0.4667\) \\ B-SVM & \(0.2862\pm 0.5205\) \\ LR & \(0.0785\pm 0.0489\) \\ B-LR & \(0.0781\pm 0.0491\) \\ SPAM & \(3.3675\pm 6.6170\) \\ SVMU & \(3.3826\pm 6.7024\) \\ SVM-Perf & \(8.9185\pm 14.691\) \\ AUC-opt & \(3.7073\pm 7.4934\) \\ \hline \end{tabular} \end{table} Averaged run time (in seconds) over all datasets. Table 2: The comparison of testing AUC scores of 50 real-world datasets. The settings are the same as in Table 1. ## Acknowledgments The authors would like to thank the anonymous reviewers for their valuable comments. The work of Baojian Zhou is partially supported by startup funding from Fudan University. Steven Skiena was partially supported by NSF grants IIS-1926781, IIS-1927227, IIS-1546113, OAC-191952, and a New York State Empire Innovation grant.
2308.01084
Data-Driven Identification of Quadratic Representations for Nonlinear Hamiltonian Systems using Weakly Symplectic Liftings
We present a framework for learning Hamiltonian systems using data. This work is based on a lifting hypothesis, which posits that nonlinear Hamiltonian systems can be written as nonlinear systems with cubic Hamiltonians. By leveraging this, we obtain quadratic dynamics that are Hamiltonian in a transformed coordinate system. To that end, for given generalized position and momentum data, we propose a methodology to learn quadratic dynamical systems, enforcing the Hamiltonian structure in combination with a weakly-enforced symplectic auto-encoder. The obtained Hamiltonian structure exhibits long-term stability of the system, while the cubic Hamiltonian function provides relatively low model complexity. For low-dimensional data, we determine a higher-dimensional transformed coordinate system, whereas for high-dimensional data, we find a lower-dimensional coordinate system with the desired properties. We demonstrate the proposed methodology by means of both low-dimensional and high-dimensional nonlinear Hamiltonian systems.
Süleyman Yildiz, Pawan Goyal, Thomas Bendokat, Peter Benner
2023-08-02T11:26:33Z
http://arxiv.org/abs/2308.01084v2
# Data-Driven Identification of Quadratic Symplectic Representations of Nonlinear Hamiltonian Systems ###### Abstract We present a framework for learning Hamiltonian systems using data. This work is based on the lifting hypothesis, which posits that nonlinear Hamiltonian systems can be written as nonlinear systems with cubic Hamiltonians. By leveraging this, we obtain quadratic dynamics that are Hamiltonian in a transformed coordinate system. To that end, for given generalized position and momentum data, we propose a methodology to learn quadratic dynamical systems, enforcing the Hamiltonian structure in combination with a symplectic auto-encoder. The enforced Hamiltonian structure exhibits long-term stability of the system, while the cubic Hamiltonian function provides relatively low model complexity. For low-dimensional data, we determine a higher-dimensional transformed coordinate system, whereas for high-dimensional data, we find a lower-dimensional coordinate system with the desired properties. We demonstrate the proposed methodology by means of both low-dimensional and high-dimensional nonlinear Hamiltonian systems. **Keywords:** Cubic Hamiltonian, quadratic Hamiltonian systems, lifting principle for dynamical systems, structure-preserving model order reduction, symplectic auto-encoder **Novelty statement**: * Inspired by quadratic lifting, allowing to rewrite nonlinear systems as quadratic systems in a lifted coordinate system, we discuss a lifting principle for nonlinear Hamiltonian systems. * We propose a data-driven approach to learning a quadratic Hamiltonian coordinate system by means of symplectic auto-encoders so that * the dynamics in the learned coordinate system can be given by a quadratic system, and * the underlying Hamiltonian function is cubic. * For high-dimensional data, we discuss learning a reduced coordinate system so that the above goals are achieved. This then aligns with non-intrusive model-order reduction by nonlinear projection. * By means of several examples, including high-dimensional ones, we demonstrate the proposed methodology. ## 1 Introduction Hamiltonian dynamics are ubiquitous as a powerful mathematical tool in modeling complex physical dynamical systems [1]. Classically, they are used in topics ranging from celestial mechanics [2] over fluid mechanics [3] to Schrödinger equations in quantum mechanics [4]. The coordinates in which Hamiltonian systems operate are split into generalized positions and momenta, which need to be identified from given data in order to fit a physical model to observations. The construction of models that can accurately capture and predict the dynamics of highly complex systems has been of interest for several decades, if not centuries; see, e.g., [5] and references therein. Recently, the powerful approximation capabilities of neural networks have brought researchers in many fields closer to understanding complicated systems. Neural networks have been successfully studied for predicting complex dynamical systems [6, 7], improving turbulence models [8, 9], classifying time series [10], and studying differential equations (DEs) [11, 12]. To exploit the long-term stability properties of Hamiltonian systems, neural networks are used to learn the energy functions [13, 14, 15, 16] rather than the dynamical systems directly. 
In this work, we are interested in learning quadratic Hamiltonian systems explaining given trajectory data from two different perspectives: lifting transformations [17], and nonlinear symplectic model order reduction [18] with the transformation weakly enforced to be symplectic. In fact, we learn the quadratic Hamiltonian systems directly from data, without needing to resort to the original dynamical equations. In short, given data from a Hamiltonian system, we want to learn the dynamics in a structure-preserving way, while achieving low model complexity and having the option to reduce the dimension for high-dimensional data. This is achieved via structure-preserving auto-encoders and modeling of the dynamics with a quadratic Hamiltonian system. In [17], a unified approach, namely lifting transformations, is used to approximate general nonlinear systems. In the case where the dynamical system is known, one can manually design lifted variables. However, lifting the dynamical system does not necessarily lead to a Hamiltonian system. Therefore, we weakly force the lifting transformations to be symplectic by exploiting symplectic embeddings and strictly enforce the Hamiltonian structure of the dynamics equations. The second application of our approach lies in the dimensionality reduction of Hamiltonian systems. Learning reduced-order models for Hamiltonian systems comes with some practical challenges. Without enforcing preservation of the Hamiltonian structure in the reduced-order model, it can quickly lose accuracy [19]. One established approach to preserve the symplectic structure is by using linear symplectic projections, i.e., proper symplectic decomposition [19, 20], but for Hamiltonian systems with slow decay of the Kolmogorov-\(n\)-width, this approach might not be feasible. Furthermore, for non-linear Hamiltonian functions, hyper-reduction methods like the Symplectic Discrete Empirical Interpolation Method (SDEIM) [19] are needed for efficient computability of the reduced-order model. In [18], a reduction by a non-linear structure-preserving auto-encoder is studied, addressing this problem. We take a similar approach and use a structure-preserving auto-encoder to map data from a Hamiltonian system to a learned _quadratic_ Hamiltonian system, thereby reducing model complexity considerably. The simple quadratic structure allows for the direct learning of the Hamiltonian system, without having to learn the Hamiltonian function and without the need to calculate its gradient. For the purpose of learning reduced dynamics, similarly to model order reduction (MOR), we show that this quadratic Hamiltonian system can be of much lower dimension than the original full-order model. Learning the quadratic Hamiltonian system from data has the further advantage that no hyper-reduction methods are needed for nonlinear systems, and the reduced-order model can thus be efficiently computed. Furthermore, as our approach learns the reduced dynamics directly, we do not need to take the gradient through the auto-encoder to simulate the learned models. The recent preprint [21] can be seen as a complementary approach to our method for reducing the order of Hamiltonian systems. While we learn a quadratic Hamiltonian system with a general non-linear symplectic auto-encoder, in [21] two different versions of quadratic symplectic auto-encoders are studied, which are then used for model order reduction leading to a general non-linear Hamiltonian system. 
Moreover, we learn the reduced dynamics directly from data, while [21] studies model order reduction, i.e., resorting to the Hamiltonian of the full-order model. In future work, a combination of both approaches seems worthwhile. The paper is structured as follows. In Section 2, we introduce the necessary mathematical background to embed Hamiltonian systems in a structure-preserving way into a higher-dimensional space and define quadratic Hamiltonian systems. In Section 3, we describe the auto-encoder structure to lift Hamiltonian systems. In Section 4, we adapt the theory to learn low-dimensional quadratic representations of high-dimensional data in a structure-preserving way. In Section 5, we show the applicability of the approach, for low-dimensional systems in Subsection 5.1 and for structure-preserving reduction of high-dimensional systems in Subsection 5.2. Section 6 concludes the paper. Implementation details can be found in Appendix A. ## 2 Background In this section, we provide the necessary theoretical background needed for the derivation of our learning approach for Hamiltonian systems. ### 2.1 Hamiltonian Systems and Symplectic Embedding The governing equations of Hamiltonian systems are Hamilton's equations, namely \[\dot{x}(t)=J_{2n}\nabla_{x}\mathcal{H}(x(t))\in\mathbb{R}^{2n}, \tag{2.1}\] where \(x=(q,p)\in\mathbb{R}^{2n}\) with \(q\) and \(p\) being generalized positions and momenta, respectively, \[J_{2n}:=\begin{bmatrix}0&I_{n}\\ -I_{n}&0\end{bmatrix}\in\mathbb{R}^{2n\times 2n},\] and \(\nabla_{x}\) denotes the gradient with respect to \(x\). Moreover, we consider an initial condition \(x(0)=x_{0}=(q_{0},p_{0})\in\mathbb{R}^{2n}\). The Hamiltonian function \(\mathcal{H}\colon\mathbb{R}^{2n}\to\mathbb{R}\) describes the energy of the system and is preserved along the solution trajectories. Next, we discuss the definition of a symplectic embedding, which plays an important role in our later discussions. **Definition 1** (Symplectic Embedding for Vector Spaces).: _A symplectic embedding of \(\mathbb{R}^{2n}\) into \(\mathbb{R}^{2N}\) is a homeomorphism \(\psi\colon\mathbb{R}^{2n}\to\psi(\mathbb{R}^{2n})\subset\mathbb{R}^{2N}\) for which the Jacobian \(\mathrm{d}\psi_{x}\in\mathbb{R}^{2N\times 2n}\) fulfills_ \[(\mathrm{d}\psi_{x})^{T}J_{2N}\,\mathrm{d}\psi_{x}=J_{2n} \tag{2.2}\] _at every \(x\in\mathbb{R}^{2n}\)._ It is immediate to see that a symplectic embedding is a smooth embedding in the sense of differential geometry [22, Section 22, p. 568], as the Jacobian has full rank at every point. The Jacobian is therefore injective and \(\psi\) is an immersion. This furthermore implies \(N\geq n\). Therefore, a symplectic embedding is also called a _symplectic lifting_. **Proposition 1** (Equivalent Embedded System).: _Let \(\psi\colon\mathbb{R}^{2n}\to\mathbb{R}^{2N}\) be a symplectic embedding and define \(z_{0}:=\psi(x_{0})\in\mathbb{R}^{2N}\). 
Then, the system (2.1) is equivalent to the embedded system_ \[\dot{z}(t)=J_{2N}\nabla_{z}\mathcal{H}(\psi^{-1}(z(t))), \tag{2.3}\] _i.e., the solution of the differential equation fulfills \(z(t)=\psi(x(t))\) for all \(t\in[0,\infty)\)._ Proof.: For any \(z\in\psi(\mathbb{R}^{2n})\), it holds with the chain rule that \[J_{2N}\nabla_{z}\mathcal{H}(\psi^{-1}(z)) =J_{2N}\big{(}\mathrm{d}(\mathcal{H}\circ\psi^{-1})_{z}\big{)}^{T}=J_{2N}(\mathrm{d}\psi^{-1}{}_{z})^{T}\big{(}\mathrm{d}\mathcal{H}_{\psi^{-1}(z)}\big{)}^{T}\] \[=J_{2N}(\mathrm{d}\psi^{-1}{}_{z})^{T}\nabla_{x}\mathcal{H}(\psi^{-1}(z))=\mathrm{d}\psi_{x}J_{2n}\nabla_{x}\mathcal{H}(\psi^{-1}(z)).\] As \(z(t):=\psi(x(t))\) implies \(\dot{z}(t)=\mathrm{d}\psi_{x(t)}\dot{x}(t)=\mathrm{d}\psi_{x(t)}J_{2n}\nabla_{x}\mathcal{H}(x(t))\), the claim follows. As \(\psi\) is a symplectic lifting, the system for \(z\) is called a _symplectic lifting of the system for \(x\)_. ### 2.2 Quadratic Symplectic Representations There are many possibilities to construct a symplectic embedding of a nonlinear Hamiltonian system. However, in this work, we are seeking to identify a particular higher-dimensional or lifted space so that a quadratic system can describe the dynamics in the lifted space. Moreover, the Hamiltonian in the lifted space is a cubic polynomial function. To briefly describe the lifting procedure, we consider the system of ODEs \[\dot{x}=f(x(t)),\quad x\in\mathbb{R}^{n}. \tag{2.4}\] The quadratic lifting transformation [17, 23, 24, 25] can be obtained by defining a transformation \(z(t)=\psi(x(t))\in\mathbb{R}^{N}\) for \(N\geq n\) such that the transformed system (2.4) satisfies \[\dot{z}=\mathcal{A}+\mathcal{B}z+\mathcal{C}(z\otimes z)\in\mathbb{R}^{N}. \tag{2.5}\] We illustrate the quadratic lifting for nonlinear systems by means of a nonlinear oscillator example. **Example 1** (Nonlinear Oscillator).: _Consider the nonlinear (an-harmonic) oscillator [26] with the Hamiltonian \(\mathcal{H}(q,p)=\frac{p^{2}}{2}+\frac{q^{2}}{2}+\frac{q^{4}}{4}\). The associated governing equations for this oscillator are given by_ \[\dot{q} =p, \tag{2.6}\] \[\dot{p} =-(q+q^{3}).\] _We demonstrate the lifting transformation by introducing the variables \(w_{1}=q\), \(w_{2}=p\), and \(w_{3}=q^{2}\). With the new variable \(w_{3}\), the equations of motion for the oscillator (2.6) can be written as_ \[\dot{w}_{1} =w_{2}, \tag{2.7}\] \[\dot{w}_{2} =-(w_{1}+w_{1}w_{3}),\] \[\dot{w}_{3} =2w_{1}w_{2},\] _which is a quadratic system. Moreover, one can also define an inverse mapping from \((w_{1},w_{2},w_{3})\) to \((q,p)\). However, it is easy to note that the system in (2.7) is not a Hamiltonian system in canonical coordinates since it is odd-dimensional. We further note that even introducing new variables to make the lifted system (2.7) even-dimensional does not necessarily result in a Hamiltonian system._ _Notably, the theory of generating functions can be used to construct quadratic Hamiltonian systems. For a detailed overview of generating functions, we refer to the book [27]. To illustrate this for the nonlinear oscillator, we suppose \(\hat{p}=q^{2}\). Then, using a generating function of type 1, one can find \(F_{1}=-\hat{q}q^{2}\), so that \(p=-2\hat{q}q\), implying \(p=-2\hat{q}\hat{p}^{1/2}\) and \(q=\hat{p}^{1/2}\). 
The new Hamiltonian in the new variables becomes \(\hat{\mathcal{H}}=2\hat{q}^{2}\hat{p}+\frac{\hat{p}}{2}+\frac{\hat{p}^{2}}{4}\), which is cubic; hence, the underlying dynamics are given by a quadratic system._ Inspired by the above example, in this work, we seek to identify a symplectic space to lift to. The desired properties can be achieved when the lifted system (2.5) satisfies (2.3) with a symplectic lifting \(\psi\) fulfilling (2.2). For this, we first define quadratic Hamiltonian systems for our reference. **Definition 2** (Quadratic Hamiltonian System).: _A quadratic Hamiltonian system is a Hamiltonian system (2.1) for which the Hamiltonian function is cubic, i.e.,_ \[\mathcal{H}(x)=A^{T}x+B^{T}(x\otimes x)+C^{T}(x\otimes x\otimes x),\] _where \(A\in\mathbb{R}^{2n}\), \(B\in\mathbb{R}^{(2n)^{2}}\), \(C\in\mathbb{R}^{(2n)^{3}}\), and \(\otimes\) denotes the Kronecker product._ The simple structure of quadratic Hamiltonian systems allows enforcing the Hamiltonian condition directly onto the system, without having to compute the gradient of the Hamiltonian function. **Proposition 2**.: _A quadratic system of ODEs_ \[\dot{x}=\mathcal{A}+\mathcal{B}x+\mathcal{C}(x\otimes x)\in\mathbb{R}^{2n}, \tag{2.8}\] _where \(\mathcal{A}\in\mathbb{R}^{2n}\), \(\mathcal{B}\in\mathbb{R}^{2n\times 2n}\) and \(\mathcal{C}\in\mathbb{R}^{2n\times(2n)^{2}}\), is a quadratic Hamiltonian system if and only if \(J^{T}_{2n}\mathcal{B}\) is a symmetric matrix and there is a symmetric tensor \(\mathcal{T}\in\mathbb{R}^{2n\times 2n\times 2n}\) for which_ \[\mathcal{T}_{u}(x\otimes x)=J^{T}_{2n}\mathcal{C}(x\otimes x)\] _holds for all \(x\in\mathbb{R}^{2n}\), where \(\mathcal{T}_{u}\in\mathbb{R}^{2n\times(2n)^{2}}\) is the unfolding of \(\mathcal{T}\) by frontal slices._ Proof.: By definition, (2.8) is a quadratic Hamiltonian system if and only if there is a cubic function \(\mathcal{H}(x)=A^{T}x+B^{T}(x\otimes x)+C^{T}(x\otimes x\otimes x)\) such that \(\dot{x}=J_{2n}\nabla_{x}\mathcal{H}(x)\), which is equivalent to \[\nabla_{x}\mathcal{H}(x)=\nabla_{x}\left(A^{T}x+B^{T}(x\otimes x)+C^{T}(x\otimes x\otimes x)\right)=J^{T}_{2n}(\mathcal{A}+\mathcal{B}x+\mathcal{C}x\otimes x).\] Since there is a bijection between homogeneous polynomials and symmetric tensors [28, p. 6], the claim follows. For many smooth nonlinear systems, there exist guaranteed liftings which allow us to rewrite nonlinear systems as quadratic systems; see, e.g., [23, 24]. However, there is currently no established result ensuring the existence of a symplectic lifting for nonlinear Hamiltonian systems to higher dimensions where the dynamics can be represented by quadratic Hamiltonian systems. Exploring this aspect remains an intriguing theoretical endeavor for future research. In this work, however, we hypothesize the existence of such a system and focus on learning such a symplectic lifting/embedding by means of suitable optimization problems, which we discuss next. ## 3 Learning the Lifted Quadratic Symplectic Representation Here, we describe our methodology to learn a symplectic lifting that maps from a given canonical Hamiltonian system to a quadratic Hamiltonian system, which we visualize in Figure 3.1. The first ingredient is to define lifted coordinates \(z(t)\) using a classical auto-encoder loss as follows: \[\mathcal{L}_{\text{encoder}}=\|x(t)-\phi(\psi(x(t)))\|, \tag{3.1}\] where \(\psi(x(t))=z(t)=(\hat{q}(t),\hat{p}(t))\) and \(\phi(z(t))=\tilde{x}(t)=(q(\hat{q}(t),\hat{p}(t)),p(\hat{q}(t),\hat{p}(t)))\approx x(t)\). 
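As a concrete illustration of (3.1), the short PyTorch sketch below instantiates a toy encoder-decoder pair and evaluates the reconstruction loss over a batch; the layer sizes and architectures are placeholders of ours and do not reproduce the networks used in the paper (see Appendix A for those).

```python
import torch

# Toy encoder psi: R^{2n} -> R^{2N} and decoder phi: R^{2N} -> R^{2n},
# here with 2n = 2 and 2N = 4 (illustrative sizes only)
psi = torch.nn.Sequential(torch.nn.Linear(2, 16), torch.nn.Tanh(),
                          torch.nn.Linear(16, 4))
phi = torch.nn.Sequential(torch.nn.Linear(4, 16), torch.nn.Tanh(),
                          torch.nn.Linear(16, 2))

def encoder_loss(x):
    # Eq. (3.1): || x - phi(psi(x)) ||, averaged over the batch
    return torch.linalg.norm(x - phi(psi(x)), dim=-1).mean()

x = torch.randn(32, 2)   # a batch of (q, p) samples
print(encoder_loss(x))
```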
However, the mapping obtained through (3.1) does not necessarily yield a symplectic mapping. To get a symplectic map, we use (2.2) and define a symplectic loss as follows: \[\mathcal{L}_{\text{symp}}=\|(\mathrm{d}\psi_{x})^{T}J_{2N}\,\mathrm{d}\psi_{x}-J_{2n}\|. \tag{3.2}\] Furthermore, we assume that time derivatives of the states are accessible. Thus, we compute the time derivatives of the lifted state \(z\) using the chain rule. Hence, we add the following term to the loss function: \[\begin{split}\mathcal{L}_{\dot{z}\dot{x}}&=\big{\|}\mathrm{d}\psi_{x(t)}\dot{x}(t)-J_{2N}\nabla_{z}\mathcal{H}(\psi^{-1}(z(t)))\big{\|}\\ &=\big{\|}\mathrm{d}\psi_{x(t)}\dot{x}(t)-(\mathcal{A}+\mathcal{B}z(t)+\mathcal{C}z(t)\otimes z(t))\big{\|}\end{split} \tag{3.3}\] with \(z=\psi(x)\). Finally, to obtain a quadratic Hamiltonian system, we combine all the losses defined in (3.1)-(3.3). Hence, the total loss is a weighted sum of these loss functions, given by \[\mathcal{L}=\lambda_{1}\mathcal{L}_{\text{encoder}}+\lambda_{2}\mathcal{L}_{\text{symp}}+\lambda_{3}\mathcal{L}_{\dot{z}\dot{x}}, \tag{3.4}\] where \(\lambda_{\{1,2,3\}}\) are hyper-parameters. The details of the implementation and auto-encoders are given in Appendix A. We optimise all parameters in (3.4) at the same time. We remark that we do not enforce the homeomorphism property, but only enforce (2.2), i.e., the condition that the encoder is a symplectic immersion and therefore locally invertible, and that the encoder is invertible on the training data. Figure 3.1: The auto-encoder structure of the symplectic lifting method. Here, the encoder \(\psi:\mathbb{R}^{2n}\rightarrow\mathbb{R}^{2N}\) is weakly enforced to be a symplectic mapping and the quadratic system is enforced to be Hamiltonian. ## 4 Low-dimensional Quadratic Symplectic Representation of High-dimensional Data Thus far, we have discussed how nonlinear Hamiltonian systems can be lifted to higher-dimensional quadratic symplectic systems and how they can be learned by means of data. However, there are many Hamiltonian systems which are high-dimensional, especially those coming from partial differential equations. Furthermore, it is known that high-dimensional dynamic data often evolve in a lower-dimensional subspace. Therefore, in these cases, we aim to learn lower-dimensional coordinates for high-dimensional data so that they are not only symplectic but can also be used to describe the dynamics of the high-dimensional system, as depicted in Figure 4.1. However, there is a subtlety compared to the method discussed in the previous section. It is worth noting that (3.2) weakly enforces symplecticity in the case of symplectic lifting. On the other hand, for high-dimensional data, the quadratic system is of lower dimension than the original data; thus, we actually need to enforce that the _decoder_ \(\phi\), and not the encoder \(\psi\), of the auto-encoder is a symplectic embedding from the quadratic model to the original high-dimensional system. Since high-dimensional data, particularly those coming from partial differential equations, have a lot of spatial coherency, we make use of a deep convolutional auto-encoder (DCA), which is computationally efficient. Moreover, to enforce the symplecticity condition in the loss function, we use a weakly symplectic deep convolutional auto-encoder with essentially the same conditions as in [18, Section 3.3]. 
This means that in the symplectic reduction, instead of (3.2), we use the following symplectic loss term: \[\tilde{\mathcal{L}}_{\text{symp}}=\|(\mathrm{d}\phi_{z})^{T}J_{2N}\,\mathrm{d}\phi_{z}-J_{2n}\|. \tag{4.1}\] As we have switched the roles of \(n\) and \(N\) in the reduction case, (3.3) is replaced by \[\begin{split}\tilde{\mathcal{L}}_{\dot{z}\dot{x}}&=\big{\|}\mathrm{d}\psi_{x(t)}\dot{x}(t)-J_{2n}\nabla_{z}\mathcal{H}(\psi^{-1}(z(t)))\big{\|}\\ &=\|\mathrm{d}\psi_{x(t)}\dot{x}(t)-(\mathcal{A}+\mathcal{B}z(t)+\mathcal{C}z(t)\otimes z(t))\|.\end{split} \tag{4.2}\] The auto-encoder loss \(\mathcal{L}_{\text{encoder}}\) is given by (3.1), as in the lifting case. Hence, the total loss is calculated via \[\mathcal{L}=\lambda_{1}\mathcal{L}_{\text{encoder}}+\lambda_{2}\tilde{\mathcal{L}}_{\text{symp}}+\lambda_{3}\tilde{\mathcal{L}}_{\dot{z}\dot{x}}, \tag{4.3}\] which is then used to learn a suitable embedding. ## 5 Numerical Experiments In this section, we examine the performance of the proposed methodology in two scenarios: low-dimensional dynamical systems and high-dimensional dynamical systems. For the low-dimensional case, we investigate three different examples: the simple pendulum, an an-harmonic oscillator, and the Lotka-Volterra equations. For the high-dimensional case, we study the linear wave and nonlinear Schrödinger equations. All the experiments are done using PyTorch on a machine with an Intel(r) Core(tm) i5-12600K CPU and an NVIDIA RTX(tm) A4000 (16 GB) GPU. To preserve the symplectic structure after time discretization, we have used the implicit midpoint rule as time integrator. In the case of symplectic lifting, for all low-dimensional examples, we set the dimension of the latent space, in which the dynamics are quadratic and the cubic Hamiltonian is conserved, to four. In the case of symplectic reduction, we set the dimension of the latent space of the linear wave equation to four and of the nonlinear Schrödinger equation to two. All other hyperparameter settings and neural network architectures are listed in detail in Appendix A for each example. Figure 4.1: The auto-encoder structure of the symplectic reduction method. Here, the decoder \(\phi:\mathbb{R}^{2n}\rightarrow\mathbb{R}^{2N}\) is weakly enforced to be a symplectic mapping and the quadratic system is enforced to be Hamiltonian. ### 5.1 Low-dimensional Systems Here, we discuss learning dynamical systems using low-dimensional data by means of three examples. #### 5.1.1 Nonlinear Pendulum Our first example of low-dimensional dynamics is a frictionless pendulum. Pendulums are non-linear oscillators and Hamiltonian systems, making them challenging to learn solely from data. The Hamiltonian for the system can be given by \[\mathcal{H}(q,p)=2mgl(1-\cos(q))+\frac{l^{2}p^{2}}{2m}, \tag{5.1}\] where \(g\) represents the gravitational constant, \(l\) denotes the length of the pendulum, and \(m\) denotes the mass. For simplicity, we set the mass of the pendulum to \(m=1.0\), its length to \(l=1.0\), and the gravitational constant to \(g=0.5\). Consequently, we can express the governing equations that define the evolutions of \(p\) and \(q\) as follows: \[\begin{bmatrix}\dot{p}(t)\\ \dot{q}(t)\end{bmatrix}=\begin{bmatrix}-\sin(q(t))\\ p(t)\end{bmatrix}. \tag{5.2}\] To generate the training dataset, we consider initial conditions for the variables \(p\) and \(q\) within the range of \([-2,2]\times[-2,2]\), encompassing the transition from linear to nonlinear dynamics in the system. 
However, to avoid a complete circle described by the pendulum, we select initial conditions from this range with energy \(\mathcal{H}(q,p)<2\). We consider \(10\) random initial conditions and take \(50\) equidistant data points in the time interval \([0,10]\). We have pictorially shown the training data in Figure 5.1(a). Next, we learn latent variables \(\hat{p}\) and \(\hat{q}\) with our desired objective, which is that the dynamics of the latent variables can be given by a quadratic system with a cubic Hamiltonian. Moreover, the latent variables are learned by means of an encoder, and the quantities of interest, namely \(p\) and \(q\), are identified using a decoder, which maps the latent variables to \(p(\hat{q},\hat{p})\) and \(q(\hat{q},\hat{p})\). With the training configuration given in Appendix A, we first demonstrate the learned dynamics in the phase space for three random test initial conditions in Figure 5.1(b) for the pendulum example, where the figure shows that the learned system is stable and orbiting at the same energy level as the ground truth model. Furthermore, in Figure 5.3, we compare time-domain simulations of the identified model with the ground truth model for a random initial condition that is different from the training set. Figure 5.3 shows that the learned model is not only good at capturing the dynamics in a test case but also stable and accurate for long-time integration, on a time interval larger than the training interval \([0,10]\). In Figure 5.2, we plot the learned and canonical Hamiltonians, demonstrating that all Hamiltonians remain constant over time with minor oscillations. Notably, the learned Hamiltonian closely aligns with the canonical coordinates, even without the need for additional constraints in the optimization process, such as weakly enforcing the initial Hamiltonian value. Figure 5.1: Nonlinear pendulum: the plot (a) shows training data of the pendulum example in phase space, and the plot (b) shows a comparison of the learned model with the ground truth in phase space with three random initial conditions. #### 5.1.2 Lotka-Volterra Equations Our second example of a low-dimensional system is the Lotka-Volterra system [29]. The Lotka-Volterra model is a well-known mathematical model used to describe predator-prey population dynamics. This model has been extensively applied to study the population dynamics of diverse species across various ecosystems. Moreover, it is an example of a system with an underlying Hamiltonian structure, with Hamiltonian \[\mathcal{H}(q,p)=p-e^{p}+2q-e^{q}.\] To learn the dynamics of the Lotka-Volterra equations, we constructed a training set with 10 trajectories. These trajectories were simulated up to a time of \(T=4\), using a time-step size of \(\Delta t=0.2\). For this experiment, we generated trajectories within the energy range \([-4,4]\). After learning a suitable quadratic embedding with the set-up given in Appendix A, we plot the training data of the Lotka-Volterra equations in phase space in Figure 5.4(a). In this example, we focus on trajectories in phase space that do not complete a full orbit of the energy level. Furthermore, we demonstrate the learned dynamics in the phase space for a random initial value in Figure 5.4(b) for the Lotka-Volterra equations. The figure shows that the learned model is accurate even in terms of predicting the orbit level of random test initial conditions. 
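Before turning to the time-domain comparison, the following self-contained sketch (our own; the initial condition, step size, and fixed-point solver are illustrative choices, not the paper's training code) shows why the implicit midpoint rule used throughout Section 5 is a natural integrator here: applied to the Lotka-Volterra vector field \(\dot{x}=J_{2}\nabla\mathcal{H}(x)\), the Hamiltonian oscillates slightly but exhibits no secular drift.

```python
import numpy as np

def grad_H(x):
    q, p = x
    # H(q, p) = p - e^p + 2q - e^q (Lotka-Volterra in canonical form)
    return np.array([2.0 - np.exp(q), 1.0 - np.exp(p)])

def f(x):
    # Hamiltonian vector field xdot = J grad H, with J = [[0, 1], [-1, 0]]
    dq, dp = grad_H(x)
    return np.array([dp, -dq])

def implicit_midpoint_step(x, h, iters=50):
    # Solve x_new = x + h * f((x + x_new) / 2) by fixed-point iteration,
    # warm-started with an explicit Euler step
    x_new = x + h * f(x)
    for _ in range(iters):
        x_new = x + h * f(0.5 * (x + x_new))
    return x_new

H = lambda q, p: p - np.exp(p) + 2.0 * q - np.exp(q)
x, h = np.array([0.5, 0.5]), 0.2
energies = [H(*x)]
for _ in range(200):
    x = implicit_midpoint_step(x, h)
    energies.append(H(*x))
print(max(energies) - min(energies))   # small, bounded energy variation
```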
In Figure 5.5, we compare time-domain simulations between the learned and ground truth models for the Lotka-Volterra equations, along with the corresponding absolute error. The simulations were conducted using a random initial condition, distinct from the initial trajectories used in the training set. The results depicted in Figure 5.5 demonstrate a high level of agreement between the dynamics of the Lotka-Volterra equations and the ground truth model, even after the final training time \(T=4\). Figure 5.2: Nonlinear pendulum: A comparison of the Hamiltonian in canonical coordinates for the ground truth model \(\mathcal{H}(q,p)\), the learned Hamiltonian \(\tilde{\mathcal{H}}(\hat{q},\hat{p})\) in the latent space, and the difference between the ground truth model and the learned model in the original space \(\mathcal{H}(q(\hat{q},\hat{p}),p(\hat{q},\hat{p}))\) along time using a random test initial condition. Figure 5.3: Nonlinear pendulum: A comparison of the learned model with the ground truth model for a random test condition. In Figure 5.6, we present the learned and canonical Hamiltonians for the Lotka-Volterra equation. Evidently, all Hamiltonians remain constant over time with minor fluctuations. We observe that these fluctuations primarily arise from errors in the auto-encoder component, as the symplecticity condition is (weakly) applied to the encoder part but not the decoder part. Additionally, the fluctuations in the Hamiltonian error could be attributed to the training data, which is constructed from trajectories with a shorter time span compared to the pendulum example. The constant offset of the Hamiltonians corresponds to a different choice of energy null-level in the original space and the latent space, respectively. Since the Hamiltonian is a relative quantity, the overall performance of the learned model is linked to the error plot between the ground truth Hamiltonian and the learned Hamiltonian in the original space, where Figure 5.6 shows that they coincide. #### 5.1.3 Nonlinear Oscillator Our last low-dimensional example is a nonlinear (an-harmonic) oscillator with Hamiltonian \[\mathcal{H}(q,p)=\frac{p^{2}}{2}+\frac{q^{2}}{2}+\frac{q^{4}}{4}, \tag{5.3}\] where the natural frequency and the mass of the oscillator are considered to be unity. To learn dynamics from data, we initially generated 20 random trajectories which are simulated up to the final time \(T=4\) with step-size \(\Delta t=0.14\). The generated trajectories are in the energy range of \([-1,1]\) for this task. In Figure 5.7a, we plot the training data of the nonlinear oscillator in phase space. Having learned the desired embeddings, we next present a comparison of the learned model over three random initial points in Figure 5.7b, where the learned model captures the dynamics of the ground truth with good accuracy. Figure 5.4: Lotka–Volterra: plot (a) shows training data in phase space, and plot (b) shows a comparison of the learned model with the ground truth in phase space with five random initial test conditions. Figure 5.5: Lotka–Volterra: A comparison of time-domain simulation obtained using the learned model with the ground truth model for the Lotka-Volterra equations and using a random test initial condition. In Figure 5.8, we demonstrate the temporal evolution of the learned model, the ground truth model for the nonlinear oscillator, and the corresponding absolute error in the time domain for a randomly chosen initial condition. 
The figure shows that the dynamics are well captured over a long time horizon, exceeding the final training time \(T=4\). Furthermore, in Figure 5.9, we plot the learned and canonical Hamiltonians for the nonlinear oscillator, which demonstrates that all Hamiltonians remain constant over time with minor fluctuations, as seen in the previous examples. Figure 5.6: Lotka–Volterra: A comparison of the Hamiltonian in canonical coordinates for the ground truth model \(\mathcal{H}(q,p)\), the learned Hamiltonian \(\hat{\mathcal{H}}(\hat{q},\hat{p})\) in the latent space, and the difference between the ground truth model and the learned model in the original space \(\mathcal{H}(q(\hat{q},\hat{p}),p(\hat{q},\hat{p}))\) along time using a random test initial condition. Figure 5.7: Nonlinear oscillator: Plot (a) shows training data in phase space, and Plot (b) shows a comparison of the learned model with the ground truth in phase space with three random initial test conditions. Figure 5.8: Nonlinear oscillator: Comparison of the learned model with the ground truth model for the nonlinear oscillator in the time domain using a random initial condition. Figure 5.9: Nonlinear oscillator: A comparison of the Hamiltonian in canonical coordinates for the ground truth model \(\mathcal{H}(q,p)\), the learned Hamiltonian \(\hat{\mathcal{H}}(\hat{q},\hat{p})\) in the latent space, and the difference between the ground truth model and the learned model in the original space \(\mathcal{H}(q(\hat{q},\hat{p}),p(\hat{q},\hat{p}))\) along time using a random test initial condition. ### 5.2 High-dimensional Systems Next, we focus on learning low-dimensional models for high-dimensional data coming from high-dimensional systems. #### 5.2.1 Linear Wave Equation We begin by considering a simple linear wave equation of the form: \[\begin{split}& u_{tt}=cu_{xx},\\ & u(t_{0},x)=u^{0}(x),\ \ \ x\text{ in }\Omega,\end{split} \tag{5.4}\] where \(c\) is the transport velocity, and boundary conditions are set to be periodic. The wave equation is an example of a Hamiltonian PDE. By defining the variables \(p=u_{t}\) and \(q=u\), we can obtain the Hamiltonian form of the wave equation [30], which is given by \[\frac{\partial z}{\partial t}=\begin{bmatrix}0&1\\ -1&0\end{bmatrix}\nabla_{z}\mathcal{H},\quad z=\begin{bmatrix}q\\ p\end{bmatrix}, \tag{5.5}\] where the Hamiltonian is given as \[\mathcal{H}(u)=\frac{1}{2}\int_{\Omega}\left(cq_{x}^{2}+p^{2}\right)dx.\] Next, we discretize the Hamiltonian form of the wave equation and obtain the following semi-discrete Hamiltonian ODE system: \[\frac{d\mathbf{z}}{dt}=\mathbf{K}\mathbf{z}, \tag{5.6}\] where \[\mathbf{z}=\begin{bmatrix}\mathbf{q}\\ \mathbf{p}\end{bmatrix},\quad\mathbf{K}=\begin{bmatrix}0_{N}&I_{N}\\ cD_{xx}&0_{N}\end{bmatrix},\] \(D_{xx}\in\mathbb{R}^{N\times N}\) is the three-point central difference approximation of \(\partial_{xx}\), \(0_{N}\in\mathbb{R}^{N\times N}\) is a matrix of zeros, \(I_{N}\in\mathbb{R}^{N\times N}\) is the identity matrix, and \((\mathbf{q},\mathbf{p})\) are the discretized \((q,p)\). In this task, we focus on learning a single wave equation over a single trajectory. For this purpose, we set the initial condition to \(u^{0}(x)=\mathrm{sech}(x)\). For training purposes, we have generated data of the wave equation on the domain \(\Omega=[-5,5]\) up to time \(T=20\) with time-step size \(\Delta t=0.05\). We set the spatial dimension for the ground truth model of the wave equation (5.6) to \(2N=1024\) and the learned problem dimension to \(2n=4\). 
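To make the semi-discretization (5.6) concrete, the sketch below (grid size, domain length, and wave speed are arbitrary choices of ours, not the paper's settings) assembles the periodic three-point Laplacian \(D_{xx}\) and the block operator \(\mathbf{K}\), and then checks that \(J_{2N}^{T}\mathbf{K}\) is symmetric, i.e., that the linear system is Hamiltonian in the spirit of the condition in Proposition 2.

```python
import numpy as np

def wave_operator(N=64, c=1.0, L=10.0):
    """Assemble eq. (5.6): the periodic three-point central-difference
    Laplacian D_xx and the block operator K of the wave equation."""
    dx = L / N
    Dxx = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
           + np.diag(np.ones(N - 1), -1)) / dx**2
    Dxx[0, -1] = Dxx[-1, 0] = 1.0 / dx**2   # periodic wrap-around entries
    Z, I = np.zeros((N, N)), np.eye(N)
    return np.block([[Z, I], [c * Dxx, Z]])

N = 64
K = wave_operator(N)
Z, I = np.zeros((N, N)), np.eye(N)
J = np.block([[Z, I], [-I, Z]])   # canonical J_{2N}
S = J.T @ K                       # Hamiltonian check for the linear part
print(np.allclose(S, S.T))        # True: K defines a Hamiltonian system
```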
We compare the learned model with the ground truth in Figures 5.10(a) and 5.10(b), for the states \(q\) and \(p\), respectively. The figures show that the obtained model is stable and accurate over a long time horizon. Next, we examine the learned variables in phase space and the time domain in Figure 5.11, which shows that the learned variables are orbiting on one particular energy level in phase space and are stable in the time domain as well. #### 5.2.2 Nonlinear Schrödinger Equation Finally, we test the ability of our model to learn the nonlinear Schrödinger (NLS) equation as the last example of a high-dimensional problem. The NLS equation has various use cases, e.g., small-amplitude gravity waves on the surface of deep water with zero viscosity, the study of Bose-Einstein condensation, and the propagation of light in nonlinear optical fibers. Specifically, we look at the cubic Schrödinger equation, which is given [31] by \[\begin{split}& i\frac{\partial u}{\partial t}+\alpha u_{xx}+\beta|u|^{2}u=0,\\ & u(t_{0},x)=u^{0}(x),\qquad\qquad\quad x\text{ in }\Omega,\end{split} \tag{5.7}\] with periodic boundary conditions. In the NLS equation (5.7), the parameter \(\alpha\) is a non-negative constant, and the constant parameter \(\beta\) gives the focusing (negative values) or defocusing (positive values) regime. In this example, we have fixed the parameters to \(\alpha=\frac{1}{2}\), \(\beta=1\), and the domain is fixed to \(\Omega=[-10,10]\). To obtain the canonical Hamiltonian form of the NLS equation (5.7), we write the complex-valued solution \(u\) in terms of its real and imaginary parts as \(u=q+ip\). Figure 5.10: Wave equation: Comparisons of the position \(q\) and momenta \(p\) obtained using the learned model with the ground truth wave model (5.5). Figure 5.11: Wave equation: Learned variables in phase space and time domain. Then, the Hamiltonian of the NLS 
We have defined an embedding as the lifting of original data coming from nonlinear Hamiltonian systems using a symplectic transformation, resulting in quadratic systems that describe the dynamics in the lifted space, with a cubic function as the Hamiltonian. The symplectic structure of the dynamics can be enforced by using symplectic auto-encoders and symmetric tensors. Figure 12: Nonlinear Schrödinger equation: Comparisons of the position \(q\) and momenta \(p\) obtained using the learned model with the ground truth wave model (20). Figure 13: Nonlinear Schrödinger equation: Learned variables in phase space and time domain. This approach enables us to obtain a learned symplectic lifting. Additionally, for high-dimensional data, we discuss symplectic reduction to achieve a quadratic representation, leading to a low-dimensional quadratic Hamiltonian system. The advantage of this approach over structure-preserving model order reduction is that we directly learn the reduced dynamics fitting the data, eliminating the need for hyper reduction methods or taking gradients through the auto-encoder. We note that the proposed methodology does not require to know the full-order model in a discretized form. We have demonstrated the efficiency of the proposed methodology by means of several low-dimensional and high-dimensional examples, illustrating the preservation of the Hamiltonian, i.e., energy, and long-term stability in extrapolation settings. In our future work, we investigate the effect of noise on the performance of the methodology and propose suitable treatments to it, for example, tailoring the approach proposed in [32]. Additionally, extensions to discrete Hamiltonian systems, and parametric and externally controlled Hamiltonian systems would be valuable contributions. ## Appendix A Implementation Details Tables A.1 and A.2 contain all the necessary hyper-parameters for our illustrative examples. We set the hyper-parameters experimentally by monitoring the performance of the learned model on training data. For the symplectic lifting case, we have set the hyper-parameters \((\lambda_{1},\lambda_{2},\lambda_{3})\) to \((10^{-1},1,1)\) by monitoring all the losses to obtain a balanced decrease of all the losses simultaneously, while in the symplectic reduction case, the hyper-parameters \((\lambda_{1},\lambda_{2},\lambda_{3})\) are set to \((1,10^{-1},10^{-1})\) for the same goal. In order to deal with the inaccuracy of the reconstruction due to the structure of the auto-encoder, we have applied the penalisation: \[\mathcal{L}_{\text{Rec}}=0.5\|x(t)-\phi(\psi(x(t)))\|_{1},\] (A.1) where \(\|\cdot\|_{1}\) denotes the mean absolute error, averaged over all samples and dimensions. Similarly, we have penalized the parameters of \(\mathcal{L}_{ix}\) with the mean absolute error scaled with a hyper-parameter \(10^{-5}\). Finally, we used fixed decay in both the symplectic reduction and lifting cases, using the StepLR implementation in PyTorch. We experimentally fixed both the decay rate and the decay step by monitoring the decay of the total loss function. For the symplectic lifting case, we have used a Multi Layer Perceptron (MLP) architecture with skip connections and three hidden layer. For the symplectic reduction case, we used a similar deep convolutional network (DCA) structure as the one given in [18]. In Figure A.1, we give the details of the auto-encoder structure. 
\begin{table} \begin{tabular}{|c|c|c|c|} \hline **Parameters** & **Pendulum** & **Lotka-Volterra** & **Nonlinear oscillator** \\ & **example** & **example** & **example** \\ \hline Encoder layers [neurons] & [64, 64, 64] & [32, 32, 32] & [32, 32, 32] \\ \hline Lifted coordinate system dimension & 4 & 4 & 4 \\ \hline Learning rate & \(3\cdot 10^{-3}\) & \(3\cdot 10^{-3}\) & \(3\cdot 10^{-3}\) \\ \hline Batch size & 5 & 5 & 20 \\ \hline Activation function & selu & selu & selu \\ \hline Weight decay & \(10^{-5}\) & \(10^{-5}\) & \(10^{-5}\) \\ \hline Epochs & 5501 & 4501 & 3501 \\ \hline Tolerance & \(5\cdot 10^{-2}\) & \(1\cdot 10^{-2}\) & \(5\cdot 10^{-2}\) \\ \hline \end{tabular} \end{table} Table A.1: The table contains all the hyper-parameters to learn the dynamics of the low-dimensional examples. ## Funding Statement Süleyman Yildiz and Peter Benner are partially supported by the German Research Foundation (DFG) Research Training Group 2297 "MathCoRe", Magdeburg.
2306.06247
Online Learning with Set-Valued Feedback
We study a variant of online multiclass classification where the learner predicts a single label but receives a \textit{set of labels} as feedback. In this model, the learner is penalized for not outputting a label contained in the revealed set. We show that unlike online multiclass learning with single-label feedback, deterministic and randomized online learnability are \textit{not equivalent} even in the realizable setting with set-valued feedback. Accordingly, we give two new combinatorial dimensions, named the Set Littlestone and Measure Shattering dimension, that tightly characterize deterministic and randomized online learnability respectively in the realizable setting. In addition, we show that the Measure Shattering dimension characterizes online learnability in the agnostic setting and tightly quantifies the minimax regret. Finally, we use our results to establish bounds on the minimax regret for three practical learning settings: online multilabel ranking, online multilabel classification, and real-valued prediction with interval-valued response.
Vinod Raman, Unique Subedi, Ambuj Tewari
2023-06-09T20:43:19Z
http://arxiv.org/abs/2306.06247v4
# Online Learning with Set-Valued Feedback ###### Abstract We study a variant of online multiclass classification where the learner predicts a single label but receives a _set of labels_ as feedback. In this model, the learner is penalized for not outputting a label contained in the revealed set. We show that unlike online multiclass learning with single-label feedback, deterministic and randomized online learnability are _not equivalent_ even in the realizable setting with set-valued feedback. Accordingly, we give two new combinatorial dimensions, named the Set Littlestone and Measure Shattering dimension, that tightly characterize deterministic and randomized online learnability respectively in the realizable setting. In addition, we show that the Measure Shattering dimension tightly characterizes online learnability in the agnostic setting. Finally, we show that practical learning settings like online multilabel ranking, online multilabel classification, and online interval learning are specific instances of our general framework. ## 1 Introduction In the standard online multiclass classification setting, a learner plays a repeated game against an adversary over \(T\) rounds. In each round \(t\in[T]\), an adversary picks a labeled example \((x_{t},y_{t})\in\mathcal{X}\times\mathcal{Y}\) and reveals \(x_{t}\) to the learner. The learner observes \(x_{t}\) and then makes a (possibly randomized) prediction \(\hat{y}_{t}\in\mathcal{Y}\). Finally, the adversary reveals the true label \(y_{t}\) and the learner suffers the loss \(\mathbb{1}\left\{\hat{y}_{t}\neq y_{t}\right\}\) [2]. In many settings, however, there may not be a single correct label \(y_{t}\in\mathcal{Y}\), but rather, a _small collection_ of correct labels \(S_{t}\subset\mathcal{Y}\). For example, in online multilabel ranking with binary relevance feedback, the learner is tasked with predicting a permutation over \(K\) labels but only receives a bit string indicating which of the labels are relevant. Here, for any given bit string, there are multiple permutations that correctly rank relevant items above non-relevant items. Thus, if the learner only receives a bit string as feedback, they effectively only observe a _set_ of correct permutations. Motivated by online multilabel ranking and other natural learning problems, we study a variant of online multiclass classification where in each round \(t\in[T]\), the learner still predicts a single label \(\hat{y}_{t}\in\mathcal{Y}\), but the adversary reveals a set \(S_{t}\subset\mathcal{Y}\) of labels. The learner then suffers a loss if and only if \(\hat{y}_{t}\notin S_{t}\). Surprisingly, we show that under set-valued feedback, deterministic and randomized learnability are _not equivalent_ even in the realizable setting. This is in contrast to single-label feedback, where there is no separation between deterministic and randomized learnability in the realizable setting. To the best of our knowledge, this is the first natural online learning problem with such a separation. Recent work on online list classification by Moran et al. (2023) is closely related to our setting. However, they study the "flip" of our problem, where instead, the learner outputs a set of labels, but the adversary reveals a single label. In addition, Kakade et al. (2008) and Daniely and Helbertal (2013) study online multiclass learnability under _bandit_ feedback, where the learner outputs a single label, but only gets to observe their 0-1 loss. 
Daniely and Helbertal (2013) also consider the setting where the adversary can output a list of correct labels; however, the learner only observes whether their predicted label was in the set or not. This is in contrast to set-valued feedback, where the learner still gets to observe the set of correct labels even when it makes a mistake. Lastly, there is a growing literature on online multiclass learning with feedback graphs (van der Hoeven et al., 2021; Alon et al., 2015). In this setting, the learner predicts a single label, but observes the losses of a specific set of labels determined by an arbitrary directed graph. This setting differs from ours in the sense that in their model, there is still a single-label ground truth.
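To summarize the interaction model in code, here is a minimal sketch of one run of the set-valued feedback game and its cumulative 0-1 loss; the learner interface (`predict`/`update`) is a hypothetical placeholder of ours, not an API from the paper.

```python
def set_feedback_game(learner, rounds):
    """Play online classification with set-valued feedback.

    learner: any object exposing predict(x) -> label and update(x, S)
             (hypothetical interface, for illustration only).
    rounds:  iterable of (x_t, S_t) pairs chosen by the adversary,
             where S_t is the revealed set of correct labels.
    """
    mistakes = 0
    for x_t, S_t in rounds:
        y_hat = learner.predict(x_t)        # single-label prediction
        mistakes += int(y_hat not in S_t)   # loss 1{y_hat not in S_t}
        learner.update(x_t, S_t)            # the full set is observed
    return mistakes
```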
2305.10175
The Tiered Radio Extragalactic Continuum (T-RECS) simulation II: HI emission and continuum-HI cross-correlation
In this paper we extend the Tiered Radio Extragalactic Continuum Simulation (T-RECS) to include HI emission. The HI T-RECS model is based on the most recent HI mass function estimates, combined with prescriptions to convert HI mass to total integrated HI flux. It further models source size, morphology and kinematics, including rotational velocity and HI line width. The continuum T-RECS model is updated to improve the agreement with deeper number counts available at 150\,MHz. The model for star-forming galaxies (SFGs) is also modified according to the most recent indications of a star formation rate (SFR)--radio luminosity relation, which depends primarily on stellar mass rather than redshift. We further introduce prescriptions to associate an HI mass to the T-RECS radio continuum SFG and Active Galactic Nuclei (AGN) populations. This gives us a way to meaningfully associate counterparts between HI and continuum catalogues, thus building HI $\times$ continuum simulated observations. Clustering properties of the sources in both HI and continuum are reproduced by associating the galaxies to dark matter haloes of a cosmological simulation. We deliver a set of mock catalogues, as well as the code to produce them, which can be used for simulating observations and predicting results from radio surveys with existing and forthcoming radio facilities, such as the Square Kilometre Array (SKA)
Anna Bonaldi, Philippa Hartley, Tommaso Ronconi, Gianfranco De Zotti, Matteo Bonato
2023-05-17T13:00:52Z
http://arxiv.org/abs/2305.10175v2
The Tiered Radio Extragalactic Continuum (T-RECS) simulation II: HI emission and continuum-HI cross-correlation. ###### Abstract In this paper we extend the Tiered Radio Extragalactic Continuum Simulation (T-RECS) to include HI emission. The HI T-RECS model is based on the most recent HI mass function estimates, combined with prescriptions to convert HI mass to total integrated HI flux. It further models source size, morphology and kinematics, including rotational velocity and HI line width. The continuum T-RECS model is updated to improve the agreement with deeper number counts available at 150 MHz. The model for star-forming galaxies (SFGs) is also modified according to the most recent indications of a star formation rate (SFR)-radio luminosity relation, which depends primarily on stellar mass rather than redshift. We further introduce prescriptions to associate an HI mass to the T-RECS radio continuum SFG and Active Galactic Nuclei (AGN) populations. This gives us a way to meaningfully associate counterparts between HI and continuum catalogues, thus building HI \(\times\) continuum simulated observations. Clustering properties of the sources in both HI and continuum are reproduced by associating the galaxies to dark matter haloes of a cosmological simulation. We deliver a set of mock catalogues, as well as the code to produce them, which can be used for simulating observations and predicting results from radio surveys with existing and forthcoming radio facilities, such as the Square Kilometre Array (SKA). keywords: radio lines: galaxies, radio continuum: galaxies, galaxies: luminosity function, mass function, large-scale structure of Universe, software: simulations ## 1 Introduction The neutral gas component of galaxies can be traced by the 21-cm "spin-flip" transition of neutral hydrogen (HI), which can be detected over the radio continuum emission at a rest frequency of \(\nu_{\rm 21cm}\simeq 1.420\)\,GHz (Hellwig et al., 1970). So far, due to the faintness of the signal, most untargeted HI observations have probed the local Universe (e.g., HIPASS: Barnes et al. 2001; Wong et al. 2006; ALFALFA: Giovanelli et al. 2005; Haynes et al. 2018), with a few targeted fields to extend to slightly higher redshifts (AUDS: Xi et al. 2021; BUDHIES: Gogate et al. 2023). A new generation of facilities is allowing HI surveys that will greatly improve over their predecessors in terms of redshift range, survey area, sensitivity and spatial resolution: CHILES on the Jansky Very Large Array (JVLA, Fernandez et al., 2016), WALLABY and DINGO on the Australian SKA Pathfinder (ASKAP, Koribalski et al., 2020; Meyer, 2009), LADUMA and MIGHTEE on the SKA Precursor MeerKAT (Blyth et al., 2016; Maddox et al., 2021; Ponomareva et al., 2023) and the Apertif imaging surveys (Adams et al., 2022). These observational efforts will culminate with the planned HI observations of the Square Kilometre Array Observatory (SKAO; Staveley-Smith & Oosterloo, 2015). Interstellar gas and stars, and the link between them, are at the heart of galaxy formation and galaxy evolution studies. Together they account for the galaxy's baryonic content, and the former fuels the latter through bursts of star-formation activity. In this context, HI and continuum surveys combined are a particularly powerful tool, with the former tracing neutral gas and the latter tracing star formation (e.g. Maddox et al., 2016). In Bonaldi et al. 
(2019) (hereafter, T-RECS I) we delivered a code, T-RECS1, to produce mock radio continuum catalogues consistent with the most recent data and models, with the aim of predicting what upcoming and future radio continuum surveys could look like, and to optimise survey specifications to science goals. This work extends its scope to HI surveys, as well as continuum-HI combined studies. Footnote 1: [https://github.com/abonaldi/TRECS.git](https://github.com/abonaldi/TRECS.git) One way to produce mock HI catalogues would be to add HI properties to the radio continuum modelling presented in T-RECS I. However, this would produce catalogues selected in radio continuum rather than HI, which would not be adequate to simulate the untargeted HI surveys mentioned above. In light of this, we have implemented two independent modules: one producing catalogues selected in HI (described in Sec. 2) and one producing catalogues selected in radio continuum (updated from T-RECS I as described in Sec. 3). As a further step, those two outputs can be combined in a consistent HI \(\times\) continuum catalogue set as detailed in Sec. 4. This approach explicitly allows for sources in one catalogue either to have or not have a counterpart in the other, based on expected correlations between HI and radio continuum properties and on the respective selection functions. It also gives the option to produce either of the two simulations as stand-alone. This code architecture also easily supports extending to other wavelengths in the future (e.g. optical, infra-red). The issue of including realistic clustering into the simulation is discussed in Sec. 5. Finally, we draw our conclusions in Sec. 6. In this work, \(H_{0}=67\,{\rm km\,s^{-1}\,Mpc^{-1}}\), \(\Omega_{m}=0.32\), \(\Omega_{\Lambda}=0.68\), which are the Planck Collaboration VI (2020) \(\Lambda\)CDM best-fit parameters rounded to two significant figures. ## 2 HI model ### 2.1 HI flux Following Duffy et al. (2012), the total integrated HI flux of a source, \(f_{\rm HI}\), can be directly linked to the neutral hydrogen mass, \(M_{\rm HI}\). By adopting an optically-thin approximation for the HI, where self-absorption is negligible, they obtain \[f_{\rm HI}=\frac{M_{\rm HI}}{49.8}d_{\rm L}^{-2}\,{\rm Jy\,Hz} \tag{1}\] where \(d_{\rm L}\) is the luminosity distance of the source in Mpc and \(M_{\rm HI}\) is the HI mass in units of \(M_{\odot}\). This relation essentially converts between HI luminosity function and HI mass function, and it is yet another example of the intrinsic power of HI observations for probing galaxy physics. To generate T-RECS HI catalogues, we combined this relation with the best-fit \(z=0\) mass function from Jones et al. (2018): \[\Phi(M_{\rm HI})=\ln(10)\phi_{\star}(M_{\rm HI}/M_{\rm HI}^{\star})^{(\alpha+1)}\exp[-M_{\rm HI}/M_{\rm HI}^{\star}], \tag{2}\] with \(\log(M_{\rm HI}^{\star}/M_{\odot})=9.94\), \(\phi_{\star}=4.5\times 10^{-3}\,{\rm Mpc^{-3}\,dex^{-1}}\), \(\alpha=-1.25\). The redshift evolution of the HIMF is currently poorly constrained by the data, a situation that should drastically improve once the new-generation surveys are completed. Bera et al. (2022) and Paul et al. (2023) investigated the HIMF at \(z\sim 0.3\) with GMRT and MeerKAT data, respectively. They both found a significant evolution of both \(M_{\rm HI}^{\star}\) and \(\phi_{\star}\), which results in far fewer massive HI galaxies at \(z=0.3\) than at \(z=0\). In line with those results, we parameterize the evolution as \[\log(M_{\rm HI}^{\star}(z)) = \log(M_{\rm HI}^{\star})_{z=0}+C_{\rm evol}\times z. \tag{3}\] 
\tag{3}\] \[\log\phi_{\star}(z) = \log\phi_{\star,z=0}+\phi_{\rm evol}\times z. \tag{4}\] We determine the parameters \(C_{\rm evol}=-1.41\), \(\phi_{\rm evol}=1.55\), by imposing that, at \(z=0.32\), \(\log(M_{\rm HI}^{\star}/M_{\odot})=9.49\) and \(\phi_{\star}=1.4\times 10^{-2}\,{\rm Mpc}^{-3}\,{\rm dex}^{-1}\), which are the best-fit values of Paul et al. (2023). Given the lack of data on the HIMF at higher redshifts, we restrict the redshift range of the simulation to \(z=0\)–0.5, which sets the frequency range for HI surveys we can simulate to 1420.40–946.9 MHz. Fig. 1 shows the adopted HI mass function (black line) compared with a T-RECS realization (black symbols) for different redshifts. ### Source size and morphology There is a tight relation between HI mass and the diameter of the HI disk \(D_{\rm HI}\) in kpc, defined at an HI surface density \(\Sigma_{\rm HI}\) of \(1\,M_{\odot}\,{\rm pc}^{-2}\) (e.g., Broeils & Rhee, 1997; Verheijen & Sancisi, 2001; Swaters et al., 2002; Noordermeer et al., 2005; Wang et al., 2014; Ponomareva et al., 2016; Naluminsa et al., 2021), which appears to be independent of redshift in the so far explored range (Rajohnson et al., 2022). We adopt the recent relation by Naluminsa et al. (2021): \[\log M_{\rm HI}=(1.95\pm 0.03)\log D_{\rm HI}+(6.5\pm 0.04), \tag{5}\] which was derived by using a complete sample of 228 WHISP galaxies including all morphological types. This relation is in particularly good agreement with Broeils & Rhee (1997) and Wang et al. (2016), which have a similar morphological selection. It is steeper than Verheijen & Sancisi (2001); Swaters et al. (2002); Noordermeer et al. (2005), which focused on specific morphological types. Figure 1: HI mass function from Jones et al. (2018) with redshift evolution constrained by Paul et al. (2023, black lines), compared to the distribution of T-RECS HI masses for an HI catalogue of \(25\,{\rm deg}^{2}\) and flux limit \(f_{\rm HI}=1\,{\rm Jy\,Hz}\) (black symbols). In red we also show the distribution of the HI mass proxy obtained for a companion continuum simulation of \(25\,{\rm deg}^{2}\), \(f_{1.4{\rm GHz}}\geq 100\,{\rm nJy}\) (see Sec. 3.4). \(D_{\rm HI}\), obtained by inverting eq. (5), is finally converted from the physical size in kpc to an angular size in arcsec depending on the angular diameter distance of the source. For consistency with the continuum module, where the size of SFGs is in terms of a scale radius, the T-RECS catalogue contains the radius \(R_{\rm HI}=D_{\rm HI}/2\). To further complete the morphological description of the HI sources, the catalogue contains the galaxy inclination \(i\) which, under the hypothesis of a random orientation, follows a \(\sin(i)\) distribution. Following e.g. Holmberg (1958), galaxy inclination is linked to the axis ratio \(b/a\) by \[\cos^{2}\!i=\frac{(b/a)^{2}-\kappa^{2}}{1-\kappa^{2}} \tag{6}\] where \(\kappa\) is the ratio of the smallest to largest axis of an oblate spheroid which best represents the galaxy's 3-dimensional shape. Early measurements indicated values of \(\kappa=0.2\) for spirals and \(\kappa=0.5\) for ellipticals. Rodriguez & Padilla (2013) derive updated distributions for \(\kappa\) by studying the Sloan Digital Sky Survey (SDSS) Galaxy Zoo objects. A Gaussian fit to these distributions yields \(\kappa=0.267\pm 0.102\) for spirals and \(\kappa=0.438\pm 0.196\) for ellipticals. We use the previous relations and the galaxy's size and inclination to derive the major and minor axes; a short sketch of these modelling steps is given below.
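To make the HI-model prescriptions above concrete, the following Python sketch draws HI masses from the Jones et al. (2018) mass function with the evolution of eq. (3), and converts them to integrated fluxes (eq. 1), disk sizes (eq. 5) and axis ratios (eq. 6). This is an illustrative re-implementation, not the released T-RECS code: the grid limits, sample size and example luminosity distance are arbitrary choices, and the number-density normalization (eq. 4), which only sets how many sources are drawn, is omitted.

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_log_mhi(n, z, logm_grid=np.linspace(6.0, 11.0, 2000)):
    """Draw log10(M_HI/Msun) at redshift z by inverse-transform sampling of
    the Schechter-like HIMF of eq. (2), evolved following eq. (3).
    The overall normalization phi_star drops out of the sampling."""
    logm_star = 9.94 - 1.41 * z                            # eq. (3)
    x = 10.0 ** (logm_grid - logm_star)
    phi = np.log(10.0) * x ** (-1.25 + 1.0) * np.exp(-x)   # eq. (2), alpha = -1.25
    cdf = np.cumsum(phi)
    return np.interp(rng.random(n), cdf / cdf[-1], logm_grid)

def hi_flux_jyhz(m_hi, d_l_mpc):
    """Integrated HI flux in Jy Hz for M_HI in Msun and d_L in Mpc (eq. 1)."""
    return m_hi / (49.8 * d_l_mpc ** 2)

def hi_diameter_kpc(m_hi):
    """HI disk diameter, inverting the Naluminsa et al. (2021) relation (eq. 5)."""
    return 10.0 ** ((np.log10(m_hi) - 6.5) / 1.95)

def axis_ratio(incl_rad, kappa=0.267):
    """b/a of an oblate spheroid with intrinsic flattening kappa (eq. 6)."""
    return np.sqrt(np.cos(incl_rad) ** 2 * (1.0 - kappa ** 2) + kappa ** 2)

# Random orientations: a uniform cos(i) yields the sin(i) distribution of Sec. 2.2.
z = 0.1
log_mhi = sample_log_mhi(10000, z)
incl = np.arccos(rng.random(10000))
b_over_a = axis_ratio(incl)
# Example conversion for one source, with an illustrative d_L ~ 480 Mpc at z = 0.1.
print(hi_flux_jyhz(10.0 ** log_mhi[0], 480.0), hi_diameter_kpc(10.0 ** log_mhi[0]))
```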
Since HI-rich galaxies are predominantly of spiral morphology, we adopt for the whole HI sample the spiral distribution of Rodriguez & Padilla (2013). ### Source HI line width The width of the 21-cm emission line of neutral hydrogen is related to the circular velocity, corrected for the galaxy's inclination, by \[w_{50}=2\sin(i)v_{\rm max} \tag{7}\] (e.g. Oman, 2022), where \(w_{50}\) is the full width at half maximum (FWHM) of the 21 cm line and \(v_{\rm max}\) is the maximum circular velocity. Katz et al. (2019) derive empirical scaling relations of \(v_{\rm max}\) with the galaxy's baryonic mass \(M_{\rm b}\) (Baryonic Tully-Fisher relation, BTFR), as well as dark matter halo mass, \(M_{\rm h}\), from 120 late-type galaxies from the SPARC database. We use their BTFR results obtained by using a "flat" prior on the mass-to-light ratio, to derive \(v_{\rm max}\) from the baryonic mass. The latter has been modelled as \(M_{\rm b}=M_{\rm HI}+M_{\rm star}+M_{\rm H2}+M_{\rm HII}\), where \(M_{\rm star}\) is the stellar mass, \(M_{\rm H2}\) is the mass of molecular hydrogen and \(M_{\rm HII}\) is the mass of ionised hydrogen. Although there are other baryonic components, the list above contains the majority of the baryons in a galaxy. For \(M_{\rm star}\) we use the maximum likelihood \(M_{\rm star}\)-\(M_{\rm HI}\) relation by Naluminsa et al. (2021). Walter et al. (2020) investigate the density of atomic and molecular hydrogen, \(\rho_{\rm HI}(z)\) and \(\rho_{\rm H2}(z)\), in galaxies as a function of redshift. For \(M_{\rm H2}\) we use \(M_{\rm H2}=M_{\rm HI}\,\rho_{\rm H2}(z)/\rho_{\rm HI}(z)\), where \(\rho_{\rm H2}(z)\) and \(\rho_{\rm HI}(z)\) are the best-fit relations of Walter et al. (2020) (their table 1). For \(M_{\rm HII}\) we use the \(\log(M_{\rm star})\)-\(\log(M_{\rm HII}/M_{\rm star})\) relation by Popping et al. (2014) (linear fit to their Figure 2). This relation is provided in the range \(\log(M_{\rm star}/M_{\odot})=6\)–13 and implies an increase of the ionised hydrogen fraction with decreasing stellar mass. To avoid \(M_{\rm HII}\) becoming unrealistically high for low values of \(M_{\rm star}\), we cap \(M_{\rm HII}\) at \(M_{\rm HI}\). Once \(v_{\rm max}\) has been obtained, we use the \(M_{\rm h}\)-\(v_{\rm max}\) relation by Katz et al. (2019) to model the dark matter mass from HI properties, \(M_{\rm h,HI}\), which is used to model source clustering in Sec. 5. In Fig. 2 we compare the distribution of the T-RECS \(w_{50}\) at low redshifts (red symbols) with the HI velocity width function (HIWF) derived from the ALFALFA survey by Oman (2022; black lines). Despite the relative simplicity of the model, there is a good agreement between the two. The decreasing number of T-RECS objects below \(w_{50}\sim 10^{2}\) is due to the catalogues becoming increasingly incomplete. When including higher redshifts, the T-RECS HIWF slowly shifts to lower \(w_{50}\) values and increases in amplitude due to the evolution of the HI mass function. We note that our modelling of the line width relies on the presence of a plateau in the velocity curve at \(v=v_{\rm flat}\). This is not representative of irregular or dwarf galaxies (Swaters et al., 2009; Oh et al., 2015), whose measured velocity width would be smaller. ## 3 Radio Continuum Model The radio continuum model is described in detail in T-RECS I; here we give a summary and introduce a few modifications.
T-RECS models radio continuum sources as either star-forming galaxies (SFGs) or Radio Loud Active Galactic Nuclei (RL AGN). There is no explicit modelling of radio-quiet (RQ) AGN (e.g., Kellermann et al., 2016; Padovani et al., 2015; Mancuso et al., 2017; White et al., 2017; Hartley et al., 2019). RQ AGN can be modelled as objects where star formation in the host galaxy is responsible for the radio emission. On the other hand, observations have also shown that the active nucleus can produce the radio emission. RQ AGN would contribute part of the flux of those sources that, in T-RECS, are modelled as SFGs. Figure 2: Black lines: HIWF from Oman (2022) computed from the \(\alpha\).100 catalogue (solid, dashed and dot-dashed lines are for the “complete”, “spring” and “fall” sky areas, respectively). Red symbols: T-RECS HIWF computed on an HI catalogue of \(25\,{\rm deg}^{2}\) and flux limit \(f_{\rm HI}=1\,{\rm Jy\,Hz}\), over the 0–0.075 redshift range. ### Star-forming galaxies (SFGs) SFGs are modelled as in Mancuso et al. (2015) as late-type, spheroidals and lensed spheroidals. The radio emission is based on redshift-dependent star-formation rate (SFR) functions and a modelling of synchrotron, free-free and thermal dust emission as a function of SFR for each of the three sub-populations. The free-free emission and the thermal dust emission are still modelled as in T-RECS I. The synchrotron emission, which is overall the dominant component, is updated in this work to follow the most recent results. Across the 150 MHz–20 GHz frequency range, we adopt \[L_{\rm synch}(\nu)=L({\rm SFR})\,\left(\frac{\nu}{\nu_{0}}\right)^{\alpha+\delta\alpha\log(\nu/\nu_{0})}\,, \tag{8}\] where \(L({\rm SFR})\) follows Smith et al. (2021) \[\frac{L({\rm SFR})}{{\rm W}/{\rm Hz}}=L_{0}\left(\frac{{\rm SFR}}{M_{\odot}/{\rm yr}}\right)^{\beta}\,\left(\frac{M_{\rm star}}{10^{10}\,M_{\odot}}\right)^{\gamma}. \tag{9}\] The frequency dependence is a power-law with a spectral index that progressively steepens towards higher frequencies; the parameters are \(\alpha=-0.85\), \(\delta\alpha=-0.1\), \(\nu_{0}=1.6\) GHz. The previous relation we adopted, as in Mancuso et al. (2015), already had a steepening towards the highest frequencies. The most recent data at 150 MHz (Siewert et al., 2019; Mandal et al., 2021) indicate that this behaviour should be extended to the whole frequency range. A possible mechanism for a steepening synchrotron spectral index is synchrotron ageing. \(M_{\rm star}\) in eq. (9) is the stellar mass of the galaxy. This is modelled starting from the SFR using the best-fit SFR\(-M_{\rm star}\) relation of Aversa et al. (2015) (their table 2, with redshift evolution). The redshift evolution of \(M_{\rm star}\) induces a redshift evolution of the radio luminosity function. Parameters in eq. (9) are \(\beta=0.850\pm 0.005\), \(\gamma=0.402\pm 0.005\) (Smith et al., 2021) and \(\log(L_{0}/[{\rm W\,Hz^{-1}}])=21.28\). The latter differs from the Smith et al. (2021) value of \(\log(L_{0}/[{\rm W\,Hz^{-1}}])=22.10\); however, their normalization is defined at 150 MHz, while ours is linked to eq. (8) at \(\nu_{0}=1.6\) GHz. For the adopted spectral behaviour, \(\log[L_{\rm synch}(150\,{\rm MHz})/L_{\rm synch}(\nu_{0})]=0.77\), which requires a \(\Delta\log L_{0}=-0.77\) correction. Once this is accounted for, there is still a difference in the overall normalization of \(\Delta\log L_{0}=-0.06\). Given that \(L({\rm SFR})\) depends on both SFR and \(M_{\rm star}\) as per eq. (9), differences in the distribution of either quantity between the measurements by Smith et al.
(2021) and the models in Mancuso et al. (2015) and Aversa et al. (2015) could cause some discrepancy. Furthermore, the parameters we adopt have been chosen to achieve consistency with the available data over the whole 150 MHz–20 GHz range rather than being optimised at 150 MHz. In Fig. 3 we verify that the updated luminosity functions at 1.4 GHz are in good agreement with the data. The updated comparison of differential source counts in the 150 MHz–20 GHz range is presented in Fig. 4. As in T-RECS I, the size and shape of SFGs are modelled in terms of elliptical Sersic profiles. As a refinement, we link galaxy inclination to galaxy ellipticity using again eq. (6), with \(\kappa=0.267\pm 0.102\) for spirals and \(\kappa=0.438\pm 0.196\) for ellipticals. We make a direct association with the T-RECS radio continuum sub-populations as follows: late-type to spirals, spheroids and lensed spheroids to ellipticals (Rodriguez & Padilla, 2013). ### Radio Loud (RL) AGN RL AGN are based on the Massardi et al. (2010) evolutionary model as updated by Bonato et al. (2017). This model classifies the AGN into steep-spectrum sources (SS-AGN), flat-spectrum radio quasars (FSRQs) and BL Lacs, with different evolutionary properties. The basis of our AGN model is given by luminosity functions at 1.4 GHz for these populations and for redshift intervals ranging from \(z=0\) to \(z=8\). These trivially translate to flux density at 1.4 GHz given the redshift of the source and the adopted cosmology. To compute flux densities at other frequencies, Bonato et al. (2017) adopt power-law spectra \(S\propto\nu^{\alpha}\) with \(\alpha_{\rm FSRQ}=\alpha_{\rm BLLac}=-0.1\), and \(\alpha_{\rm steep}=-0.8\). T-RECS I modified this simple recipe by allowing for a scatter in the spectral index between different sources. Moreover, it introduced systematic variations with frequency of the spectral index distributions, in order to achieve agreement with multi-frequency observations. An effective spectral index between the frequencies \(\nu_{1}\) and \(\nu_{2}\) of sources of a given population with flux density \(S_{1}\), within \(dS_{1}\), at \(\nu_{1}\), was computed by finding the flux density \(S_{2}\) at \(\nu_{2}\) such that \(N_{1}(S_{1})dS_{1}=N_{2}(S_{2})dS_{2}\). Thus \(\alpha_{\rm eff}(\nu_{1},\nu_{2})\) is the single spectral index relating the counts at \(\nu_{1}\) to those at \(\nu_{2}\); a sketch of this count-matching step is given at the end of this subsection. This procedure was applied to the 1.4–4.8 GHz and 4.8–20 GHz ranges, using the Massardi et al. (2010) model up to 5 GHz and the De Zotti et al. (2005) model at higher frequencies. In this work, we extend this approach to the 150 MHz–1.4 GHz frequency range, where we had previously applied the Bonato et al. (2017) indices plus scatter, in order to improve the agreement with the Siewert et al. (2019) and Mandal et al. (2021) counts at 150 MHz. The morphological model for RL AGN is the same as in T-RECS I. Following the unified model of AGN, we assume the same parent population for all our AGN classes. Intrinsic sizes are drawn using DiPompeo et al. (2013); BL Lacs and FSRQs are then assigned a small viewing angle (\(\theta\leq 5\) deg) and SS-AGN a large viewing angle (\(5<\theta\leq 90\) deg), thus creating the familiar dichotomy in observed source sizes. For SS-AGN, the description is completed by the Fanaroff & Riley (1974) \(R_{\rm s}\) parameter, modelled following Lin et al. (2010), to provide the FRI/FRII classification.
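The count-matching construction of \(\alpha_{\rm eff}\) described above can be sketched as follows: each flux \(S_{1}\) is mapped to the flux \(S_{2}\) with the same cumulative count, and the effective index is read off from the flux ratio. This is a hedged illustration; the input count arrays are placeholders for model counts such as those of Massardi et al. (2010), and the released code may implement the step differently.

```python
import numpy as np

def effective_alpha(log_s, counts_nu1, counts_nu2, nu1, nu2):
    """Effective spectral index alpha_eff(nu1, nu2) per flux bin, obtained by
    equating cumulative counts N1(>S1) = N2(>S2) and using S ~ nu^alpha.
    `log_s` is a common grid of log10 flux; counts are per-bin model counts."""
    n_above_1 = np.cumsum(counts_nu1[::-1])[::-1]   # N(>S) at nu1
    n_above_2 = np.cumsum(counts_nu2[::-1])[::-1]   # N(>S) at nu2
    # For each S1, interpolate the log flux at nu2 with the same N(>S).
    log_s2 = np.interp(n_above_1, n_above_2[::-1], log_s[::-1])
    return (log_s2 - log_s) / np.log10(nu2 / nu1)

# Toy usage: power-law counts globally shifted by a spectral index of -0.7.
log_s = np.linspace(-6.0, 0.0, 100)
n1 = 10.0 ** (-1.5 * log_s)
n2 = 10.0 ** (-1.5 * (log_s + 0.7 * np.log10(4.8 / 1.4)))
print(effective_alpha(log_s, n1, n2, 1.4, 4.8))  # ~ -0.7 away from grid edges
```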
For consistency with the SFG catalogue, we complement the morphological description of AGN with an elliptical/spiral classification of their host galaxy. The issue of radio vs optical morphology was investigated by Koziel-Wierzbowska et al. (2020) on a sample of 32,616 spectroscopically-selected galaxies from SDSS with radio counterparts from FIRST and VLASS. For our AGN, we adopt their result on galaxies with extended radio morphology, which have an elliptical host in 98% of cases. ### Dark matter mass modelling from radio continuum properties As in T-RECS I, we include an estimate of the dark matter (DM) mass of galaxies based on radio continuum properties, \(M_{\rm h,cont}\). The main use of this mass is to simulate the clustering properties of our sources. Special attention has been given to ensuring that the obtained T-RECS DM mass distribution matches well the mass function from the P-millennium (Baugh et al., 2019) cosmological simulation, which is used to simulate clustering (see Sec. 5 for more details). The DM mass modelling is the same as in T-RECS I. For SFGs, we used a \(L_{\rm SFR}\)-\(M_{\rm h}\) relation of the form: \[L(M_{\rm h})=N\times\left[\left(\frac{M_{\rm h}}{M_{b}}\right)^{\alpha}+\left(\frac{M_{\rm h}}{M_{b}}\right)^{\omega}\right]^{-1}, \tag{10}\] where \(N\), \(\alpha\), \(\omega\) and \(M_{b}\) are free parameters which include redshift evolution. We start from the Baugh et al. (2019) target halo mass function as a function of redshift, and we derive the parameters so that the resulting luminosity function matches as closely as possible the radio luminosity function at 1.4 GHz from Bonato et al. (2017). Once the parameters are derived, eq. (10) can be inverted to obtain \(M_{\rm h}\) from the 1.4 GHz luminosity. For RL AGN, we start again from the halo mass function of the Baugh et al. (2019) simulation to draw a sample of galaxies with a plausible DM mass distribution for each T-RECS redshift interval. For all galaxies in the sample, we compute the stellar mass with the Aversa et al. (2015) \(M_{\rm star}=F(M_{\rm h})\) relation (their Table 2, including redshift evolution). Figure 3: Comparison of continuum luminosity functions between T-RECS and the available data from Novak et al. (2017); Butler et al. (2019); Ocran et al. (2020); Bonato et al. (2021). Janssen et al. (2012) derived the fraction of galaxies hosting an RL AGN as a function of the host galaxy stellar mass, \(M_{\rm star}\), separately for Low Excitation Radio Galaxies (LERGs) and High Excitation Radio Galaxies (HERGs). We use this result to extract subsamples of HERGs and LERGs from the initial galaxy sample. We finally compute the \(M_{\rm h}\) distribution of the two sub-samples, which, once appropriately normalised, can be used to draw halo masses suitable for a HERG/LERG host. We map the HERG/LERG populations into our T-RECS observational categories as follows: FSRQs to the HERG population; BL Lacs to the LERG population; SS-AGN morphologically classified as FR II/FR I to the HERG/LERG population (see T-RECS I for more details). ### HI mass modelling from radio continuum properties The interplay between gas and stars in the lifecycle of a galaxy results in a correlation between HI mass, stellar mass, and SFR, which is exemplified by the Kennicutt-Schmidt Law (Kennicutt, 1998). Scaling relations between HI mass tracers and SFR tracers have been derived (e.g.
Zhou et al., 2018; Bothwell et al., 2013; Catinella et al., 2010; Cortese et al., 2011; Jaskot et al., 2015; Huang et al., 2012; Sinigaglia et al., 2022, just to cite some of the most recent); the correlation is well established, with a scatter indicative of the many factors that regulate SFR. Among those, there is feedback from AGN activity, which heats the surrounding gas, thereby reducing star formation. Additionally, an important role is played by galaxy type. Late-type galaxies, associated with spiral/disk morphologies, are typically HI-rich and actively forming stars, while early-type galaxies, with spheroidal/elliptical morphology, are typically HI-poor and have low star formation. The SFR-\(M_{\rm HI}\) correlation mentioned above provides a relatively straightforward way to add HI mass to the modelling of T-RECS star-forming galaxies. We call this HI mass \(\tilde{M}_{\rm HI}\), to distinguish it from the HI mass modelled for HI sources in Sec. 2. It can be written as: \[\log(\tilde{M}_{\rm HI})=A\log({\rm SFR})+B \tag{11}\] with a scatter \(\sigma\); the slope and normalization \(A\) and \(B\) are in general redshift-dependent. We derive the parameters \(A\) and \(B\) by imposing that the resulting \(\tilde{M}_{\rm HI}\) distribution is consistent with the HI mass function adopted for the HI model in Sec. 2. The redshift evolution as in Paul et al. (2023) means that the maximum HI mass changes relatively rapidly as a function of redshift. The evolution of the SFR function in Mancuso et al. (2015) is not as fast, and this imparts a significant redshift dependence on the parameters \(A\) and \(B\). We find that \[A(z) = 1.1-2.46\,z+2.06\,z^{2} \tag{12}\] \[B(z) = 9.5-z \tag{13}\] and a scatter \(\sigma\log\tilde{M}_{\rm HI}=0.2\) give a good agreement in the considered redshift range; a sketch of this proxy is given below. In Fig. 5 we show the \(\log({\rm SFR})\)-\(\log(\tilde{M}_{\rm HI})\) relation for the T-RECS catalogue up to \(z=0.3\) compared to the best fits from Naluminsa et al. (2021), Cluver et al. (2014) and Michalowski et al. (2015). The slope of the correlation is consistent with the observational estimates. However, our normalization is higher, by \(\Delta\log({\rm SFR})\sim 0.3\)–0.4. Using one of the observed relations instead of the abundance-matching parameters produces larger HI masses than would be consistent with the HI mass function by Jones et al. (2018) plus redshift evolution. In other words, this is essentially a tension between the Mancuso et al. (2015) model of star formation and the Jones et al. (2018) HI mass function, which indicates that there is still some distance to cover to get a consistent model across both observables. Several relations have been published between \(M_{\rm HI}\) and the stellar mass, \(M_{\rm star}\) (e.g. Catinella et al., 2010; Huang et al., 2012; Maddox et al., 2015; Naluminsa et al., 2021), which could be used as an HI mass proxy for the AGN population. The relation appears to be linear at the low-mass end, and to flatten at the high-mass end due to the dominance of early-type, HI-poor galaxies. We adopt the maximum likelihood relation by Naluminsa et al. (2021) up to \(M_{\rm star}=10^{10}\,M_{\odot}\) and Catinella et al. (2010) for higher masses. The latter was derived on an unbiased sample of galaxies selected by stellar mass and therefore should contain the gas-poor systems that are typically missed by the current untargeted HI surveys. In Fig. 1 we compare the HI mass function (black line and symbols) with the HI continuum mass proxy distribution (red symbols).
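As an illustration, the proxy of eqs. (11)-(13) amounts to a few lines of Python; this is a hedged sketch rather than the released implementation, with the 0.2 dex Gaussian scatter applied in log space.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_mhi_proxy(log_sfr, z):
    """HI mass proxy for star-forming continuum sources, eqs. (11)-(13),
    with a Gaussian scatter of 0.2 dex in log M_HI."""
    a = 1.1 - 2.46 * z + 2.06 * z ** 2      # eq. (12)
    b = 9.5 - z                             # eq. (13)
    scatter = rng.normal(0.0, 0.2, size=np.shape(log_sfr))
    return a * np.asarray(log_sfr) + b + scatter

print(log_mhi_proxy([0.0, 1.0], z=0.1))     # log(M_HI/Msun) for two sources
```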
Overall, Fig. 1 shows a satisfactory agreement between the two distributions, ensured by the abundance-matching method used to derive the parameters in eq. (11), as well as by a plausible modelling of the HI mass proxy for the AGN population. The discrepancy at the low-mass end is due to the different selection functions. As mentioned, the selection in HI flux almost corresponds to a selection in mass, and gives high completeness of the sample down to the last mass bin. Conversely, the selection in continuum flux means that the mass proxy sample is progressively incomplete. Departures between the two distributions not due to completeness can be ascribed to the simplicity of eq. (11), which cannot guarantee a perfect fit over the whole mass range. At the high-mass end, the mass-proxy distribution tends to over-predict the number of objects. This region of the distribution contains very few objects, and therefore its weight in the abundance-matching scheme is relatively low. ## 4 Continuum \(\times\) HI model So far, the T-RECS HI and continuum catalogues produced with the modules described in Secs. 2 and 3 are independent. Each produces statistically plausible samples of sources, but the two samples are uncorrelated and have different sky coordinates. If the two catalogues were used to represent the same area of the sky, one would expect a certain number of positional cross-matches between the two, as the same galaxy is visible in both bands. In this section, we discuss how a consistent HI \(\times\) continuum catalogue can be constructed by associating counterparts between the two catalogues and rewriting one of the two coordinate sets to match. First of all, counterparts are only explored between galaxies that belong to the same redshift slice in the two simulations. We consider a sample \(\{i\}\) of continuum sources and \(\{j\}\) of HI sources, which have been selected with \(S^{i}>S1\) and \(f^{j}_{\rm HI}>F1\), where \(S^{i}\) are the fluxes in continuum, \(f^{j}_{\rm HI}\) the fluxes in HI, and \(S1\), \(F1\) are the respective detection limits. If we ignore redshift variation within a slice, from eq. (1) we can rewrite the HI selection in terms of HI mass, as \(M^{j}_{\rm HI}>M1\) where \(M^{j}_{\rm HI}\) and \(M1\) are the HI masses corresponding to \(f^{j}_{\rm HI}\) and \(F1\) at the given redshift. The counts of both the \(\{i\}\) and \(\{j\}\) samples come from the appropriately normalised distributions (luminosity function, SFR function and HI mass function) and need to be conserved. Therefore, all objects are retained, and in general only some of them have a counterpart. Based on the HI selection function, continuum sources belonging to \(\{i\}\) would be visible also in HI if their HI mass is above \(M1\). Assessing this requires a modelling of the HI mass for the continuum sources, \(\tilde{M}^{i}_{\rm HI}\), which we have introduced and described in Sec. 3.4. We note that \(\tilde{M}^{i}_{\rm HI}\) does not come from sampling an expected HI mass distribution, and so selecting a subsample of potential counterparts as \(\tilde{M}^{i}_{\rm HI}>M1\) would not automatically reproduce the HI source counts and could lead to objects that are mismatched with \(\{j\}\). However, if the \(\tilde{M}_{\rm HI}\) and \(M_{\rm HI}\) distributions are compatible, the differential counts are similar, \(N(\tilde{M}_{\rm HI})\sim N(M_{\rm HI})\), and so is the number of sources above the same mass limit.
In this case, assigning counterparts based on matching \(M_{\rm HI}\) from the HI catalogue with \(\tilde{M}_{\rm HI}\) from the continuum catalogue produces a similar selection function. As already discussed in Sec. 3.4, our modelling of \(\tilde{M}_{\rm HI}\) for the SFGs, which are the vast majority of the continuum sources, has been explicitly designed to reproduce the HI mass function as best as possible, by using an abundance-matching technique to constrain its parameters (see also Fig. 1). Figure 4: Comparison of differential source counts in total intensity at 150 MHz (top left), 1.4 GHz (top right), 3 GHz (bottom left) and 15 GHz (bottom right) between T-RECS and the available data from Franzen et al. (2016); Bondi et al. (2008); Vernstrom et al. (2016); Smolčić et al. (2017); Padovani et al. (2015); Ibar et al. (2009); Whittam et al. (2016); Waldram et al. (2010); AMI Consortium et al. (2011); Mandal et al. (2021); Bonato et al. (2021). Figure 5: Relation between SFR and \(\tilde{M}_{\rm HI}\) for the \(z\leq 0.3\) continuum catalogue, compared to Naluminsa et al. (2021) (dashed line), Cluver et al. (2014) (dot-dashed line) and Michalowski et al. (2015) (solid line). Contours include 99%, 90% and 50% of all galaxies. The mass match ensures that broadly compatible objects between the two catalogues are assigned, and it propagates, although with additional scatter, the modelled correlations between continuum and HI properties. For example, the SFR-\(M_{\rm HI}\) correlation that has been modelled for the continuum catalogue shown in Fig. 5 is conserved when we consider the HI \(\times\) continuum catalogue. The HI selection \(M_{\rm HI}^{j}>M1\) maps to a selection in \(\tilde{M}_{\rm HI}^{i}\) which is smoother than the original step function, due to the various uncertainties introduced by the additional modelling. The HI mass-matching itself is performed with a nearest-neighbour implementation. Initially, for each galaxy in the HI catalogue, the 20 nearest neighbours from the continuum catalogue are retained as counterpart candidates. The HI galaxies are then processed in order of decreasing HI mass, and a unique counterpart is assigned to each of them, if available, as the best of the 20 candidates that has not already been matched. The 20 nearest neighbours have been chosen as a trade-off between computational speed and having enough samples for a good mass-matching. The reason for proceeding in decreasing mass order is that high-mass objects are rarer and therefore more difficult to match accurately, but at the same time brighter and therefore more likely to have a counterpart. In the left panel of Fig. 6 we show the histograms of the HI masses \(M_{\rm HI}\) for an HI catalogue with \(f_{\rm HI}>1\,{\rm Jy\,Hz}\) and \(\tilde{M}_{\rm HI}\) for a continuum catalogue with \(S_{\rm 1.4\,GHz}>100\) nJy, over \(25\,{\rm deg}^{2}\) and in a redshift slice centred at \(z=0.5\). The HI catalogue has a mass limit \(\log M1=6.45\). Similar numbers of galaxies are available in both catalogues as a function of \(M_{\rm HI}\) for \(\log M_{\rm HI}\geq 7\). In the \(\log M_{\rm HI}=6\) mass bin, the counts \(N(M_{\rm HI})\) and \(N(\tilde{M}_{\rm HI})\) start to diverge, and there is an excess of sources in the continuum catalogue. In the right panel of Fig. 6 we show the mass distribution of the cross-matched catalogue with the red line, and the un-matched sources with the black solid line and the black dash-dotted line for HI and continuum, respectively.
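A minimal sketch of this nearest-neighbour assignment is given below, assuming that "nearest" refers to distance in log HI mass and using scikit-learn for the candidate search; the released code may organise the computation differently.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def match_catalogues(log_mhi, log_mhi_proxy, k=20):
    """Greedy one-to-one counterpart assignment: for each HI source keep the
    k nearest continuum candidates in log mass, then process HI sources in
    order of decreasing mass and take the best still-unmatched candidate.
    Returns the continuum index for each HI source, or -1 if none is left."""
    log_mhi = np.asarray(log_mhi)
    log_mhi_proxy = np.asarray(log_mhi_proxy)
    nn = NearestNeighbors(n_neighbors=min(k, len(log_mhi_proxy)))
    nn.fit(log_mhi_proxy.reshape(-1, 1))
    _, candidates = nn.kneighbors(log_mhi.reshape(-1, 1))  # sorted by distance
    counterpart = np.full(len(log_mhi), -1)
    taken = np.zeros(len(log_mhi_proxy), dtype=bool)
    for i in np.argsort(log_mhi)[::-1]:      # decreasing HI mass
        for j in candidates[i]:              # candidates, nearest first
            if not taken[j]:
                counterpart[i] = j
                taken[j] = True
                break
    return counterpart
```

Processing in decreasing mass order, as the text explains, gives the rare high-mass objects first pick of the candidates before they are taken.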
In the example of Fig. 6, the continuum catalogue is deeper than the HI one, and most HI sources have a continuum counterpart. When considering another pair of catalogues with different detection limits, the proportion of sources in the matched, HI-only and continuum-only categories would change. The \(M_{\rm HI}\)-\(\tilde{M}_{\rm HI}\) relation for the matched objects is very tight, with \(\log M_{\rm HI}-\log\tilde{M}_{\rm HI}=(2.4\pm 9.4)\times 10^{-3}\). In Fig. 7 we compare the selection in mass of the HI catalogue (\(\log M1>6.45\)) with that of the matched continuum counterparts (diamonds and dot-dashed line). The error bars are consistent with Poisson statistics and reflect the very different number of sources that are present in the different mass bins. Figure 8: Median T-RECS \(M_{\rm h,cont}\)–\(M_{\rm HI}\) (T-RECS A) and \(M_{\rm h,HI}\)–\(M_{\rm HI}\) (T-RECS B) compared to the HI–halo mass relation from observations (Guo et al., 2020; Rhee et al., 2023, central galaxies only) and theoretical modelling (Baugh et al., 2019; Chauhan et al., 2021). The contours are the full distribution for the T-RECS A model, including 99%, 90% and 10% of the galaxies. Figure 6: _Left:_ Distribution of \(M_{\rm HI}\) for an HI catalogue of \(25\,{\rm deg}^{2}\), \(f_{\rm HI}\geq 1\,{\rm Jy\,Hz}\) and \(z=\)0.475–0.525 (solid line) and of \(\tilde{M}_{\rm HI}\) for a continuum catalogue with the same sky area and redshift range and \(S_{\rm 1.4\,GHz}\geq 100\,{\rm nJy}\) (dot-dashed line). _Right:_ distribution of \(M_{\rm HI}\) for the cross-matched catalogue (red solid line); HI un-matched sources (black solid line) and continuum un-matched sources (black dot-dashed line). Figure 7: Selection function as a function of HI mass for the HI catalogue (black solid line) and the matched continuum counterparts (black diamonds and dot-dashed line) for the example in Fig. 6. Superimposed in red is the theoretical selection function corresponding to a Gaussian error \(\sigma\log M=0.65\) and an offset \(\Delta\log M=0.3\), which is a good fit to the observed selection. In the Gaussian approximation, for a scatter \(\sigma_{M}\) on the mass \(M\) and a detection limit \(M1\), the selection function \(F(M)\) is an error function \[F(M)=\frac{1}{2}\left[1-\mathrm{erf}\left(\frac{M_{1}-M}{\sigma_{M}\sqrt{2}}\right)\right]. \tag{14}\] The red line in Fig. 7 is an error function similar to the measured selection, which corresponds to a mass limit \(\log\tilde{M}_{1}=6.75\pm 0.65\). The scatter is due to that of the SFR-\(\tilde{M}_{\rm HI}\) relation and that of the mass matching. The offset \(\Delta\log M=0.3\) is due to differences in the distributions of \(\tilde{M}_{\rm HI}\) and \(M_{\rm HI}\), in particular the excess of continuum sources in the \(\log M_{\rm HI}=6\) mass bin. At the highest masses, the observed selection departs from the error function, with a few radio continuum galaxies not having an HI counterpart. This is due to discrepancies between the high-mass end of the HIMF and the HI mass proxy distribution (see also Fig. 1). The difficulty in accurately reproducing the expected HI mass function starting from a radio continuum catalogue is one of the reasons behind our choice to model the HI independently and then cross-match with the continuum catalogue. In this way those discrepancies, rather than affecting the number of HI sources for an HI selection, only affect the number of continuum counterparts. We note that the method we described does not necessarily identify pairs of galaxies that are the most consistent in terms of properties other than the HI mass.
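For reference, the error-function selection of eq. (14) is straightforward to evaluate; the short sketch below uses the values quoted for the example of Fig. 7 (\(\log\tilde{M}_{1}=6.75\), \(\sigma_{\log M}=0.65\)) as defaults.

```python
import numpy as np
from scipy.special import erf

def selection_function(log_m, log_m1=6.75, sigma=0.65):
    """Gaussian-scatter selection function of eq. (14), in log mass."""
    return 0.5 * (1.0 - erf((log_m1 - log_m) / (sigma * np.sqrt(2.0))))

print(selection_function(np.array([6.0, 6.75, 8.0])))  # ~0.12, 0.5, ~0.97
```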
A better approach to identify consistent pairs would be to simultaneously match multiple catalogue attributes at once, for example \(M_{\rm h}\), \(M_{\rm star}\) and \(M_{\rm HI}\). However, this approach would increase the scatter between the matched \(\tilde{M}_{\rm HI}\) and \(M_{\rm HI}\), which would result in a selection of the continuum sources largely independent of the HI signal strength. We point out that the selection functions adopted in T-RECS are an approximation of what is actually achieved by real observations. In radio continuum, the ability to detect a source of a given integrated flux density depends, amongst other things, on how much of that flux is contained within the instrumental point spread function. Similarly, completeness for a given HI flux depends on the 3D size and shape of the source, particularly the line width, with sources having narrower line widths being easier to detect (see, e.g., Haynes et al., 2011). Instrumental specifications, such as spatial and spectral resolution, need to feature in an accurate description of selection functions. Additionally, an important role is played by the adopted source finding algorithms, which perform spatial and spectral filtering of the sources to maximise detection (see, e.g., Bonaldi et al., 2021; Hartley et al., 2023, for a comparison of different source finding methods on the same data). If the goal is to accurately reproduce the selection functions of specific surveys, T-RECS catalogues should be generated with flux limits consistent with the deepest achievable fluxes, and further filtering of the source lists should be performed on additional source properties, as appropriate. ## 5 Clustering As described in T-RECS I, the clustering properties of the sources are simulated by associating each galaxy with a dark matter sub-halo of the P-millennium (Baugh et al., 2019) cosmological simulation. The cosmological parameters adopted in this simulation are: \(H_{0}=67.77\,\mathrm{km~s}^{-1}\,\mathrm{Mpc}^{-1}\), \(\Omega_{\Lambda}=0.693\), \(\Omega_{\mathrm{M}}=0.307\), \(\sigma_{8}=0.8288\) (Planck Collaboration, 2014). The reasons for choosing this simulation are its very high mass resolution, which means that sub-haloes are tracked individually down to \(M_{h}=1.6\times 10^{9}\,M_{\odot}\), and its large box size (800 comoving Mpc/\(h\)). The latter allows producing a \(5\times 5\,\mathrm{deg}^{2}\) lightcone up to \(z=8\), which is therefore the maximum T-RECS survey size for which we can simulate clustering. This approach essentially matches galaxies to haloes by means of the dark mass \(M_{\mathrm{h}}\), separately for each redshift slice, and overwrites the T-RECS latitude, longitude and redshift coordinates with those of the matched haloes. The clustering is therefore simulated in 3 dimensions. Two predictions for the dark matter mass of T-RECS galaxies are available: \(M_{\mathrm{h,HI}}\) as modelled from the HI properties in Sec. 2, and \(M_{\mathrm{h,cont}}\) as modelled from the radio continuum properties for SFGs and AGN in Sec. 3. Given the uncertainties associated with both the modelling and the measurement of dark matter mass, both quantities are retained as alternative models in an HI \(\times\) continuum catalogue. Several works have explored the total HI mass in dark matter haloes (HI-halo mass relation, HIHM), which is important to understand the role of HI in galaxy formation and its connection to structure formation.
This relation has been investigated extensively in different theoretical models (Barnes & Haehnelt, 2014; Paul et al., 2018; Obuljen et al., 2019; Kim et al., 2017; Villaescusa-Navarro et al., 2018) and observationally, via spectral stacking (Guo et al., 2020; Rhee et al., 2023). Theoretical studies predict a positive correlation between \(M_{\mathrm{h}}\) and \(M_{\mathrm{HI}}\) in both the low-\(M_{\mathrm{h}}\) regime, which is dominated by central galaxies, and the high-\(M_{\mathrm{h}}\) regime, dominated by groups and satellite galaxies. The overall relation has a flattening at masses in between these two regimes. Observational estimates have so far measured a much flatter relation, with \(M_{\mathrm{HI}}\) only mildly depending on \(M_{\mathrm{h}}\). Possible observational biases to account for these discrepancies are discussed in detail in Chauhan et al. (2021). In Fig. 8 we compare the \(M_{\mathrm{h,HI}}\)-\(M_{\mathrm{HI}}\) (T-RECS B) and \(M_{\mathrm{h,cont}}\)-\(M_{\mathrm{HI}}\) (T-RECS A) relations with some of the HIHM relations in the literature. They can only be compared qualitatively, since the T-RECS relations are galaxy-by-galaxy, while the HIHM refers to the total HI content of haloes, including multiple galaxies. The comparison is more meaningful at the lowest halo masses, where a single galaxy occupancy of haloes is prevalent, or when considering the central galaxies' contribution to the total HIHM. The filled circles show the median T-RECS A and T-RECS B relations; black contours are the full T-RECS A \(M_{\mathrm{h,cont}}\) model including 99%, 90% and 50% of the galaxies. Diamonds and squares are the central galaxies' observational relations by Rhee et al. (2023) and Guo et al. (2020). Dashed and dash-dotted lines are the theoretical relations by Baugh et al. (2019) and Chauhan et al. (2021). Error bars on the observational estimates have been omitted for figure clarity. The T-RECS A model is in good qualitative agreement with both the observational estimates and with the Chauhan et al. (2021) model over the mass range \(10^{11}<M_{\mathrm{h}}<10^{13}\,M_{\odot}\). It also manages to reproduce the flattening of the relation towards high masses, which in T-RECS is the result of associating some HI galaxies to AGN sources, which typically have a larger halo mass. The T-RECS B model does not present any flattening and it predicts a higher HI mass for the same dark mass. This is because the relation used to model \(M_{\rm h,HI}\) has been derived for late-type galaxies, which have a higher HI fraction. As such, this relation fails to capture the most massive, early-type galaxies, which are HI-poor. A lower DM mass for these sources would result in lower clustering. For this reason, in the clustering analysis that follows, the dark matter mass of the continuum counterpart \(M_{\rm h,cont}\) has been used, whenever present. The implementation of the DM mass matching is based on the same nearest-neighbour approach used for the \(M_{\rm HI}\) mass matching in Sec. 4. Given the very large number of DM sub-haloes contained in the lightcone, however, comparing every galaxy to every halo is a significant bottleneck in terms of execution time. At low \(M_{\rm h}\) especially, there is an overabundance of DM haloes with respect to galaxies, due to the shape of the DM mass function and selection effects for the T-RECS catalogues. In those mass ranges, and depending on the sizes of the samples to be compared, a subsample of the available sub-haloes can be randomly selected for the subsequent analysis.
This has very significant advantages in terms of computational cost, at the price of a negligible increase in the scatter between the matched masses. ### Validation for continuum sources Several observational estimates of clustering for AGN and SFGs are present in the literature (Magliocchetti et al., 2017; Hale et al., 2018; Chakraborty et al., 2020; Bonato et al., 2021). Details of the source selection for those studies are collected in Table 1. The observational estimates adopted the standard power-law shape for the two-point angular correlation function: \(w(\theta)=A\theta^{1-\gamma}\). Since the data did not allow an accurate determination of both \(A\) and \(\gamma\) for each source population, they fixed \(\gamma\) and fit for the normalization. To produce the T-RECS correlation functions we use the same flux limits as the observational estimates and a sky area of \(4\times 4\,{\rm deg}^{2}\). We use the Hamilton (1993) estimator: \[w(\theta)=\frac{DD\cdot RR}{DR\cdot DR}-1, \tag{15}\] where \(DD\), \(RR\) and \(DR\) are the number of data-data, random-random and data-random pairs separated by \(\theta\). The random catalogue is constructed by assigning a random position to all sources within the simulated sky area. The \(w(\theta)\) is computed for 500 realizations. Each realization uses a different generation of the continuum catalogue and a different \(4\times 4\,{\rm deg}^{2}\) portion of the DM lightcone, which is extracted from the \(5\times 5\,{\rm deg}^{2}\) lightcone with a random uniform shift from the field centre in both coordinates. In Fig. 9 we show the mean observational estimates as lines of different colours. Uncertainties have not been included, in order to improve figure clarity. The different selection criteria are responsible for some of the differences in the measured clustering. At the same selection frequency, a deeper flux limit would correspond to less luminous sources, which are expected to be found in less massive, weakly clustered haloes. Therefore, deeper catalogues find a lower normalization of the correlation function. A different selection frequency would favour different sub-populations, which could have different clustering properties. T-RECS mean values and their dispersion for catalogues with flux limits as in Magliocchetti et al. (2017) and Hale et al. (2018) are shown as symbols and error bars with the same colour as the observational estimates. For the same source selection, our simulations give amplitudes of the angular correlation function consistently lower than the observational ones. The higher amplitudes of the observationally estimated \(w(\theta)\) imply higher average bias factors, i.e., higher halo masses. As already discussed in T-RECS I, various factors can influence the comparison between the observational determinations and the T-RECS simulations. One of these is the observational classification of AGN and SFGs and how this compares with the definition of the two populations in T-RECS. Another factor is the availability of high-mass haloes in the Baugh et al. (2019) cosmological simulation adopted in T-RECS. The mass function of the cosmological simulation and that inferred from the luminosity function are somewhat different, which means that there is a deficit of suitable haloes in some mass ranges and a surplus in others. Allowing for some scatter between the predicted and the associated mass alleviates the problem; however, it typically favours association to smaller halo masses, given the shape of the mass function.
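A brute-force sketch of the estimator of eq. (15) is shown below. Pair counting is done with a full separation matrix, which is only viable for small samples; production analyses would use a tree- or grid-based code. Bins should start above zero separation so that self-pairs in the auto-counts are excluded; the double counting of auto pairs then cancels to first order in the ratio.

```python
import numpy as np

def pair_counts(ra1, dec1, ra2, dec2, bins_deg):
    """Histogram of angular separations (deg) between two sets of positions."""
    ra1, dec1, ra2, dec2 = (np.radians(np.asarray(a)) for a in (ra1, dec1, ra2, dec2))
    cos_t = (np.sin(dec1)[:, None] * np.sin(dec2)[None, :]
             + np.cos(dec1)[:, None] * np.cos(dec2)[None, :]
             * np.cos(ra1[:, None] - ra2[None, :]))
    theta = np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0)))
    return np.histogram(theta.ravel(), bins=bins_deg)[0]

def w_theta_hamilton(data_ra, data_dec, rand_ra, rand_dec, bins_deg):
    """Hamilton (1993) estimator of eq. (15): w = DD * RR / DR^2 - 1."""
    dd = pair_counts(data_ra, data_dec, data_ra, data_dec, bins_deg)
    rr = pair_counts(rand_ra, rand_dec, rand_ra, rand_dec, bins_deg)
    dr = pair_counts(data_ra, data_dec, rand_ra, rand_dec, bins_deg)
    return dd.astype(float) * rr / np.maximum(dr, 1).astype(float) ** 2 - 1.0
```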
T-RECS results are still consistent with the observations using the same flux limit, with the exception of the Hale et al. (2018) SFGs. In the latter case the discrepancy is 2.5\(\sigma\), where \(\sigma\) is the quadratic sum of the errors of the observational estimates and of the simulations. Despite the differences noted above, and taking into account the uncertainties in both determinations, the agreement between the T-RECS clustering and the empirically-determined one is reasonably good. \begin{table} \begin{tabular}{l c c l} \hline Observation & \(\nu\) [GHz] & \(S_{\rm limit,\nu}\) [mJy] & Ref. \\ \hline COSMOS & 3 & 0.013 & Hale et al. (2018) \\ COSMOS & 1.4 & 0.15 & Magliocchetti et al. (2017) \\ LH & 1.4 & 0.15 & Bonato et al. (2021) \\ EN1 & 0.612 & 0.05 & Chakraborty et al. (2020) \\ \hline \end{tabular} \end{table} Table 1: Comparison of the selection between different radio continuum clustering estimates in the literature. Figure 9: Two-point angular correlation function \(w(\theta)\) yielded by our simulation (symbols with error bars) for radio AGN (left) and SFGs (right) compared to the observational results in Table 1 (coloured lines). ### Validation for HI sources Observational estimates of the real-space correlation functions for galaxies selected in HI are presented by Passmoor et al. (2011), Martin et al. (2012) and Papastergis et al. (2013). Details of the surveys used are reported in Table 2. The observational estimates adopt a power-law model for the correlation function of the form \(\xi(r)=(r/r_{0})^{-\gamma}\), and fit for both the \(r_{0}\) and the \(\gamma\) parameters. We select T-RECS sources with a mass limit of \(\log(M_{\rm HI}/M_{\odot})=7.5\) and a redshift limit of \(z=0.05\), in an attempt to be comparable with the observational estimates. However, the observed samples have a complex selection function and they are neither flux- nor mass-limited, which makes the comparison of the results more complicated. In order to get good statistics for the computation of \(\xi(r)\), we use the maximum possible area for a T-RECS clustered simulation, of 25 deg\({}^{2}\). Observational estimates were obtained within larger surveys, of \(\sim\)400 deg\({}^{2}\) for ALFALFA and 29,000 deg\({}^{2}\) for HIPASS. To produce the T-RECS correlation functions we use again the estimator in eq. (15); we checked that using the Landy & Szalay (1993) estimator as in Passmoor et al. (2011), Martin et al. (2012) and Papastergis et al. (2013) gives comparable results. For the random catalogue we use the unclustered T-RECS catalogue, which therefore has a plausible redshift distribution and random angular coordinates within the simulated sky area. The clustered catalogue contains the same sources, but the angular and redshift coordinates are replaced with those of dark haloes in the same redshift range that have similar mass. The \(\xi(r)\) is computed for 500 realizations. Each realization uses a different generation of both the HI and continuum catalogue (which is also used in the dark mass matching process, as previously explained). In Fig. 10 we show the comparison between T-RECS and the observational estimates (uncertainties in the latter were not included to improve figure clarity). T-RECS results reproduce the slope of the observations and are lower in amplitude; the discrepancy is 1.5–2.5\(\sigma\) depending on the dataset. As previously mentioned, this comparison is made more difficult by the different source selection. T-RECS sources are complete down to \(M_{\rm HI}=10^{7.5}\,M_{\odot}\).
This is the minimum mass probed by the observational samples (specifically, Papastergis et al., 2013), which could in part explain the lower normalization. The HI catalogue would also most likely inherit the same issue as the continuum catalogue, which has to do with the availability of high-mass haloes in the cosmological simulation. Figure 10: Real-space correlation function \(\xi(r)\) yielded by our simulation (symbols with error bars) compared to the observational results in Table 2 (coloured lines). ## 6 Conclusions This work updated the T-RECS radio continuum model presented in Bonaldi et al. (2019) to improve the agreement with the most recent observations, and extended it to include HI emission from extra-galactic sources as well as the cross-correlation between the two. The HI model reproduces the Jones et al. (2018) HI mass function and the Oman (2022) HI velocity width function at \(z=0\). Evolution of the HI mass function has been modelled as a linear dependence of the \(\log(M_{\rm HI}^{*})\) and \(\log\phi_{\star}\) parameters with redshift, with amplitudes calibrated to give results consistent with Paul et al. (2023) at \(z=0.3\). Uncertainties in this modelling are due to a lack of high-redshift data and currently limit our HI simulation to a maximum redshift of 0.5. The HI model further includes source size (modelled with the \(\log M_{\rm HI}\)-\(\log D_{\rm HI}\) relation by Naluminsa et al., 2021) and morphology (galaxy type, inclination and ellipticity, using Holmberg, 1958 and Rodriguez & Padilla, 2013). The updated radio continuum model makes use of a steepening synchrotron spectral index and the Smith et al. (2021) \(L\)(SFR)-\(M_{\rm star}\) relation. The quality of the model has been assessed by comparing T-RECS realizations with observed luminosity functions at 1.4 GHz as well as the most recent compilations of number counts in the 150 MHz–15 GHz range. We also modelled an HI mass proxy for the continuum sources, \(\tilde{M}_{\rm HI}\), by using the SFR-\(M_{\rm HI}\) correlation for star-forming galaxies and published relations between the HI mass and the stellar mass for the AGN. We showed that, by matching the HI mass proxy in the continuum catalogue with the HI mass in the HI catalogue, we are able to assign plausible counterparts between the otherwise independent T-RECS continuum and HI simulations. Our method ensures a correct number of counterparts by applying an HI selection function to the radio continuum sources consistent with that of the HI catalogue, within an error \(\sigma_{\log M}\sim 0.65\). It also propagates modelled correlations between the radio continuum and HI properties via their link to \(M_{\rm HI}\). The \(M_{\rm h}\)-\(M_{\rm HI}\) relation for T-RECS galaxies is in good qualitative agreement with the HI-halo mass relation from data (Guo et al., 2020; Rhee et al., 2023) and theoretical modelling (Chauhan et al., 2021). Clustering properties were simulated by associating the galaxies with haloes of the P-millennium (Baugh et al., 2019) simulation of similar dark halo mass. We assessed the clustering properties of both the HI and the continuum catalogue and validated them against the empirical estimates of \(w(\theta)\) (Magliocchetti et al., 2017; Bonato et al., 2021; Chakraborty et al., 2020) and \(\xi(r)\) (Passmoor et al., 2011; Martin et al., 2012; Papastergis et al., 2013). The T-RECS simulation correctly reproduces the slope of both correlation functions. The normalization is in both cases lower (by 1.5–2.5\(\sigma\)).
This could be explained at least in part by the difficulty in reproducing the selection functions of the observational estimates; issues with the availability of high-mass haloes in the Baugh et al. (2019) dark matter simulation, from which we derived the clustering, could also be responsible. The code used to produce these results is available on GitHub ([https://github.com/abonaldi/TRECS.git](https://github.com/abonaldi/TRECS.git)) and can be used to produce mock radio continuum and HI observations, as well as their cross-correlation. We believe these can be very useful to plan future surveys. This code has been used as the basis of the second Science Data Challenge (SDC2, Hartley et al., 2023) organised by the SKA Observatory to prepare the science community to analyse SKA HI data. ## Acknowledgements We thank M. Massardi, V. Galluzzi, A. Lapi, I. Prandoni for useful suggestions on how to update the T-RECS continuum model. We thank S. Blyth for her useful comments on the paper. We also thank the anonymous referee for their review, which has led to a substantial improvement of this work. We acknowledge the use of computational resources at the SKA Observatory. TR is supported by the PRIN MIUR 2017 prot. 20173ML3WW, 'Opening the ALMA window on the cosmic evolution of gas, stars and supermassive black holes', and by the Fondazione ICSC - Spoke 3 Astrophysics and Cosmos Observations - National Recovery and Resilience Plan Project ID CN-0000013 'Italian Research Center on High-Performance Computing, Big Data and Quantum Computing' - Next Generation EU. We acknowledge usage of the Fortran (Backus & Heising, 1964) and Python (Van Rossum et al., 2007) programming languages, Astropy (The Astropy Collaboration et al., 2013), NumPy (Harris et al., 2020), Scikit-learn (Pedregosa et al., 2011), LAPACK (Anderson et al., 1999), GSL (Galassi et al., 1996), HEALPix (Gorski et al., 2005), CFitsIO (Pence, 2010). ## Data Availability The data underlying this article are available at the link [https://tinyurl.com/TRECS22](https://tinyurl.com/TRECS22). The code used to generate the data is available on GitHub ([https://github.com/abonaldi/TRECS.git](https://github.com/abonaldi/TRECS.git)) Footnote 2: The full link is [https://www.dropbox.com/sh/52pbsscr8pkrdn7/AAA-RPe6QcTeQcqEvNpSWkoa?dl=0](https://www.dropbox.com/sh/52pbsscr8pkrdn7/AAA-RPe6QcTeQcqEvNpSWkoa?dl=0)
2305.06703
Neural Fine-Gray: Monotonic neural networks for competing risks
Time-to-event modelling, known as survival analysis, differs from standard regression as it addresses censoring in patients who do not experience the event of interest. Despite competitive performances in tackling this problem, machine learning methods often ignore other competing risks that preclude the event of interest. This practice biases the survival estimation. Extensions to address this challenge often rely on parametric assumptions or numerical estimations leading to sub-optimal survival approximations. This paper leverages constrained monotonic neural networks to model each competing survival distribution. This modelling choice ensures the exact likelihood maximisation at a reduced computational cost by using automatic differentiation. The effectiveness of the solution is demonstrated on one synthetic and three medical datasets. Finally, we discuss the implications of considering competing risks when developing risk scores for medical practice.
Vincent Jeanselme, Chang Ho Yoon, Brian Tom, Jessica Barrett
2023-05-11T10:27:59Z
http://arxiv.org/abs/2305.06703v1
# Neural Fine-Gray: Monotonic neural networks for competing risks ###### Abstract Time-to-event modelling, known as survival analysis, differs from standard regression as it addresses _censoring_ in patients who do not experience the event of interest. Despite competitive performances in tackling this problem, machine learning methods often ignore other _competing risks_ that preclude the event of interest. This practice biases the survival estimation. Extensions to address this challenge often rely on parametric assumptions or numerical estimations leading to sub-optimal survival approximations. This paper leverages constrained monotonic neural networks to model each competing survival distribution. This modelling choice ensures the exact likelihood maximisation at a reduced computational cost by using automatic differentiation. The effectiveness of the solution is demonstrated on one synthetic and three medical datasets. Finally, we discuss the implications of considering competing risks when developing risk scores for medical practice. + Footnote †: 5. [https://github.com/Jeanselme/NeuralFineGray](https://github.com/Jeanselme/NeuralFineGray) Data and Code Availability Experiments are performed on publicly available datasets: Primary Biliary Cholangitis1 (Therneau et al., 2000), Framingham2 (Kannel and McGee, 1979), Synthetic3 (Lee et al., 2018), and the Surveillance, Epidemiology, and End Results Program4. The code to reproduce the proposed model and the presented results is available on GitHub5. Footnote 1: Available in the R survival package. Footnote 2: Available in the R riskCommunicator package. Footnote 3: Available at [https://github.com/chl8856/DeepHit](https://github.com/chl8856/DeepHit) Footnote 4: Available at [https://seer.cancer.gov/](https://seer.cancer.gov/) Institutional Review Board (IRB) This research does not require IRB approval as it relies on publicly available datasets from previous studies. ## 1 Introduction ### Motivation Survival analysis involves modelling the time to an event of interest, which plays a critical role in medicine to understand disease manifestation, treatment outcomes, and the influence of different risk factors on patient health (Selvin, 2008). This analysis differs from standard regression settings as patients may not experience the outcome of interest over the study period. These _censored_ patients inform the regression as they participate in the study event-free until exiting it. Multiple approaches have been proposed to take advantage of these patients by maximising the likelihood of the observed data. Often, in medical data, patients may experience events, known as _competing risks_, that preclude the observation of the event of interest. For instance, in modelling the time to cardiac events, patients who die from another condition during the observation period exit the study because of a competing risk. Competing risks remain overlooked despite their prevalence in medicine (Koller et al., 2012; Austin et al., 2016). In particular, practitioners frequently treat competing risks as censoring (Austin and Fine, 2017). This practice breaks the common assumption of non-informative censoring, i.e., censored patients must leave the study for reasons independent of the outcome of interest. Treating competing risks as censoring, therefore, results in misestimating the risk of the event of interest (Fisher and Kanarek, 1974; Leung et al., 1997).
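The bias just described is easy to reproduce numerically. The following sketch, on synthetic data with two exponential competing risks, contrasts the naive approach (one minus the Kaplan-Meier estimate, treating the competing event as censoring) with the Aalen-Johansen estimator of the cumulative incidence: the naive curve converges to \(1-e^{-t}\) while the true cumulative incidence of event 1 is \(0.5(1-e^{-2t})\). This is an illustration of the phenomenon, not code from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50000
t1 = rng.exponential(1.0, n)               # latent time to event of interest
t2 = rng.exponential(1.0, n)               # latent time to competing event
c = rng.exponential(3.0, n)                # independent censoring time
t = np.minimum.reduce([t1, t2, c])
d = np.where(c <= np.minimum(t1, t2), 0, np.where(t1 < t2, 1, 2))

order = np.argsort(t)
ts, ds = t[order], d[order]
at_risk = n - np.arange(n)                 # number still at risk at each time

# Naive: treat the competing event (d == 2) as censoring and use 1 - KM.
naive_cif1 = 1.0 - np.cumprod(np.where(ds == 1, 1.0 - 1.0 / at_risk, 1.0))

# Aalen-Johansen: CIF_1(t) = sum over event-1 times of S(t-) * dN_1 / Y,
# where S is the all-cause Kaplan-Meier survival (left-continuous).
km_all = np.cumprod(np.where(ds > 0, 1.0 - 1.0 / at_risk, 1.0))
s_minus = np.concatenate([[1.0], km_all[:-1]])
aj_cif1 = np.cumsum((ds == 1) * s_minus / at_risk)

horizon = ts <= 2.0
print(naive_cif1[horizon][-1], aj_cif1[horizon][-1])  # naive > Aalen-Johansen
```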
To better tackle the problem of competing risks, one can explicitly model them through the marginal probability of observing each risk, known as the Cumulative Incidence Function (CIF). Estimation of these functions often relies on proportional hazards, parametric assumptions, or numerical integration, potentially resulting in the optimisation of a sub-optimal target misrepresenting the true underlying survival distribution. ### Contribution This work introduces a novel machine learning model to tackle the problem of competing risks. This approach generalises Rindt et al. (2022) to competing risks, leveraging monotonic neural networks to model cumulative incidence functions. The proposed method tackles the limitations of existing strategies by an exact computation of the likelihood at a lower computational cost. First, we explore the existing literature before introducing our proposed model in detail. Subsequently, we demonstrate the advantages and limitations of our approach as applied to one synthetic and three real-world medical datasets. Finally, we further investigate the Framingham dataset to underline the importance of considering competing risks in cardiovascular disease risk estimation. ## 2 Related work This section summarises the recent progress in machine learning for survival analysis. ### Time-to-event modelling Survival analysis is an active field of research in the statistical community (Kartsonaki, 2016). Non-parametric (Ishwaran et al., 2008) and parametric (Cox, 2008; Royston, 2001; Cox et al., 2007) models have been introduced to model survival outcomes. Despite these multiple alternatives and considerable proposed extensions, the original Cox proportional-hazards model (Cox, 1972) remains widely used in the medical literature (Stensrud and Hernan, 2020). This semi-parametric approach estimates the impact of covariates on the instantaneous risk of observing an event, i.e., the hazard. The model assumes the hazard to take the form of the product of a non-parametric baseline shared across the population and a parametric covariate effect. This assumption is known as proportional hazards and renders the model optimisation for covariate-effect estimation tractable. The machine learning community has extended the Cox model to unknown parametric forms of the covariate effect. Specifically, DeepSurv (Katzman et al., 2018) replaces this parametric component with a neural network. However, this model still assumes proportional hazards, which may not hold in real-world medical settings (Stensrud and Hernan, 2020). To relax this assumption, Deep Cox Mixtures (Nagpal et al., 2021) identifies subgroups using independent Cox models. Each subgroup is characterised by its own non-parametric baseline and covariate effect. At the intersection between this mixture approach and parametric models, Nagpal et al. (2021) model each subgroup with a Weibull distribution parameterised by neural networks to allow end-to-end training. Jeanselme et al. (2022) abandon the parametric and proportional-hazards assumptions, learning unconstrained distributions through monotonic networks. With a focus on predictive performance, DeepHit (Lee et al., 2018) approaches survival as a classification problem in which the prediction horizon is discretised. The associated task is to predict the interval in which a patient experiences the event. The model's training procedure consists of a likelihood and a ranking penalty which favours temporally coherent predictions.
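To make the discretisation concrete, a minimal PyTorch sketch of a DeepHit-style discrete-time likelihood for a single risk is shown below; the ranking penalty and the network producing the logits are omitted, and all names are illustrative rather than taken from the DeepHit code.

```python
import torch

def discrete_time_nll(logits, bin_idx, event):
    """Negative log-likelihood for discretised survival: `logits` has one
    column per time bin, `bin_idx` is the bin of the event (or of censoring)
    and `event` is 1 for observed events, 0 for censored patients."""
    p = torch.softmax(logits, dim=1)                 # P(event in bin j)
    cdf = torch.cumsum(p, dim=1)                     # P(event by end of bin j)
    p_k = p.gather(1, bin_idx.view(-1, 1)).squeeze(1)
    surv_k = 1.0 - cdf.gather(1, bin_idx.view(-1, 1)).squeeze(1)
    ll = torch.where(event.bool(), torch.log(p_k + 1e-8), torch.log(surv_k + 1e-8))
    return -ll.mean()

# Toy usage with random logits for 4 patients and 10 time bins.
logits = torch.randn(4, 10)
print(discrete_time_nll(logits, torch.tensor([2, 5, 7, 3]), torch.tensor([1, 0, 1, 0])))
```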
Extrapolation of this model to infinite time discretisation resembles an ordinary differential equation (ODE), as proposed in Danks and Yau (2022). The models above approximate the underlying survival likelihood either through parametric assumptions, discretisation or numerical integration. Recently, Rindt et al. (2022) proposed to overcome this challenge of likelihood estimation by deploying a constrained neural network with a monotonically increasing outcome to obtain the survival function and, therefore, the exact likelihood. In addition to showing improved performance, the authors demonstrate that one should prefer likelihood optimisation over discriminative performance, as the optimal likelihood is obtained for the true underlying survival distribution, i.e., the likelihood is a proper scoring rule. Our study is a generalisation of this work to competing risks, harnessing monotonic neural networks to directly model CIFs.

### Modelling competing risks

Using the aforementioned models without consideration of competing risks would lead to a mis-estimation of the risk associated with the event of interest (Schuster et al., 2020). To tackle this issue, one can independently estimate each competing-risk-specific model and combine them to estimate the risk associated with a specific outcome given the non-observation of the other risks, as formulated in the cause-specific Cox model (Prentice et al., 1978). This independent estimation describes how covariates impact each event risk (Austin and Fine, 2017) but may misrepresent the relative effect of these covariates on outcomes (Austin et al., 2016) and lead to sub-optimal predictive performance. Alternatively, Fine and Gray (1999) propose to model the sub-hazards, i.e., the probability of observing a given event at time \(t\) if the patient has not experienced this event until \(t\), under an assumption of proportionality analogous to the one made in the Cox proportional-hazards model. While providing insights into the link between covariates and risk particularly suitable for prediction (Austin and Fine, 2017), this model suffers from two shortcomings: (i) the proportionality assumption impairs its real-world applicability; (ii) this approach can result in an ill-defined survival function (Austin et al., 2021).

Machine learning approaches have been extended to jointly model competing risks. DeepHit's time-discretisation results in a straightforward extension in which the output dimension is multiplied by the number of risks (Lee et al., 2018). Similarly, hierarchical discretisation (Tjandra et al., 2021) has been proposed. As parametric distributional assumptions result in a closed-form likelihood, Nagpal et al. (2021) propose to extend their mixture of Weibull distributions, and Bellot and van der Schaar (2018) introduce a Bayesian mixture of Generalised Gamma distributions to tackle competing risks. Under more complex non-parametric likelihoods, numerical integration (Danks and Yau, 2022; Aastha and Liu, 2020) and pseudo-value approximations (Rahman et al., 2021) have been proposed. Finally, non-likelihood-based approaches have been introduced, such as boosted trees (Bellot and van der Schaar, 2018) or survival trees (Schmid and Berger, 2021). However, these methods are optimised towards a Brier-score-like loss.
While survival analysis has received considerable attention in the machine learning community, the problem of competing risks is less well studied (Wang et al., 2019) and even less applied (Monterrubio-Gomez et al., 2022), despite being central to medical applications. The existing methodologies to tackle competing risks rely on parametric assumptions, likelihood approximation, or optimise for a score that may misrepresent the true underlying survival distribution. This paper offers a novel competing risk model relying on constrained networks to obtain CIFs as a derivative instead of an integral. This approach results in the exact maximisation of the likelihood by leveraging automatic differentiation.

## 3 Proposed approach

This section formalises the problem of survival analysis and introduces the proposed model.

### Notation

We model a population of the form \(\{x_{i},t_{i},d_{i}\}_{i}\) with \(x_{i}\) the covariates for patient \(i\), \(t_{i}\in\mathbb{R}^{+}\) the time of end of follow-up and \(d_{i}\in\llbracket 0,R\rrbracket\) its associated cause. If \(d_{i}\in\llbracket 1,R\rrbracket\), the patient left the study due to one of the \(R\) considered risks. Otherwise, the patient is right-censored, i.e., the patient left the study for an _unrelated_ reason before any of the events of interest were observed. In this work, we focus on right-censoring, but the model can easily be extended to left-censoring. Note that we assume that experiencing one event precludes the observation of any other.

### Survival quantities

Single risk. In settings with no competing risk, i.e., \(R=1\), one aims to estimate the _survival function_ \(S\), the probability of not observing the event of interest before time \(t\), i.e.:

\[S(t|x):=\mathbb{P}(T\geq t|x)\]

Equivalently, one aims to estimate the _cumulative hazard function_ \(\Lambda(t|x)\), related to \(S\) as follows:

\[S(t|x)=\exp\left[-\Lambda(t|x)\right]=\exp\left[-\int_{0}^{t}\lambda(u|x)du\right]\]

where \(\lambda(t|x)=\lim_{\delta t\to 0}\frac{\mathbb{P}(t<T<t+\delta t\,|\,T\geq t,\,x)}{\delta t}\) is the instantaneous hazard of observing the event of interest, assuming no previous event(s). Estimating this quantity may rely on maximising the likelihood of the observed data. The assumption of non-informative censoring, i.e., event and censoring times are independent given the covariates, is necessary to express the likelihood. Specifically, each patient \(i\) with an observed event contributes to the likelihood the probability of experiencing the event at \(t_{i}\) without previous events, i.e., \(\lambda(t_{i}|x_{i})S(t_{i}|x_{i})\). The likelihood associated with each censored patient is the probability of not experiencing the event until \(t_{i}\), i.e., \(S(t_{i}|x_{i})\). This results in the following log-likelihood:

\[l=\sum_{i,d_{i}\neq 0}\log\lambda(t_{i}|x_{i})-\sum_{i}\Lambda(t_{i}|x_{i}) \tag{1}\]
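To make Equation (1) concrete, the following minimal PyTorch sketch parameterises \(\Lambda(t|x)\) with a positive monotonic network and recovers \(\lambda\) exactly through automatic differentiation. The class and layer names are illustrative assumptions, not the released implementation, and the sketch does not enforce \(\Lambda(0|x)=0\).

```python
import torch
import torch.nn as nn

class CumulativeHazard(nn.Module):
    """Hypothetical monotonic parameterisation of Lambda(t|x): squaring the
    weights applied to t keeps the output non-decreasing in t."""
    def __init__(self, x_dim, hidden=25):
        super().__init__()
        self.wx = nn.Linear(x_dim, hidden)
        self.wt = nn.Parameter(torch.randn(hidden, 1) * 0.1)   # weights on time
        self.out = nn.Parameter(torch.randn(1, hidden) * 0.1)

    def forward(self, x, t):
        h = torch.tanh(self.wx(x) + t @ self.wt.square().T)
        return nn.functional.softplus(h @ self.out.square().T).squeeze(-1)

def single_risk_nll(model, x, t, observed):
    """Exact negative log-likelihood of Eq. (1): observed events contribute
    log lambda(t|x) - Lambda(t|x); censored patients contribute -Lambda(t|x)."""
    t = t.clone().requires_grad_(True)
    cum_hazard = model(x, t.unsqueeze(-1))
    # lambda = dLambda/dt, computed exactly in one backward pass.
    hazard, = torch.autograd.grad(cum_hazard.sum(), t, create_graph=True)
    log_lik = torch.where(observed, torch.log(hazard + 1e-8),
                          torch.zeros_like(hazard))
    return -(log_lik - cum_hazard).mean()

# Toy usage with random data; `observed` is a boolean event indicator.
model = CumulativeHazard(x_dim=10)
x, t = torch.randn(64, 10), torch.rand(64)
loss = single_risk_nll(model, x, t, observed=torch.rand(64) < 0.7)
loss.backward()
```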
Competing risks. In the context of competing risks, \(R>1\), a patient may leave a study for reasons correlated with the event of interest. Practitioners often consider these events as censoring and rely on single-risk models. However, this practice breaks the common assumption of non-informative censoring and results in misestimation of the survival function.

When other events may be observed, \(S(t|x)\) is defined as the probability of observing none of the competing risks before time \(t\), i.e.:

\[S(t|x)=1-\sum_{r\in\llbracket 1,R\rrbracket}F_{r}(t|x)\]

where \(F_{r}\), the _Cumulative Incidence Function_ (CIF) for the event \(r\), denotes the probability of observing the event \(r\) before time \(t\) without prior occurrence of any competing event(s), i.e.:

\[F_{r}(t|x)=\mathbb{P}(T<t,\text{risk}=r|x) \tag{2}\]

with \(T\), the random variable denoting the time of observation of any event. Note that the CIF can be expressed as the integral over \([0,t]\) of the probability of observing the event \(r\) in an infinitesimal interval around \(u\), given that no event was observed until \(u\):

\[F_{r}(t|x)=\int_{0}^{t}\lambda_{r}(u|x)\,e^{-\int_{0}^{u}\sum_{r^{\prime}}\lambda_{r^{\prime}}(s|x)\,ds}\,du=\int_{0}^{t}\lambda_{r}(u|x)\,S(u|x)\,du \tag{3}\]

with \(\lambda_{r}(t|x)=\lim_{\delta t\to 0}\frac{\mathbb{P}(t<T<t+\delta t,\,\text{risk}=r\,|\,T\geq t,\,x)}{\delta t}\), the cause-specific hazard, i.e., the instantaneous risk of observing the event \(r\), with no other previous event.

A final quantity of interest is the cause-specific survival \(S_{r}(t|x)\) that expresses the probability of not observing a given outcome \(r\) by time \(t\), i.e.,

\[S_{r}(t|x)=\mathbb{P}((T\geq t)\;\cup\;(T<t,\text{risk}\neq r)|x)=1-F_{r}(t|x)\]

Similar to the single-risk setting, we maximise the likelihood to estimate \(F_{r}\). Importantly, we assume non-informative censoring _once controlled_ on all identified competing risks. While this assumption is more likely to hold once all competing risks are accounted for, practitioners suspecting its implausibility should perform sensitivity analyses for this assumption (Jackson et al., 2014). Under this assumption, the likelihood can be expressed analogously to (1): patients with an observed event contribute to the likelihood as the probability of observing the event \(d_{i}\) at \(t_{i}\) without observing any events until \(t_{i}\), i.e., \(\lambda_{r}(t_{i}|x_{i})S(t_{i}|x_{i})\). This quantity is the partial derivative of \(F_{r}\) with respect to \(t\) evaluated at \(t_{i}\). Remaining censored patients influence the likelihood as the probability of observing no event until \(t_{i}\), i.e., \(S(t_{i}|x_{i})\). The competing risks log-likelihood can, therefore, be expressed as:

\[l=\sum_{r\in\llbracket 1,R\rrbracket}\sum_{i,d_{i}=r}\log\frac{\partial F_{r}(u|x_{i})}{\partial u}\bigg{|}_{u=t_{i}}+\sum_{i,d_{i}=0}\log\Big[1-\sum_{r}F_{r}(t_{i}|x_{i})\Big] \tag{4}\]

One may extend existing models to the competing risks setting by performing the integration in (3). For instance, the cause-specific Cox model (Prentice et al., 1978) consists of Cox models independently trained on each risk, i.e., treating all other outcomes as censored. Then one evaluates the CIF through (3) using the estimated hazards. However, this staged modelling does not jointly consider the outcomes and may misestimate the covariate effects (Van Der Pas et al., 2018). Fine-Gray (Fine and Gray, 1999) overcomes this issue by directly modelling the sub-distribution hazards \(h_{r}(t|x)=\lim_{\delta t\to 0}\frac{\mathbb{P}(t<T<t+\delta t,\,\text{risk}=r\,|\,(T\geq t)\cup(T<t\,\cap\,\text{risk}\neq r),\,x)}{\delta t}\), relying on a proportionality assumption on these quantities. Likewise, one can extend machine learning architectures to enable the integration of the CIF and maximise the associated likelihood in (4). However, in the absence of a closed-form expression, this would necessitate numerical integration.
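The log-likelihood in Equation (4) can itself be computed exactly by differentiating the CIFs. A minimal sketch follows, assuming a differentiable `cif_model(x, t)` that returns the \(R\) CIFs evaluated at each patient's time (shape \(N\times R\)); `cause` is \(0\) for censoring and \(1,\dots,R\) otherwise.

```python
import torch

def competing_risks_nll(cif_model, x, t, cause):
    """Exact negative log-likelihood of Eq. (4)."""
    t = t.clone().requires_grad_(True)
    cifs = cif_model(x, t)                        # (N, R)
    survival = 1.0 - cifs.sum(dim=1)              # S(t|x) = 1 - sum_r F_r(t|x)
    log_lik = torch.zeros_like(t)
    for r in range(cifs.shape[1]):
        # dF_r/dt at each t_i, obtained exactly in one backward pass.
        dF_r, = torch.autograd.grad(cifs[:, r].sum(), t,
                                    create_graph=True, retain_graph=True)
        log_lik = torch.where(cause == r + 1, torch.log(dF_r + 1e-8), log_lik)
    log_lik = torch.where(cause == 0, torch.log(survival + 1e-8), log_lik)
    return -log_lik.mean()
```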
Such a numerical approximation may impact performance, with added computational costs for training and predictions. Integration is computationally expensive, whereas derivation can be computed exactly in one backward pass by automatic differentiation - available in most machine learning libraries. Therefore, our approach reduces the computational cost of the likelihood estimation by modelling \(F_{r}\) and differentiating it to obtain \(\lambda_{r}S\), resulting in the exact computation of all the previously described quantities of interest.

### Architecture

Neural Fine-Gray, illustrated in Figure 1, aims to model \([F_{r}]_{r\in\llbracket 1,R\rrbracket}\) without relying on numerical integration to tackle the problem of competing risks.

Figure 1: Neural Survival Analysis Architecture. _\(E\) embeds the covariate(s) \(x\), which are then inputted in the monotonic networks \(M\) and balancing network \(B\) to estimate the CIFs._

We decompose \(F_{r}\) as:

\[F_{r}(t|x)=\mathbb{P}(\text{risk}=r|x)\cdot\mathbb{P}(T\leq t|\text{risk}=r,x)=B(E(x))_{r}\cdot[1-\exp(-t\times M_{r}(t,E(x)))]\]

Embedding network (\(E\)). A first multi-layer perceptron \(E\) with inter-layer dropout extracts an embedding \(\tilde{x}\) from the covariates \(x\).

Sub-distribution networks (\([M_{r}]_{r\in\llbracket 1,R\rrbracket}\)). The embedding \(\tilde{x}\) is inputted in \(R\) positive monotonic networks \([M_{r}]_{r\in\llbracket 1,R\rrbracket}\), each representing a lifetime distribution conditioned on one risk \(r\), through the relation \(1-\exp(-t\times M_{r}(t,\tilde{x}))=\mathbb{P}(T\leq t|x,\text{risk}=r)\). A _positive monotonic neural network_ is a network constrained to have its outcome monotonic and positive given its input (see Daniels and Velikova (2010) for a theoretical analysis and Lang (2005) for a proof of universal approximation). Enforcing these constraints may rely on different transformations of the neural networks' weights (Omi et al., 2019; Rindt et al., 2022; Chilinski and Silva, 2020). In our work, we enforce all the neural networks' weights to be positive through a square function and use a final _SoftPlus_ layer to fulfil these constraints. Enforcing positive weights ensures that the outcome increases with the time dimension \(t\). Additionally, enforcing a smooth function ensures a low computational cost and stable optimisation. Note that for model flexibility, we used \(R\) monotonic networks. We explore in Appendix B how using one network with \(R\) outcomes would impact performance.

Balancing network (\(B\)). A multi-layer perceptron \(B\) with a final _SoftMax_ layer leverages \(\tilde{x}\) to balance the probability of observing each risk \(B(\tilde{x}):=[\mathbb{P}(\text{risk}=r|x)]_{r}\). This weighting ensures that the survival function is correctly specified, i.e., \(\sum_{r\in\llbracket 1,R\rrbracket}F_{r}(t|x)\leq 1\).

The proposed approach directly models \(F_{r}\) by multiplying the outputs of the distribution and balancing networks. Automatic differentiation of the model's output results in the derivative \(\frac{\partial F_{r}(u|x_{i})}{\partial u}\big{|}_{u=t_{i}}\). The model can then be trained end-to-end by maximising the _exact_ log-likelihood proposed in Equation (4). By jointly modelling the competing risks, this proposed model is reminiscent of the Fine-Gray approach. The following equation exhibits the link between sub-distribution hazards and CIFs, i.e., between the standard and neural Fine-Gray models:

\[h_{r}(t|x)=\frac{1}{1-F_{r}(t|x)}\cdot\frac{\partial F_{r}(u|x)}{\partial u}\bigg{|}_{u=t}\]
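A self-contained sketch of this architecture is given below. The layer sizes are illustrative, and the class names (`MonotonicNet`, `NeuralFineGraySketch`) are assumptions of ours rather than the released code; the constraints (squared weights, _Tanh_, final _SoftPlus_, _SoftMax_ balancing) follow the description above.

```python
import torch
import torch.nn as nn

class MonotonicNet(nn.Module):
    """Positive monotonic network M_r: squared weights with Tanh keep the
    output increasing in t, and SoftPlus keeps it positive."""
    def __init__(self, embed_dim, hidden=50):
        super().__init__()
        self.w1 = nn.Parameter(torch.randn(embed_dim + 1, hidden) * 0.1)
        self.b1 = nn.Parameter(torch.zeros(hidden))
        self.w2 = nn.Parameter(torch.randn(hidden, 1) * 0.1)

    def forward(self, t, x_embed):
        inp = torch.cat([t, x_embed], dim=1)
        h = torch.tanh(inp @ self.w1.square() + self.b1)
        return nn.functional.softplus(h @ self.w2.square()).squeeze(-1)

class NeuralFineGraySketch(nn.Module):
    def __init__(self, x_dim, n_risks, embed_dim=25):
        super().__init__()
        self.embed = nn.Sequential(nn.Linear(x_dim, embed_dim), nn.Tanh())  # E
        self.balance = nn.Sequential(nn.Linear(embed_dim, n_risks),
                                     nn.Softmax(dim=1))                     # B
        self.risks = nn.ModuleList(MonotonicNet(embed_dim)
                                   for _ in range(n_risks))                 # [M_r]

    def forward(self, x, t):
        x_tilde = self.embed(x)
        weights = self.balance(x_tilde)           # [P(risk = r | x)]_r
        # F_r(t|x) = B_r(x~) * (1 - exp(-t * M_r(t, x~)))
        cifs = [w * (1 - torch.exp(-t.squeeze(-1) * m(t, x_tilde)))
                for w, m in zip(weights.T, self.risks)]
        return torch.stack(cifs, dim=1)           # (N, R), row sums <= 1

model = NeuralFineGraySketch(x_dim=10, n_risks=2)
print(model(torch.randn(64, 10), torch.rand(64, 1)).shape)  # (64, 2)
```

Because the _SoftMax_ weights sum to one and each conditional term lies in \([0,1)\), the stacked CIFs sum to less than one, so the implied survival function stays well defined by construction.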
**Remark 1**: _Shchur et al. (2020) raise a limitation of monotonic neural networks that may attribute non-null density to negative times, i.e., \(F_{r}(t=0|x)\neq 0\). In contrast to Omi et al. (2019); Rindt et al. (2022), we model \(\mathbb{P}(T\leq t|\text{risk}=r,x)\) as \(1-\exp(-t\times M_{r}(t,\tilde{x}))\) instead of \(M_{r}(t,\tilde{x})\) to address this issue._

**Remark 2**: _The proposed methodology is a generalisation of the survival model SumoNet (Rindt et al., 2022) that estimates \(S\) in the single-risk setting. If \(R=1\), then \(F_{r}=1-S\) and \(B_{r}=1\). In this context, the proposed approach results in Sumo-Net. Moreover, the architecture resembles the one proposed in DeSurv (Danks and Yau, 2022) while avoiding numerical integration._

### Computational complexity

Our modelling choices result in the exact computation of the likelihood. However, the other methodologies relying on integral approximation and outcome discretisation converge towards \(F_{r}\) in the upper limit, i.e., when increasing the number of point estimates, or using a finer discretisation. One may therefore question the advantage of the proposed methodology. In this section, we compare the complexity of estimating the CIF and likelihood for DeSurv (Danks and Yau, 2022), the closest method to our proposed model, and NeuralFG.

DeSurv (Danks and Yau, 2022). This approach models \(F_{r}(t|x)\) as \(\text{Tanh}(v(x,t))\) with \(v\) being the solution to the ODE defined as \(\frac{\partial v(x,u)}{\partial u}\big{|}_{u=t}=g(x,t)\) and \(v(x,0)=0\), with \(g\) a neural network. For efficiency, the authors propose a Gauss-Legendre quadrature to solve the ODE and obtain \(v\). This approximation necessitates \(n\) evaluations of \(g\) at defined times \([t_{j}(t)]_{j\in\llbracket 1,n\rrbracket}\) weighted by the associated \([w_{j}]_{j\in\llbracket 1,n\rrbracket}\) (see Press et al. (2007) for a detailed description of Gauss-Legendre quadrature). Each forward pass estimates \(\frac{\partial v(x,u)}{\partial u}\big{|}_{u=t_{j}(t)}\) at the points used to approximate the integral, then

\[\hat{F}_{r}(t|x)=\text{Tanh}\left(\frac{t}{2}\sum_{j\in\llbracket 1,n\rrbracket}w_{j}\,g\Big(x,\frac{t}{2}t_{j}(t)\Big)\right)\]

DeSurv's computational cost. Computation of \(F_{r}\) relies on \(n\) forward passes through the network. Moreover, the estimation of \(\frac{\partial\hat{F}_{r}(u|x_{i})}{\partial u}\big{|}_{u=t_{i}}\) necessary to compute the competing risk likelihood is \(g(x,t_{i})(1-\text{Tanh}(\hat{F}_{r}(t_{i}|x))^{2})\), i.e., \(n+1\) forward passes. The likelihood has a \(\mathcal{O}(nN)\) computational complexity with \(N\) the number of patients in the study.

NeuralFG's computational cost. \(F_{r}\) is estimated in one forward pass and \(\frac{\partial\hat{F}_{r}(u|x_{i})}{\partial u}\big{|}_{u=t_{i}}\) in one backward pass. Assuming the same computational cost for forward and backward passes, the likelihood estimation has a \(\mathcal{O}(2N)\) complexity. Our proposed methodology, therefore, presents more than an \(n/2\) computational gain compared to DeSurv in estimating the likelihood used for training, and an \(n\) gain in inferring \(F_{r}\).
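To make the \(n\)-evaluation cost concrete, the sketch below approximates one CIF with a Gauss-Legendre quadrature; the function `g` is a stand-in for the neural network, and the node rescaling from \([-1,1]\) to \([0,t]\) is one standard convention rather than DeSurv's exact implementation.

```python
import numpy as np

n = 15                                             # quadrature degree
nodes, weights = np.polynomial.legendre.leggauss(n)

def g(x, t):
    # Stand-in for the neural network g(x, t); any positive function works.
    return np.exp(-t) * (1 + 0.1 * x)

def cif_quadrature(x, t):
    """F_r(t|x) ~ Tanh( t/2 * sum_j w_j g(x, t/2 (u_j + 1)) ):
    each CIF evaluation costs n evaluations of g."""
    rescaled = 0.5 * t * (nodes + 1.0)             # map [-1, 1] -> [0, t]
    return np.tanh(0.5 * t * np.sum(weights * g(x, rescaled)))

print(cif_quadrature(x=1.0, t=2.0))
```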
## 4 Experiments

This section introduces the datasets and experimental settings.

### Datasets

We explore the model performance on four datasets with competing risks:

* PBC (Therneau et al., 2000) comprises 25 covariates in 312 patients over a 10-year randomised control trial to measure the impact of D-penicillamine on Primary Biliary Cholangitis (PBC). Death on the waiting list is the primary outcome, with transplant being a competing risk.
* Framingham (Kannel and McGee, 1979) is a cohort study gathering 18 longitudinal measurements on male patients over 20 years. Our analysis focuses on the first observed covariates of 4,434 patients to model cardiovascular disease (CVD) risk. Death from other causes is treated as a competing risk.
* Synthetic (Lee et al., 2018): this dataset consists of 30,000 synthetic patients with 12 covariates following exponential event time distributions, non-linearly dependent on the covariates.
* SEER6: the Surveillance, Epidemiology, and End Results Program gathers covariates and outcomes of patients diagnosed with breast cancer between 1992 and 2017. Following the preprocessing proposed by Lee et al. (2018); Danks and Yau (2022), we select 658,354 patients and 23 covariates describing the patient demographics and disease characteristics at diagnosis. Death from breast cancer (BC) is our primary outcome, with CVD a competing risk.

Footnote 6: [https://seer.cancer.gov/](https://seer.cancer.gov/)

Table 1 summarises the datasets' characteristics with the respective proportion of outcome and censoring.

\begin{table}
\begin{tabular}{c|c c c c c}
Dataset & Observations & Features & Primary & Competing risk & Censored \\ \hline
PBC & 312 & 25 & Death (44.87 \%) & Transplant (9.29 \%) & 45.83 \% \\
Framingham & 4,434 & 18 & CVD (26.09 \%) & Death (17.75 \%) & 56.16 \% \\
Synthetic & 30,000 & 12 & * (25.33 \%) & * (24.67 \%) & 50.00 \% \\
SEER & 658,354 & 23 & BC (16.51 \%) & CVD (5.69 \%) & 77.80 \% \\
\end{tabular}
\end{table} Table 1: Datasets characteristics

### Baseline models

The proposed Neural Fine-Gray (**NeuralFG**) was compared against six strategies. First, we considered the well-established cause-specific Cox model (**CS Cox**, Prentice et al. (1978)) and **Fine-Gray** model (Fine and Gray, 1999) with a linear parametric form for the covariate effect. The cause-specific Cox model models each cause independently using a Cox proportional-hazards model, while Fine-Gray models the sub-hazard functions assuming proportional sub-hazards. Thereafter, we compare state-of-the-art competing risk survival neural networks proposed in the machine learning literature. First, Deep Survival Machines (**DSM**, Nagpal et al. (2021)) consists of a mixture of Weibull distributions parameterised by neural networks. Each point is then assigned to these distributions through an assignment network. Using parametric distributions results in a closed-form likelihood in the competing risks setting. **DeepHit** (Lee et al., 2018) discretises the survival horizon and leverages a multi-head network to associate each patient to the interval corresponding to its observed event time and type. Each head of the network is associated with one cause, as in the proposed NeuralFG. The time-discretisation results in a discrete likelihood further penalised by a C-index-like regularisation for model training. Closer to our work, **DeSurv** (Danks and Yau, 2022) approaches \(F_{r}\) as the solution to an ODE.

### Experimental settings

The analysis relies on 5-fold cross-validation with \(10\%\) of each training set left aside for hyper-parameter tuning. Random search is used on the following grid over \(100\) iterations: learning rate (\(10^{-3}\) or \(10^{-4}\)), batch size (\(100\) or \(250\), except for SEER: \(1,000\) or \(5,000\)), dropout rate (\(0\), \(0.25\), \(0.5\) or \(0.75\)), number of layers (\(\llbracket 1,4\rrbracket\)) and nodes (\(25\) or \(50\)).
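As a concrete illustration, the grid above can be sampled as follows; the helper name and dictionary layout are assumptions of ours, and only the grid values are taken from the text.

```python
import random

grid = {
    "learning_rate": [1e-3, 1e-4],
    "batch_size": [100, 250],        # [1000, 5000] for SEER
    "dropout": [0.0, 0.25, 0.5, 0.75],
    "layers": [1, 2, 3, 4],
    "nodes": [25, 50],
}

def sample_configs(n_iter=100, seed=0):
    """Draw n_iter independent configurations uniformly from the grid."""
    rng = random.Random(seed)
    return [{k: rng.choice(v) for k, v in grid.items()} for _ in range(n_iter)]

print(sample_configs()[0])
```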
All activation functions are fixed to _Tanh_ to ensure a properly defined derivative - note that any \(\mathcal{C}^{1}\) activation would work. All models are optimised using an Adam optimiser (Kingma and Ba, 2015) over \(1,000\) epochs, with an early stopping criterion computed on a \(10\%\) left-aside subset of the training set. Other methods are optimised over the same grid (if applicable). Additionally, we explore both Log-Normal and Weibull distributions for DSM and use \(10,000\) warm-up iterations to estimate the parametric form closest to the average survival, as proposed in the original paper (Nagpal et al., 2021). For DeSurv, we followed the original paper's recommendation of a 15-degree Gauss-Legendre quadrature to estimate the CIFs. In Appendix C.1, we further investigate how increasing the number of point estimates impacts performance. We use a similar approximation for DeepHit with a 15-split time discretisation. Finally, for a fair comparison, we double the number of possible layers for architectures without embedding networks.

\begin{table}
\begin{tabular}{c|c|c c c||c c c}
 & \multirow{2}{*}{Model} & \multicolumn{3}{c||}{C-Index _(Larger is better)_} & \multicolumn{3}{c}{Brier Score _(Smaller is better)_} \\
 & & \(q_{0.25}\) & \(q_{0.50}\) & \(q_{0.75}\) & \(q_{0.25}\) & \(q_{0.50}\) & \(q_{0.75}\) \\ \hline
\multirow{6}{*}{PBC} & **NeuralFG** & 0.810 (0.079) & 0.795 (0.114) & 0.762 (0.123) & 0.099 (0.028) & 0.140 (0.020) & 0.169 (0.050) \\
 & DeepHit & 0.822 (0.099) & 0.844 (0.036) & 0.782 (0.033) & _0.090_ (0.030) & 0.132 (0.013) & 0.180 (0.021) \\
 & DeSurv & 0.821 (0.089) & 0.837 (0.050) & 0.815 (0.068) & **0.088** (0.022) & 0.113 (0.011) & **0.136** (0.047) \\
 & DSM & **0.867** (0.065) & **0.864** (0.037) & **0.828** (0.052) & 0.091 (0.039) & 0.124 (0.015) & 0.161 (0.022) \\
 & Fine-Gray & 0.831 (0.136) & _0.852_ (0.045) & _0.816_ (0.059) & 0.091 (0.042) & _0.103_ (0.009) & 0.150 (0.038) \\
 & CS Cox & _0.833_ (0.125) & 0.851 (0.040) & 0.811 (0.065) & 0.091 (0.038) & **0.102** (0.008) & _0.148_ (0.038) \\ \hline
\multirow{6}{*}{Framingham} & **NeuralFG** & **0.872** (0.024) & **0.812** (0.029) & **0.782** (0.018) & _0.050_ (0.003) & **0.095** (0.010) & **0.128** (0.004) \\
 & DeepHit & 0.855 (0.026) & 0.781 (0.026) & 0.743 (0.014) & 0.053 (0.003) & 0.102 (0.007) & 0.141 (0.002) \\
 & DeSurv & **0.872** (0.027) & _0.807_ (0.031) & 0.775 (0.022) & **0.049** (0.005) & **0.095** (0.009) & _0.129_ (0.003) \\
 & DSM & _0.866_ (0.023) & 0.806 (0.023) & _0.778_ (0.014) & 0.057 (0.005) & 0.104 (0.006) & 0.141 (0.002) \\
 & Fine-Gray & 0.842 (0.025) & 0.794 (0.024) & 0.772 (0.015) & 0.057 (0.006) & 0.099 (0.007) & 0.131 (0.003) \\
 & CS Cox & 0.845 (0.020) & 0.798 (0.022) & 0.774 (0.015) & 0.056 (0.006) & _0.098_ (0.007) & 0.131 (0.003) \\ \hline
\multirow{6}{*}{Synthetic} & **NeuralFG** & _0.791_ (0.013) & _0.754_ (0.013) & **0.715** (0.011) & **0.068** (0.003) & _0.125_ (0.004) & **0.192** (0.005) \\
 & DeepHit & 0.783 (0.012) & 0.747 (0.013) & _0.714_ (0.008) & 0.079 (0.003) & 0.136 (0.002) & _0.212_ (0.003) \\
 & DeSurv & **0.793** (0.013) & **0.756** (0.014) & _0.714_ (0.014) & **0.068** (0.002) & **0.124** (0.004) & **0.192** (0.004) \\
 & DSM & 0.776 (0.013) & 0.742 (0.013) & 0.710 (0.013) & _0.073_ (0.002) & 0.139 (0.002) & 0.220 (0.003) \\
 & Fine-Gray & 0.611 (0.014) & 0.587 (0.007) & 0.568 (0.009) & 0.078 (0.002) & 0.159 (0.003) & 0.241 (0.002) \\
 & CS Cox & 0.609 (0.015) & 0.586 (0.006) & 0.568 (0.009) & 0.078 (0.002) & 0.159 (0.003) & 0.240 (0.002) \\ \hline
\multirow{6}{*}{SEER} & **NeuralFG** & _0.893_ (0.002) & _0.855_ (0.001) & _0.815_ (0.001) & **0.038** (0.000) & **0.069** (0.001) & **0.101** (0.000) \\
 & DeepHit & **0.899** (0.002) & **0.860** (0.001) & **0.818** (0.001) & **0.038** (0.000) & _0.070_ (0.000) & _0.102_ (0.001) \\
 & DeSurv & 0.892 (0.003) & 0.852 (0.002) & 0.813 (0.001) & **0.038** (0.000) & _0.070_ (0.000) & _0.102_ (0.001) \\
 & DSM & 0.884 (0.001) & 0.842 (0.002) & 0.805 (0.002) & _0.039_ (0.000) & 0.076 (0.001) & 0.112 (0.000) \\
 & Fine-Gray & 0.836 (0.003) & 0.786 (0.003) & 0.742 (0.002) & 0.043 (0.001) & 0.081 (0.000) & 0.118 (0.000) \\
 & CS Cox & 0.837 (0.003) & 0.786 (0.003) & 0.742 (0.002) & 0.042 (0.001) & 0.081 (0.000) & – \\
\end{tabular}
\end{table} Table 2: Discrimination and calibration performances on the primary outcome - means (standard deviations) over the 5-fold cross-validation (best in **bold**, second best in _italics_).

### Evaluation metrics

As per current practice in the survival literature, we used the time-dependent Brier score (Graf et al., 1999) to quantify calibration, and the C-index (Antolini et al., 2005) for discrimination at the dataset-specific 0.25, 0.5 and 0.75 quantiles of the uncensored population event times (see Appendix A.1 for data characteristics, A.2 for further description of the metrics and A.4 for the cumulative version of these metrics). Means and standard deviations are computed over the 5 folds of cross-validation.
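For intuition, a simplified sketch of the time-dependent C-index at a fixed horizon follows; it ignores the inverse-probability-of-censoring weights and tie handling of Antolini et al. (2005), and all names are illustrative.

```python
import numpy as np

def c_index_at(risk_at_t, times, events, horizon):
    """Among comparable pairs (i experienced the event before j was censored
    or had an event), count how often the earlier-event patient receives the
    higher predicted risk F_r(horizon | x)."""
    concordant, comparable = 0.0, 0.0
    for i in range(len(times)):
        if events[i] and times[i] <= horizon:      # i experienced the event
            for j in range(len(times)):
                if times[j] > times[i]:            # j still at risk at t_i
                    comparable += 1
                    concordant += float(risk_at_t[i] > risk_at_t[j])
    return concordant / max(comparable, 1.0)

# Toy usage with hypothetical predictions at the median event time.
rng = np.random.default_rng(0)
times = rng.exponential(5, 100)
events = rng.integers(0, 2, 100).astype(bool)
print(c_index_at(rng.random(100), times, events, horizon=np.median(times)))
```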
## 5 Results

Table 2 summarises the calibration and discriminative performance of the analysed models on the primary outcome (see Appendix A.3 for the performances on the competing risk).

### Model's strengths

NeuralFG demonstrates lower or equal Brier scores than other state-of-the-art machine learning models across the majority of datasets and time horizons. While DSM presents good discriminative performances, this edge is not reflected in its calibration. This observation indicates that parametric assumptions may result in estimated survival functions discriminative of the outcome but further from the underlying survival distribution. DeepHit's penalisation results in better C-Index values but hurts model calibration, with misaligned discrimination and calibration throughout the different datasets. Finally, performances are comparable to DeSurv. However, DeSurv's likelihood approximation multiplies its computational cost by the numerical integration complexity (see Appendix C.2 for a comparison of training speed on the Framingham dataset). NeuralFG, therefore, achieves state-of-the-art performance while avoiding computationally-expensive approximations.

### Model's limitations

The proposed methodology has lower performance on the PBC dataset, which notably comprises a limited amount of data. In small-data settings, practitioners should prefer simpler models to avoid overfitting. For instance, the linear Fine-Gray and CS Cox models result in competitive performances on PBC. However, this linearity assumption hurts performance under more complex covariate effects, as in the SEER and Synthetic datasets. Note that leveraging domain expertise could enhance performance through the addition of interactions and the use of alternative models. However, these approaches deviate from the automated discovery of interactions facilitated by neural networks.
Similarly, the parametric assumption of DSM results in the best discrimination in PBC, but it under-performs under more complex survival distributions. Furthermore, the DeSurv model performs better than the proposed methodology on PBC. This may reflect that approximating the likelihood can regularise model training, which is beneficial in the context of small data.

### Modelling vs ignoring competing risks

This last section explores the importance of modelling competing risks in the Framingham dataset. First, we present the performance differences between the proposed model in comparison to the same architecture maximising the cause-specific likelihoods. Then, we explore which subgroups of the population most benefit from this modelling. Finally, we study how guidelines would differ under the proposed NeuralFG and its non-competing alternative.

Why account for competing risks? To measure how modelling competing risks impacts performance, while ensuring the _same number of parameters_, we propose to use the same architecture presented in Section 3.3 whilst maximising the sum of the cause-specific likelihoods, i.e.:

\[l=\sum_{r}\left[\sum_{i,d_{i}=r}\log\lambda_{r}(t_{i}|\tilde{x}_{i})-\sum_{i}\Lambda_{r}(t_{i}|\tilde{x}_{i})\right]\]

Each monotonic network, therefore, models the cumulative hazard function for risk \(r\), \(\Lambda_{r}\), by maximising the likelihood of one cause whilst considering the rest of the population as censored, relying on a shared embedding \(\tilde{x}\). Automatic differentiation outputs \([\lambda_{r}]_{r\in\llbracket 1,R\rrbracket}\).

Table 3 summarises the discrimination and calibration differences between the non-competing survival \(e^{-\Lambda_{r}(t|x)}\) obtained with this model and the previously described NeuralFG cause-specific survival \(1-F_{r}(t|x)\). Note how modelling competing risks significantly improves performance for the primary outcome of interest, CVD, without significant differences for the competing risk. Since patients who die from other causes during the study period do not present the same risk of CVD as patients remaining in the study, not accounting for all-cause mortality results in a misestimation of CVD risk.

\begin{table}
\begin{tabular}{c|c|c c c||c c c}
\multirow{2}{*}{Outcome} & \multirow{2}{*}{Model} & \multicolumn{3}{c||}{C-Index _(Larger is better)_} & \multicolumn{3}{c}{Brier Score _(Smaller is better)_} \\
 & & \(q_{0.25}\) & \(q_{0.50}\) & \(q_{0.75}\) & \(q_{0.25}\) & \(q_{0.50}\) & \(q_{0.75}\) \\ \hline
\multirow{2}{*}{CVD} & **Competing** & **0.872** (0.024) & **0.812** (0.029) & **0.782** (0.018) & **0.050** (0.003) & **0.095** (0.010) & **0.128** (0.004) \\
 & Non-Competing & 0.862 (0.029) & 0.807 (0.032) & 0.780 (0.020) & 0.053 (0.004) & 0.099 (0.011) & 0.129 (0.005) \\ \hline
\multirow{2}{*}{Death} & **Competing** & **0.745** (0.055) & 0.717 (0.038) & 0.713 (0.022) & **0.027** (0.003) & **0.070** (0.004) & 0.112 (0.005) \\
 & Non-Competing & 0.741 (0.053) & **0.718** (0.045) & **0.719** (0.025) & **0.027** (0.003) & 0.071 (0.002) & **0.109** (0.004) \\
\end{tabular}
\end{table} Table 3: Modelling competing risk - means (standard deviations) across the 5-fold cross-validation.

Who may benefit? One can explore which subgroups benefit the most from modelling competing risks. Intuitively, patients who are the most likely to suffer from competing risks may benefit the most from this modelling. Table 4 illustrates this with older patients benefiting the most from modelling death as a competing risk.

\begin{table}
\begin{tabular}{c|c c c}
Age & \multicolumn{3}{c}{Brier Score Difference} \\
 & \(q_{0.25}\) & \(q_{0.50}\) & \(q_{0.75}\) \\ \hline
\(<40\) & -0.000 (0.000) & -0.001 (0.002) & 0.000 (0.005) \\
40-50 & -0.001 (0.001) & -0.002 (0.003) & -0.002 (0.001) \\
50-60 & _-0.003_ (0.005) & _-0.004_ (0.003) & _-0.006_ (0.007) \\
60+ & **-0.013** (0.011) & **-0.022** (0.018) & **-0.007** (0.024) \\
\end{tabular}
\end{table} Table 4: Calibration differences - means and standard deviations over 5-fold cross-validation. _Larger negative values correspond to better calibration for the competing risk model._
What is the impact on medical practice? The Framingham dataset was used to model the eponymous 10-year cardiovascular disease (CVD) risk score (Wilson et al., 1998). This score guides clinical practice in preventatively treating patients, usually with a combination of cholesterol-lowering therapy, e.g., statins, and holistic treatment of other CVD risk factors (Bosomworth, 2011). To minimise overtreatment and adverse side effects, accurate risk estimates are critical for targeting the population most at risk so as to maximise the benefit-risk ratio (Mangione et al., 2022). However, the original Framingham score relies on a non-competing risk model (Mangione et al., 2022; van Kempen et al., 2014). Clinical treatment often relies on a discretisation of this risk (Bosomworth, 2011): low, intermediate and high risk, at \(<10\%\), \(10-20\%\) and \(>20\%\) chance, respectively, of observing a CVD event in the following 10 years. Current guidelines in the United States suggest placing all patients with \(\geq 10\%\) risk on cholesterol-lowering drugs (Mangione et al., 2022). Furthermore, in the US alone, several million patients are on these medications (Wall et al., 2018). Therefore, even modest shifts in patient risk classification could, at scale, amount to considerable numbers either inappropriately receiving preventative treatment or inappropriately receiving none.

To demonstrate how considering competing risks can fundamentally alter such risk profiling, we present in Table 5 the reclassification matrices of risk levels given the competing and non-competing NeuralFG, differentiated by observed outcomes, for patients aged 50 or over. For instance, note that 251 patients deemed intermediate-to-high risk by the non-competing risks model are reclassified as lower risk by the competing-risks model and could, in turn, have avoided the initiation of therapy. These results echo the medical literature's findings of risk misestimation due to the non-consideration of competing risks in this risk score (Lloyd-Jones et al., 2004; van Kempen et al., 2014). More accurate simulations to estimate the potential lives saved and harmed through such reclassification are beyond the scope of this article but could provide insight into the possible consequences of considering competing risks. In summary, using a non-competing risk score would have important clinical consequences of over- and under-treatment (Schuster et al., 2020). More predictive models accounting for competing risks must be preferred to ensure better care.
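A reclassification matrix of this kind can be sketched as follows. The 10%/20% risk bands are taken from the text above; the model outputs are simulated stand-ins, and the function names are ours.

```python
import numpy as np
import pandas as pd

def risk_band(p):
    # Clinical bands used above: <10% low, 10-20% intermediate, >20% high.
    return np.select([p < 0.10, p < 0.20], ["low", "intermediate"], "high")

# Hypothetical 10-year CVD risks from the two models for the same patients.
rng = np.random.default_rng(0)
competing = rng.beta(2, 10, 1000)
non_competing = rng.beta(2, 8, 1000)

reclassification = pd.crosstab(risk_band(non_competing), risk_band(competing),
                               rownames=["non-competing"],
                               colnames=["competing"])
print(reclassification)
```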
## 6 Conclusion

This work provides a solution to address competing risks that preclude the observation of the outcome of interest, often present in medical applications. We introduce Neural Fine-Gray, a monotonic neural network architecture, to tackle the problem of competing risks in survival modelling. The model outputs the cumulative incidence functions and, consequently, allows the exact likelihood computation. Importantly, this architecture choice achieves competitive performance while avoiding the parametric assumptions or computationally expensive approximations made by state-of-the-art survival neural networks. Further analysis of the Framingham dataset contributes to the literature, inviting practitioners to use competing-risk modelling in risk score development for improved care (Abdel-Qadir et al., 2018; Austin et al., 2016; Lloyd-Jones et al., 2004; Schuster et al., 2020). Our future work will (i) extend this architecture to model other modalities such as time series as in Nagpal et al. (2021) and, (ii) explore medically interpretable survival clusters as presented in Jeanselme et al. (2022); Nagpal et al. (2022).

## Acknowledgments

This work was supported by The Alan Turing Institute's Enrichment Scheme and partially funded by UKRI Medical Research Council (MC_UU_0002/5 and MC_UU_0002/2).

Table 5: Reclassification matrices between competing and non-competing risk scores for patients older than 50. _Red (resp. blue) shows when the competing risks score is less aligned with the 10-year observed outcome than the non-competing model (resp. more aligned). Note that censored patients are ignored._
2307.10718
Differences Between Hard and Noisy-labeled Samples: An Empirical Study
Extracting noisy or incorrectly labeled samples from a labeled dataset with hard/difficult samples is an important yet under-explored topic. Two general and often independent lines of work exist, one focuses on addressing noisy labels, and another deals with hard samples. However, when both types of data are present, most existing methods treat them equally, which results in a decline in the overall performance of the model. In this paper, we first design various synthetic datasets with custom hardness and noisiness levels for different samples. Our proposed systematic empirical study enables us to better understand the similarities and more importantly the differences between hard-to-learn samples and incorrectly-labeled samples. These controlled experiments pave the way for the development of methods that distinguish between hard and noisy samples. Through our study, we introduce a simple yet effective metric that filters out noisy-labeled samples while keeping the hard samples. We study various data partitioning methods in the presence of label noise and observe that filtering out noisy samples from hard samples with this proposed metric results in the best datasets as evidenced by the high test accuracy achieved after models are trained on the filtered datasets. We demonstrate this for both our created synthetic datasets and for datasets with real-world label noise. Furthermore, our proposed data partitioning method significantly outperforms other methods when employed within a semi-supervised learning framework.
Mahsa Forouzesh, Patrick Thiran
2023-07-20T09:24:23Z
http://arxiv.org/abs/2307.10718v1
# Differences Between Hard and Noisy-labeled Samples: An Empirical Study

###### Abstract

Extracting noisy or incorrectly labeled samples from a labeled dataset with hard/difficult samples is an important yet under-explored topic. Two general and often independent lines of work exist, one focuses on addressing noisy labels, and another deals with hard samples. However, when both types of data are present, most existing methods treat them equally, which results in a decline in the overall performance of the model. In this paper, we first design various synthetic datasets with custom hardness and noisiness levels for different samples. Our proposed systematic empirical study enables us to better understand the similarities and more importantly the differences between hard-to-learn samples and incorrectly-labeled samples. These controlled experiments pave the way for the development of methods that distinguish between hard and noisy samples. Through our study, we introduce a simple yet effective metric that filters out noisy-labeled samples while keeping the hard samples. We study various data partitioning methods in the presence of label noise and observe that filtering out noisy samples from hard samples with this proposed metric results in the best datasets as evidenced by the high test accuracy achieved after models are trained on the filtered datasets. We demonstrate this for both our created synthetic datasets and for datasets with real-world label noise. Furthermore, our proposed data partitioning method significantly outperforms other methods when employed within a semi-supervised learning framework1.

Footnote 1: Code is available at [https://github.com/mahf93/Hard-vs-Noisy](https://github.com/mahf93/Hard-vs-Noisy).

_Index terms--_ hard samples, label noise, datasets, neural networks, image classification

## 1 Introduction

Deep neural networks have revolutionized many applications, especially in the field of image classification, mostly due to the availability of large, high-quality labeled datasets (Rawat and Wang, 2017). In practice, obtaining such datasets is often challenging, time-consuming, and expensive, thus leading to the inclusion of label noise in the obtained datasets (Roh et al., 2019). Label noise can arise for various reasons, such as the use of cheap label collection alternatives, for instance crowdsourcing, or obtaining the label of an image from the accompanying text on the Web (Cordeiro and Carneiro, 2020; Frenay and Verleysen, 2013; Algan and Ulusoy, 2020; Karimi et al., 2020). The problem with label noise is that deep neural networks tend to easily memorize these noisy labels, which can negatively impact their generalization performance (Zhang et al., 2021). Consequently, an important line of research is to address this issue. Many methods propose to mitigate the effects of label noise, including the use of robust loss functions (Ma et al., 2020; Thulasidasan et al., 2019; Wang et al., 2019; Patrini et al., 2017) and modifications to the training procedures (Yu et al., 2019; Zhang et al., 2020; Jiang et al., 2018; Malach and Shalev-Shwartz, 2017). Another popular approach to deal with label noise is to use a noisy-label detection method (Nguyen et al., 2019; Li et al., 2020; Huang et al., 2019; Pleiss et al., 2020), where some of these methods may require an additional clean validation set for hyper-parameter selection (Paul et al., 2021).
Noisy-label detection methods can involve data cleansing, where the noisy data is entirely removed from the dataset altogether, data re-weighting, where noisy data are given lower weights during training, or re-labeling, where the noisy data is re-annotated by experts. Alternatively, the detected noisy-labeled data could be used as unlabeled data in a semi-supervised learning fashion. When using noisy-label detection methods, an issue that often arises is their inability to differentiate between noisy-labeled samples and hard-to-learn samples. Hard-to-learn samples, also simply known as hard samples, refer to samples in a dataset that are particularly challenging for the classifier to learn (Arpit et al., 2017; Kishida and Nakayama, 2019; Wu et al., 2018). Empirical observations have revealed that noisy samples and hard samples share certain characteristics, such as high loss or low confidence, or being learned later in the training process. Consequently, when noisy-label detection methods based on these characteristics are employed, hard samples are also treated as noisy samples and are either filtered out or given lower emphasis during training. Unfortunately, discarding hard samples may result in gaps in the classifier's knowledge of the true decision boundary, making it crucial to preserve as many hard samples as possible and prioritize their learning (Chang et al., 2017; Bengio et al., 2009; Wang et al., 2020). It is thus vital to propose noisy-label detection methods capable of distinguishing between noisy-labeled samples and hard samples, keeping as many hard samples as possible while removing noisy ones.

Recent studies that propose noisy-label detection methods which are claimed to retain hard samples lack a quantitative evaluation of this claim, as there is no precise measure to quantify sample hardness (Liang et al., 2022; Bai and Liu, 2021; Zhu et al., 2021; Zhang et al., 2022). To overcome this limitation, in this work, we propose synthetic dataset transformations to simulate varying levels of hardness and noisiness. Although synthetic label noise has been previously studied (Zhang et al., 2021; Rolnick et al., 2017), our work, to the best of our knowledge, is the first to introduce synthetic hardness levels and to associate each sample of a dataset with a custom hardness level. There can be various reasons that a sample is hard to classify: it can be under-represented in the dataset, it can have distinct characteristics from other samples in its class, or it can be close to the decision boundary. To introduce synthetic hardness levels into an original dataset, we propose three main approaches, which involve applying transformations to the original dataset input samples to artificially make them more/less hard to classify. We refer to these approaches as _hardness types_. We have: (i) imbalance, (ii) diversification, and (iii) closeness to the decision boundary. In the first hardness type (i), sample hardness is introduced by creating an imbalanced dataset, and subsampling different classes with varying cardinality. This results in under-represented classes being harder to learn. In the second hardness type (ii), hardness is introduced by making different classes more or less distinct in their samples. This is achieved by applying a varying number of data augmentations to different classes, thus resulting in classes with fewer distinct samples, and with more augmented samples per distinct sample, being easier to learn.
In the third hardness type (iii), hardness is introduced by modifying the input samples to be closer to the decision boundary. The decision boundary is estimated using a pre-trained model with a high test accuracy, and the samples are modified to be closer to this estimated decision boundary. A schematic visualization of these different hardness types and transformations from the original dataset is given in Figure 1.

Figure 1: A schematic visualization of the hard sample creation approaches. We transform an original dataset (a) into datasets with varying levels of hardness for different classes (b)-(d). We consider three hardness types: imbalance (b), diversification (c), and closeness to the decision boundary (d). In all three transformations, we keep the samples of the red class (circles) unchanged, and transform samples of the blue class (crosses) to become harder (in (b) and (d)) or easier (in (c)), compared to the samples of the red class. In each hardness type, the created dataset has samples with custom hardness levels depending on the degree to which we apply the above transformations.

We do not claim these approaches cover all contexts in which hard samples arise, and in practice, a sample might be hard because of any combination of the aforementioned reasons, or some other reason. Yet, these common-sense approaches of introducing sample hardness enable us to perform controlled experiments to assess different metrics/methods in terms of their ability to distinguish between hard and noisy samples. Our key observation is that the feature embeddings of hard samples become closer to each other during training, whereas noisy-labeled samples do not necessarily exhibit this behavior because of their visual dissimilarity to other samples in the same class. This observation leads us to propose the distance between the feature-layer vector of each sample and the centroid feature vector of its assigned class, which we call _static centroid distance_ (SCD), as a metric to distinguish between hard and noisy-labeled samples. Next, we propose a label noise detection method based on SCD. While other methods perform well in only one of the two tasks (filtering out noisy samples or retaining hard samples), our method is the only one that performs well in _both_ tasks, and consistently so for all hardness types as well as datasets with real-world label noise. We demonstrate the superior performance of this method for noisy label detection when used for data cleansing as well as semi-supervised learning. Our main contributions are as follows:

* We propose a novel approach for synthetically transforming samples to attain custom hardness levels across three hardness types: imbalance, diversification, and closeness to the decision boundary. To the best of our knowledge, we are the first to use controlled experiments to quantitatively assess how different label noise detection methods perform in terms of retaining hard samples.
* We study various metrics for detecting noisy labels and show that static centroid distance (SCD) is the most effective metric in distinguishing between hard and noisy-labeled samples. While the other metrics are monotonic, increasing as a function of hardness or noisiness, we show that SCD is the only metric that increases with noisiness but does _not_ increase with hardness.
* We propose and evaluate different methods for data cleansing and sample selection, and we show that a two-dimensional Gaussian mixture model, which uses the accuracy over the training and SCD as features, performs the best in terms of filtering out noisy samples while retaining hard ones.
* We empirically show that our method produces the best generalization performance when models are trained on the filtered datasets. This holds both in the synthetic datasets and even better in datasets with real-world label noise. Moreover, if after the data filtration, semi-supervised learning is applied, our method significantly outperforms other label noise detection methods.

## 2 Background and Related Work

In this section, we first introduce the problem setup developed in this work to produce datasets with custom noisiness and hardness levels. Next, we recall some metrics from previous works that are relevant to our study, and we introduce static centroid distance (SCD). Finally, we present various partitioning approaches that can be used in conjunction with each metric for sample selection.

We assume that the training set \(S\) can be partitioned as \(S=S_{n}\cup S_{e}\cup S_{h}\), where \(S_{n}\) are the incorrectly-labeled samples (or noisy samples), \(S_{e}\) are the correctly-labeled and easy-to-learn samples, and \(S_{h}\) are the correctly-labeled and hard-to-learn samples. In practical applications, it is often challenging to clearly distinguish between easy and hard samples in a given dataset, and hence to correctly place samples in \(S_{e}\) and \(S_{h}\). This can be a limitation when studying the performance of different methods in terms of their ability to remove incorrectly labeled samples while retaining hard samples. To address this issue, we synthetically apply the transformation \(\mathcal{T}\) to some original dataset \(S_{\text{org}}\), such that a spectrum of samples with hardness levels \(h\in\{0,1,2,3,4\}\) and noisiness levels \(n\in\{0,1,2,3,4\}\) emerges: samples are harder to learn as \(h\) increases, and have noisier labels as \(n\) increases. We will now discuss how \(\mathcal{T}\) transforms \(S_{\text{org}}\), with no label noise and with uniform hardness levels among its samples, into a dataset \(S=\mathcal{T}(S_{\text{org}})\). This transformation provides us with a knob that we can tune to make alterations, allowing for a systematic study of easy, hard, and noisy samples while offering a good comparison base.

Consider a classification task with input \(x\in\mathcal{X}\) and ground-truth one-hot label vector \(y\in\{0,1\}^{K}\), where \(K\) is the number of classes. The original set with \(N_{\text{org}}\) input-output pairs \(S_{\text{org}}=\{(x_{i},y_{i})\}_{i=1,\cdots,N_{\text{org}}}\) is transformed into the given training set with \(N\) training samples, \(S=\mathcal{T}(S_{\text{org}})=\{(\tilde{x}_{i},\tilde{y}_{i})\}_{i=1,\cdots,N}\).

**Noisiness transformation** Let \(S^{\prime}=\{(x_{i},y_{i})\}_{i=1,\cdots,N^{\prime}}\subset S_{\text{org}}\) be a subset of size \(N^{\prime}\) of the original dataset \(S_{\text{org}}\). Transformation \(\mathcal{T}_{n}\) maps \(S^{\prime}\) to \(S^{\prime}_{t}=\mathcal{T}_{n}(S^{\prime})=\{(x_{i},\tilde{y}_{i})\}_{i=1,\cdots,N^{\prime}}\), where with probability \(1-q(n)\), \(\tilde{y}_{i}=y_{i}\), and with probability \(q(n)\), the non-zero element of \(\tilde{y}_{i}\) is set uniformly at random at index \(j\sim U(1,2,\cdots,K)\). The label noise level \(q(n)\in[0,1]\) is adjusted as a function of the desired noisiness level \(n\in\{0,1,2,3,4\}\). Such a dataset transformation is common practice to study label noise in a controlled setting, which is done by fixing some label noise level \(q\) for the entire dataset [1]. In our work, the label noise level \(q(n)\) is not fixed for the entire dataset, and depending on the noisiness level \(n\), it varies between different subsets of samples.
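A minimal sketch of \(\mathcal{T}_{n}\) follows, operating on integer class labels rather than one-hot vectors; the \(q(n)\) schedule shown is purely illustrative, as the exact mapping from \(n\) to \(q(n)\) is a design choice.

```python
import numpy as np

def noisiness_transform(labels, q, num_classes, rng):
    """T_n: with probability q, replace a label by one drawn uniformly at
    random from the K classes (the draw may land on the original class)."""
    labels = labels.copy()
    flip = rng.random(len(labels)) < q
    labels[flip] = rng.integers(0, num_classes, flip.sum())
    return labels

rng = np.random.default_rng(0)
y = rng.integers(0, 200, 10_000)                   # e.g. TinyImagenet's 200 classes
for n, q in enumerate([0.0, 0.1, 0.2, 0.3, 0.4]):  # illustrative q(n) schedule
    y_noisy = noisiness_transform(y, q, 200, rng)
    print(f"noisiness level {n}: {np.mean(y_noisy != y):.3f} labels changed")
```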
The label noise level \(q(n)\in[0,1]\) is adjusted as a function of the desired noisiness level \(n\in\{0,1,2,3,4\}\). Such a dataset transformation is common practice to study label noise in a controlled setting, which is done by fixing some label noise level \(q\) for the entire dataset [1]. In our work, the label noise level \(q(n)\) is not fixed for the entire dataset, and depending on the noisiness level \(n\), it varies between different subsets of samples. **Hardness transformation** Let \(S^{\prime}=\{(x_{i},y_{i})\}_{i=1,\cdots,N^{\prime}}\subset S_{\text{org}}\), be a subset of size \(N^{\prime}\) from the original dataset \(S_{\text{org}}\). We want to transform \(S^{\prime}\) such that the samples of \(S^{\prime}_{t}=\mathcal{T}_{h}(S^{\prime})\) have hardness level \(h\in\{0,1,2,3,4\}\). To the best of our knowledge, such a transformation has not been done in prior work. This transformation is a composition of two transformations, i.e., \(\mathcal{T}_{h}=\mathcal{F}_{h}\circ\mathcal{I}_{h}\). The first transformation \(\mathcal{I}_{h}\) is one-to-one, and maps \(S^{\prime}\) into \(S^{\prime\prime}=\mathcal{I}_{h}(S^{\prime})=\{(x_{j},y_{j})\}_{j\in J_{h}}\), where \(J_{h}\subseteq\{1,\cdots,N^{\prime}\}\). The second transformation \(\mathcal{F}_{h}\) is one-to-many, and maps \(S^{\prime\prime}\) into \(S^{\prime}_{t}=\mathcal{F}_{h}(S^{\prime\prime})=\{(\tilde{x}_{i},y_{j})|i\in I (j)\}_{j\in J_{h}}\), where \(\tilde{x}_{i\in I(j)}\) is a transformed version of input sample \(x_{j}\), and \(I(j)\) can be a one-to-many mapping. In Section 3, we elaborate on the transformation \(\mathcal{T}_{h}\) for each hardness type. The final transformed dataset with \(N\) training samples is \(S=\cup_{S^{\prime}_{h,n}}\mathcal{T}_{n}\circ\mathcal{T}_{h}(S^{\prime}_{h,n})\), where the subsets \(S^{\prime}_{h,n}\subset S_{\text{org}}\forall h,n\in\{0,1,2,3,4\}\) are determined according to a pre-defined policy. With \(S^{\prime}_{t,h,n}=\mathcal{T}_{n}\circ\mathcal{T}_{h}(S^{\prime}_{h,n})\), and for some hardness threshold \(h_{\text{threshold}}\in\{0,1,2,3,4\}\), the three sets partitioning \(S\) are \[S_{n} =\{(\tilde{x}_{i},\tilde{y}_{i})\in S|\tilde{y}_{i}\neq y_{i}\},\] \[S_{h} =\{(\tilde{x}_{i},\tilde{y}_{i})\in\cup_{h\geq h_{\text{threshold} }}S^{\prime}_{t,h,n}|\tilde{y}_{i}=y_{i}\},\] \[S_{e} =S\setminus(S_{h}\cup S_{n}). \tag{1}\] In the dataset partition, we set \(h_{\text{threshold}}=4\), hence the samples in \(S_{h}\) consist of the hardest samples in the dataset. In this work, the classifier trained on the dataset \(S\) is a neural network. For each input sample \(\tilde{x}_{i}\), let \(\mathbf{p}(\tilde{x}_{i})=(p^{1}_{i},p^{2}_{i},\cdots,p^{K}_{i})\) be the prediction probability output vector of the neural network. Furthermore, the last layer of the neural network is a fully-connected layer with feature vector \(\mathbf{h}(\tilde{x}_{i})\in\mathbb{R}^{m}\), where \(m\) is the number of units in the feature layer of the neural network. Note that because the parameters of the neural network depend on the epoch during training, both vectors \(\mathbf{p}(\tilde{x}_{i})\) and \(\mathbf{h}(\tilde{x}_{i})\) depend on the epoch \(t\in\{1,2,\cdots,T\}\), where \(T\) is the maximum number of training epochs. We often remove the explicit dependence on \(t\) for simplicity. 
Training is done by performing stochastic gradient descent (SGD) on the cross-entropy training loss function at each epoch \(t\): \[L_{S}(t)=\frac{1}{N}\sum_{i=1}^{N}l_{i}(t)=-\frac{1}{N}\sum_{i=1}^{N}\sum_{j=1} ^{K}\tilde{y}_{i}^{j}\log p^{j}_{i}(t)=-\frac{1}{N}\sum_{i=1}^{N}\log p^{c_{i} }_{i}(t)\] where \(l_{i}(t)\) is the training cross-entropy loss of sample \(\tilde{x}_{i}\) at epoch \(t\in\{1,2,\cdots,T\}\). The prediction of the neural network classifier at epoch \(t\) for sample \(\tilde{x}_{i}\) is class \(\tilde{c}_{i}=\arg\max_{j}p^{j}_{i}(t)\), and the confidence of the classifier is the prediction probability \(p^{\tilde{c}_{i}}_{i}\) of the classifier for the predicted class label. **Metrics** Here, we recall a few metrics that are introduced in prior work and introduce a new metric, static centroid distance (SCD). In the following sections, we perform a comprehensive study on all these metrics in order to detect which metric, or combination of which metrics, is the best at distinguishing between samples in \(S_{n}\) and \(S_{h}\). In order to remain on a computationally limited budget, and on a practical setting, we study metrics that require only a single model and not an ensemble of models, and that do not require hyper-parameter tuning using a clean validation set. For each sample \(\tilde{x}_{i}\), we compute the following metrics defined below. Throughout our study, we primarily focus on loss, confidence, and SCD metrics. The former two are widely used in the literature due to the valuable information they provide about each sample, while the latter is a metric proposed in our work. * _Loss_: The training loss \(l_{i}(T)\) at the end of the training, i.e., at epoch \(T\). * _Confidence_: The prediction probability \(p^{\tilde{c}_{i}}_{i}(T)\) at the end of the training, i.e., at epoch \(T\). * First Prediction Epoch: The epoch \(t^{*}\) such that \(\tilde{c}_{i}(t^{*})=c_{i}(t^{*})\) and \(\tilde{c}_{i}(s)\neq c_{i}(s)\) for \(s<t^{*}\). * Accuracy of Predictions over Training: The accuracy of the classifier predictions for \(\tilde{x}_{i}\) against the assigned class label \(c_{i}\) over the training process, i.e., \(\frac{1}{T}\sum_{t=1}^{T}\mathbb{1}(c_{i}(t)=\tilde{c}_{i}(t))\), where \(\mathbb{1}\) is the indicator function. This is used by Sun et al. (2020) to detect noisy samples and to remove them from the dataset. * AUL: Area under Loss: \(\sum_{t=1}^{T}l_{i}(t)\), which is used by Pleiss et al. (2020) to detect noisy samples. * AUM: Area under Margin: \(\frac{1}{T}\sum_{t=1}^{T}p^{\tilde{c}_{i}}_{i}(t)-p^{c^{\prime}_{i}}_{i}(t)\), where \(p^{c^{\prime}_{i}}_{i}(t)\) is the largest second logit at time \(t\). This is used by Pleiss et al. (2020) to detect noisy samples. * JSD: Jensen-Shannon divergence between \(\mathbf{p}_{i}\) and \(\tilde{y}_{i}\) at the end of the training, i.e., at epoch \(T\). Its weighted version, WJSD, is also proposed by Zhang et al. (2022) to detect noisy samples. * ACD: Adaptive Centroid Distance, which is the cosine distance between the feature vector \(\mathbf{h}(\tilde{x}_{i})\) and the _adaptive_ centroid of feature vectors \(\mathbf{o}_{c_{i}}\) at the end of the training, i.e., at epoch \(T\). 
The term _adaptive_ refers to the computation of \(\mathbf{o}_{c_{i}}\), as it is the centroid for samples that the classifier suspects to be in class \(c_{i}\): either these samples have been assigned to another class but the classifier predicts them to be in class \(c_{i}\), or they have been assigned to class \(c_{i}\) and the classifier has high \(p_{i}^{c_{i}}\). This metric is proposed by Zhang et al. (2022) to detect noisy samples. * _SCD_ (in our work): Static Centroid Distance, which is the Euclidean distance between \(\mathbf{h}(\tilde{x}_{i})\) and the _static_ centroid of feature vectors \(\mathbf{o}_{c_{i}}^{s}\) at the middle of training, \(t^{*}\), where training accuracy\((t^{*})\geq 50\%\) and training accuracy\((s)<50\%\) for \(s<t^{*}\). This is developed in our work as an alternative to ACD, as a result of our empirical observations in studying hard samples. The adaptations from ACD to compute SCD are the following: (1) the distance function in SCD is the Euclidean distance instead of the cosine distance, as we are interested in different regions of the feature space and not in different angles/directions of the feature vectors; (2) SCD is computed in the middle of training, where the training accuracy is \(50\%\). We observe that in this intermediate stage of training the differences between hard and noisy samples are at their highest; (3) the centroid vector \(\mathbf{o}_{c_{i}}^{s}\) is computed for _all_ samples that are assigned to class \(c_{i}\), and not only for those samples which the classifier predicts to be in class \(c_{i}\). For hard classes, a classifier that has not yet learned them correctly fails to predict the samples that actually belong to class \(c_{i}\), so the distance to \(\mathbf{o}_{c_{i}}\) becomes rather meaningless. We observe that relying too heavily on the classifier to compute this centroid, which is done in the computation of ACD, results in inferior performance, especially when the quality of the data is low. In Table 5, we provide an ablation study that transitions from using different variations of ACD to using SCD, and we can observe that using SCD results in both removing incorrectly labeled samples and retaining correctly labeled hard samples.

**Partitioning Methods** Using the above metrics, one can then cluster the available samples in \(S\) in order to find estimates for \(S_{e}\), \(S_{h}\), and \(S_{n}\), which we refer to by \(\tilde{S}_{e}\), \(\tilde{S}_{h}\), and \(\tilde{S}_{n}\), respectively. Note that our overall goal is to find the estimated noisy subset \(\tilde{S}_{n}\) and the estimated clean subset \(\tilde{S}_{c}=\tilde{S}_{e}\cup\tilde{S}_{h}\), but not precisely the two subsets \(\tilde{S}_{e}\) and \(\tilde{S}_{h}\).

* _Thresholding_: In this method, a threshold value is chosen for the specific metric, which can be the average value over the samples or any other predetermined value. The samples are then partitioned into two regions, one containing samples whose values on the metric are greater than the threshold, and the other containing the samples whose values on the metric are smaller than the threshold. Because we work with values in the dataset that might have outliers (in particular for hard samples and noisy samples), we choose the median value as the default threshold when using this method.
* _1d-GMM_: A one-dimensional Gaussian mixture model is used for partitioning the dataset into multiple subsets or partitions by modeling the values of the metric for each sample as a mixture of several Gaussian distributions, where each Gaussian distribution represents one of the partitions. * _2d-GMM_: Similarly to 1d-GMM, a two-dimensional GMM can partition the dataset by using two metrics instead of only one.

## 3 Dataset Design with Different Hardness Levels

In this section, we discuss the hardness and noisiness transformations \(\mathcal{T}_{h}\) and \(\mathcal{T}_{n}\). Sections 3.1, 3.2, and 3.3 provide details of the transformation \(\mathcal{T}_{h}\) for each of the three hardness types: imbalance, diversification, and closeness to the decision boundary, respectively. Finally, Section 3.4 provides details of \(\mathcal{T}_{n}\). The original dataset \(S_{\text{org}}\) is taken here as the TinyImagenet dataset (Le and Yang, 2015) with \(200\) classes. We have chosen this dataset because it possesses sufficient complexity to capture the underlying input-output relations involved in intricate image classification tasks. Later in this paper, we assess the generalizability of our findings by testing them on real-world datasets that contain noisy labels. To ensure consistency within each class, we assign the same levels \(h\) and \(n\) to all samples in the same class. To provide a comprehensive spectrum of samples, we vary \(h\) and \(n\) along two orthogonal axes and across five levels, resulting in a 2D space with four quadrants (and a total of \(25\) different sample categories): easy-clean (\(h=0\) and \(n=0\)), easy-noisy (\(h=0\) and \(n=4\)), hard-clean (\(h=4\) and \(n=0\)), and hard-noisy (\(h=4\) and \(n=4\)). Each sample in \(S_{\text{org}}\) is defined by a pair \((h,n)\) of levels according to Figure 2, and the set of samples with levels \((h,n)\) is denoted by \(S^{\prime}_{h,n}\), its size by \(N^{\prime}_{h,n}\). We first apply a hardness transformation (one of the three hardness types) to \(S^{\prime}_{h,n}\) to produce \(S^{\prime\prime}_{h,n}=\mathcal{T}_{h}(S^{\prime}_{h,n})\). We then apply the noisiness transformation to \(S^{\prime\prime}_{h,n}\) and finally get the transformed subset \(S^{\prime}_{t,h,n}=\mathcal{T}_{n}\circ\mathcal{T}_{h}(S^{\prime}_{h,n})\). In order to demonstrate the effectiveness of the hardness transformation in terms of actually making samples more or less hard, we use metrics that are widely accepted in the literature, such as loss (Kishida and Nakayama, 2019; Loshchilov and Hutter, 2015) and confidence (Swayamdipta et al., 2020; Chang et al., 2017; Wang et al., 2020). Prior studies show that hard samples have larger losses and lower confidence. By examining the relation between loss/confidence and the hardness levels assigned by our approach, we find that our hardness level assignments are well-founded.

### Hardness via Imbalance

In this dataset, we make samples with different levels \(h\) by making the dataset imbalanced. To achieve this, we subsample the dataset so that the number of samples for each class with hardness \(h\) is \(X/2^{h}\), where \(X\) is the maximum number of samples per class in the dataset. With this approach, the number of samples per class decreases exponentially as \(h\) increases, which makes these samples under-represented and hence more difficult to learn.
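A minimal sketch of this subsampling (formalized in the next paragraph) could read as follows; the function and argument names are ours and purely illustrative:

```python
import numpy as np

def subsample_imbalanced(labels, class_hardness, X, seed=0):
    """Keep X / 2**h samples of each class, where h = class_hardness[c].

    labels         : (N,) integer class ids
    class_hardness : dict mapping class id -> hardness level h in {0,...,4}
    X              : maximum number of samples per class in the dataset
    """
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    keep = []
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        n_keep = X // 2 ** class_hardness[c]          # exponential decay in h
        keep.extend(rng.choice(idx, size=min(n_keep, len(idx)), replace=False))
    return np.sort(np.asarray(keep))
```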
We have \[S^{\prime\prime}_{h,n}=\mathcal{T}_{h}(S^{\prime}_{h,n})=\mathcal{I}_{h}(S^{\prime}_{h,n})=\{(x_{j},y_{j})\}_{j\in J_{h}}\] where \(J_{h}\subseteq\{1,\cdots,N^{\prime}_{h,n}\}\) uses stratified sampling so that there are \(X/2^{h}\) samples per class. In this hardness type, the transformation \(\mathcal{F}_{h}\) is the identity map and \(\mathcal{T}_{h}=\mathcal{I}_{h}\). **Are samples in \(S_{h}\) with this hardness type actually harder to learn?** As we can see in Figure 3(a), samples that are assigned a larger hardness level \(h\) do indeed have a higher loss and lower confidence values. We conclude that samples with \(h=4\), which are those in \(S_{h}\), are the hardest for the classifier to learn compared to the rest of the samples.

Figure 2: For a dataset with 200 classes, the hardness level \(h\) and noisiness level \(n\) of samples in the dataset are determined by their class, as shown on this 2-dimensional figure. For each pair \((h,n)\), we allocate class number \(c=40*(4-n)+8*h+\beta\) where \(\beta\in\{0,1,\cdots,7\}\), so that each pair \((h,n)\) is represented by 8 different classes. For example, samples with \(h=0\) and \(n=4\) are in classes 0 to 7. The pre-defined policy to determine the subsets \(S^{\prime}_{h,n}\subset S_{\text{org}}\) is according to this figure.

### Hardness via Diversification

Compare the following two sets of samples: (i) 16 images of different cats, possibly of different breeds; (ii) 16 images of the same cat with different image augmentations (augmentations can be rotation, flipping, scaling, and blurring (Shorten and Khoshgoftaar, 2019)). Which of the two sets of samples is harder for a classifier to learn? We can argue that the first set is more difficult, because the classifier needs to learn common features that belong to 16 different images instead of 16 variations of the same image. The first set of samples is more _diverse_, and such diversification is the second hardness type we consider. It is also observed by Kishida and Nakayama (2019) that easy examples in datasets are visually similar to each other whereas hard samples are visually diverse. In this dataset, we keep the number of samples balanced between classes, but we vary the diversity of samples per class. To do so, we maintain the original samples in hard classes in order to preserve their level of diversity/difficulty. For samples in the easy classes, we perform data augmentation techniques, such as rotation or flipping, to make samples less diverse and hence easier to learn. By changing the number of data augmentations used in each level of hardness \(h\), this approach enables us to generate a range of hardness levels within a single dataset. In particular, for hardness level \(h\), we first subsample the number of samples per class to \(X/2^{4-h}\), where \(X\) is the maximum number of unique samples per class. Then, we create \(2^{4-h}-1\) new augmented samples per sample. As a result, all classes have the same overall number of samples and the dataset is balanced.
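Before formalizing this transformation, we sketch it in code. This is a deliberately simplified version, assuming square images (so rotations preserve shape) and using flips/rotations as stand-in augmentations:

```python
import numpy as np

def diversify(images, labels, class_hardness, X, seed=0):
    """Per class with hardness h: keep X / 2**(4-h) unique images, then add
    2**(4-h) - 1 augmented copies of each, so every class ends up with X samples."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    out_x, out_y = [], []
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        h = class_hardness[c]
        n_unique = X // 2 ** (4 - h)                  # h = 4 keeps all X originals
        for i in rng.choice(idx, size=min(n_unique, len(idx)), replace=False):
            out_x.append(images[i]); out_y.append(c)
            for _ in range(2 ** (4 - h) - 1):         # simple flip + rotation as augmentation
                out_x.append(np.rot90(np.fliplr(images[i]), k=int(rng.integers(1, 4))))
                out_y.append(c)
    return np.stack(out_x), np.asarray(out_y)
```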
We have \[S^{\prime\prime}_{h,n}=\mathcal{T}_{h}(S^{\prime}_{h,n})=\mathcal{F}_{h}\circ\mathcal{I}_{h}(S^{\prime}_{h,n})=\mathcal{F}_{h}\left(\{(x_{j},y_{j})\}_{j\in J_{h}}\right)=\{(\tilde{x}_{i},y_{j})|i\in I_{h}(j)\}_{j\in J_{h}},\] where \(J_{h}\subseteq\{1,\cdots,N^{\prime}_{h,n}\}\) uses stratified sampling so that there are \(X/2^{4-h}\) samples per class, \(I_{h}(j)=\{j+a\cdot|J_{h}|\}_{a\in\{0,\cdots,2^{4-h}-1\}}\), and for \(i\in I_{h}(j),i\neq j\), \(\tilde{x}_{i}\) is an augmented version of \(x_{j}\).

**Are samples in \(S_{h}\) with this hardness type actually harder to learn?** As we can see in Figure 3(b), samples that are assigned a larger hardness level \(h\) have higher loss and lower confidence values. Hence, we conclude that samples with \(h=4\), which are in \(S_{h}\), are the hardest for the classifier to learn.

Figure 3: In each of the three created datasets, we observe that samples in classes with higher \(h\) generally have traits that correspond to their difficulty. These traits include having a higher loss value and having a lower confidence (or prediction probability). To further show the significance of loss and confidence values as \(h\) increases, we perform a one-way ANOVA test, which results in the following F-statistics and p-values: **(a) Hardness via Imbalance:** for the loss we have F=672 and p\(<1e-100\); for the confidence we have F=1519 and p\(\approx 0\); **(b) Hardness via Diversification:** for the loss we have F=234 and p\(<1e-190\); for the confidence we have F=237 and p\(<1e-200\); **(c) Hardness via Closeness to the Decision Boundary:** for the loss we have F=5.7 and p\(<1e-3\); for the confidence we have F=5.8 and p\(<1e-3\). We can observe that in all three settings, the F-statistics are relatively large and the p-values are small. We conclude that our assigned \(h\) values do indeed indicate sample hardness, and in all three settings, our approaches for creating hard samples are justified. **(d) Noisiness:** We observe that, similarly to the effect of hardness increase discussed in (a)-(c), an increase in the level of noisiness increases loss and decreases confidence. Therefore, if we rely solely on these metrics to identify and remove noisy-labeled samples, hard samples can also be mistakenly removed from the dataset. This is problematic because hard samples contain valuable information about the underlying data distribution and should not be ignored.

### Hardness via Closeness to the Decision Boundary

In this dataset, we produce hard samples by modifying them to be closer to the decision boundary. To achieve this, we must first identify the true decision boundary for the classification task at hand. As we are working with the TinyImagenet dataset, the true decision boundary is unknown. The closest estimate to the true decision boundary that we found is a DeiT model (Touvron et al., 2021) that was pre-trained on ImageNet and fine-tuned on TinyImagenet, with a training accuracy of 98.8% and a test accuracy of 90.88% on the original train and test sets of TinyImagenet. Then, we create samples that are closer to the decision boundary of this model, hoping that they will also be closer to the decision boundary of the actual ground truth model.
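Anticipating the gradient-sign construction formalized in the next paragraph, a minimal PyTorch sketch of this transformation (with an illustrative function name, and assuming inputs normalized to \([0,1]\)) is:

```python
import torch
import torch.nn.functional as F

def push_toward_boundary(model, x, y, eps):
    """One gradient-sign step of size eps = eps(h); a sample is kept only if
    the proxy model still predicts its original class (label preserved)."""
    x = x.clone().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    x_hard = (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()
    with torch.no_grad():                 # discard samples whose predicted class changed
        keep = model(x_hard).argmax(dim=1) == y
    return x_hard[keep], y[keep]
```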
We have \[S_{h,n}^{\prime\prime}=\mathcal{T}_{h}(S_{h,n}^{\prime})=\mathcal{F}_{h}(S_{h,n}^{\prime})=\{(\tilde{x}_{j},y_{j})\}_{j\in J_{h}},\] where \(J_{h}=\{1,\cdots,N_{h,n}^{\prime}\}\), and \(\tilde{x}_{j}\) is a transformation of the sample \(x_{j}\), such that \(\tilde{x}_{j}\) is closer to the decision boundary compared to \(x_{j}\). In this hardness type the transformation \(\mathcal{I}_{h}\) is the identity map and \(\mathcal{T}_{h}=\mathcal{F}_{h}\). Our hard sample creation is done similarly to the fast gradient sign method for creating adversarial samples, which is introduced by Goodfellow et al. (2014). For an input sample \(x_{i}\), this method creates the adversarial input \[x_{i}^{\text{adv}}=x_{i}+\epsilon\,\text{sign}\left(\nabla_{x}l_{i}\right)\] where \(\epsilon\) is a hyper-parameter used for scaling the added noise. For creating adversarial samples, \(\epsilon\) should be large enough to change the label of the sample. For our dataset creation however, and unlike with adversarial sample creation, we make sure that the resulting sample \(\tilde{x}_{i}\) does _not_ change its label, by adjusting \(\epsilon\). Different levels of \(h\) are obtained by adjusting the value of \(\epsilon\), and hence \(\epsilon(h)\) is a function of the hardness level \(h\). Higher values of \(\epsilon\) move the sample closer to the decision boundary, making it harder to learn with a new model. After generating new samples, we keep only those that maintain their original class label, as our goal is to increase the difficulty of samples in their original class. It is critical to choose the appropriate range of \(\epsilon(h)\), as a value that is too high can cause most samples to change their predicted class and be excluded from the dataset. This would result in the remaining samples being easier to learn instead of harder, because they were originally very easy and far from the decision boundary. Therefore, we carefully choose the range of \(\epsilon(h)\) during our dataset creation, which results in a smaller variation of the loss/confidence as a function of the hardness level \(h\), as shown in Figure 3(c). These samples become hard, particularly for models that make decisions near the decision boundary, which is why in this setting, unlike the other settings, we use models which are pre-trained on the ImageNet dataset instead of using randomly initialized models. **Are samples in \(S_{h}\) with this hardness type actually harder to learn?** As shown in Figure 3(c), samples that are assigned a larger hardness value \(h\) have higher loss and lower confidence values. The loss values vary less with the hardness level \(h\) than in the two previous hardness types, however, because the range of \(\epsilon(h)\) must be large enough to bring the sample close to the boundary, but small enough to keep it from changing its label. Figure 3(c) shows that samples with \(h=4\), which are in \(S_{h}\), are still the hardest for the classifier to learn.

### Addition of Label Noise; Its Similarities to Hardness

After creating each of the hard datasets, we add label noise to samples of the dataset according to Figure 2. For each sample with noisiness \(n\), we assign the label noise level of \(q(n)=\delta*n/4\), where \(\delta\in[0,1]\) is the maximum label noise level of the created dataset (by default we use \(\delta=0.4\) in our experiments). The label-noise level refers to the probability of the label being replaced uniformly at random with one of the class labels.
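A minimal sketch of this noisy-label assignment (with illustrative names and integer class labels; the restriction to classes sharing the same noisiness level \(n\) is motivated in the next paragraph) is:

```python
import numpy as np

def add_label_noise(labels, class_noisiness, classes_at_level, delta=0.4, seed=0):
    """Flip each integer label with probability q(n) = delta * n / 4, drawing the
    replacement uniformly from the classes that share the same noisiness level n."""
    rng = np.random.default_rng(seed)
    noisy = np.asarray(labels).copy()
    for i, y in enumerate(noisy):
        n = class_noisiness[y]
        if rng.random() < delta * n / 4.0:
            noisy[i] = rng.choice(classes_at_level[n])   # restricted to same-n classes
    return noisy
```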
For example, samples with \(n=1\) have a \(0.25\,\delta\) probability of having a noisy label. To prevent confusion between samples in classes with different levels of noisiness \(n\), we limit the choice of class labels to only those with the same \(n\) when assigning noisy labels. This ensures that samples are assigned to a class with \(n=0\) if and only if they actually belong to that class. Overall, for the set \(S_{h,n}^{\prime\prime}=\{(\tilde{x}_{i},y_{i})\}_{i\in\{1,\cdots,N_{h,n}^{\prime\prime}\}}\), we have \[S_{t,h,n}^{\prime}=\mathcal{T}_{n}(S_{h,n}^{\prime\prime})=\{(\tilde{x}_{i},\tilde{y}_{i})\}_{i\in\{1,\cdots,N_{h,n}^{\prime\prime}\}},\] where with probability \(1-q(n)\), \(\tilde{y}_{i}=y_{i}\), and with probability \(q(n)\), the non-zero element of \(\tilde{y}_{i}\) is at index \(j\sim U(1,2,\cdots,K)\), where \(U\) is the uniform distribution. In Figure 3(d), we observe that increasing \(n\), similarly to increasing \(h\) as discussed in the previous sections, results in a higher loss and a lower confidence. We observe that the same correlation sign between loss/confidence and hardness \(h\) also exists between loss/confidence and noisiness \(n\). This indicates that hardness and noisiness exhibit similar characteristics, and that removing noisy samples from the training dataset by using loss or confidence could also lead to the inadvertent removal of correctly-labeled hard-to-learn samples. This highlights the need for more advanced metrics/methods to identify and preserve hard samples while removing noisy ones.

## 4 Easy-Hard-Noisy Data Partitioning and Training

In this section, we use the metrics and methods discussed in Section 2 to partition the synthetic datasets constructed in Section 3 into two subsets: the estimated clean subset \(\tilde{S}_{c}\) and the estimated noisy subset \(\tilde{S}_{n}\). We evaluate the effectiveness of these methods and metrics in this partitioning task, with a focus on the quality of the filtering between hard and noisy samples. We then compare the performance of different partitioning methods based on the test accuracy of models trained on the estimated clean subset produced from each method. It is important to note that although the training datasets can contain samples with issues, such as hardness or noisiness in certain classes, the test datasets do not contain any such noisy-labeled or imbalanced samples. Therefore, traditional evaluation metrics, such as test accuracy, provide a fair representation of the generalization performance of models in each setting.

### Partitioning

We compare different methods that partition the available dataset into two subsets: \(\tilde{S}_{c}\) and \(\tilde{S}_{n}\). Our primary objective is to include the incorrectly labeled samples within \(\tilde{S}_{n}\), which is the focus of previous studies as well, but in addition, we are also interested in including all hard samples within \(\tilde{S}_{c}\). Therefore, we aim to achieve a high recall for hard samples, i.e., \(\text{Recall}_{\text{h}}=\frac{|\tilde{S}_{c}\cap S_{h}|}{|S_{h}|}\), while maintaining a high recall for noisy samples, i.e., \(\text{Recall}_{\text{n}}=\frac{|\tilde{S}_{n}\cap S_{n}|}{|S_{n}|}\). Although having a high precision for noisy samples, i.e., \(\text{Precision}_{\text{n}}=\frac{|\tilde{S}_{n}\cap S_{n}|}{|\tilde{S}_{n}|}\), is desirable, it is less critical.
Even if some easy samples are mistakenly included in \(\tilde{S}_{n}\), this is not too harmful because they can be either relabeled or not used in training. Such easy and correctly labeled samples are often redundant in the training set, and are likely to be inexpensive to label or replace (Paul et al., 2021). Various label-noise detection and data partitioning methods have been proposed in the literature, including \(\text{Thres}_{\text{Loss}}\) (Huang et al., 2019), \(\text{Thres}_{\text{acc over training}}\) (Sun et al., 2020), \(\text{Thres}_{\text{AUM}}\) (Pleiss et al., 2020), \(\text{1d-GMM}_{\text{AUL}}\) (Pleiss et al., 2020), and \(\text{2d-GMM}_{\text{WJSD}-\text{ACD}}\) (Zhang et al., 2022). However, as discussed in Section 3, metrics such as loss and confidence fail to differentiate between hardness \(h\) and noisiness \(n\), making it challenging to remove only noisy samples without removing hard ones. The same issue applies to other metrics such as accuracy over training, AUL, AUM, JSD, and ACD, as depicted in Figure 6 provided in the appendix; these metrics are monotonic in both \(h\) and \(n\). Consequently, if these metrics are used for data partitioning, the identified noisy subset may mistakenly contain hard samples, reducing the reliability of these methods when dealing simultaneously with noisy and hard samples. In contrast, SCD increases with \(n\) but does _not_ increase with \(h\). We can observe this behavior in Figure 4 and compare it with, for example, the behavior of ACD when hardness and noisiness increase. This behavior makes SCD a particularly promising metric for removing noisy samples while preserving hard samples.

Figure 4: ACD (top) and SCD (bottom) applied to the transformed datasets with noisiness level \(n\) and hardness level \(h\). We observe that SCD increases with \(n\), but not with \(h\), and is thus non-monotonic in \(n\) and \(h\). In contrast, ACD is monotonic in \(n\) and \(h\). This is also observed for other metrics such as loss, confidence, AUL, AUM, and JSD in Figure 6. We, therefore, propose SCD as a promising metric to be used in data partitioning for removing noisy samples while retaining hard samples. We quantitatively compare these two metrics, along with the other metrics, in terms of their ability to partition datasets into clean and noisy subsets later in this section, in Table 1.

While SCD is a promising metric for distinguishing hard samples from noisy ones, it is beneficial to pair it with another metric that can separate easy samples from the hard and noisy ones. In our experiments, we have observed that accuracy over training is an effective metric for this purpose. It can accurately identify the easy samples in the dataset while mixing the hard and noisy samples together. Since these two metrics - SCD and accuracy over training - are complementary, we use a 2d-GMM with them as its dimensions to estimate all three subsets - \(S_{e}\), \(S_{h}\), and \(S_{n}\). Table 1 presents the results of different partitioning methods on all three synthetic hard datasets, including the filtered dataset size \(|\tilde{S}_{c}|\), the correct label percentage of \(\tilde{S}_{c}\), \(\text{Precision}_{\text{n}}\), \(\text{Recall}_{\text{n}}\), and \(\text{Recall}_{\text{h}}\). Several interesting observations emerge from the results. Firstly, some methods, such as \(\text{Thres}_{\text{AUM}}\), have high \(\text{Recall}_{\text{n}}\) but low \(\text{Recall}_{\text{h}}\).
These methods struggle to differentiate hard samples from noisy ones, and thus discard most of the hard samples. Secondly, some methods, such as 2d-\(\text{GMM}_{\text{WJSD}-\text{ACD}}\), have high \(\text{Recall}_{\text{h}}\) but low \(\text{Recall}_{\text{n}}\). Although they preserve hard samples, they also retain noisy ones. Thirdly, these observations vary depending on the type of hardness, making the conclusions less robust. However, we consistently observe that our proposed method, 2d-\(\text{GMM}_{\text{acc}-\text{SCD}}\), performs the best overall and robustly across all three datasets, with high \(\text{Recall}_{\text{n}}\) and \(\text{Recall}_{\text{h}}\). Such robustness is particularly crucial in practice since samples can be hard to learn due to any combination of the hardness types, making a reliable metric even more necessary.

### Training on the Filtered Subset

In this section, we present the results of training models on the estimated clean datasets \(\tilde{S}_{c}\) obtained in the previous section. Table 2 displays the generalization performance of models trained on the estimated clean datasets using each method. The results demonstrate that our proposed method 2d-\(\text{GMM}_{\text{acc}-\text{SCD}}\) consistently performs well in all three hardness types. We observe that the 1d-\(\text{GMM}_{\text{AUL}}\) method performs slightly better than 2d-\(\text{GMM}_{\text{acc}-\text{SCD}}\) in the second and third hardness types. However, in the first hardness type, 1d-\(\text{GMM}_{\text{AUL}}\) performs significantly worse than 2d-\(\text{GMM}_{\text{acc}-\text{SCD}}\), which suggests that 1d-\(\text{GMM}_{\text{AUL}}\) is not robust to the hardness type and is thus unreliable for practical use. Overall, the table emphasizes the importance of selecting an appropriate metric/approach for data filtration in the presence of both noisy and hard samples. Moreover, our experiments highlight the advantage of using the label noise detection method 2d-\(\text{GMM}_{\text{acc}-\text{SCD}}\) for different hardness types.

**Experiments on datasets with real-world label noise** To further evaluate and compare the performance of each data filtration method, we apply them to datasets with real-world label noise. Table 3 displays the results of each method for partitioning and data filtration, for models trained on the Animal-10N\({}^{2}\) and CIFAR-10N (Wei et al., 2022) datasets. Both datasets have real-world label noise and unknown easy-hard-noisy subsets. Since we lack knowledge of which samples are hard or incorrectly labeled, we cannot compute \(\text{Recall}_{\text{h}}\) or \(\text{Recall}_{\text{n}}\), as we did for our created synthetic datasets in Table 1. Nonetheless, we can apply each partitioning/label-noise detection method to these datasets, partition them into an estimated clean subset \(\tilde{S}_{c}\) and an estimated noisy subset \(\tilde{S}_{n}\), and train models on \(\tilde{S}_{c}\). The generalization performance of models trained on \(\tilde{S}_{c}\) obtained from each method is an indication of the quality of the estimated clean subsets \(\tilde{S}_{c}\). Our results in Table 3 demonstrate that our proposed method, 2d-\(\text{GMM}_{\text{acc}-\text{SCD}}\), outperforms all other methods by a significant margin in terms of test performance and in terms of estimating the label noise level of the given dataset.

Footnote 2: [https://dm.kaist.ac.kr/datasets/animal-10n/](https://dm.kaist.ac.kr/datasets/animal-10n/)
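To make the proposed procedure concrete, the following sketch computes accuracy over training and SCD and fits the 3-cluster GMM (the choice of 3 clusters is justified in Section 5). It assumes that per-epoch prediction correctness and the feature vectors at the mid-training checkpoint have been recorded; picking the cluster with the largest mean SCD as the noisy subset is a heuristic consistent with the pattern in Figure 5:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def partition_acc_scd(feats_mid, labels, correct_over_epochs, seed=0):
    """2d-GMM over (accuracy over training, SCD); returns (clean, noisy) index arrays.

    feats_mid           : (N, m) features at the epoch where train accuracy first reaches 50%
    labels              : (N,) assigned class labels c_i
    correct_over_epochs : (N, T) boolean, prediction == assigned label at each epoch
    """
    labels = np.asarray(labels)
    acc = np.asarray(correct_over_epochs, dtype=float).mean(axis=1)
    # static centroids: mean mid-training feature over *all* samples assigned to each class
    centroids = {c: feats_mid[labels == c].mean(axis=0) for c in np.unique(labels)}
    scd = np.array([np.linalg.norm(feats_mid[i] - centroids[labels[i]])
                    for i in range(len(labels))])

    z = np.column_stack([acc, scd])
    gmm = GaussianMixture(n_components=3, random_state=seed).fit(z)
    comp = gmm.predict(z)
    noisy_comp = int(np.argmax(gmm.means_[:, 1]))  # cluster with largest mean SCD (cf. Figure 5)
    return np.where(comp != noisy_comp)[0], np.where(comp == noisy_comp)[0]
```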
### Semi-supervised Learning on the Filtered Subsets

After partitioning the dataset into an estimated clean subset \(\tilde{S}_{c}\) and an estimated noisy subset \(\tilde{S}_{n}\), we can apply a semi-supervised learning algorithm and use \(\tilde{S}_{c}\) as the labeled set and \(\tilde{S}_{n}\) as the unlabeled set (by discarding the noisy labels in \(\tilde{S}_{n}\)). We use the Flex-Match semi-supervised learning algorithm (Zhang et al., 2021a), which is shown to have good performance. The results are shown in Table 4 for the two real-world datasets Animal-10N and CIFAR-10N. We can observe the significant performance improvement brought by using \(2\text{d-GMM}_{\text{acc-SCD}}\) as a data partitioning method, compared to other partitioning methods. This once again indicates that our data partitioning method is able to remove incorrectly labeled samples that do not help generalization, and to retain hard samples which do help generalization. Note that semi-supervised learning algorithms are well-suited for settings where there is a large set of unlabeled data and a small set of labeled data. However, in our settings, where we use different data partitioning methods on the Animal-10N and CIFAR-10N datasets, which have low label noise levels, the estimated noisy subset is not large. Hence, algorithms such as Flex-Match require a large number of epochs in order to reach convergence. This was computationally expensive, and hence we present results of networks that are stopped at a training loss value of 0.45. This is the reason we observe a relatively lower performance when using a semi-supervised learning approach compared to training only on the filtered labeled dataset (reported in Table 3).

## 5 Discussion

**Robustness to Hardness Type** In this study, we aim to explore different approaches for manipulating the hardness of samples and investigate their impact on label noise detection methods. We simulate three hardness types and compare the performance of various methods in identifying and removing noisy samples while retaining hard ones. It is important to highlight that the sample hardness level \(h\) is a relative concept and can only be evaluated when comparing samples within a given dataset. This is different from the noisiness level \(n\), which is an absolute measure. Our simulations show that the three hardness types that we test are sufficiently distinct. Interestingly, we find that some methods perform better in distinguishing certain types of hard samples from noisy ones, while not performing well in distinguishing other hard samples. However, the 2d-GMM on top of accuracy over training and SCD demonstrates the most robust performance across all hardness types. The significantly superior performance of this method on real-world datasets stems from its ability to effectively detect label noise and hard samples caused by various underlying reasons, which are often unpredictable in real-world datasets. When proposing a label noise detection method, it is essential to consider the unpredictability of hard samples in real-world datasets.

**Number of Clusters in GMM** We observe that in our synthetic datasets, the location of easy \(S_{e}\), hard \(S_{h}\), and noisy \(S_{n}\) samples in the 2D spectrum with accuracy over training and SCD follows a certain pattern.
We illustrate this pattern in Figure 5 for the hardness via diversification dataset. Furthermore, we find that this pattern is consistent across different hardness types. To further investigate this observation, we compare GMM models with two and three clusters and analyze the resulting clusters. Our observation is that when we use two clusters, many hard and noisy samples are clustered together, resulting in poor detection performance. However, when we use three clusters, the detection performance improves significantly, and the resulting clusters are much more coherent with the actual clusters. This observation led us to use a 3-cluster GMM in our proposed method, \(2\text{d-GMM}_{\text{acc-SCD}}\), which produced more accurate results. It is important to note that such an investigation is only possible through our controlled experiments on synthetic datasets, because we know the exact partitions of the datasets into easy, hard, and noisy samples. Nevertheless, our findings provide valuable insights into the design of label noise detection methods even when used with real-world datasets.

**Conclusion** We propose an empirical approach to investigate hard samples in an image classification setting. To this end, we create synthetic datasets that enable us to study hard and noisy samples in a controlled environment. Through our investigation, we make several interesting observations, including the importance of analyzing feature layer vectors of neural networks to distinguish between hard and noisy samples. Furthermore, we propose a label noise detection method that outperforms other existing methods in terms of both removing noisy samples and retaining hard ones. Our method can be applied to filter datasets with label noise, leading to better generalization performance when training models on the filtered set. Importantly, our label noise detection method is of quite general use and can be used in combination with any other method designed to deal with label noise and/or hard samples. Although our synthetic datasets were tailored toward image classification tasks, our conclusions could be applied to other applications. Moreover, our data filtration method could be combined with semi-supervised learning methods, such as FixMatch and FlexMatch, to further improve the model's generalization performance by making use of the discarded subset of the data. Overall, our study provides valuable insights into the design and optimization of label noise detection methods and their applications in improving the performance of machine learning models in real-world settings.

\begin{table} \end{table} Table 1: Partitioning results obtained by applying various label noise detection methods on our three synthetic datasets with hard samples. We can observe that, on the one hand, some methods achieve good recall only on noisy samples (high \(\text{Recall}_{\text{n}}\); for example \(\text{Thres}_{\text{acc over training}}\)), whereas they perform very poorly at recalling hard samples (low \(\text{Recall}_{\text{h}}\)). This means that, in an attempt to remove noisy samples from the dataset, they remove almost all hard samples as well. On the other hand, some methods perform well in terms of keeping hard samples (high \(\text{Recall}_{\text{h}}\); for example \(2\text{d-GMM}_{\text{WJSD-ACD}}\)), but fail to remove the noisy samples from the training dataset, as evidenced by the very low value of \(\text{Recall}_{\text{n}}\).
Our proposed method, \(2\text{d-GMM}_{\text{acc-SCD}}\), shows the best overall performance in terms of removing noisy samples while keeping hard samples, as evidenced by the relatively high values of \(\text{Recall}_{\text{h}}\) and \(\text{Recall}_{\text{n}}\). The result is consistent in all three settings, unlike some methods that only perform well in one of the hardness types.

\begin{table} \end{table} Table 2: DenseNet test accuracy comparison on the estimated clean datasets using different data filtration methods for datasets with different hardness types. Our method \(2\text{d-GMM}_{\text{acc-SCD}}\) consistently performs well in all hardness types, unlike the \(1\text{d-GMM}_{\text{AUL}}\) method that performs well only in the second and third hardness types. It is important to choose a method that works well with all hardness types because, in practice, hard samples can arise from any of the three types tested here.

\begin{table} \begin{tabular}{|c|c|c|} \hline Method \(\backslash\) Dataset & Animal-10N & CIFAR-10N \\ \hline \hline Thres\({}_{\text{Loss}}\) [22, 23, 24] & \(\mathbf{77.84}_{\pm 0.50}\) & \(63.85_{\pm 2.25}\) \\ Thres\({}_{\text{acc over training}}\) [22, 23] & \(71.60_{\pm 0.54}\) & \(69.44_{\pm 0.18}\) \\ Thres\({}_{\text{AUM}}\) [22, 23] & \(72.40_{\pm 0.39}\) & \(61.32_{\pm 0.57}\) \\ \hline 1d-GMM\({}_{\text{Loss}}\) [22, 23] & \(72.15_{\pm 1.81}\) & \(65.67_{\pm 0.84}\) \\ 1d-GMM\({}_{\text{AUL}}\) [22, 23] & \(70.11_{\pm 1.55}\) & \(66.65_{\pm 0.94}\) \\ \hline 2d-GMM\({}_{\text{WJSD}-\text{ACD}}\) [22, 23] & \(69.88_{\pm 0.67}\) & \(76.43_{\pm 0.80}\) \\ 2d-GMM\({}_{\text{acc}-\text{SCD}}\) (Ours) & \(\mathbf{77.43}_{\pm 0.46}\) & \(\mathbf{80.45}_{\pm 0.20}\) \\ \hline \end{tabular} \end{table} Table 4: Test accuracy percentage comparison for models trained using the Flex-Match algorithm on the partitioned datasets found from each method. The estimated clean subset \(\tilde{S}_{c}\) and noisy subset \(\tilde{S}_{n}\) are used as the labeled and unlabeled sets, respectively. We can observe that 2d-GMM\({}_{\text{acc}-\text{SCD}}\) achieves the best dataset quality, as evidenced by the high test accuracy of the trained models on both the Animal-10N and CIFAR-10N datasets.
\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline Method \(\backslash\) Dataset & \multicolumn{3}{c|}{Animal-10N} & \multicolumn{3}{c|}{CIFAR-10N} \\ \hline & Test Accuracy \% & Test Loss & Estimated LNL & Test Accuracy \% & Test Loss & Estimated LNL \\ \hline Thres\({}_{\text{Loss}}\) [22, 23, 24] & \(80.32_{\pm 0.38}\) & \(0.62_{\pm 0.01}\) & \(50\%\) & \(85.45_{\pm 0.22}\) & \(0.64_{\pm 0.01}\) & \(50\%\) \\ Thres\({}_{\text{acc over training}}\) [22, 23] & \(83.03_{\pm 0.11}\) & \(0.52_{\pm 0.01}\) & \(2.48\%\) & \(83.61_{\pm 0.17}\) & \(0.78_{\pm 0.01}\) & \(46\%\) \\ Thres\({}_{\text{AUM}}\) [22, 23] & \(76.30_{\pm 0.16}\) & \(0.90_{\pm 0.04}\) & \(50\%\) & \(81.89_{\pm 0.18}\) & \(0.83_{\pm 0.01}\) & \(50\%\) \\ \hline 1d-GMM\({}_{\text{Loss}}\) [22, 23] & \(83.05_{\pm 0.25}\) & \(0.53_{\pm 0.01}\) & \(2.82\%\) & \(85.87_{\pm 0.30}\) & \(0.62_{\pm 0.00}\) & \(46\%\) \\ 1d-GMM\({}_{\text{AUL}}\) [22, 23] & \(82.80_{\pm 0.28}\) & \(0.53_{\pm 0.01}\) & \(2.11\%\) & \(86.21_{\pm 0.06}\) & \(0.66_{\pm 0.01}\) & \(30\%\) \\ \hline 2d-GMM\({}_{\text{WJSD}-\text{ACD}}\) [22, 23] & \(83.06_{\pm 0.26}\) & \(0.54_{\pm 0.01}\) & \(1.03\%\) & \(89.44_{\pm 0.22}\) & \(0.35_{\pm 0.01}\) & \(17\%\) \\ 2d-GMM\({}_{\text{acc}-\text{SCD}}\) (Ours) & \(\mathbf{83.11}_{\pm 0.10}\) & \(\mathbf{0.51}_{\pm 0.01}\) & \(\mathbf{3.3}\%\) & \(\mathbf{89.90}_{\pm 0.21}\) & \(\mathbf{0.31}_{\pm 0.00}\) & \(\mathbf{10\%}\) \\ \hline \end{tabular} \end{table} Table 3: Performance comparison of models trained on the estimated clean datasets using different data filtration methods for the Animal-10N and CIFAR-10N datasets. The estimated label noise levels (LNL) of Animal-10N and CIFAR-10N are around 8% and 9.01%, respectively. We observe that our method provides the best LNL estimate. The LNL estimate of each method is computed using \(1-\left|\tilde{S}_{c}\right|/|S|\). Moreover, our method 2d-GMM\({}_{\text{acc}-\text{SCD}}\) provides the cleanest dataset, as evidenced by the large margin in the generalization performance improvement; both test accuracy and test loss are the best for 2d-GMM\({}_{\text{acc}-\text{SCD}}\).

Figure 5: (a) Easy-hard-noisy clusters for samples of the hardness via diversification dataset with their associated static centroid distance (SCD) and accuracy over training values. We can observe that noisy samples are located at the top left of the figure, followed by the hard samples in the middle and the easy samples at the other end, i.e., the bottom right. Easy samples have a high accuracy over training and a low SCD. Depicting such figures requires full ground-truth access to the available dataset and knowledge of the label noise and hardness levels of samples. However, this information is not available in practice, hence in sub-figures (b) and (c) we evaluate the ability of Gaussian mixture models to recover these clusters without any knowledge about the samples. (b) Clustering results of 2d-GMM\({}_{\rm acc-SCD}\) with 2 clusters. This clustering does not require ground-truth access to the samples and only requires the computation of SCD and accuracy over training. If we compare the clusters with the actual easy-hard-noisy partitions in sub-figure (a), we can observe that most of the hard and noisy samples are mixed in the second cluster. (c) Clustering results of 2d-GMM\({}_{\rm acc-SCD}\) with 3 clusters.
In contrast to clustering with 2 clusters, we can observe that clustering with 3 clusters is much better at including as many noisy samples as possible in the second cluster while not including hard samples. Hence, throughout our study, when referring to our label noise detection method 2d-GMM\({}_{\rm acc-SCD}\), we apply 3 clusters and use the top-left cluster as the detected noisy subset.
2303.15397
Quantum Effects of the Conformal Anomaly in a 2D Model of Gravitational Collapse
The macroscopic effects of the quantum conformal anomaly are evaluated in a simplified two-dimensional model of gravitational collapse. The effective action and stress tensor of the anomaly can be expressed in a local quadratic form by the introduction of a scalar conformalon field which satisfies a linear wave equation. A wide class of non-vacuum initial state conditions is generated by different solutions of this equation. An interesting subclass of solutions corresponds to initial states that give rise to an arbitrarily large semi-classical stress tensor on the future horizon of the black hole formed in classical collapse. These lead to modification and suppression of Hawking radiation at late times after the collapse, and potentially large backreaction effects on the horizon scale due to the conformal anomaly. The probability of non-vacuum initial conditions large enough to produce these effects is estimated from the Gaussian vacuum wave functional in the Schrodinger representation and shown to be of order 1. These results indicate that quantum effects of the conformal anomaly in non-vacuum states are relevant for gravitational collapse in the effective theory of gravity in four dimensions as well.
Emil Mottola, Mani Chandra, Gian Mario Manca, Evgeny Sorkin
2023-03-27T17:15:44Z
http://arxiv.org/abs/2303.15397v2
# Quantum Effects of the Conformal Anomaly in a 2D Model of Gravitational Collapse

###### Abstract

The macroscopic effects of the quantum conformal anomaly are evaluated in a simplified two-dimensional model of gravitational collapse. The effective action and stress tensor of the anomaly can be expressed in a local quadratic form by the introduction of a scalar conformalon field \(\varphi\), which satisfies a linear wave equation. A wide class of non-vacuum initial state conditions is generated by different solutions of this equation. An interesting subclass of solutions corresponds to initial states that give rise to an arbitrarily large semi-classical stress tensor \(\left\langle T_{\mu}^{\ \nu}\right\rangle\) on the future horizon of the black hole formed in classical collapse. These lead to modification and suppression of Hawking radiation at late times after the collapse, and potentially large backreaction effects on the horizon scale due to the conformal anomaly. The probability of non-vacuum initial conditions large enough to produce these effects is estimated from the Gaussian vacuum wave functional of \(\varphi\) in the Schrodinger representation and shown to be \(\mathcal{O}(1)\). These results indicate that quantum effects of the conformal anomaly in non-vacuum states are relevant for gravitational collapse in the effective theory of gravity in four dimensions as well.

###### Contents

* I Introduction
* II Radial Collapse Geometry in Double Null Coordinates
* III Classical Radial Collapse of a Null Shell
* IV The Stress Tensor of the Conformal Anomaly and the BH Horizon
* V Non-Vacuum Initial States and Suppression of the Hawking Flux
* VI Probability Distribution for Non-Vacuum Initial Conditions
* VII Discussion and Outlook
* A Curvature Components in Double Null Coordinates
* B The Functions \(r(u,v)\) and \(\sigma(u,v)\) in regions I and II
* C Three Sets of Double Null Coordinates and Horizon Finiteness Conditions

## I Introduction

Black holes are solutions of the Einstein eqs. of classical general relativity (GR) in the absence of sources, except for interior singularities where matter is compressed to infinite pressures and densities. In addition to these singularities, the characteristic feature of a classical black hole (BH) is its event horizon, the critical null surface of finite area from which outwardly directed light rays cannot escape. Whereas it is widely believed that quantum effects intervene to regulate interior BH singularities, the horizon region is generally supposed to remain substantially unchanged from the classical description. This description includes the important assumption of vanishing stress tensor \(T_{\mu\nu}=0\) on the horizon that permits continuation of the exterior geometry into the BH interior by means of a (singular) transformation of coordinates [1; 2]. It is important to examine this assumption for a number of reasons. Even in classical GR, the hyperbolic character of Einstein's eqs. allows generically for \(T_{\mu\nu}\) sources and discontinuities on the horizon which would violate the hypothesis of analytic continuation through it, potentially altering the geometry of the singular interior as well. The analogous issue of stress tensor sources arises when quantum effects are considered.
If the quantum state is assumed to be the local vacuum at the horizon, the expectation value of the stress tensor \(\langle T_{\mu\nu}\rangle\) in this state can remain negligibly small, provided also that quantum fluctuations measured by higher point correlation functions such as \(\langle T_{\alpha\beta}T_{\mu\nu}\rangle\) also remain small on the horizon. Both of these conditions are open to question in the quantum theory. It is well known that there is no unique vacuum state in curved spacetime [3]. The unique vacuum of quantum field theory (QFT) in flat Minkowski space relies upon the Lorentz invariant separation of positive and negative frequency modes, hence particles and anti-particles, over a complete Cauchy surface, and the existence of a positive definite Hamiltonian with respect to that surface. These requirements are not satisfied in general curved spacetimes, and are particularly problematic when horizons are present. It is at the BH horizon where the timelike Killing field \(\partial_{t}\) (or the co-rotating Killing field \(\partial_{t}+\omega\,\partial_{\phi}\) for rotating BHs) becomes null, the clean separation of particle and anti-particle modes breaks down, and the corresponding Hamiltonian becomes unbounded from below. There is thus no _a priori_ reason for the state of QFT to correspond to the 'empty' Minkowski vacuum at the horizon. Certainly a large variety of non-vacuum states are also allowed. Early work established that the Hawking effect is dependent upon the choice of quantum state, and is also closely related to the conformal anomaly that arises in defining the renormalized \(\langle T_{\mu}^{\ \nu}\rangle\) in BH spacetimes [4; 5]. Later it was shown that Hawking thermal emission at late times after gravitational collapse to a BH can be derived directly from the assumption that the short distance properties of the quantum state and the Hadamard behavior of its Green's functions on the future horizon region are the same as those in flat space [6]. This assumption also guarantees that the future horizon is smooth, and \(\langle T_{\mu}^{\ \nu}\rangle\) remains regular there, so that quantum backreaction effects remain small. These conditions correspond to the initial state of QFT in gravitational collapse being the Unruh state [7]. Virtually all later investigations have assumed this state, including those with dynamical backreaction [8; 9]. It is also the regularity of the horizon and absence of any stress tensor source there that allows association of a temperature \(T_{H}=1/\beta_{H}\) with the periodicity \(\beta_{H}\) of the metric at the horizon continued to Euclidean time [10; 11]. Yet paradoxically, it is just this assumption of a smooth horizon and the Hawking temperature associated to it that leads to an enormous Bekenstein-Hawking BH entropy equal to \(1/4\) of the area of the horizon, which is particularly difficult to understand if the BH horizon is a smooth mathematical boundary only, with no sources or independent degrees of freedom of its own. If matter and information can freely fall just one-way through this mathematical horizon boundary, the effect of Hawking thermal radiation also suggests the possibility of pure states evolving into mixed states and the breakdown of quantum unitary evolution [12].
The difficulty (if not impossibility) of recovering this lost information at the late or final stages of the BH evaporation process leads to a severe 'information paradox' that has been the subject of numerous investigations and speculations spanning several decades [13; 14; 15; 16; 17; 18; 19; 20; 21]. Although the Hawking temperature \(T_{H}\) of radiation far from the BH is very small, the inverse of the gravitational redshift implies infinitely blueshifted local temperatures and energies if traced back to the horizon. It is thus by no means clear that quantum fluctuations \(\langle T_{\alpha\beta}T_{\mu\nu}\rangle\) from the mean and their backreaction on the near-horizon geometry can be neglected, as is usually assumed. The increasing time dilation and gravitational blueshift of frequency and energy scales with respect to the asymptotically flat region as the horizon is approached results in all fixed finite mass scales becoming negligible there, and an effective classical conformal symmetry in the near-horizon region [22; 23; 24]. This implies in turn that the conformal behavior and conformal anomaly of QFT are relevant there [25; 26; 27]. It is also known that the conformal anomaly is necessarily associated with the existence and residue of a \(1/k^{2}\) massless pole in stress tensor correlation functions, even in flat space [28; 29; 30; 26]. Since this massless anomaly pole in quantum correlation functions is a lightlike singularity, it is associated with effects on the light cone, which can extend to arbitrarily large macroscopic scales, and is particularly relevant on null horizons. The \(1/k^{2}\) pole can be expressed as the propagator of an effective scalar degree of freedom \(\varphi\), a collective _conformal_ mode of the underlying massless (or sufficiently light) quantum fields, whose fluctuations and correlations are significantly enhanced in the vicinity of a BH horizon. The existence of a lightlike singularity implies quantum correlations due to the anomaly which influence the semi-classical mean value \(\langle T_{\mu}^{\ \nu}\rangle\) as well. The dependence of the long range conformalon scalar on the norm of the Killing vector \(\partial_{t}\) carries non-local information about the conformal transformation of the vacuum from the asymptotically flat region, where the Minkowski vacuum is preferred, to the expectation value \(\langle T_{\mu}^{\ \nu}\rangle\) on the BH horizon. These quantum anomaly effects on the horizon are generically large for wide classes of non-vacuum initial conditions, notwithstanding the smallness of the curvature there [25; 26; 32]. The local form of the anomaly effective action and stress tensor in terms of the scalar \(\varphi\) makes the quantitative evaluation of these effects much simpler technically than the much more involved and laborious method of obtaining renormalized expectation values \(\langle T_{\mu}^{\ \nu}\rangle\) directly from the underlying QFT [33]. Indeed the technical complexity of the direct method of calculating \(\langle T_{\mu}^{\ \nu}\rangle\) has been sufficient to deter any systematic investigation of all but a small number of special quantum states, in specific QFTs. In contrast, a very wide class of states in generic conformal QFTs can be investigated by simply considering the variety of possible solutions to the _linear_ wave eq. satisfied by the conformalon scalar field \(\varphi\), and computing its semi-classical \(T_{\mu}^{\ \nu}[\varphi]\), which is already renormalized.
Since the corresponding effective action of the anomaly is also quadratic in \(\varphi\), any particular occurrence of non-vacuum initial data in gravitational collapse is described by a Gaussian wavefunctional in the Schrodinger representation, and its probability is therefore also easily estimated. Because all of these essential features are present in both two and four spacetime dimensions, it is advantageous to investigate their consequences first in the 2D case, in a simplified computable model of gravitational collapse without backreaction, as a proxy and warm-up to the more realistic 4D dynamical problem. The organization of the paper is as follows. In the next section we define the two-dimensional model, and set notations and conventions in double null coordinates suitable for our problem. In Sec. III we specify and solve for the interior and exterior geometry of an imploding null shell which creates a classical BH. In Sec. IV we review the two-dimensional conformal anomaly and the non-local Polyakov effective action corresponding to it, the massless pole it generates in vacuum polarization, and the local representation of the effective action by the introduction of the massless scalar conformalon field \(\varphi\), showing how it can have significant effects on BH horizons. In Sec. V we evaluate the anomaly stress tensor \(T_{\mu}^{\ \nu}[\varphi]\) in a class of interesting non-vacuum states where it can become large and modify the Hawking effect. In Sec. VI we make use of the Gaussian distribution corresponding to these initial states in the wavefunctional of the anomaly effective action to show that the probability of non-vacuum initial conditions producing such effects on the horizon is non-negligible and \(\mathcal{O}(1)\), discussing how this is consistent with general theorems of finite initial data, such as [34]. Sec. VII contains a discussion of the results and the outlook for the extension to the full four-dimensional effective field theory (EFT) of gravity proposed in [27], for the open problem of dynamical collapse including the relevant quantum effects in the full EFT. There are three appendices wherein are collected, for the convenience of the reader, the curvature components in double null coordinates (Appendix A), the metric functions for the collapsing null shell geometry (Appendix B), and the stress tensors and horizon finiteness conditions in the various coordinates used, and the relations between them (Appendix C).

## II Radial Collapse Geometry in Double Null Coordinates

The general spherically symmetric line element in \(3+1\) dimensions may be expressed in the factorized \(2\times 2\) form \[ds_{4}^{2}=\gamma_{ab}\,dx^{a}dx^{b}+r^{2}d\Omega^{2} \tag{2.1}\] where \(d\Omega^{2}=d\theta^{2}+\sin^{2}\theta\,d\phi^{2}\) is the standard line element on the unit \(S^{2}\), \(\gamma_{ab}(x^{1},x^{2})\) is the metric on the two-dimensional subspace of constant \(\theta,\phi\), and \(r=r(x^{1},x^{2})\) is a scalar function of the arbitrary two-dimensional coordinates \(x^{a}\,(a=1,2)\). The radius \(r\) is uniquely defined by the condition that the proper area of the sphere of constant \(r\) is \(A=4\pi r^{2}\) in the spherically symmetric spacetime. The various geometric quantities for the metric (2.1) are given in Appendix A.
In particular the Einstein tensor of the full four-dimensional spacetime with the line element (2.1) has the components [35] \[G_{ab}=\frac{\gamma_{ab}}{r^{2}}\,\left[(\nabla r)^{2}-1+2\,r\square\,r\right]-\frac{2}{r}\,\nabla_{a}\nabla_{b}r\,,\qquad a,b=1,2 \tag{2.2a}\] \[G^{\theta}_{\ \theta}=\ G^{\phi}_{\ \phi}=r\,\square\,r-\frac{r^{2}}{2}R \tag{2.2b}\] with all other components vanishing. In (2.2) we use the notations \[\nabla_{a}r=\partial_{a}r\equiv\frac{\partial r}{\partial x^{a}}\,,\qquad(\nabla r)^{2}\equiv\gamma^{ab}(\nabla_{a}r)(\nabla_{b}r)\,,\qquad\square\,r\equiv\gamma^{ab}\nabla_{a}\nabla_{b}r \tag{2.3}\] with \(\nabla_{a}\) the covariant derivative with respect to the two-dimensional metric \(\gamma_{ab}\), and \(R\) the corresponding two-dimensional Ricci scalar. We shall generally suppress any special notation distinguishing quantities derived from the two-dimensional metric _vs._ the full four-dimensional line element (2.1), as which is meant should be clear from the context. For example eqs. (2.2) clearly refer to the four-dimensional Einstein tensor, since the Einstein tensor of any two-dimensional space vanishes identically. It is useful also to define the three functions \(h,m\) and \(\kappa\) in terms of \(r(x^{1},x^{2})\) by \[h\equiv(\nabla r)^{2}\equiv 1-\frac{2Gm}{r} \tag{2.4a}\] \[\kappa\equiv-\frac{Gm}{r^{2}}=\frac{(\nabla r)^{2}-1}{2r} \tag{2.4b}\] which are also scalars with respect to the two-geometry \(\gamma_{ab}\). The quantity \(m\) is the Misner-Sharp mass function and \(\kappa\) is the acceleration or surface gravity at \(r\).1 Footnote 1: The definition of \(\kappa\) in this paper follows the conventions of [35], which differ from the more general definition of the surface gravity \(\kappa=\frac{1}{2}\sqrt{\frac{h}{f}}\,\frac{df}{dr}\). The two become equal, except for a sign change, when \(f=h\) and \(m\) is independent of \(r\). The Einstein eqs. for the general spherically symmetric four-geometry (2.1) are \[-\nabla_{a}\nabla_{b}\,r+\left(\Box r+\kappa\right)\gamma_{ab}=4\pi r\,G\,T_{ab} \tag{2.5a}\] \[\Box r-\frac{r}{2}\,R=8\pi r\,G\,p_{\perp} \tag{2.5b}\] where \[T^{\theta}_{\ \theta}=T^{\phi}_{\ \phi}\equiv p_{\perp} \tag{2.6}\] is the tangential pressure, which spherical symmetry requires must have equal \(\theta\) and \(\phi\) components. If one defines the effective two-dimensional stress tensor \(\tau_{ab}\) by \[T_{ab}\equiv\frac{\tau_{ab}}{4\pi r^{2}}\,,\qquad a,b=1,2\,, \tag{2.7}\] covariant conservation of the full four-dimensional stress tensor gives \[\nabla_{b}\tau^{ab}=4\pi\nabla_{b}(r^{2}T^{ab})=8\pi p_{\perp}\nabla^{a}r\,, \tag{2.8}\] all other components being satisfied identically. Hence the stress tensor \(\tau_{ab}\) is covariantly conserved purely in two dimensions if and _only if_ the tangential pressure vanishes identically, _i.e._ \[\nabla_{b}\tau^{ab}=0\,,\qquad\Leftrightarrow\quad p_{\perp}=0 \tag{2.9}\] which we shall assume for a simplified model of gravitational collapse. In this model with \(p_{\perp}=0\), the Einstein eqs. (2.5) with (2.7) become \[-\nabla_{a}\nabla_{b}\,r+(\,\square r+\kappa)\,\gamma_{ab}=\frac{G}{r}\,\tau_{ab}\,, \tag{2.10a}\] \[R=\frac{2}{r}\,\square r \tag{2.10b}\] which defines an effective two-dimensional theory with covariantly conserved \(\nabla_{b}\tau^{ab}=0\).
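As a simple consistency check of the definitions (2.4), not needed for the derivation but useful for fixing signs, consider the static Schwarzschild exterior for which \(h=g^{rr}=1-2GM/r\) with constant \(M\). Then (2.4) gives \[m=M=\text{const.}\,,\qquad\kappa=-\frac{GM}{r^{2}}\,,\] so the Misner-Sharp mass reduces to the Schwarzschild mass, and \(|\kappa|\) evaluated at \(r=2GM\) is \(1/4GM\), the usual surface gravity, in accord with the sign convention noted in the footnote.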
By differentiating (2.4a) and using (2.4b) and (2.10) we obtain the useful relation \[\frac{\partial m}{\partial x^{a}}=(\tau_{a}^{\ b}-\delta_{a}^{\ b}\,\tau_{c}^{\ c})\,\frac{\partial r}{\partial x^{b}} \tag{2.11}\] for the Misner-Sharp mass flux or gradient, where \(\tau_{c}^{\ c}=\gamma^{cd}\tau_{cd}\) is the two-dimensional trace. To this point the coordinates \((x^{1},x^{2})\) of the two-geometry at fixed \(\theta,\phi\) have been left arbitrary to emphasize covariance under arbitrary coordinate transformations of \((x^{1},x^{2})\). We will make use of two specific useful choices of coordinates. The first is that of Schwarzschild coordinates, obtained by identifying one of the coordinates (\(x^{2}\), say) with \(r\) itself. A possible \(dt\,dr\) cross term can be eliminated by a redefinition of \(t\), so that \(x^{1}\) can then be identified as the Schwarzschild time \(t\). This results in the line element taking on the standard Schwarzschild form [1] \[\gamma_{ab}\,dx^{a}dx^{b}=-f\,dt^{2}+\frac{dr^{2}}{h} \tag{2.12}\] with \(f\) and \(h\) two functions of \((t,r)\). In these coordinates \(h=g^{rr}\) is the same function defined in general two-dimensional coordinates by (2.4a), while (2.11) becomes \[\frac{\partial m}{\partial r}=-\tau_{t}^{\ t}=-4\pi r^{2}\,T_{t}^{\ t}=4\pi r^{2}\rho \tag{2.13}\] in terms of the energy density \(\rho\). Integrating this eq. with respect to \(r\) shows that \(m(t,r)\) is the Misner-Sharp mass-energy within the sphere of radius \(r\) on the time slice fixed by \(t\). Since Schwarzschild coordinates (2.12) become singular at \(h=0\), and the causal structure is tied to the behavior of null rays, a different coordinate choice that proves useful is that of double null \((u,v)\) coordinates. These rely on the fact that every two-geometry is locally conformally flat, so the two-dimensional line element (2.1) can be expressed in the form \[\gamma_{ab}\,dx^{a}dx^{b}=-e^{2\sigma}\,du\,dv \tag{2.14}\] with the metric \(\gamma_{uv}=\gamma_{vu}=-\frac{1}{2}e^{2\sigma}\) and inverse \(\gamma^{uv}=\gamma^{vu}=-2e^{-2\sigma}\), in terms of \(\sigma(u,v)\). The line element (2.14) is invariant under the redefinitions \[u\to\tilde{u}(u)\,,\qquad v\to\tilde{v}(v) \tag{2.15}\] with the simultaneous redefinition of \[\sigma\to\tilde{\sigma}=\sigma-\frac{1}{2}\ln\left(\frac{d\tilde{u}}{du}\right)-\frac{1}{2}\ln\left(\frac{d\tilde{v}}{dv}\right)\,,\qquad\frac{d\tilde{u}}{du}>0\,,\quad\frac{d\tilde{v}}{dv}>0\,. \tag{2.16}\] Thus there is still considerable coordinate freedom to redefine \(u\) and \(v\) independently, and we will make use of several different sets of double null coordinates. Since the conformal factor \(e^{\sigma}\) changes under the coordinate transformation (2.15)-(2.16), such coordinate transformations are also conformal transformations, and form the infinite dimensional conformal group in two dimensions. The coordinate freedom can be fixed by _e.g._ setting \(\sigma=0\) in a region where the spacetime is flat, so that \(u=t-r,v=t+r\) become the standard radial null coordinates in two-dimensional flat spacetime. In double null coordinates the coordinate invariant condition for the location of the apparent horizon (AH) is \[h=(\nabla r)^{2}=-4\,e^{-2\sigma}\,\frac{\partial r}{\partial u}\frac{\partial r}{\partial v}\,\stackrel{{ AH}}{{=}}\,0 \tag{2.17}\] showing that the rate of change of the radius with respect to at least one of the null coordinates must vanish there.
The conditions \[\frac{\partial r}{\partial v}=0\qquad\text{future AH} \tag{2.18a}\] \[\frac{\partial r}{\partial u}=0\qquad\text{past AH} \tag{2.18b}\] define the future or past apparent horizons respectively, which are also invariant under (2.15). The two-dimensional scalar curvature in double null coordinates (2.14) is \[R=-2\,\square\,\sigma=8\,e^{-2\sigma}\,\frac{\partial^{2}\sigma}{\partial u\partial v} \tag{2.19}\] and the Einstein eqs. (2.10) with \(p_{\perp}=0\) take the form of (A13), which are covariant with respect to the two-dimensional coordinate/conformal transformation (2.15)-(2.16). Thus \(\tau_{ab}\,dx^{a}dx^{b}=\tau_{\tilde{a}\tilde{b}}\,dx^{\tilde{a}}dx^{\tilde{b}}\), so for example \(\tau_{uu}\) transforms as \[\tau_{uu}=\left(\frac{d\tilde{u}}{du}\right)^{2}\,\tau_{\tilde{u}\tilde{u}} \tag{2.20}\] under (2.15)-(2.16). The Misner-Sharp mass is given by \[m(u,v)=\frac{r}{2G}\left[1+4\,e^{-2\sigma}\left(\frac{\partial r}{\partial u}\right)\left(\frac{\partial r}{\partial v}\right)\right]\,, \tag{2.21}\] while eqs. (2.11) become \[\frac{\partial m}{\partial u}=2\,e^{-2\sigma}\left(\tau_{uv}\frac{\partial r}{\partial u}-\tau_{uu}\frac{\partial r}{\partial v}\right) \tag{2.22a}\] \[\frac{\partial m}{\partial v}=2\,e^{-2\sigma}\left(\tau_{uv}\frac{\partial r}{\partial v}-\tau_{vv}\frac{\partial r}{\partial u}\right) \tag{2.22b}\] in double null coordinates.
## III Classical Radial Collapse of a Null Shell
The simplest model of radial collapse which will form a BH classically is that of a spherical shell imploding upon its center at the speed of light. The classical energy-momentum-stress tensor of such a lightlike infalling shell is \[\tau_{vv}^{C}=\frac{dE}{dv}\,, \tag{3.1}\] with \(E(v)\) determining its profile as a function of the advanced null coordinate time \(v\), and with all other components of \(\tau_{ab}^{C}\) vanishing. The total classical mass-energy carried by the incoming null shell of radiation is \[M=\int_{-\infty}^{\infty}\frac{dE}{dv}\,dv\,. \tag{3.2}\] The simplest case to analyze and solve explicitly is that of an infinitesimally thin shell for which \[\frac{dE}{dv}=M\,\delta(v-v_{0})\,,\qquad E(v)=M\,\theta(v-v_{0}) \tag{3.3}\] so that the four-dimensional classical energy-momentum tensor is \[T_{vv}^{C}=\frac{\tau_{vv}^{C}}{4\pi r^{2}}=\frac{M}{4\pi r^{2}}\,\delta(v-v_{0}) \tag{3.4}\] on the incoming null shell. In this case the metric functions can be found explicitly in each region as follows. In the first region I, for \(v<v_{0}\) interior to the imploding shell, spacetime is flat, so that the two-dimensional line element at constant \(\theta,\phi\) is \[\text{I}:\qquad ds^{2}=-dt^{2}+dr^{2}=-du\,dv\,,\qquad\text{with}\qquad u\equiv t-r,\quad v\equiv t+r<v_{0}\,,\] \[\sigma(u,v)=0\,,\qquad r(u,v)=\frac{v-u}{2} \tag{3.5}\] which satisfies (A13) with \(\tau_{ab}=0\). In the exterior region \(v>v_{0}\) outside of the shell, the geometry is that of the sourcefree four-dimensional Schwarzschild solution, _i.e._ the two-dimensional solution is
\[\text{II}:\quad ds^{2}=f(r)\,\left(-dt^{2}+{dr^{*}}^{2}\right)=-f(r)\,d\tilde{u}\,d\tilde{v}\,,\quad\text{with}\quad f(r)=1-\frac{r_{M}}{r}\,,\quad r_{M}\equiv\frac{2GM}{c^{2}}\] \[\quad dr^{*}=\frac{dr}{f(r)}\,,\quad r^{*}\equiv r+r_{M}\,\ln\left(\frac{r}{r_{M}}-1\right)\,,\qquad\tilde{u}\equiv t-r^{*},\quad\tilde{v}\equiv t+r^{*}>\tilde{v}_{0}\,. \tag{3.6}\] We denote with tildes the Schwarzschild null coordinates \((\tilde{u},\tilde{v})\), since they are allowed to differ from the corresponding \((u,v)\) coordinates in the flat region (3.5). The relations (3.6) yield a solution to the sourcefree Einstein eqs. (A13) with \(\tau_{ab}=0\) and \[\tilde{\sigma}=\frac{1}{2}\ln f(r)\,,\qquad\frac{\tilde{v}-\tilde{u}}{2}=r^{*}=r+r_{M}\ln\left(\frac{r}{r_{M}}-1\right) \tag{3.7}\] determining \(r\) and \(\tilde{\sigma}\) implicitly as functions of \(\tilde{v}-\tilde{u}\), and \(\tilde{v}+\tilde{u}=2t\) in this Schwarzschild region II. The two sets of double null coordinates must be matched for a continuous (\(C^{0}\)) metric at \(v=v_{0}\). This is accomplished by noting that the radius \(r\) has the same invariant geometric meaning in terms of the four-dimensional metric (2.1) in either region. Comparison of (3.5) and (3.7) shows that \(\sigma\neq\tilde{\sigma}\), so that the solution in the two regions in these coordinates as they stand is discontinuous across the null shell. In order to find a solution to the geometry of the spherical collapse of a null shell with \(\mathcal{C}^{0}\) continuous metric functions we utilize the gauge freedom (2.15)-(2.16) to match the interior solution I (3.5) to the exterior solution II (3.6). For \(r\gg r_{M}\) and \(u,\tilde{u}\to-\infty\), both regions I and II are asymptotically flat, so that we may choose the advanced null coordinates \(v\) and \(\tilde{v}\) to be equal there. The reparametrization freedom in \(v\) can be used to require the interior \(v\) coordinate to match the exterior \(\tilde{v}\) coordinate for all \(u,\tilde{u}\). Hence \[\tilde{v}=v\,,\qquad d\tilde{v}=dv\,,\qquad\tilde{v}_{0}=v_{0}\,. \tag{3.8}\] Then requiring the metric function \(r=(v-u)/2\) from (3.5) to be equal to that from (3.7) at the location of the null shell at \(\tilde{v}_{0}=v_{0}\) gives \[r^{*}\big{|}_{v=v_{0}}=\frac{v_{0}-\tilde{u}}{2}=r_{0}(u)+r_{M}\ln\left(\frac{r_{0}(u)}{r_{M}}-1\right) \tag{3.9}\] with \[r_{0}(u)\equiv r(u,v_{0})=\frac{v_{0}-u}{2}\,, \tag{3.10}\] so that the radius \(r\) is continuous across the shell. Eq. (3.9) determines \[\tilde{u}(u)=u-2r_{M}\ln\left(\frac{v_{0}-u}{2r_{M}}-1\right) \tag{3.11}\] as a function of \(u\) and \[r^{*}(u,v)=r(u,v)+r_{M}\ln\left(\frac{r(u,v)}{r_{M}}-1\right)=\frac{v-u}{2}+r_{M}\ln\left(\frac{r_{0}(u)}{r_{M}}-1\right) \tag{3.12}\] as an implicit function of the original \((u,v)\) of region I, in region II. Differentiating (3.9) and using \(dr^{*}=dr/f(r)\), or directly from (3.11), we have \[\frac{d\tilde{u}}{du}=\frac{1}{f(r)}\bigg{|}_{r=r_{0}(u)}=\left(1-\frac{r_{M}}{r_{0}(u)}\right)^{-1}=\left(1-\frac{2r_{M}}{v_{0}-u}\right)^{-1}\equiv\frac{1}{f_{0}} \tag{3.13}\] so that using (2.16) with (3.7) and (3.8), we obtain \[\sigma=\tilde{\sigma}+\frac{1}{2}\ln\left(\frac{d\tilde{u}}{du}\right)=\frac{1}{2}\ln\left(\frac{f(r)}{f(r_{0})}\right)=\frac{1}{2}\ln\left(\frac{f}{f_{0}}\right) \tag{3.14}\] in region II, determining also the second metric function \(\sigma\) in the Schwarzschild region II, now expressed in the original \((u,v)\) coordinates. Since (3.14) vanishes at \(v=v_{0},r=r_{0}(u)\), \(\sigma(u,v_{0})\) is continuous with the value \(\sigma=0\) of (3.5) in the interior flat region I.
Thus the two-dimensional line element \[ds^{2}=-e^{2\sigma}\,du\,dv=-\frac{f(r)}{f(r_{0})}\,du\,dv=-f(r)\,d\tilde{u}\,d\tilde{v}=-f(r)\,dt^{2}+\frac{dr^{2}}{f(r)} \tag{3.15}\] is indeed the Schwarzschild exterior geometry in region II for \(\tilde{v}=v>v_{0}\), after the passage of the null shell, continuously matched to the flat region I at \(v=v_{0}\), with the coordinate transformation (3.11). The piecewise solutions for \(r\) and \(\sigma\) in the two regions and the full geometry determined by the imploding null shell localized at \(v=v_{0}\) according to (3.1)-(3.3) can be combined in terms of the Heaviside step function \[\Theta(v-v_{0})=\left\{\begin{array}{ll}1,&\quad v>v_{0}\\ 0,&\quad v<v_{0}\end{array}\right.\] in the form \[\sigma(u,v)=\frac{1}{2}\ln\left(\frac{f(r)}{f(r_{0})}\right)\,\Theta(v-v_{0}) \tag{3.16}\] with \(r(u,v)\) determined by the implicit relation for \(v>v_{0}\) in region II \[r(u,v)=\frac{v-u}{2}+r_{M}\,\ln\left(\frac{r_{0}f_{0}}{rf}\right)\,\Theta(v-v_{0})=\frac{v-u}{2}+r_{M}\,\ln\left(\frac{r_{0}-r_{M}}{r-r_{M}}\right)\,\Theta(v-v_{0}) \tag{3.17}\] and \(r_{0}(u)\) given by (3.10). From (3.16)-(3.17) it is clear that although \(\sigma\) and \(r\) are \(C^{0}\) continuous at \(v=v_{0}\), their first derivatives with respect to \(v\) are not. Since the derivative of the Heaviside step function \(\Theta\) is a Dirac \(\delta\)-function, the second derivative \[\frac{\partial^{2}r}{\partial v^{2}}=-\frac{r_{M}}{r^{2}}\,\delta(v-v_{0})+\ldots \tag{3.18}\] contains a Dirac \(\delta\)-function contribution at \(v=v_{0}\) (with the ellipsis indicating the remaining terms which are non-singular). The various first and second derivatives of \(r\) and \(\sigma\) with respect to \(u\) and \(v\) in each region are catalogued in Appendix B. With those full expressions one may check that the classical Einstein eqs. (A13) are satisfied everywhere, including the only component with a non-zero source \[G_{vv}=8\pi G\,T_{vv}^{C} \tag{3.19}\] from the stress tensor (3.4) of the null shell, with the \(\delta\)-function from (3.18). The Carter-Penrose conformal diagram for the classical geometry of the radially collapsing null shell of finite mass \(M\) but infinitesimal thickness is illustrated in Fig. 1.
Figure 1: Carter-Penrose conformal diagram of radial collapse of a null shell. The shaded region I, \(v<v_{0}\) is flat, while the unshaded region II, \(v>v_{0}\) is Schwarzschild with mass \(M\). The point C with coordinates (3.22) is where the shell crosses its future event horizon.
From (3.11), as the \(u\) coordinate in region I \[\text{I}:\ u\to v_{0}-2r_{M} \tag{3.20}\] where \(r\to r_{M}\), this corresponds to \[\text{II}:\ \tilde{u}\to+\infty\,,\quad\frac{\partial r}{\partial v}=\frac{f}{2}\to 0 \tag{3.21}\] in the Schwarzschild region II, and the condition (2.18a) is satisfied. Thus \(u=v_{0}-2r_{M},v\geq v_{0}\) is the location of the future marginally outermost trapped surface and apparent horizon (AH). There is a last incoming null ray at \(v=v_{0}-2r_{M}\) which reflects from the origin at \(u=v=v_{0}-2r_{M}\) and becomes the outgoing null ray defining the future BH horizon, but the conditions (2.17)-(2.18a) are not satisfied until \(v\geq v_{0}\). Incoming rays with \(v_{0}-2r_{M}<v<v_{0}\) reflect from the origin too late and are trapped, being pulled back finally to the future singularity at \(r=0\). Thus the point \(C\) at which the imploding null shell crosses its future horizon, with coordinates \[(u,v)_{C}=(v_{0}-2r_{M},v_{0}) \tag{3.22}\] is where the AH and marginally trapped surface first appears, and a classical BH is formed, _cf._ Fig. 1.
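Where it helps to make the implicit geometry concrete, the relation (3.17) can be inverted numerically. The following is a minimal sketch of our own (not from the paper), assuming scipy and the illustrative values \(r_{M}=1\), \(v_{0}=5\); it solves (3.17) for \(r(u,v)\) by bracketed root-finding and checks that \(\partial r/\partial v=f/2\to 0\) as \(u\to v_{0}-2r_{M}\), confirming the apparent horizon condition (2.18a) and the near-horizon behavior (3.24) quoted below.

```python
import numpy as np
from scipy.optimize import brentq

r_M, v0 = 1.0, 5.0          # illustrative units and shell location (assumptions)

def r_of_uv(u, v):
    """Radius r(u,v) in region II (v > v0), from the implicit relation (3.17)."""
    r0 = 0.5 * (v0 - u)                     # eq. (3.10)
    def F(r):
        return r - 0.5 * (v - u) - r_M * np.log((r0 - r_M) / (r - r_M))
    return brentq(F, r_M * (1 + 1e-12), 0.5 * (v - u) + 10 * r_M, xtol=1e-14)

v = v0 + 1.0
for eps in (1e-1, 1e-3, 1e-6):
    u = v0 - 2 * r_M * (1 + eps)            # approach the horizon u -> v0 - 2 r_M
    r = r_of_uv(u, v)
    dv = 1e-4
    drdv = (r_of_uv(u, v + dv) - r_of_uv(u, v - dv)) / (2 * dv)  # numerical dr/dv
    f = 1 - r_M / r
    print(f"eps={eps:.0e}:  r={r:.8f}  dr/dv={drdv:.3e}  f/2={f/2:.3e}")
# dr/dv tracks f/2 and both vanish as eps -> 0, locating the future AH (2.18a);
# r/r_M -> 1 + eps*exp((v-v0)/(2*r_M)), in accordance with (3.24).
```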
Since the approach of \(u\) to the horizon is important in evaluating the quantum effects in the following sections, we note that (3.17) may be written in the form \[\exp\left(\frac{r}{r_{M}}\right)\left(\frac{r}{r_{M}}-1\right)=\exp\left(\frac{v-u}{2r_{M}}\right)\left(\frac{r_{0}}{r_{M}}-1\right)\,,\qquad v>v_{0}\,, \tag{3.23}\] so that if \(u=v_{0}-2r_{M}(1+\epsilon)\) \[\frac{r_{0}}{r_{M}}=1+\epsilon\,,\qquad\frac{r}{r_{M}}=1+\epsilon\,\exp\left(\frac{v-v_{0}}{2r_{M}}\right)+{\cal O}(\epsilon^{2}) \tag{3.24}\] as \(\epsilon\to 0\). Thus both \(r_{0}\to r_{M}\) and \(r\to r_{M}\) at fixed \(v\) in the horizon limit, and both \(f_{0},f\to 0\), while \[\frac{f}{f_{0}}\to\exp\left(\frac{v-v_{0}}{2r_{M}}\right) \tag{3.25}\] remains finite in this limit at fixed \(v\).
## IV The Stress Tensor of the Conformal Anomaly and the BH Horizon
With the classical geometry of the imploding null shell forming a BH determined in Sec. III, we turn to quantum effects in this two-dimensional spacetime. Since with \(p_{\perp}=0\), \(\tau_{ab}\) is the conserved stress tensor of the 2D spacetime at fixed \((\theta,\phi)\), we can model the quantum effects from the stress tensor of the two-dimensional conformal anomaly, which has been considered previously for the vacuum state in [9]. The four-dimensional conformal anomaly and effective action [36, 37, 25] does not reduce to the two-dimensional one in the spherically symmetric product space (2.1), but the corresponding effects on the horizon are very similar, so that the simplified two-dimensional model will illustrate the main points to be investigated in the more realistic full four-dimensional case. The two-dimensional effective action corresponding to the conformal trace anomaly was given in Ref. [38] in the non-local form \[S_{\rm anom}[\gamma]=-\frac{N\hbar}{96\pi}\int\!d^{2}x\,\sqrt{-\gamma}\int\!d^{2}x^{\prime}\,\sqrt{-\gamma^{\prime}}\,\,R_{x}\,(\,\square^{-1})_{x,x^{\prime}}R_{x^{\prime}} \tag{4.1}\] where \(N=N_{s}+N_{f}\) is the number of free massless fields (scalar or fermion) in the underlying QFT. This effective action is the result of functionally integrating out \(N\) free massless quantum fields \(\psi_{i},i=1,\ldots,N\) with classical action \(S_{cl}[\psi_{i};\gamma]\) in two-dimensional curved spacetime, _i.e._ \[\exp\left\{\frac{i}{\hbar}\,S_{\rm anom}[\gamma]\right\}=\int\prod_{i=1}^{N}[\mathcal{D}\psi_{i}]\,\exp\left\{\frac{i}{\hbar}\,S_{cl}[\psi_{i};\gamma]\right\} \tag{4.2}\] which defines the one-particle irreducible (1PI) effective action of the quantum fields in a general 2D curved space with metric \(\gamma_{ab}\). The explicit factor of \(\hbar\) in (4.1) records the fact that the quantum functional integral (4.2) gives the compact (and exact) result of all connected one-loop stress tensor correlation functions \(\langle\tau_{a_{1}}^{\,b_{1}}(x_{1})\ldots\tau_{a_{n}}^{\,b_{n}}(x_{n})\rangle\) by successive variations of \(S_{\rm anom}[\gamma]\) with respect to the arbitrary metric \(\gamma_{ab}\).
A normalization factor, which drops out of all 1PI connected correlation functions for \(n>1\), has been set equal to unity in (4.2), so that \(S_{\rm anom}[\gamma]\) and \(\langle\tau_{a}^{\,b}(x)\rangle\) vanish in infinite flat space with no boundaries. In other words, \(S_{\rm anom}[\gamma]\) is the renormalized effective action functional, whose variations define the renormalized stress tensor correlation functions, and no further renormalization is required. In the form (4.1) it should be clear that non-local quantum effects are contained in this effective action through the boundary conditions needed to specify the Green's function \((\,\square^{-1})_{x,x^{\prime}}\) of the scalar wave operator. It is this essential non-local state dependence that leads to the possibility of novel quantum effects on BH horizons, which are not determined by the local curvature alone. However, the non-local action (4.1) may also be written in the local form \[S_{\mathcal{A}}[\gamma;\varphi]\equiv-\frac{N\hbar}{96\pi}\int d^{2}x\sqrt{-\gamma}\,\left(\gamma^{ab}\,\nabla_{a}\varphi\,\nabla_{b}\varphi-2R\,\varphi\right) \tag{4.3}\] by the introduction of a new scalar field \(\varphi\), called a _conformalon_, since shifts in \(\varphi\) correspond to conformal transformations \(e^{\varphi}\) of the metric. The equivalence of (4.1) and (4.3) is demonstrated by variation of (4.3) with respect to \(\varphi\), which yields its eq. of motion \[-\,\square\varphi=R \tag{4.4}\] which is linear in \(\varphi\), since (4.3) is quadratic in \(\varphi\). If (4.4) is formally solved for \(\varphi=-\,\square^{-1}R\) by means of its Green's function, and substituted back into (4.3), the non-local form of the action (4.1) is recovered, up to a surface term. Clearly this inversion of (4.4) is not unique, since the Green's function \(\square^{-1}\) depends on as yet unspecified boundary conditions, which are in one-to-one correspondence with the specification of the solution to (4.4) by the fixing of solutions \(\varphi_{0}\) to the corresponding homogeneous eq. \(\square\varphi_{0}=0\). Thus in the local form (4.3), the state dependent effects of the underlying QFT are contained in the choice of the particular homogeneous solution to the wave eq. (4.4). Varying the local form of the action (4.3) with respect to the two-dimensional metric \(\gamma^{ab}\) gives the energy-momentum tensor of the 2D quantum conformal anomaly \[\tau^{\mathcal{A}}_{ab}\equiv-\frac{2}{\sqrt{-\gamma}}\frac{\delta}{\delta\gamma^{ab}}\,S_{\mathcal{A}}[\gamma;\varphi]=\frac{N\hbar}{48\pi}\left(2\nabla_{a}\nabla_{b}\varphi-2\gamma_{ab}\,\square\varphi+\nabla_{a}\varphi\,\nabla_{b}\varphi-\frac{1}{2}\gamma_{ab}\nabla_{c}\varphi\nabla^{c}\varphi\right) \tag{4.5}\] which is covariantly conserved in 2D, by use of (4.4) and by virtue of the vanishing of the Einstein tensor in two dimensions. The trace of (4.5) reproduces the 2D trace anomaly [3], _i.e._ \[\tau^{\mathcal{A}a}_{a}=-\frac{N\hbar}{24\pi}\,\square\varphi=\frac{N\hbar}{24\pi}\,R \tag{4.6}\] upon making use of (4.4). Henceforth we drop the superscript \(\mathcal{A}\) on the anomaly stress tensor (4.5) to simplify notation, since it is clearly distinguished from the classical stress tensor (3.1)-(3.4) of the null shell. The scalar conformalon field \(\varphi\) may be regarded as an effective or collective degree of freedom that can be related to two-particle Cooper-pair intermediate states of the underlying massless conformal field theory [30].
This may be seen by taking a second variation of (4.3) with respect to the arbitrary metric \(\gamma^{cd}\) and then evaluating the result in flat space. This results in the vacuum polarization diagram \(\Pi_{abcd}=i\langle\tau_{ab}\,\tau_{cd}\rangle\), whose intermediate two-particle state exhibits a \(1/k^{2}\) pole in momentum space that can be expressed as the Green's function propagator of the effective scalar degree of freedom \(\varphi\). Thus the one-loop \(\Pi_{abcd}\) may be represented by a classical _tree_ graph in \(\varphi\), with no loops, _cf._ Fig. 2.
Figure 2: Left: The one-loop stress tensor vacuum polarization of a 2D CFT, which exhibits the massless \(1/k^{2}\) pole of (4.7a). Right: The equivalent classical tree graph of the conformalon scalar \(1/k^{2}\) propagator. See the text and Ref. [30] for the details of this correspondence.
The one-loop polarization tensor in the underlying quantum theory has the form in momentum space \[\Pi_{abcd}(k)\Big{|}_{\text{2D}}=\frac{N\hbar}{12\pi k^{2}}\left(\eta_{ab}k^{2}-k_{a}k_{b}\right)\left(\eta_{cd}k^{2}-k_{c}k_{d}\right) \tag{4.7a}\] \[\Pi_{ab\,c}^{\ \ \ \ \,c}(k)\Big{|}_{\text{2D}}=\frac{N\hbar}{12\pi}\left(\eta_{ab}k^{2}-k_{a}k_{b}\right) \tag{4.7b}\] showing that the non-zero trace and coefficient on the right side of (4.6) is directly related to the existence and residue of the \(1/k^{2}\) pole in \(\Pi_{abcd}\). In fact, once the tensor index structure indicated in (4.7a) is fixed, as required by symmetries and the covariant conservation Ward identity \(k^{a}\Pi_{abcd}(k)=0\) on any index, the one-loop diagram of Fig. 2 is UV finite and completely determined, with (4.7b) the result [28]. This shows that the conformal anomaly and pole are independent of the regularization scheme and detailed UV behavior of the quantum theory, provided that the identities following from the covariant conservation law (2.8) are maintained. The correspondence with the propagator tree graph in Fig. 2 is established by defining the vertex \(\tau^{(1)}_{ab}\) by the term linear in \(\varphi\) in (4.5), _i.e._ \[\tau^{(1)}_{ab}=\frac{N\hbar}{24\pi}\left(\nabla_{a}\nabla_{b}\varphi-\gamma_{ab}\,\square\,\varphi\right) \tag{4.8}\] and recognizing that the normalization of the \(\varphi\) field in (4.3) differs by a factor of \(N\hbar/48\pi\) from that of a canonically normalized scalar field, so that its propagator is \((48\pi/N\hbar)\times 1/k^{2}\). Attaching the vertex factor (4.8) to each vertex in the \(\varphi\) tree graph of Fig. 2, and taking account of the normalization of the \(\varphi\) propagator, gives for the \(\varphi\) tree graph in momentum space \[\left(\frac{N\hbar}{24\pi}\right)^{2}\,\left(\frac{48\pi}{N\hbar\,k^{2}}\right)(\eta_{ab}k^{2}-k_{a}k_{b})\left(\eta_{cd}k^{2}-k_{c}k_{d}\right)=\frac{N\hbar}{12\pi k^{2}}\left(\eta_{ab}k^{2}-k_{a}k_{b}\right)\left(\eta_{cd}k^{2}-k_{c}k_{d}\right) \tag{4.9}\] which coincides with (4.7a), establishing their equivalence. The essential point now is that the massless pole in (4.7a), equivalently (4.9), is a lightlike singularity, signaling significant effects of the quantum conformal anomaly on the light cone, which extend to macroscopic distance scales, irrespective of the local curvature \(R\).
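Since the equivalence rests entirely on the factor bookkeeping in (4.9), a one-line symbolic check can make it explicit. This is a trivial sketch of our own, assuming sympy:

```python
import sympy as sp

N, hbar, k = sp.symbols('N hbar k', positive=True)
vertex = N * hbar / (24 * sp.pi)             # vertex factor from (4.8)
propagator = 48 * sp.pi / (N * hbar * k**2)  # conformalon propagator normalization
pole = N * hbar / (12 * sp.pi * k**2)        # residue of the 1/k^2 pole in (4.7a)
assert sp.simplify(vertex**2 * propagator - pole) == 0
print("tree graph reproduces the pole residue of (4.7a)")
```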
Note that the classical theory of 2D gravity defined by \(\int d^{2}x\,\sqrt{-\gamma}\,R\) has no transverse modes and no propagating degrees of freedom at all, so the \(1/k^{2}\) propagator and the effective scalar degree of freedom it describes arise entirely from the quantum effect of the anomaly, described by (4.3) in which \(\hbar\) is a parameter, but in terms of an effective classical field satisfying (4.4) [26; 30]. To see the effect of the anomaly and \(\varphi\) on horizons directly, and to relate it to the classical BH geometry of Sec. III, consider the 2D line element of the Schwarzschild form (3.6). The components of the 2D anomaly stress tensor (4.5) in the \((t,r^{*})\) coordinates of (3.6) are \[\tau_{t}{}^{t}=\frac{N\hbar}{24\pi}\left\{-\frac{1}{4f}\left(\dot{\varphi}^{2}+\varphi_{,r^{*}}^{2}-2f^{\prime}\varphi_{,r^{*}}\right)-\frac{\ddot{\varphi}}{f}+R\right\} \tag{4.10a}\] \[\tau_{r^{*}}{}^{t}=\frac{N\hbar}{48\pi f}\left\{\,-2\,\dot{\varphi}_{,r^{*}}+\dot{\varphi}\left(f^{\prime}-\varphi_{,r^{*}}\right)\,\right\} \tag{4.10b}\] \[\tau_{r^{*}}{}^{r^{*}}=\frac{N\hbar}{24\pi}\left\{\frac{1}{4f}\left(\dot{\varphi}^{2}+\varphi_{,r^{*}}^{2}-2f^{\prime}\varphi_{,r^{*}}\right)+\frac{\varphi_{,r^{*}r^{*}}}{f}+R\right\} \tag{4.10c}\] where \(\varphi_{,r^{*}}=\frac{\partial\varphi}{\partial r^{*}}\) and \(\varphi_{,r^{*}r^{*}}=\frac{\partial^{2}\varphi}{\partial r^{*2}}\). The linear eq. (4.4) for \(\varphi\) is \[\square\varphi=-\frac{1}{f}\frac{\partial^{2}\varphi}{\partial t^{2}}+\frac{\partial}{\partial r}\left(f\,\frac{\partial\varphi}{\partial r}\right)=\frac{1}{f}\left(-\frac{\partial^{2}}{\partial t^{2}}+\frac{\partial^{2}}{\partial r^{*2}}\right)\varphi=-R=f^{\prime\prime}=\frac{d^{2}f}{dr^{2}} \tag{4.11}\] in these coordinates. A particular solution to this inhomogeneous eq. is \(\varphi=\ln f\). The associated homogeneous wave eq. has general wave solutions \(\exp\{ik(r^{*}\pm t)\}\). If we are interested in stationary states, and restrict to \(k=0\), we may illustrate the behavior of the anomaly stress tensor on the horizon with linear functions of \(t\) and \(r^{*}\). In this case one can examine the effect of a stationary state solution of (4.11) in the form \[\varphi_{P,Q}=Pt+Qr^{*}+\ln f(r)=\frac{P+Q}{2}\,\tilde{v}+\frac{P-Q}{2}\,\tilde{u}+\ln f(r) \tag{4.12}\] where an irrelevant constant is set to zero because (4.3) and (4.5) depend only upon the derivatives of \(\varphi\). Substituting this solution into the stress tensor (4.10), with \(\varphi_{,r^{*}}=Q+f^{\prime}\) and \(\varphi_{,r^{*}r^{*}}=ff^{\prime\prime}\), we find \[\tau_{t}{}^{t}=-\frac{N\hbar}{24\pi}\left\{\frac{1}{4f}\left(P^{2}+Q^{2}-f^{\prime\,2}\right)+f^{\prime\prime}\right\} \tag{4.13a}\] \[\tau_{r^{*}}{}^{t}=-\frac{N\hbar}{48\pi}\frac{PQ}{f} \tag{4.13b}\] \[\tau_{r^{*}}{}^{r^{*}}=\frac{N\hbar}{96\pi}\frac{1}{f}\left(P^{2}+Q^{2}-f^{\prime\,2}\right) \tag{4.13c}\] in the \((t,r^{*})\) coordinates.
If one then specializes to the Schwarzschild exterior line element of (3.6), with \[f(r)=1-\frac{r_{M}}{r}\,,\qquad f^{\prime}=\frac{r_{M}}{r^{2}}\,,\qquad f^{\prime\prime}=-\frac{2r_{M}}{r^{3}}=-R \tag{4.14}\] the stress tensor (4.13) of the quantum anomaly becomes \[\tau_{t}{}^{t}=-\frac{N\hbar}{24\pi}\left\{\frac{1}{4f}\left(\frac{p^{2}+q^{2}}{r_{M}^{2}}-\frac{r_{M}^{2}}{r^{4}}\right)-\frac{2r_{M}}{r^{3}}\right\} \tag{4.15a}\] \[\tau_{r^{*}}{}^{t}=-\frac{N\hbar}{48\pi r_{M}^{2}}\frac{pq}{f} \tag{4.15b}\] \[\tau_{r^{*}}{}^{r^{*}}=\frac{N\hbar}{96\pi}\frac{1}{f}\left(\frac{p^{2}+q^{2}}{r_{M}^{2}}-\frac{r_{M}^{2}}{r^{4}}\right) \tag{4.15c}\] where we have set the constants \(P=p/r_{M}\) and \(Q=q/r_{M}\), with \((p,q)\) dimensionless. Eqs. (4.15) show that the stress tensor due to the quantum anomaly generically gives divergent \(1/f\) contributions as \(r\to r_{M},f\to 0\) on the BH horizon. This is a reflection of the \(1/k^{2}\) light cone singularities of (4.7a). These divergences can be arranged to cancel on the future horizon by the particular choice \(p=-q=\pm 1/2\), or on the past horizon by the choice \(p=q=\pm 1/2\), corresponding to the future or past Unruh states [7], or on both horizons by the choice \(p=0,q=\pm 1\), corresponding to the Israel-Hartle-Hawking thermal state [10; 39; 40], at the price of being non-vanishing as \(r\to\infty\) (and being thermodynamically unstable due to negative heat capacity [41]). Any other values for \((p,q)\) result in divergences on the horizon. If one requires a time independent truly static solution then \(p=0\). The case \(p=q=0\) is both time independent and gives a \(\varphi\) and stress tensor that tends to zero as \(r\to\infty\), corresponding to asymptotically flat conditions, but for this choice \[\tau_{a}^{\ b}\big{|}_{p=q=0}\to-\frac{N\hbar}{96\pi r_{M}^{2}f}\begin{pmatrix}-1&0\\ 0&1\end{pmatrix}\to\infty\qquad\text{as}\qquad r\to r_{M} \tag{4.16}\] which diverges on the two-dimensional horizon as \(r\to r_{M}\), \(f\to 0\). These conditions correspond to the Boulware state [5; 42]. The significance of the solution \(\varphi=\ln f\) of (4.12) corresponding to this state is that \(e^{\varphi}=f\) is the conformal transformation that takes the 2D flat line element \(-dt^{2}+dr^{*\,2}\) to the curved space line element of (3.6). The stress tensor (4.16) is the effect on the expectation value of \(\tau_{a}^{\ b}\) of this conformal transformation on the quantum vacuum state. In this way the local conformalon scalar incorporates information about the non-local quantum state over the entire \(t=\) const. Cauchy surface, relating the value of \(\langle\tau_{a}^{\ b}\rangle\) to the standard Minkowski vacuum state in the asymptotically flat region where \(f\to 1\) and \(\varphi\to 0\). The divergence of \(\varphi=\ln f\) as \(r\to r_{M}\) reflects the vanishing of the norm of the timelike Killing vector \(\partial_{t}\) on the horizon, and the breakdown of the separation of positive and negative frequency (particle and anti-particle) solutions of the underlying quantum field theory, upon which the definition of the unique quantum vacuum state in flat Minkowski space is based. The results (4.15) show that the special states which are regular on the horizon are isolated points of measure zero in the two-parameter space of general \((p,q)\), and in particular, there is no value of \((p,q)\) which yields a time independent regular solution for \(\varphi\) and (4.15) on both the horizon and as \(r\to\infty\).
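To make the role of these \((p,q)\) choices concrete, one can tabulate (4.15) numerically near the horizon. The sketch below is our own construction: the null components are obtained from the standard tensor transformation to \(u=t-r^{*}\), \(v=t+r^{*}\) (a bookkeeping step not spelled out in the text), and the free-fall regularity criterion used, namely \(\tau_{uu}\to 0\) like \(f^{2}\) on the future horizon, is the standard one. Units are chosen so that \(r_{M}=1\) and \(N\hbar/96\pi=1\).

```python
import numpy as np

def null_fluxes(p, q, r, r_M=1.0):
    """Components of (4.15) and their null-basis fluxes; units N*hbar/(96*pi)=1."""
    f = 1.0 - r_M / r
    c = (p**2 + q**2) / r_M**2 - r_M**2 / r**4
    tau_tt = -4.0 * (c / (4 * f) - 2 * r_M / r**3)   # tau_t^t,   eq. (4.15a)
    tau_rt = -2.0 * p * q / (r_M**2 * f)             # tau_r*^t,  eq. (4.15b)
    tau_rr = c / f                                   # tau_r*^r*, eq. (4.15c)
    tau_uu = 0.25 * f * (-tau_tt + 2 * tau_rt + tau_rr)
    tau_vv = 0.25 * f * (-tau_tt - 2 * tau_rt + tau_rr)
    return f, tau_uu, tau_vv

states = {"Boulware (p=q=0)": (0.0, 0.0),
          "Unruh (p=-q=1/2)": (0.5, -0.5),
          "Hartle-Hawking (p=0,q=1)": (0.0, 1.0)}
for name, (p, q) in states.items():
    for r in (1.01, 1.0001):                         # approach r -> r_M
        f, tuu, tvv = null_fluxes(p, q, r)
        print(f"{name:26s} r={r:<8} tau_uu/f^2={tuu/f**2:+.3e}  tau_vv={tvv:+.3e}")
# Unruh and Hartle-Hawking: tau_uu/f^2 stays finite (regular future horizon).
# Boulware: tau_uu -> const != 0, so tau_uu/f^2 diverges there as 1/f^2.
```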
Apart from these specific states and particular values of \((p,q)\), each of which would require a rather technically involved calculation and renormalization of a quantum stress tensor to derive directly from the underlying quantum field theory in curved space, the effective action (4.3) of the conformal anomaly and its stress tensor (4.5) permits consideration of a wide class of non-vacuum initial states and their possible quantum effects, simply by changing the integration constants or more general homogeneous solutions of the conformalon \(\varphi\) field eq. (4.4). This allows the investigation of quantum effects of non-vacuum initial conditions for general quantum fields on the BH horizon simply and systematically.
## V Non-Vacuum Initial States and Suppression of the Hawking Flux
To apply the anomaly stress tensor (4.5), (4.10) for non-vacuum states in the case of gravitational collapse of the null shell and formation of the BH considered in Sec. III, consider eq. (4.4) in the double null coordinates (2.14) \[\frac{\partial^{2}\varphi}{\partial u\partial v}=2\,\frac{\partial^{2}\sigma}{\partial u\partial v} \tag{5.1}\] the general solution of which may be expressed \[\varphi(u,v)=2\left[\sigma(u,v)+A(u)+B(v)\right] \tag{5.2}\] in terms of two arbitrary functions \(A(u),B(v)\). The particular solution \(\varphi=2\sigma\) with \(A=B=0\) gives \(\tau_{ab}=0\) in the flat region I, corresponding to the initial state being the Minkowski vacuum. However in the Schwarzschild region II, \(\varphi=2\sigma=\ln(f/f_{0})\) from (3.14). Note that in relation to (4.12), \(\varphi=\ln f-\ln f_{0}\) in region II corresponds to adding a particular homogeneous solution, namely \(-\ln f_{0}(u)\), to the solution of the inhomogeneous eq., \(\ln f\). Tying \(\varphi\) rigidly to the geometry in this way, with a very particular homogeneous solution of the \(\varphi\) eq. (4.4), as was assumed in earlier works [4; 7; 9], corresponds to the Unruh vacuum initial conditions after the passage of the null shell in the Schwarzschild region II, as we shall see presently. The formulation in terms of a local independent field \(\varphi\) is considerably more general and allows for arbitrary homogeneous solutions of the differential eq. (4.4) to be added as in (5.2), corresponding to non-vacuum initial states. Substituting the general solution (5.2) for \(\varphi\) into the stress tensor (4.5) we obtain the general form of the two-dimensional quantum anomaly stress tensor in the double null coordinates, with components \[\tau_{uu}=\frac{N\hbar}{12\pi}\left[\frac{\partial^{2}\sigma}{\partial u^{2}}-\left(\frac{\partial\sigma}{\partial u}\right)^{2}+\frac{d^{2}A}{du^{2}}+\left(\frac{dA}{du}\right)^{2}\right] \tag{5.3a}\] \[\tau_{uv}=-\frac{N\hbar}{12\pi}\frac{\partial^{2}\sigma}{\partial u\partial v}\,, \tag{5.3b}\] \[\tau_{vv}=\frac{N\hbar}{12\pi}\left[\frac{\partial^{2}\sigma}{\partial v^{2}}-\left(\frac{\partial\sigma}{\partial v}\right)^{2}+\frac{d^{2}B}{dv^{2}}+\left(\frac{dB}{dv}\right)^{2}\right]. \tag{5.3c}\] It should be noted that (5.3) does not obey classical positivity conditions, nor should that be expected for the expectation value of a quantum stress tensor [3].
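The components (5.3) follow from (4.5) by a short but error-prone computation of covariant derivatives in the null metric (2.14). A symbolic sketch of that check (our own, assuming sympy), with the overall factor \(N\hbar/48\pi\) of (4.5) set to one:

```python
import sympy as sp

u, v = sp.symbols('u v')
sigma = sp.Function('sigma')(u, v)
A, B = sp.Function('A')(u), sp.Function('B')(v)
x = [u, v]
g = sp.Matrix([[0, -sp.exp(2*sigma)/2], [-sp.exp(2*sigma)/2, 0]])  # eq. (2.14)
ginv = g.inv()

def Gamma(a, b, c):   # Christoffel symbols Gamma^a_{bc} of the null metric
    return sum(ginv[a, d] * (sp.diff(g[d, b], x[c]) + sp.diff(g[d, c], x[b])
                             - sp.diff(g[b, c], x[d])) / 2 for d in range(2))

phi = 2 * (sigma + A + B)                                          # eq. (5.2)

def DD(F, a, b):      # covariant Hessian nabla_a nabla_b F
    return sp.diff(F, x[a], x[b]) - sum(Gamma(c, a, b) * sp.diff(F, x[c])
                                        for c in range(2))

box = sum(ginv[a, b] * DD(phi, a, b) for a in range(2) for b in range(2))
grad2 = sum(ginv[c, d] * sp.diff(phi, x[c]) * sp.diff(phi, x[d])
            for c in range(2) for d in range(2))

def tau(a, b):        # eq. (4.5) with N*hbar/(48*pi) set to 1
    return (2*DD(phi, a, b) - 2*g[a, b]*box
            + sp.diff(phi, x[a])*sp.diff(phi, x[b]) - g[a, b]*grad2/2)

# eq. (5.3a) carries N*hbar/(12*pi), i.e. a factor 4 in these units:
target_uu = 4 * (sp.diff(sigma, u, 2) - sp.diff(sigma, u)**2
                 + sp.diff(A, u, 2) + sp.diff(A, u)**2)
print(sp.simplify(tau(0, 0) - target_uu))   # -> 0
```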
In the Schwarzschild region II, (5.3) may be evaluated in the classical background geometry (_i.e._ ignoring any backreaction), with the aid of eqs. (B5), to obtain \[\tau_{uu}=\frac{N\hbar r_{M}}{48\pi f_{0}^{2}}\left[\frac{1}{r_{0}^{3}}-\frac{1}{r^{3}}+\frac{3r_{M}}{4}\left(\frac{1}{r^{4}}-\frac{1}{r_{0}^{4}}\right)\right]+\frac{N\hbar}{12\pi}\left[\frac{d^{2}A}{du^{2}}+\left(\frac{dA}{du}\right)^{2}\right]\,, \tag{5.4a}\] \[\tau_{uv}=-\frac{N\hbar r_{M}}{48\pi r^{3}}\frac{f}{f_{0}} \tag{5.4b}\] \[\tau_{vv}=-\frac{N\hbar r_{M}}{48\pi r^{3}}\left(1-\frac{3r_{M}}{4r}\right)+\frac{N\hbar}{12\pi}\left[\frac{d^{2}B}{dv^{2}}+\left(\frac{dB}{dv}\right)^{2}\right]\,. \tag{5.4c}\] for \(v>v_{0}\). An important observation about the vacuum \(A=B=0\) terms in (5.4) is that all components satisfy the finiteness conditions of [5] and Appendix C. In particular, although \(\tau_{uu}\) of (5.4a) contains a factor of \(1/f_{0}^{2}\), the quantity in square brackets multiplying it vanishes up to second order in \(\epsilon\) in the near-horizon expansion (3.24). From the last eq. for \(\tau_{vv}\) it is also clear that the function \(B(v)\) adds to the classical stress tensor of the null shell (3.4) an ingoing flux contribution from non-vacuum initial conditions at \(\mathcal{I}^{-}\), which would change the mass \(M\) and position of the BH horizon, but is otherwise of no particular interest for the behavior of the geometry near the future horizon, or the Hawking effect on \(\mathcal{I}^{+}\). Therefore we set \(B(v)=0\) and focus on the possible effects of non-vacuum initial conditions determined by \(A(u)\). Evaluating the flux of energy associated with the quantum energy-momentum tensor (5.4) with \(B=0\), from the time derivative of the Misner-Sharp mass in region II in the Schwarzschild \((t,r)\) coordinates using (2.11), we find \[\frac{\partial m}{\partial t}\biggm{|}_{B=0}=f_{0}\,\frac{\partial m}{\partial u}+\frac{\partial m}{\partial v}=-f_{0}^{2}\,\tau_{uu}+\tau_{vv}=-\frac{N\hbar r_{M}}{48\pi r_{0}^{3}}\left(1-\frac{3r_{M}}{4r_{0}}\right)-\frac{N\hbar f_{0}^{2}}{12\pi}\left[\frac{d^{2}A}{du^{2}}+\left(\frac{dA}{du}\right)^{2}\right]\,. \tag{5.5}\] For the vacuum initial conditions, \(A=B=0\), at late times \(t\to\infty\), as \(\tilde{u},v\to\infty\), \(u\to v_{0}-2r_{M}\) at future null infinity \(\mathcal{I}^{+}\), \(r_{0}\to r_{M}\) and the outgoing quantum energy flux tends to the limit \[\dot{m}_{H}=\frac{\partial m}{\partial t}\biggm{|}_{A=B=0}\to-\frac{N\hbar}{192\pi r_{M}^{2}}=-\frac{N\pi}{12\hbar}\left(k_{B}T_{H}\right)^{2} \tag{5.6}\] which is exactly the flux of \(N\) quantum fields radiating at the Hawking temperature \(T_{H}=\hbar/(8\pi k_{B}GM)\) in two dimensions, expected in the Unruh state. We obtain the Hawking flux for two dimensions and not four dimensions because we are using the two-dimensional conformal anomaly as a proxy for the quantum anomaly in four dimensions. This is in agreement with earlier results [4; 5; 7; 9]. Note that the full energy flux (5.5) is a function only of \(u\) if \(B=0\) (as we neglect any backreaction), and that the factor of \(f_{0}^{2}\) multiplying \(\tau_{uu}\) can lead to a finite result at late times on \(\mathcal{I}^{+}\) as \(u\to v_{0}-2r_{M}\), \(f_{0}\to 0\), only if there is a compensating factor of \(1/f_{0}^{2}\) in (5.4a). Stated in a different way, the Hawking flux result (5.6) is dependent upon the regularity of the vacuum stress tensor on the horizon; but conversely, if the regularity conditions are violated by non-vacuum terms from \(A(u)\), then they can change the energy flux (5.5) at \(\mathcal{I}^{+}\) at late times. This is possible if and only if the non-vacuum terms in \(\tau_{uu}\) are \(1/f_{0}^{2}\) singular on the future horizon, consistent with the analysis of [6].
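As a quick arithmetic check of the limit (5.6), one can evaluate the vacuum part of (5.5) as \(r_{0}\to r_{M}\). A minimal sketch (ours), in units where \(N\hbar=48\pi\) and \(r_{M}=1\), so that (5.6) reads \(\dot{m}\to-1/4\):

```python
# Vacuum (A = B = 0) flux of eq. (5.5) approaching the horizon r0 -> r_M = 1,
# in units N*hbar = 48*pi; the 2D Hawking limit (5.6) is then -1/4.
for eps in (1e-1, 1e-2, 1e-4):
    r0 = 1.0 + eps                          # r0 = (v0 - u)/2 -> r_M
    mdot = -(1.0 / r0**3) * (1.0 - 0.75 / r0)
    print(f"r0 = 1 + {eps:.0e}:  dm/dt = {mdot:.6f}   (Hawking limit: -0.25)")
```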
Comparing the general solution (5.2) for \(\varphi\) in the Schwarzschild region II after the null shell collapse with the particular solution (4.12) in the static Schwarzschild geometry, we see that it corresponds to \(p=-q\) and \[A(u)\big{|}_{p=-q}=\left(q+\frac{1}{2}\right)\ln f_{0}+A_{\rm reg}(u)\,,\qquad\text{where}\qquad A_{\rm reg}(u)=-\frac{qu}{2r_{M}}+q\ln\left(\frac{r_{0}}{r_{M}}\right) \tag{5.7}\] and the latter \(A_{\rm reg}(u)\) is finite and regular on the horizon, \(u=v_{0}-2r_{M},r_{0}=r_{M}\). Since the important effects on the horizon are associated with the divergent \(\ln f_{0}\) term, we drop the regular contributions and consider the effects of the simpler non-vacuum perturbation of the form \[A(u)=\left(q+\frac{1}{2}\right)\ln f_{0}=\left(q+\frac{1}{2}\right)\ln\left(1-\frac{r_{M}}{r_{0}}\right)\,,\qquad r_{0}>r_{M}\,. \tag{5.8}\] This gives the additional contribution to \(\tau_{uu}\) \[\tau_{uu}^{A}=\frac{N\hbar}{12\pi}\left[\frac{d^{2}A}{du^{2}}+\left(\frac{dA}{du}\right)^{2}\right]=\frac{N\hbar}{48\pi}\left(q^{2}-\frac{1}{4}\right)\frac{r_{M}^{2}}{r_{0}^{4}f_{0}^{2}}-\frac{N\hbar}{24\pi}\left(q+\frac{1}{2}\right)\frac{r_{M}}{r_{0}^{3}f_{0}} \tag{5.9}\] in (5.4a), which has the \(1/f_{0}^{2}\) behavior in the horizon limit \(f_{0}\to 0\) required to give a non-vanishing contribution to the flux (5.5) at late times. Thus we now find \[\frac{\partial m}{\partial t}=\frac{N\hbar}{48\pi}\left[-\frac{r_{M}}{r_{0}^{3}}+\frac{3r_{M}^{2}}{4r_{0}^{4}}-\left(q^{2}-\frac{1}{4}\right)\frac{r_{M}^{2}}{r_{0}^{4}}+\left(q+\frac{1}{2}\right)\frac{2r_{M}}{r_{0}^{3}}\,f_{0}\right]\to-\frac{N\hbar}{48\pi r_{M}^{2}}\,q^{2} \tag{5.10}\] as \(u\to v_{0}-2r_{M},r_{0}\to r_{M},f_{0}\to 0\) at late times. If \(q=-1/2\), the non-vacuum perturbation (5.8) vanishes and one recovers the Hawking vacuum flux (5.6) in the Unruh state, which is regular on the future horizon; but if \(q=0\) this flux is precisely cancelled, corresponding to the Boulware state, which has a singular stress tensor (4.16) on the horizon, and there is no Hawking radiation. It is clear from this exercise that the Hawking flux and the behavior of the stress tensor on the horizon are intimately connected and dependent upon one another, and both are determined by the particular solution of the \(\varphi\) eq. (4.4) and stress tensor (4.5). That the assumption of regularity of the stress tensor on the horizon implies the Hawking effect was shown in Ref. [6]. The considerations above show that the converse is also true, and a singular contribution to the quantum stress tensor \(\tau_{uu}\) from an initial state perturbation can modify or even eliminate the Hawking flux. Now a strictly divergent perturbation is disallowed by the requirement that the initial state be UV finite, with a Hadamard two-point function in QFT, in accordance with a theorem of [34]. Any \(A(u)\) homogeneous solution to (4.4), if followed backwards in time and reflected from the origin, must have been present in the initial state as incoming radiation in \(B(v)\). Hence requiring that \(B(v)\) be non-singular in the initial state on \(\mathcal{I}^{-}\) prior to collapse implies that \(A(u)\) must also be non-singular on the horizon, and the strictly diverging behavior of (5.8) on the future horizon in (5.9) is excluded.
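The singular structure (5.9) can be verified symbolically from the definition (5.8). A short sketch of that check (ours, assuming sympy) follows; the quantity compared is the bracket \(A''+(A')^{2}\) of (5.9), i.e. \(\tau_{uu}^{A}\) stripped of its \(N\hbar/12\pi\) prefactor:

```python
import sympy as sp

u, v0, rM = sp.symbols('u v_0 r_M', positive=True)
q = sp.symbols('q', real=True)
r0 = (v0 - u) / 2                            # eq. (3.10)
f0 = 1 - rM / r0
A = (q + sp.Rational(1, 2)) * sp.log(f0)     # eq. (5.8)
bracket = sp.diff(A, u, 2) + sp.diff(A, u)**2
target = ((q**2 - sp.Rational(1, 4)) * rM**2 / (4 * r0**4 * f0**2)
          - (q + sp.Rational(1, 2)) * rM / (2 * r0**3 * f0))
print(sp.simplify(bracket - target))         # -> 0, confirming eq. (5.9)
```

The late-time limit (5.10) then follows from the arithmetic \(-1+\tfrac{3}{4}-(q^{2}-\tfrac{1}{4})=-q^{2}\) at \(r_{0}=r_{M}\), \(f_{0}=0\).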
On the other hand, there is no need to require the quantum stress tensor to diverge in order for it to become arbitrarily large, while still finite, and then produce backreaction effects on the horizon that could lead to significantly different results than those obtained with vacuum initial data. Quantitative control of this large growth of the stress tensor on the horizon requires regulating the logarithmic divergence of (5.8) and the corresponding \(1/f_{0}^{2}\) divergence of (5.9) by a smooth cutoff for small but finite \(f_{0}\). Let the divergence in the \(\tau_{uu}\) component of the stress tensor in the non-vacuum state described by (5.8) be regulated by a small quantity \(\epsilon\ll 1\), such that (5.8) holds nearly everywhere but, as \(f_{0}\to 0\), the logarithm is cut off by \(\epsilon\). That is, let \(A(u)\) of (5.8) be replaced by \(A_{\epsilon}(u)\) such that \[\lim_{\epsilon\to 0^{+}}A_{\epsilon}(u)=\left(q+\frac{1}{2}\right)\,\ln|f_{0}| \tag{5.11}\] but also such that \[\lim_{u\to v_{0}-2r_{M}}A_{\epsilon}(u)\to\left(q+\frac{1}{2}\right)\,\ln\epsilon \tag{5.12}\] remains finite, regulated by the non-zero value of \(\epsilon\ll 1\). One simple such regulated \(A(u)\) (by no means unique), with the required properties in the near horizon region, is \[A_{\epsilon}(u)=\frac{1}{2}\,\left(q+\frac{1}{2}\right)\,\ln\left(f_{0}^{2}+\epsilon^{2}\right)=\frac{1}{2}\,\left(q+\frac{1}{2}\right)\,\ln\left[\left(1-\frac{r_{M}}{r_{0}(u)}\right)^{2}+\epsilon^{2}\right] \tag{5.13}\] which, unlike (5.8), is also defined for \(f_{0}<0\). We may also require that \(A_{\epsilon}(u)\) have no singular behavior at any other \(u\), whereas (5.13) still exhibits singular behavior at the origin \(u=v_{0},r_{0}=0\), where \(f_{0}\to-\infty\). Thus another possible fully regularized \(A(u)\) is \[A_{\epsilon}(u)=\frac{1}{2}\,\left(q+\frac{1}{2}\right)\left\{\ln\left[\left(\frac{r_{0}(u)}{r_{M}}-1\right)^{2}+\epsilon^{2}\right]-\ln\left[\left(\frac{r_{0}(u)}{r_{M}}\right)^{2}+\epsilon^{2}\right]\right\} \tag{5.14}\] where both logarithmic singularities of (5.8), at \(r_{0}=r_{M}\) and \(r_{0}=0\), are removed and regularized by the same small parameter \(\epsilon\ll 1\). Then \[A_{\epsilon}(u)\to\pm\left(q+\frac{1}{2}\right)\,\ln\epsilon \tag{5.15}\] for \(u\to v_{0}-2r_{M}\) or \(u\to v_{0}\), respectively, as \(\epsilon\to 0^{+}\). The regularized function \(A_{\epsilon}(u)\) is shown as a function of \(u\) for \(q=0\) and various \(\epsilon\) in Fig. 3.
Figure 3: The regularized perturbation in the initial conditions (5.14) for \(q=0\) and various \(\epsilon\).
The function \(A^{\prime\prime}+(A^{\prime})^{2}\) which appears in the quantum stress tensor (5.9) has a maximum at \(f_{0}\sim\epsilon\ll 1\), or at \(v_{0}-2r_{M}-u\sim 2\,\epsilon\,r_{M}\), with maximum value there of order \(\epsilon^{-2}\). The width in \(u\) of the peak in \(A_{\epsilon}\) is \(\Delta u\sim 4r_{M}\epsilon\). The functions \(A^{\prime\prime},(A^{\prime})^{2}\) and \(A^{\prime\prime}+(A^{\prime})^{2}\) are plotted in Fig. 4.
Figure 4: First Two Panels: \(\epsilon^{2}A^{\prime\prime}\) and \(\epsilon^{2}A^{\prime\,2}\) of the regularized perturbation (5.14) as functions of \(u\), in units of \(1/8r_{M}^{2}\), for \(q=0\). The horizon is at \(u=0\); the \(u\)-axis is rescaled by \(\epsilon\) and the magnitude by \(\epsilon^{2}\), showing the self-similar behavior: the rescaled curves coincide as \(\epsilon\to 0\). Third Panel: The sum which contributes to (5.9) and \(\tau_{uu}\), in units of \(N\hbar/96\pi r_{M}^{2}\), also for \(q=0\) and with axes similarly rescaled.
The main contribution comes from the region of \(\Delta u\sim\epsilon r_{M}\) around the maximum. Since \(f_{0}\) is a function of \(u\), this effect is concentrated in an interval of \(u\) near the horizon of order \[\Delta u\sim\Delta r\sim\epsilon r_{M}\sim\sqrt{N}L_{\rm Pl} \tag{5.16}\] which is of the order of, or somewhat larger than, the Planck scale. However, because \(h=f(r)\to 0\) for the Schwarzschild line element (2.12), this corresponds to a _physical_ distance scale of \[\ell\sim\frac{\Delta r}{\sqrt{\epsilon}}\sim N^{\frac{1}{4}}\sqrt{r_{M}L_{\rm Pl}}\gg L_{\rm Pl} \tag{5.17}\] from the horizon. For a solar mass BH, \(\ell\) is of order \(10^{-14}\) cm or greater. Although very small by astrophysical standards, since \(\ell\gg L_{\rm Pl}\) by some 19 orders of magnitude, one may still expect to be able to apply semi-classical methods in this regime.
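The scalings claimed here, a peak of \(A''+(A')^{2}\) of order \(\epsilon^{-2}\) with width \(\Delta u\sim\epsilon\,r_{M}\) and the physical scales (5.17)-(5.18), are easy to confirm numerically. A sketch of our own, assuming numpy and the illustrative choices \(r_{M}=1\), \(v_{0}=0\), \(q=0\), and \(N=1\) with \(r_{M}\simeq 3\) km for the dimensional estimates:

```python
import numpy as np

rM, v0, q = 1.0, 0.0, 0.0                    # illustrative choices (assumptions)

def A_eps(u, eps):                           # eq. (5.14)
    r0 = 0.5 * (v0 - u)
    return 0.5 * (q + 0.5) * (np.log((r0/rM - 1)**2 + eps**2)
                              - np.log((r0/rM)**2 + eps**2))

for eps in (1e-2, 1e-3, 1e-4):
    u = np.linspace(v0 - 2*rM*(1 + 50*eps), v0 - 2*rM*(1 + 1e-3*eps), 400001)
    Ap = np.gradient(A_eps(u, eps), u)
    S = np.gradient(Ap, u) + Ap**2           # the bracket of eq. (5.9)
    print(f"eps={eps:.0e}:  eps^2 * max|A''+(A')^2| = {eps**2*np.max(np.abs(S)):.3f}")
# The rescaled peak is ~ 1/8, independent of eps, i.e. max ~ eps^{-2}, with
# support of width Delta u ~ eps*r_M, as in (5.16) and the units of Fig. 4.

# Physical scales (5.17)-(5.18) for a solar mass BH, r_M ~ 3 km, N = 1:
L_Pl, rM_cm, N = 1.616e-33, 3.0e5, 1
print(f"ell ~ {N**0.25*np.sqrt(rM_cm*L_Pl):.1e} cm, "
      f"alpha_G ~ {(N/(24*np.pi))*(L_Pl/rM_cm)**2:.1e}")
```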
The behavior of the Hawking flux suppression for some moderately small values of \(\epsilon\) is illustrated in Fig. 5, showing that this suppression persists for longer and longer retarded times \(u\) closer to \(u=v_{0}-2r_{M}\) on the future horizon, for smaller and smaller \(\epsilon\). Given (3.6) and (3.11), this corresponds at fixed \(r\) to times \(t\propto r_{M}\ln(1/\epsilon)\) after the collapse of the null shell. Fig. 5 also exhibits the self-similar behavior of the flux suppression as \(u\to v_{0}-2r_{M}\) for \(\epsilon\to 0\), which is a consequence of the conformal properties of the spacetime in the near-horizon region [22; 23; 24].
Figure 5: Upper Panels: Mass flux (5.5) as a function of retarded time \(u\) near the horizon, showing the suppression of the Hawking flux by the perturbation \(A_{\epsilon}(u)\) in the initial state for \(u\) increasingly close to the horizon at \(u=0\), for decreasing values of \(\epsilon\). \(\dot{m}_{H}\) denotes the value of the 2D Hawking flux (5.6) to which all regular perturbations tend finally at \(u=0\). Lower Panel: Expanded \(u\) scale showing the self-similar behavior under rescalings of \(\epsilon\).
For a quantitative estimate of how large the effects of the perturbation (5.14) on the geometry would be, if backreaction were to be taken into account, note that the overall scale of the quantum effects encoded in \(\tau_{ab}\) is of order \(N\hbar/48\pi r_{M}^{2}\). From the four-dimensional Einstein tensor (2.2) and stress tensor (2.7), \(\tau_{ab}\) leads to effects on \(G_{ab}\) of order \((8\pi G/4\pi r^{2})\,\tau_{ab}\), or \(N\hbar G/24\pi r_{M}^{4}\). This is to be compared with the 4D classical curvature components computed in the Schwarzschild geometry, given in Appendix A, which are of order \(1/r_{M}^{2}\) at the horizon. Thus the quantum backreaction effects are generally suppressed by an overall relative factor of \[\alpha_{G}\equiv\frac{N\hbar G}{24\pi r_{M}^{2}}=\frac{N}{24\pi}\left(\frac{L_{\rm Pl}}{r_{M}}\right)^{2}\ll 1 \tag{5.18}\] compared to the classical geometry, where \(L_{\rm Pl}\equiv\sqrt{\hbar G/c^{3}}=1.616\times 10^{-33}\) cm. This is certainly a very substantial suppression for a macroscopically large BH compared to the Planck scale, and the reason that quantum effects in classical GR are generally considered to be quite negligible.
However even such an enormous suppression factor as (5.18) can be overcome if the quantum stress tensor (5.3) components become large enough in the vicinity of the future apparent horizon. With (5.14) as a complete regularization of the non-vacuum initial state perturbation (5.8) in both regions, the \(A^{\prime\prime}+(A^{\prime})^{2}\) term in (5.9) is of order \(\epsilon^{-2}\) in the near horizon region, and the suppression (5.18) is overcome if \[\frac{\alpha_{G}}{\epsilon^{2}}\,\left|q^{2}-\frac{1}{4}\right|\gtrsim 1\,,\qquad\text{or}\qquad\epsilon\ \lesssim\ \text{Max}(1,|q|)\times\sqrt{\frac{N}{6\pi}}\,\left(\frac{L_{\rm Pl}}{2r_{M}}\right)\,. \tag{5.19}\] For large \(|q|\gg 1\) the condition on how small \(\epsilon\) must be to overcome the suppression of quantum non-vacuum effects on the horizon is weakened by the appearance of a large factor of \(|q|\) in (5.19), but in the following we assume that \(q\) is of order \(1\) and not particularly large, which we show in Sec. VI has the highest probability of occurring in the initial state. Since the finite regularized perturbation (5.14) is present in the initial state, prior to the formation of the BH, we can also estimate its total Misner-Sharp energy in the flat space region I where \(R=0\) and (3.5) applies. Using (2.22) and (5.3) with (5.16) gives \[m=\int_{-\infty}^{\infty}du\,\frac{\partial m}{\partial u}=\int_{-\infty}^{\infty}du\,\tau_{uu}\sim\Delta u\,\frac{\hbar N}{24\pi(\Delta u)^{2}}\sim\frac{\hbar N}{24\pi\epsilon r_{M}}\sim\sqrt{\frac{N}{6\pi}}\,\frac{M_{\rm Pl}}{2}\ll M \tag{5.20}\] of the order of the Planck mass \(M_{\rm Pl}=2.177\times 10^{-5}\) gm. In the flat region \(\Delta u\sim L_{\rm Pl}\), so that a quantum perturbation on the future apparent horizon of the BH large enough to overcome the suppression (5.18) and produce significant backreaction on the classical geometry only requires a Planck mass-energy fluctuation \(M_{\rm Pl}\) concentrated within a Planck length \(L_{\rm Pl}\) distance, just the scale at which such quantum fluctuations in the initial state are expected on general grounds of the uncertainty principle. In the next section we give a quantitative estimate of the probability that such a non-vacuum quantum fluctuation, large enough to satisfy the conditions (5.19)-(5.20), exists in the wave functional of the initial vacuum state.
## VI Probability Distribution for Non-Vacuum Initial Conditions
The effective action of the conformal anomaly (4.3) is quadratic in the conformalon scalar field \(\varphi\), and its eq. of motion (4.4) in the asymptotically flat region where \(R=0\) is that of a free scalar field. Since in a free theory the wave functional of the ground state vacuum is a simple Gaussian, evaluating the width of this Gaussian enables us to give a quantitative estimate of the probability of a coherent state perturbation of the form (5.14), parametrized by \(\epsilon\) and \(q\). For one simple harmonic oscillator with frequency \(\omega\), the action \[S_{\rm osc}[x]=\frac{1}{2}\int dt\,\left(\dot{x}^{2}-\omega^{2}x^{2}\right) \tag{6.1}\] is quadratic in \(x\), and the ground state of the oscillator is described by the Schrödinger wave function \[\psi(x)=\left(\frac{\omega}{\pi\hbar}\right)^{\frac{1}{4}}\exp\left(-\frac{\omega x^{2}}{2\hbar}\right) \tag{6.2}\] which is a simple Gaussian, normalized to \(\int_{-\infty}^{\infty}dx\,|\psi(x)|^{2}=1\). Since \(dx\,|\psi(x)|^{2}\) is the probability of finding the oscillator with a value of the coordinate between \(x\) and \(x+dx\), the probability of finding the coordinate \(x\) with any absolute value \(|x|\geq\bar{x}>0\) is \[P(\bar{x})=2\int_{\bar{x}}^{\infty}dx\,|\psi(x)|^{2}=\mbox{erfc}\left(\sqrt{\frac{\omega}{\hbar}}\;\bar{x}\right) \tag{6.3}\] in terms of the complementary error function erfc.
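The normalization of (6.2) and the probability (6.3) can be checked directly. A minimal sketch (ours, assuming scipy), in units \(\hbar=\omega=1\):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erfc

omega = hbar = 1.0
psi2 = lambda x: np.sqrt(omega/(np.pi*hbar)) * np.exp(-omega*x**2/hbar)  # |psi|^2
print(quad(psi2, -np.inf, np.inf)[0])        # -> 1.0, unit normalization of (6.2)
xbar = 0.7
print(2*quad(psi2, xbar, np.inf)[0],         # direct probability of |x| >= xbar
      erfc(np.sqrt(omega/hbar)*xbar))        # closed form (6.3): identical
```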
This simple result can be generalized to a free QFT, viewed as a collection of free harmonic oscillators, in both the fixed time and light cone quantization schemes. For initial data on a lightlike null surface such as \(\mathcal{I}^{-}\), the Schrödinger wave functional formulation is given in [43]. The Gaussian wave functional on the initial data for a canonically normalized scalar field \(\phi\) is proportional to \[\exp\left\{-\frac{1}{\hbar}\left(\phi^{-},\Omega\phi^{+}\right)\right\} \tag{6.4}\] where \(\phi^{\pm}\) are the positive and negative frequency parts of \(\phi\), and \(\Omega=2k\), the analog of \(\omega\) in (6.2), is called the 'covariance' and is given in momentum space, with \(k\) the momentum conjugate to the light front variable \(u\) or \(v\). For a real scalar field the positive and negative frequency parts are simply related by complex conjugation, _i.e._ \(\phi^{-}=(\phi^{+})^{*}\). Applying this general result to the anomaly effective action (4.3), the square of the ground state Schrödinger wave functional for the conformalon scalar \(\varphi\) on an initial null hypersurface is \[\left|\Psi_{0}[\varphi]\right|^{2}\propto\exp\left\{-\frac{N}{24\pi}\int_{0}^{\infty}\frac{dk}{2\pi}\,\varphi^{-}(k)\,(2k)\,\varphi^{+}(k)\right\} \tag{6.5}\] after account is taken of the normalization of (4.3), with the factor of \(N\hbar/48\pi\) relative to the canonical normalization of \(1/2\) for a free scalar field. The overall normalization factor in (6.5) is to be determined by the requirement that \(|\Psi_{0}|^{2}\), integrated over all values of the parameters characterizing the initial state perturbation in \(\varphi\), is unity. For the unregularized perturbation \(\varphi=2A(u)\) with \(A(u)\) given by (5.8), the positive frequency component in momentum space is \[\varphi^{+}(k)=(2q+1)\int_{-\infty}^{\infty}\!\!du\;e^{iku}\ln\left|f_{0}\right|,\qquad k>0\,, \tag{6.6}\] which is the result of the \(\epsilon\to 0\) limit of the regularized form (5.14). With the change of variables \(u=v_{0}-2r_{M}x\), (6.6) is \[\varphi^{+}(k)=2r_{M}\,(2q+1)\,e^{ikv_{0}}\,I(z)\Big{|}_{z=2kr_{M}} \tag{6.7}\] where the integral \(I(z)\) is finite and given by \[I(z)=\int_{-\infty}^{\infty}\!\!dx\,e^{-ixz}\,\ln\left|1-\frac{1}{x}\right|=\int_{1}^{\infty}\!\!dx\,e^{-ixz}\,\ln\left(1-\frac{1}{x}\right)+\int_{0}^{1}\!\!dx\,e^{-ixz}\,\ln\left(\frac{1}{x}-1\right)+\int_{0}^{\infty}\!\!dx\,e^{ixz}\,\ln\left(1+\frac{1}{x}\right)=\frac{\pi}{z}\left(1-e^{-iz}\right)\,. \tag{6.8}\] Although each of the three integrals in (6.8) involves sine-integral (Si) and cosine-integral (Ci) special functions, their sum turns out to be expressible in terms of elementary functions in the last form. Substituting (6.7) with (6.8) and \(z=2kr_{M}\) into (6.5) gives the probability density of the initial state perturbation \[\left|\Psi_{0}\right|^{2}\propto\exp\left\{-\frac{N}{24\pi^{2}}\,(2q+1)^{2}\int_{0}^{\infty}\!\!dz\,z\,\left|I(z)\right|^{2}\right\} \tag{6.9}\] for the unregularized initial state perturbation (5.8).
Now observe from (6.8) that the integrand of the \(z\) integral in (6.9) is \[z\,\left|I(z)\right|^{2}=z\,\frac{\pi^{2}}{z^{2}}\left|1-e^{-iz}\right|^{2}=\frac{4\pi^{2}}{z}\,\sin^{2}\left(\frac{z}{2}\right)\sim\frac{2\pi^{2}}{z} \tag{6.10}\] so that in fact the integral in (6.9) as it stands diverges logarithmically, and would give an identically zero probability for any \(q\neq-1/2\), which is the vacuum state. This is consistent with the general theorem of Ref. [34], which excludes the possibility that truly singular behavior on the future horizon could be generated in gravitational collapse, starting from smooth initial data. The perturbation (5.8) is such a singular perturbation for any \(q\neq-1/2\), also with diverging energy (5.20) in the initial state. It is not difficult to see that the large \(z\) behavior of the integral (6.8) is determined by the behavior of \(A(u)\) at its logarithmic singular points, where \(f_{0}\) becomes either \(0\) or \(\infty\). Thus if we replace the singular perturbation (5.8) by the finite one (5.14), regularized by a small but finite \(\epsilon\) parameter, the \(1/z\) behavior of (6.8) and (6.10) is cut off at \(z\sim 1/\epsilon\), with the result that \[\int_{0}^{\infty}dz\,z\left|I_{\epsilon}(z)\right|^{2}\sim\ln\left(1/\epsilon\right) \tag{6.11}\] for the regularized perturbation (5.14), and as a result the probability functional (6.9) becomes \[\left|\psi_{\epsilon}(q)\right|^{2}\propto\exp\left\{-\frac{N}{24\pi^{2}}\,(2q+1)^{2}\,\ln\left(1/\epsilon\right)\right\}=\epsilon^{N(2q+1)^{2}/24\pi^{2}} \tag{6.12}\] which is now a finite, normalizable probability density in \(q\) for any \(\epsilon>0\). If \(\epsilon\) is required to satisfy (5.19) for \(q\) of order unity, it is instructive to evaluate the exponent for a typical value of \(r_{M}\simeq 3\) km for a solar mass BH, for which \[\frac{r_{M}}{L_{\rm Pl}}\simeq 1.9\times 10^{38}\gg 1\,. \tag{6.13}\] Despite this very large value, the exponent in (6.12) is only weakly logarithmically dependent on \(\epsilon\) and \[\frac{1}{24\pi^{2}}\,\ln\left(1/\epsilon\right)=\frac{1}{24\pi^{2}}\,\ln\left(\sqrt{\frac{6\pi}{N}}\,\frac{2r_{M}}{L_{\rm Pl}}\right)\simeq 0.38-\frac{\ln N}{48\pi^{2}} \tag{6.14}\] is actually \(\mathcal{O}(1)\). The \(\ln N\) term in (6.14) is also negligibly small compared to \(0.38\) provided \(\ln N\ll(48\pi^{2})(0.38)\simeq 180\), so that neglecting it, we find from (6.12) that the normalized probability distribution in \(q\) is approximately \[\left|\psi_{\epsilon}(q)\right|^{2}\simeq\sqrt{\frac{(1.52)\,N}{\pi}}\,\exp\left\{-(0.38)\,N\,(2q+1)^{2}\right\} \tag{6.15}\] centered on the vacuum value of \(q=-\frac{1}{2}\), where the normalization is now fixed by \(\int_{-\infty}^{\infty}\!dq\,|\psi_{\epsilon}(q)|^{2}=1\). For the perturbation (5.14) with \(q=0\), which produces the large suppression of the Hawking effect and a stress tensor on the horizon large enough to produce significant backreaction according to (5.19), \[|\psi_{\epsilon}(0)|^{2}\simeq(0.70)\sqrt{N}\,e^{-(0.38)\,N}=(0.70)\sqrt{N}\,(0.68)^{N} \tag{6.16}\] which is \(\mathcal{O}(1)\), unless \(N\) is very large. As in (6.3), the probability of finding a perturbation in the initial state of the form (5.8) varying from the vacuum value by \(|\Delta q|\geq 1/2\) is \[P\left(|\Delta q|\geq\frac{1}{2}\right)\simeq\text{erfc}\left(\sqrt{(0.38)\,N}\right)=\left\{\begin{array}{ll}0.38\,,&N=1\\ 0.08\,,&N=4\end{array}\right. \tag{6.17}\] which is also \(\mathcal{O}(1)\), for \(N\) fields contributing to the 2D conformal anomaly, unless \(N\) is very large.
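The numerical estimates (6.14)-(6.17) follow from elementary quadrature of the Gaussian (6.12). A short sketch (ours, assuming scipy) reproducing them for the solar-mass value (6.13):

```python
import numpy as np
from scipy.special import erfc

L_Pl, rM = 1.616e-33, 3.0e5                  # cm; solar mass BH, eq. (6.13)
for N in (1, 4):
    coeff = np.log(np.sqrt(6*np.pi/N) * 2*rM/L_Pl) / (24*np.pi**2)  # eq. (6.14)
    norm = 2*np.sqrt(coeff*N/np.pi)          # fixes int |psi_eps(q)|^2 dq = 1
    psi0_sq = norm*np.exp(-coeff*N)          # |psi_eps(0)|^2, eq. (6.16)
    P = erfc(np.sqrt(coeff*N))               # P(|dq| >= 1/2), eq. (6.17)
    print(f"N={N}: coeff={coeff:.3f}  |psi(0)|^2={psi0_sq:.3f}  P={P:.3f}")
```

Running this gives coeff \(\simeq 0.38\), and \(P\simeq 0.38\) for \(N=1\) and \(P\simeq 0.08\) for \(N=4\), matching (6.17).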
\tag{6.17}\] which is also \(\mathcal{O}(1)\), for \(N\) fields contributing to the 2D conformal anomaly, unless \(N\) is very large. ## VII Discussion and Outlook In this paper we have considered a simple 2D model of gravitational collapse, and studied the effects of the quantum conformal anomaly on the resulting classical BH horizon. Although this and similar 2D models of gravitational collapse have been considered previously [4, 7, 9], attention has been focused almost exclusively on initial conditions corresponding to the Minkowski vacuum on \(\mathcal{F}^{-}\). This choice of initial state leads to the stress tensor on the horizon that is regular in free-falling coordinates and backreaction effects of Hawking radiation that are small, at least initially in the semi-classical approximation, where quantum fluctuations from the mean \(\langle T_{\mu}^{\ \nu}\rangle\) are ignored. This study of a simplified two-dimensional model of gravitational collapse shows instead that the quantum effects of the conformal anomaly can be extraordinarily large on BH horizons, overcoming even the enormous suppression of Planck to macroscopic scales expressed by the ratio (6.13). This suppression, normally expected of quantum effects in classical gravity, can be overcome in the stress tensor of the conformal anomaly because of its sensitivity to light cone pole singularities of quantum field theory, that occur in generic quantum states and extend to macroscopic scales. This specifically quantum, non-local effect, and its importance to the behavior of the stress tensor on BH horizons is illustrated in the simple 2D model of this paper. This supports the conclusion of other studies that the effective action of the conformal anomaly is a relevant addition to the classical theory that should be added in a full effective field theory (EFT) treatment of gravity at macroscopic scales [25, 26, 27, 29, 37]. By recasting the effective action of the 2D conformal anomaly in local form (4.3) via the introduction of a local scalar conformalon field \(\varphi\), a much wider class of initial conditions can be considered, by allowing general homogeneous solutions to the linear wave eq. (4.4) that \(\varphi\) satisfies. As a practical matter, this formulation of general initial conditions is simpler and much less technically involved than calculating the stress tensor of every quantum field in each and every quantum state, by the standard approach of mode sums, which requires a cumbersome process of regularization and renormalization on a case by case basis, even on a fixed background with a great deal of symmetry [3]. Calculations of quantum backreaction in dynamically evolving spacetimes, or those with less symmetry rapidly become prohibitive by this method. The local form of the conformal anomaly stress tensor and eq. of motion provides a more practical approach to make progress in this class of quantum backreaction problems in BH and other curved spacetimes, particularly in the horizon region where the anomaly dominates other vacuum polarization effects because of its lightlike singularity. We have illustrated this relevance of the anomaly stress tensor in the 2D model through its effect on Hawking emission, which can be modified or suppressed for indefinitely long times after gravitational collapse, by different choices of the initial state easily studied by means of different solutions to the \(\varphi\) eq. (4.4). 
Since the anomaly effective action is quadratic in \(\varphi\), it is also a convenient route to estimating the probability of such non-vacuum initial conditions in the vacuum wave functional. The probability of non-vacuum initial conditions that can significantly affect the BH near-horizon geometry and Hawking effect (6.16)-(6.17) are not negligibly small, but rather of \(\mathcal{O}(1)\). This illustrates in a different way both the ability of the anomaly to overcome large quantum suppression factors in gravitational collapse, and the special and fine-tuned nature of the vacuum initial conditions upon which virtually all inferences of quantum effects in BHs has been based. The present study indicates that a reconsideration of these conclusions for more general initial state conditions may be warranted. Clearly the estimates of the probability based on a 2D model of gravitational collapse (6.16)-(6.17) are only illustrative, given that the 2D model itself ignoring all transverse pressure components as in (2.9). The shortcomings of this model and similar ones have been pointed out [44]. For these reasons we do not take (6.16)-(6.17) as accurate reliable predictions for the probability of non-vacuum initial conditions in 4D gravitational collapse. Nevertheless, general features of weak, logarithmic dependence on the large ratio of scales \(1/\epsilon\sim r_{{}_{M}}/L_{\rm Pl}\) of this probability function, when initial state perturbations are regularized by a small parameter that grow large on the horizon, is expected to hold in four dimensions as well. The 4D effective action of the conformal anomaly is also quadratic in \(\varphi\), and its eq. of motion is also linear [27; 37]. Hence the probability of non-vacuum initial conditions that lead to large effects on the BH horizon found in [25; 26] can be studied by the same methods as those in the 2D case. Thus the study of the simplified 2D model presented here justifies a detailed study of the analogous non-vacuum perturbations by means of the 4D quantum conformal anomaly in more realistic models of gravitational collapse, and in the full EFT of [27], where the \(\varphi\) conformalon is coupled to dynamical vacuum energy, allowing it also to change in the near-horizon region.
2307.07264
On Interpolating Experts and Multi-Armed Bandits
Learning with expert advice and multi-armed bandit are two classic online decision problems which differ on how the information is observed in each round of the game. We study a family of problems interpolating the two. For a vector $\mathbf{m}=(m_1,\dots,m_K)\in \mathbb{N}^K$, an instance of $\mathbf{m}$-MAB indicates that the arms are partitioned into $K$ groups and the $i$-th group contains $m_i$ arms. Once an arm is pulled, the losses of all arms in the same group are observed. We prove tight minimax regret bounds for $\mathbf{m}$-MAB and design an optimal PAC algorithm for its pure exploration version, $\mathbf{m}$-BAI, where the goal is to identify the arm with minimum loss with as few rounds as possible. We show that the minimax regret of $\mathbf{m}$-MAB is $\Theta\left(\sqrt{T\sum_{k=1}^K\log (m_k+1)}\right)$ and the minimum number of pulls for an $(\epsilon,0.05)$-PAC algorithm of $\mathbf{m}$-BAI is $\Theta\left(\frac{1}{\epsilon^2}\cdot \sum_{k=1}^K\log (m_k+1)\right)$. Both our upper bounds and lower bounds for $\mathbf{m}$-MAB can be extended to a more general setting, namely the bandit with graph feedback, in terms of the clique cover and related graph parameters. As consequences, we obtained tight minimax regret bounds for several families of feedback graphs.
Houshuang Chen, Yuchen He, Chihao Zhang
2023-07-14T10:38:30Z
http://arxiv.org/abs/2307.07264v2
# On Interpolating Experts and Multi-Armed Bandits ###### Abstract Learning with expert advice and multi-armed bandit are two classic online decision problems which differ on how the information is observed in each round of the game. We study a family of problems interpolating the two. For a vector \(\mathbf{m}=(m_{1},\ldots,m_{K})\in\mathbb{N}^{K}\), an instance of \(\mathbf{m}\text{-}\mathsf{MAB}\) indicates that the arms are partitioned into \(K\) groups and the \(i\)-th group contains \(m_{i}\) arms. Once an arm is pulled, the losses of all arms in the same group are observed. We prove tight minimax regret bounds for \(\mathbf{m}\text{-}\mathsf{MAB}\) and design an optimal PAC algorithm for its pure exploration version, \(\mathbf{m}\text{-}\mathsf{BAI}\), where the goal is to identify the arm with minimum loss with as few rounds as possible. We show that the minimax regret of \(\mathbf{m}\text{-}\mathsf{MAB}\) is \(\Theta\left(\sqrt{T\sum_{k=1}^{K}\log(m_{k}+1)}\right)\) and the minimum number of pulls for an \((\varepsilon,0.05)\)-PAC algorithm of \(\mathbf{m}\text{-}\mathsf{BAI}\) is \(\Theta\left(\frac{1}{\varepsilon^{2}}\cdot\sum_{k=1}^{K}\log(m_{k}+1)\right)\). Both our upper bounds and lower bounds for \(\mathbf{m}\text{-}\mathsf{MAB}\) can be extended to a more general setting, namely the bandit with graph feedback, in terms of the _clique cover_ and related graph parameters. As consequences, we obtained tight minimax regret bounds for several families of feedback graphs. ## 1 Introduction A typical family of online decision problems is as follows: In each round of the game, the player chooses one of \(N\) arms to pull. At the same time, the player will incur a loss of the pulled arm. The objective is to minimize the expected regret defined as the difference between the cumulative losses of the player and that of the single best arm over \(T\) rounds. The minimax regret, denoted as \(R^{*}(T)\), represents the minimum expected regret achievable by any algorithm against the worst loss sequence. There are variants of the problem according to amount of information the player can observe in each round. In the problem of multi-armed bandit (\(\mathsf{MAB}\)), the player can only observe the loss of the arm just pulled. The minimax regret is \(\Theta\left(\sqrt{NT}\right)\) ([1]). Another important problem is when the player can observe the losses of all arms in each round, often refered to as learning with expert advice. The minimax regret is \(\Theta\left(\sqrt{T\log N}\right)\) ([12, 13]). Bandit with graph feedback generalizes and interpolates both models. In this model, a directed graph \(G\), called the feedback graph, is given. The vertex set of \(G\) is the set of arms and a directed edge from \(i\) to \(j\) indicates that pulling the arm \(i\) can observe the loss of arm \(j\). As a result, the \(\mathsf{MAB}\) corresponds to when \(G\) consists of singletons with self-loop, and learning with expert advice corresponds to when \(G\) is a clique. A number of recent works devote to understanding how the structure of \(G\) affects the minimax regret ([1, 1, 2, 13, 14]). In this paper, we consider a natural interpolation between learning with expert advice and multi-armed bandit. Let \(\mathbf{m}=(m_{1},m_{2},\ldots,m_{K})\in\mathbb{N}^{K}\) be a vector with each \(m_{i}\geq 1\). An instance of \(\mathbf{m}\)-\(\mathtt{MAB}\) is that the all \(N\) arms are partitioned into \(K\) groups and the pull of each arm can observe the losses of all arms in the same group. 
In the language of bandit with graph feedback, the feedback graph \(G\) is the disjoint union of \(K\) cliques with size \(m_{1},m_{2},\ldots,m_{k}\) respectively. We show that the minimax regret for \(\mathbf{m}\)-\(\mathtt{MAB}\) is \(\Theta\left(\sqrt{T\cdot\sum_{k\in[K]}\log(m_{k}+1)}\right)\). As a result, this generalizes the optimal regret bounds for both \(\mathtt{MAB}\) and learning with expert advice. A closely related problem is the so-called "pure exploration" version of bandit, often referred to as the _best arm identification_ (\(\mathtt{BAI}\)) problem where the loss of each arm follows some (unknown) distribution. The goal of the problem is to identify the arm with minimum mean loss with as few rounds as possible. Similarly, we introduced the problem of \(\mathbf{m}\)-\(\mathtt{BAI}\) with the same feedback pattern as \(\mathbf{m}\)-\(\mathtt{MAB}\). We design an \((\varepsilon,0.05)\)-PAC algorithm for \(\mathbf{m}\)-\(\mathtt{BAI}\) which terminates in \(T=O\left(\frac{1}{\varepsilon^{2}}\sum_{k\in[K]}\log(m_{k}+1)\right)\) rounds for every \(\varepsilon<\frac{1}{8}\). This means that after \(T\) rounds of the game, with probability at least \(0.95\), the algorithm can output an arm whose mean loss is less than \(\varepsilon\) plus the mean of the best one. We show that our algorithm is optimal by proving a matching lower bound \(\Omega\left(\frac{1}{\varepsilon^{2}}\sum_{k\in[K]}\log(m_{k}+1)\right)\) for any \((\varepsilon,0.05)\)-PAC algorithm. Both our upper bounds and lower bounds for the minimax regret of \(\mathbf{m}\)-\(\mathtt{MAB}\) can be generalized to bandit with graph feedback. To capture the underlying structure necessary for our proofs, we introduce some new graph parameters which yield optimal bound for several families of feedback graphs. The main results are summarized in Section 1.1. Our algorithm deviates from the standard _online stochastic mirror descent_ (OSMD) algorithm for bandit problems. We employ the two-stage OSMD developed in [1] and give a novel analysis which yields the optimal regret bound. For the lower bound, we prove certain new "instance-specific" lower bounds for the best arm identification problem. These lower bounds may find applications in other problems. We will give an overview of our techniques in Section 1.2. ### Main Results We summarize our main results in this section. Formal definitions of \(\mathbf{m}\)-\(\mathtt{MAB}\), \(\mathbf{m}\)-\(\mathtt{BAI}\) and bandit with graph feedback are in Section 2. **Theorem 1**.: _There exists an algorithm such that for any instance of \((m_{1},\ldots,m_{K})\)-\(\mathtt{MAB}\), any \(T>0\) and any loss sequence \(\ell^{(0)},\ell^{(1)},\ldots,\ell^{(T-1)}\in[0,1]^{N}\), its regret is at most_ \[c\cdot\sqrt{T\cdot\sum_{k=1}^{K}\log(m_{k}+1)},\] _where \(c>0\) is a universal constant._ Given an instance of \(\mathbf{m}\)-\(\mathtt{BAI}\), for \(\varepsilon,\delta\in(0,1)\), an \((\varepsilon,\delta)\)-PAC algorithm can output an arm whose mean loss is less than \(\varepsilon\) plus the mean of the optimal one with probability at least \(1-\delta\). 
Using a reduction from \(\mathbf{m}\)-\(\mathtt{BAI}\) to \(\mathbf{m}\)-\(\mathtt{MAB}\) (Lemma 13), we obtain a PAC algorithm for \(\mathbf{m}\)-\(\mathtt{BAI}\): **Theorem 2**.: _There exists an \((\varepsilon,0.05)\)-PAC algorithm for \((m_{1},\ldots,m_{K})\)-\(\mathtt{BAI}\) which pulls_ \[T\leq c\cdot\sum_{k=1}^{K}\frac{\log(m_{k}+1)}{\varepsilon^{2}}\] _arms where \(c>0\) is a universal constant._ Let \(\mathsf{Ber}\left(p\right)\) denote the Bernoulli distribution with mean \(p\). We complement the above algorithm with the following lower bound: **Theorem 3**.: _There exists an instance \(\mathcal{H}\) such that for every \((\varepsilon,0.05)\)-PAC algorithm \(\mathcal{A}\) of \((m_{1},\ldots,m_{K})\)-BAI with \(\varepsilon\in\left(0,\frac{1}{8}\right)\), the expected number of pulls \(T\) of \(\mathcal{A}\) on \(\mathcal{H}\) satisfies_ \[\mathbf{E}\left[T\right]\geq c^{\prime}\cdot\sum_{k=1}^{T}\frac{\log(m_{k}+1) }{\varepsilon^{2}},\] _where \(c^{\prime}>0\) is a universal constant. Moreover, we can pick \(\mathcal{H}\) as the one in which each arm follows \(\mathsf{Ber}\left(\frac{1}{2}\right)\)._ Using the reduction from \(\mathbf{m}\)-BAI to \(\mathbf{m}\)-MAB (Lemma 13) again, we obtain the lower bound for \(\mathbf{m}\)-MAB. **Theorem 4**.: _For any algorithm \(\mathcal{A}\) of \((m_{1},\ldots,m_{k})\)-MAB, for any sufficiently large \(T>0\), there exists a loss sequence \(\ell^{(0)},\ell^{(1)},\ldots,\ell^{(T-1)}\) such that the regret of \(\mathcal{A}\) in \(T\) rounds is at least_ \[c^{\prime}\cdot\sqrt{T\cdot\sum_{k=1}^{K}\log(m_{k}+1)},\] _where \(c^{\prime}>0\) is a universal constant._ Our results generalize to the setting of _bandit with graph feedback_. Let \(G=(V,E)\) be a directed graph with self-loop on each vertex. Let \(V_{1},\ldots,V_{K}\subseteq V\) be subsets of vertices. We say that they form a \((V_{1},\ldots,V_{K})\)-_clique cover_ of \(G\) if each induced subgraph \(G[V_{k}]\) for \(k\in[K]\) is a clique and \(\bigcup_{k\in[K]}V_{k}=V\). **Corollary 5**.: _Let \(G\) be a feedback graph with a self-loop on each vertex. If \(G\) contains a \((V_{1},\ldots,V_{K})\)-clique cover where \(|V_{k}|=m_{k}\) for every \(k\in[K]\), then the minimax regret of bandit with graph feedback \(G\) is at most_ \[c\cdot\sqrt{T\cdot\sum_{k=1}^{K}\log(m_{k}+1)}\] for some universal constant \(c>0\). Our lower bounds generalize to bandit with graph feedback as well. The terms "strongly observable feedback graphs" and "weakly observable feedback graphs" are defined in Section 2. **Theorem 6**.: _Let \(G=(V,E)\) be the feedback graph. Assume that there exist \(K\) disjoint sets \(S_{1},\ldots,S_{K}\subseteq V\) such that_ * _each_ \(G[S_{k}]\) _is a strongly observable graph with a self-loop on each vertex;_ * _there is no edge between_ \(S_{i}\) _and_ \(S_{j}\) _for any_ \(i\neq j\)_._ _Then for any algorithm \(\mathcal{A}\) and any sufficiently large time horizon \(T>0\), there exists some loss sequence on which the regret of \(\mathcal{A}\) is at least \(c^{\prime}\cdot\sqrt{T\cdot\sum_{k=1}^{K}\log\left(|S_{k}|+1\right)}\) for some universal constant \(c^{\prime}>0\)._ The following lower bound for weakly observable feedback graphs confirms a conjecture in [10] and implies the optimality of several regret bounds established there, e.g., when the feedback graph is the disjoint union of loopless complete bipartite graphs. The notion of \(t\)-packing independent set is defined in Section 2. **Theorem 7**.: _Let \(G=(V,E)\) be the feedback graph. 
Assume that \(V\) can be partitioned into \(K\) disjoint sets \(V=V_{1}\cup V_{2}\cup\cdots\cup V_{K}\) such that_ * _for every_ \(k\in[K]\)_, each_ \(G[V_{k}]\) _is observable;_ * _for every_ \(k\in[K]\)_, there exists a_ \(t_{k}\)_-packing independent set_ \(S_{k}\) _in_ \(G[V_{k}]\) _such that every vertex in_ \(S_{k}\) _does not have a self-loop;_ * _there is no edge from_ \(V_{i}\) _to_ \(S_{j}\) _for any_ \(i\neq j\) _in_ \(G\)_._ _Then for any algorithm \(\mathcal{A}\) and any sufficiently large time horizon \(T>0\), there exists some loss sequence on which the regret of \(\mathcal{A}\) with feedback graph \(G\) is at least \(c^{\prime}\cdot T^{\frac{2}{3}}\cdot\left(\sum_{k=1}^{K}\max\left\{\log|S_{k}|, \frac{|S_{k}|}{t_{k}}\right\}\right)^{\frac{1}{3}}\) for some universal constant \(c^{\prime}>0\)._ Theorem 7 implies tight regret lower bounds for several weakly observable graphs. We summarize the minimax regret for some feedback graphs, weakly or strongly observable, in Table 1. ### Overview of Technique We note that a simple reduction (Lemma 13) implies that any algorithm for \(\mathbf{m}\text{-}\mathbf{M}\mathbf{A}\) can be turned into a PAC algorithm for \(\mathbf{m}\text{-}\mathbf{B}\mathbf{A}\). As a result, Theorems 1 to 4 follow from a minimax regret upper bound for \(\mathbf{m}\text{-}\mathbf{M}\mathbf{A}\) and a lower bound for \(\mathbf{m}\text{-}\mathbf{B}\mathbf{A}\). #### 1.2.1 Upper bounds for \(\mathbf{m}\text{-}\mathbf{M}\mathbf{A}\) We design a new two-stage algorithm (Algorithm 1) to establish an upper bound for \(\mathbf{m}\text{-}\mathbf{M}\mathbf{A}\). The algorithm is similar to the one used in [10] to study weakly observable graphs with a few tweaks to incorporate our new analysis. The algorithm maintains a distribution over \(K\) groups and for each group, it maintains a distribution for arms in that group. In each round of the game, the algorithm pulls an arm in a two-stage manner: First \begin{table} \begin{tabular}{c c c} \hline \hline Graph Type & Previous Result & This Work \\ \hline General strongly & \(O\left(\sqrt{\alpha T}\log NT\right)\) & \(O\left(\sqrt{T\sum_{k=1}^{K}\log m_{k}}\right)\)2 \\ observable graphs with & \(\Omega\left(\sqrt{\alpha T}\right)\)3 & See Theorem 6 for the lower bound \\ Disjoint union of \(K\) & \(O\left(\sqrt{KT}\log NT\right)\) & \\ cliques & \(\Omega\left(\sqrt{KT}\right)\) & \(\Theta\left(\sqrt{T\sum_{k=1}^{K}\log m_{k}}\right)\) \\ General weakly & \(\Omega\left(T^{\frac{2}{3}}\max\left\{\frac{|S|}{k},\log|S|\right\}^{\frac{1}{ 3}}\right)\)4 & \(\Omega\left(T^{\frac{2}{3}}\left(\sum_{k=1}^{K}\max\left\{\log|S_{k}|,\frac{|S _{k}|}{t_{k}}\right\}\right)^{\frac{1}{3}}\right)\)5 \\ Disjoint union of \(K\) & \(O\left(T^{\frac{2}{3}}\left(\log N\right)^{\frac{1}{3}}\right)\) & \(\Omega\left(T^{\frac{2}{3}}\left(\sum_{k=1}^{K}\log m_{k}\right)^{\frac{1}{3}}\right)\) \\ \hline \hline \end{tabular} \end{table} Table 1: Minimax Regret Bound on Various Feedback Graphs pick the group according to the distribution over groups and then pick the arm in that group following the distribution in the group. At the end of each round, all distributions are updated in the manner similar to _online stochastic mirror descent_ (OSMD) with carefully designed loss vectors and various potential functions. Our main technical contribution is a novel analysis of this two-stage algorithm. We design auxiliary two-stage _piecewise continuous processes_ whose regret is relatively easy to analyze. 
Then we view our algorithm as a discretization of the process and bound the accumulated discretization errors. Since the notion of \(\mathbf{m}\)-\(\mathsf{MAB}\) generalizes both learning with expert advice and multi-armed bandit, we remark that our analysis of Algorithm 1 can specialize to an analysis of both ordinary mirror descent (MD) algorithm and OSMD algorithm. We believe that the viewpoint of discretizing a piecewise continuous process is more intuitive than the textbook analysis of OSMD and may be of independent pedagogical interest. #### 1.2.2 Lower bounds for \(\mathbf{m}\)-\(\mathsf{BAI}\) Our lower bound for the number of rounds in an \((\varepsilon,0.05)\)-PAC algorithm for \(\mathbf{m}\)-\(\mathsf{BAI}\) where \(\mathbf{m}=(m_{1},\ldots,m_{K})\) is \[\Omega\bigg{(}\sum_{k=1}^{K}\frac{\log(m_{k}+1)}{\varepsilon^{2}}\bigg{)},\] which is the sum of lower bounds on each \((m_{k})\)-\(\mathsf{BAI}\) instance. To achieve this, we show that the instance where all arms are \(\mathsf{Ber}(\frac{1}{2})\) is in fact a universal hard instance in the sense that every \((\varepsilon,0.05)\)-PAC algorithm requires \(\Omega\bigg{(}\sum_{k=1}^{K}\frac{\log(m_{k}+1)}{\varepsilon^{2}}\bigg{)}\) to identify. Via a reduction of "direct-sum" flavor, we show that every \((\varepsilon,0.05)\)-PAC algorithm, when applied to this instance, must successfully identify that each group consists of \(\mathsf{Ber}(\frac{1}{2})\) arms. As a result, the lower bound is the sum of the lower bounds for each "all \(\mathsf{Ber}(\frac{1}{2})\)" \((m_{k})\)-\(\mathsf{BAI}\) instance. We then prove the lower bound for "all \(\mathsf{Ber}(\frac{1}{2})\)" \((m)\)-\(\mathsf{BAI}\) instance for every \(m\geq 2\). We use \(\mathcal{H}_{0}^{(m)}\) to denote this instance. The \(\mathcal{H}_{0}^{(m)}\) specified lower bound is obtained by constructing another \(m\) instances \(\mathcal{H}_{1}^{(m)},\ldots,\mathcal{H}_{m}^{(m)}\) and compare the distribution of losses generated by \(\mathcal{H}_{0}^{(m)}\) and the distribution of losses generated by a _mixture_ of \(\mathcal{H}_{1}^{(m)},\ldots,\mathcal{H}_{m}^{(m)}\). For technical reasons, we first prove the lower bound when all arms are Gaussian and reduce the Gaussian arms to Bernoulli arms. ### Organization of the Paper In this paper, we focus on the \(\mathbf{m}\)-\(\mathsf{MAB}\) and the \(\mathbf{m}\)-\(\mathsf{BAI}\) and provide a fine-grained analysis to achieve tight bounds for both problems. The paper is organized in the following way. We outline our main results in Section 1.1 and introduce the preliminaries in Section 2. A two-stage optimal algorithm for \(\mathbf{m}\)-\(\mathsf{MAB}\) is given in Section 3, along with continuous-time and discretized analysis. We then generalize this result to bandit with strongly observable graphs in Section 3.4. We also construct an \((\varepsilon,0.05)\)-PAC algorithm for \(\mathbf{m}\)-\(\mathsf{BAI}\) which terminates in bounded rounds in Section 3.3 via a reduction to \(\mathbf{m}\)-\(\mathsf{MAB}\) problems. In Section 4, we derive a corresponding lower bound for \(\mathbf{m}\)-\(\mathsf{BAI}\). Based on the results in Section 4, we provide a regret lower bound for \(\mathbf{m}\)-\(\mathsf{MAB}\) in Section 5.1 which matches the upper bound in Section 3. We also prove the lower bounds for bandit with strongly and weakly observable feedback graphs in Section 5.2 and Section 5.3 respectively. The result on weakly observable graphs solves an open problem in [11]. 
### Related Works The bandit feedback setting as an online decision problem has received considerable attention. The work of [1] first provided a tight bound for the bandit feedback setting, while the full information feedback case has been well studied in [11, 12]. Building upon these works, [13] introduced an interpolation between these two extremes and generalized the feedback of the classic bandit problem to a graph structure. Several prior studies, such as [1, 1, 10, 11, 12], have proposed various graph parameters to characterize the factors that influence regret. However, the algorithms proposed in these works for more general graphs do not yield a tight bound in our specific setting. The pure exploration version of the bandit problem, known as the _best arm identification_ (BAI) problem, has also received significant attention in the literature ([1, 10, 11, 12, 13, 14]). While the BAI problem may appear deceptively simple, determining the precise bound for BAI under the bandit feedback setting remains an open question. However, for the problem of identifying an \(\varepsilon\)-optimal arm with high probability, [1] established a tight bound for the bandit feedback setting, while the bound for the full feedback model is relatively straightforward (see e.g. [1]). #### 1.4.1 Comparison with [1] The very recent work of [1] studied interpolation of learning with experts and multi-armed bandit as well from a different perspective. They proved an \(O\left(\sqrt{T\alpha(1+\log\left(N/\alpha\right))}\right)\) upper bound for the minimax regret of bandit with strongly feedback graph \(G\) where \(\alpha\) is the _independence number_ of \(G\). The parameter is in general _not_ comparable with clique covers used in this work for feedback graphs. Particularly on an \(\mathsf{m}\)-\(\mathsf{M}\mathsf{A}\mathsf{B}\) instance where \(\mathbf{m}=(m_{1},\ldots,m_{K})\), the independence number is \(K\) and therefore their upper bound becomes to \(O\left(\sqrt{TK\log(N/K)}\right)\) while our results showed that the minimax regret is indeed \(\Theta\left(\sqrt{T\sum_{k=1}^{K}\log(m_{k}+1)}\right)\). To see the difference, assume \(K=\lfloor\log N\rfloor\) and \(\mathbf{m}=(1,1,\ldots,1,N-K+1)\), then the minimax regret is \(\Theta\left(\sqrt{T\log N}\right)\) while the upper bound in [1] is \(O\left(\sqrt{T}\log N\right)\). ## 2 Preliminaries In this section, we formally define the notations used and introduce some preparatory knowledge that will help in understanding this work. ### Mathematical Notations Let \(n\) be a non-negative integer. We use \([n]\) to denote the set \(\{1,2,\ldots,n\}\) and \(\Delta_{n-1}=\left\{\mathbf{x}\in\mathbb{R}_{\geq 0}^{n}:\sum_{i=1}^{n} \mathbf{x}(i)=1\right\}\) to denote the \(n-1\) dimensional standard simplex where \(\mathbb{R}_{\geq 0}\) is the set of all non-negative real numbers. For a real vector \(\mathbf{x}\in\mathbb{R}^{n}\), the \(i\)-th entry of \(\mathbf{x}\) is denoted as \(\mathbf{x}(i)\) for every \(i\in[n]\). We define \(\mathbf{e}_{i}^{[n]}\) as the indicator vector of the \(i\)-th coordinate such that \(\mathbf{e}_{i}^{[n]}(i)=1\) and \(\mathbf{e}_{i}^{[n]}(j)=0\) for all \(j\neq i\) and \(j\in[n]\). We may write \(\mathbf{e}_{i}^{[n]}\) as \(\mathbf{e}_{i}\) if the information on \(n\) is clear from the context. Given two vectors \(\mathbf{x},\mathbf{y}\in\mathbb{R}^{n}\), we define their inner product as \(\langle\mathbf{x},\mathbf{y}\rangle=\sum_{i=1}^{n}\mathbf{x}(i)\mathbf{y}(i)\). 
For any \(a,b\in\mathbb{R}\), let \([a,b]=\{c\in\mathbb{R}\mid\min\left\{a,b\right\}\leq c\leq\max\left\{a,b\right\}\}\) be the interval between \(a\) and \(b\). For any \(\mathbf{x},\mathbf{y}\in\mathbb{R}^{n}\), we say \(\mathbf{y}\geq\mathbf{x}\) if \(\mathbf{y}(i)\geq\mathbf{x}(i)\) for every \(i\in[n]\). Then we can define the rectangle formed by \(\mathbf{x}\) and \(\mathbf{y}\): \(\mathsf{Rect}(\mathbf{x},\mathbf{y})=\{\mathbf{z}\in\mathbb{R}^{n}:\mathbf{y }\geq\mathbf{z}\geq\mathbf{x}\}\). For any positive semi-definite matrix \(M\in\mathbb{R}^{n\times n}\), let \(\left\|\mathbf{x}\right\|_{M}=\sqrt{\mathbf{x}^{\dagger}M\mathbf{x}}\) be the norm of \(\mathbf{x}\) with respect to \(M\). Specifically, we abbreviate \(\left\|\mathbf{x}\right\|_{(\nabla^{2}\psi)^{-1}}\) as \(\left\|\mathbf{x}\right\|_{\nabla^{2}\psi}\) where \(\nabla^{2}\psi\) is the Hessian matrix of a convex function \(\psi\). Let \(F:\mathbb{R}^{n}\rightarrow\mathbb{R}\) be a convex function which is differentiable in its domain \(\mathsf{dom}(F)\). Given \(\mathbf{x},\mathbf{y}\in\mathsf{dom}(F)\), the Bregman divergence with respect to \(F\) is defined as \(B_{F}(\mathbf{x},\mathbf{y})=F(\mathbf{x})-F(\mathbf{y})-\langle\mathbf{x}- \mathbf{y},\nabla F(\mathbf{y})\rangle\). Given two measures \(\mathbf{P}_{1}\) and \(\mathbf{P}_{2}\) on the same measurable space \((\Omega,\mathcal{F})\), the KL-divergence between \(\mathbf{P}_{1}\) and \(\mathbf{P}_{2}\) is defined as \(D_{\text{KL}}\left(\mathbf{P}_{1},\mathbf{P}_{2}\right)=\sum_{\omega\in\Omega} \mathbf{P}_{1}\left[\omega\right]\log\frac{\mathbf{P}_{1}[\omega]}{\mathbf{P}_{2 }[\omega]}\) if \(\Omega\) is discrete or \(D_{\text{KL}}\left(\mathbf{P}_{1},\mathbf{P}_{2}\right)=\int_{\Omega}\log\frac{ \mathbf{P}_{1}[\omega]}{\mathbf{P}_{2}[\omega]}\,\mathrm{d}\mathbf{P}_{1}\left[ \omega\right]\) if \(\Omega\) is continuous provided \(\mathbf{P}_{1}\) is absolutely continuous with respect to \(\mathbf{P}_{2}\). ### Graph Theory Let \(G=(V,E)\) be a directed graph where \(|V|=N\). We use \((u,v)\) to denote the directed edge from vertex \(u\) to vertex \(v\). For any \(U\subseteq V\), we denote the subgraph induced by \(U\) as \(G[U]\). For \(v\in V\), let \(N_{\text{in}}(v)\coloneqq\{u\in V\colon(u,v)\in E\}\) be the set of in-neighbors of \(v\) and \(N_{\text{out}}(v)\coloneqq\{u\in V\colon(v,u)\in E\}\) be the set of out-neighbors. If the graph is undirected, we have \(N_{\text{in}}(v)=N_{\text{out}}(v)\), and we use \(\mathbf{N}(v)\) to denote the neighbors for brevity. We say \(S\subseteq V\) is an independent set of \(G\) if for every \(v\in S\), \(\{u\in S\mid u\neq v,u\in N_{\text{in}}(v)\cup N_{\text{out}}(v)\}=\varnothing\). The maximum independence number of \(G\) is denoted as \(\alpha(G)\) and abbreviated as \(\alpha\) when \(G\) is clear from the context. Furthermore, we say an independent set \(S\) is a \(t\)-packing independent set if and only if for any \(v\in V\), there are at most \(t\) out-neighbors of \(v\) in \(S\), i.e., \(|N_{\text{out}}(v)\cap S|\leq t\). We say the subsets \(V_{1},\ldots,V_{K}\subseteq V\) form a _\((V_{1},\ldots,V_{K})\)-clique cover_ of \(G\) if each induced subgraph \(G[V_{k}]\) for \(k\in[K]\) is a clique and \(\bigcup_{k\in[K]}V_{k}=V\). ### m-Mab and m-Bai Let \(K>0\) be an integer. 
Given a vector \(\mathbf{m}=(m_{1},m_{2},\ldots,m_{K})\in\mathbb{Z}_{\geq 1}^{K}\) with \(\sum_{k\in[K]}m_{k}=N\), we now define problems \(\mathbf{m}\)-\(\mathbf{M}\)\(\mathbf{B}\) and \(\mathbf{m}\)-\(\mathbf{B}\)\(\mathbf{B}\)\(\mathbf{B}\)\(\mathbf{B}\) respectively. #### 2.3.1 m-Mab In the problem of \(\mathbf{m}\)-\(\mathbf{M}\)\(\mathbf{B}\), there are \(N\) arms. The arms are partitioned into \(K\) groups and the \(k\)-th group contains \(m_{k}\) arms. Let \(T\in\mathbb{N}\) be the time horizon. Then \(\mathbf{m}\)-\(\mathbf{M}\)\(\mathbf{B}\) is the following online decision game. The game proceeds in \(T\) rounds. At round \(t=0,1,\ldots,T-1\): * The player pulls an arm \(A_{t}\in[N]\); * The adversary chooses a loss function \(\ell^{(t)}\in[0,1]^{N}\); * The player incurs loss \(\ell^{(t)}\left(A_{t}\right)\) and observes the losses of all arms in the group containing \(A_{t}\). Clearly the vector \(\mathbf{m}\) encodes the amount of information the player can observe in each round. Two extremes are the problem of learning with expert advice and multi-armed bandit, which correspond to \((N)\)-\(\mathbf{M}\)\(\mathbf{B}\) and \((1,\ldots,1)\)-\(\mathbf{M}\)\(\mathbf{B}\) respectively. We assume the player knows \(\mathbf{m}\) and \(T\) in advance and use \(\mathcal{A}\) to denote the player's algorithm (which can be viewed as a function from previous observed information and the value of its own random seeds to the arm pulled at each round). The performance of the algorithm \(\mathcal{A}\) is measured by the notion of _regret_. Fix a loss sequence \(\vec{L}=\left\{\ell^{(0)},\ldots,\ell^{(T-1)}\right\}\). Let \(a^{*}=\arg\min_{a\in[N]}\sum_{t=1}^{T}\ell^{(t)}\left(a\right)\) be the arm with minimum accumulated losses. The regret of the algorithm \(\mathcal{A}\) and time horizon \(T\) on \(\vec{L}\) with respect to the arm \(a\) is defined as \(R_{a}(T,\mathcal{A},\vec{L})=\mathbb{E}\left[\sum_{t=0}^{T-1}\ell^{(t)}\left(A_ {t}\right)\right]-\sum_{t=0}^{T-1}\ell^{(t)}\left(a\right)\). If there is no ambiguity, we abbreviate \(R_{a}(T,\mathcal{A},\vec{L})\) as \(R_{a}(T)\). We also use \(R(T)\) to denote \(R_{a^{*}}(T)\). We are interested in the regret of the best algorithm against the worst adversary, namely the quantity \[R_{a}^{*}(T)=\inf_{\mathcal{A}}\sup_{\vec{L}}R_{a}(T,\mathcal{A},\vec{L}).\] We call \(R_{a^{*}}^{*}(T)\) the _minimax regret_ of \(\mathbf{m}\)-\(\mathbf{M}\)\(\mathbf{B}\) and usually write it as \(R^{*}(T)\). We may use the following two ways to name an arm in \(\mathbf{m}\)-\(\mathbf{M}\)\(\mathbf{B}\): * use the pair \((k,j)\) where \(k\in[K]\) and \(j\in[m_{k}]\) to denote "the \(j\)-th arm in the \(k\)-th group"; * use a global index \(i\in[N]\) to denote the \(i\)-th arm. Following this convention, we use \(\ell^{(t)}(i)\) and \(\ell^{(t)}_{k}(j)\) to denote the loss of arm \(i\) and arm \((k,j)\) at round \(t\) respectively. #### 2.3.2 Best Arm Identification and \(\mathbf{m}\)-Bai The _best arm identification_ (BAI) problem asks the player to identify the best arm among \(N\) given arms with as few pulls as possible. To be specific, each arm \(i\) is associated with a parameter \(p_{i}\) and each pull of arm \(i\) gives an observation of its random loss, which is drawn from a fixed distribution with mean \(p_{i}\) independently. The loss of each arm is restricted to be in \([0,1]\). The one with smallest \(p_{i}\), indexed by \(i^{*}\), is regarded as the best arm. 
An arm \(j\) is called an _\(\varepsilon\)-optimal arm_ if its mean is less than the mean of the best arm plus \(\varepsilon\) for some \(\varepsilon\in(0,1)\), namely \(p_{j}<p_{i^{*}}+\varepsilon\). With fixed \(\varepsilon,\delta>0\), an _\((\varepsilon,\delta)\)-probably approximately correct_ algorithm, or \((\varepsilon,\delta)\)-PAC algorithm for short, can find an \(\varepsilon\)-optimal arm with probability at least \(1-\delta\). In most parts of this paper, we choose \(\delta=0.05\). For an algorithm \(\mathcal{A}\) of BAI, we usually use \(T\) to denote the number of arms \(\mathcal{A}\) pulled before termination. Similarly for any arm \(i\), we use \(T_{i}\) to denote the number of times that the arm \(i\) has been pulled by \(\mathcal{A}\) before its termination. We also use \(N_{i}\) to denote the number of times that the arm \(i\) has been _observed_ by \(\mathcal{A}\). Let \(\mathbf{m}=(m_{1},m_{2},\cdots,m_{K})\in\mathbb{Z}_{\geq 1}^{K}\) be a vector. Similar to \(\mathbf{m}\)-MAB, the arms are partitioned into \(K\) groups and the \(k\)-th group consists of \(m_{k}\) arms. Each pull of an arm can observe the losses of all arms in the group. As usual, the goal is to identify the best arm (the one with minimum \(p_{i}\)) with as few rounds as possible. Similar to \(\mathbf{m}\)-MAB, we use \(i\in[N]\) or \((k,j)\) where \(k\in[K]\) and \(j\in[m_{k}]\) to name an arm. For a fixed algorithm, we use \(T_{i}\) or \(T_{(k,j)}\) to denote the number of times the respective arm has been pulled and use \(N_{i}\) or \(N_{(k,j)}\) to denote the number of times it has been observed. For every \(k\in[K]\) we use \(T^{(k)}\) to denote the number of times the arms in the \(k\)-th group have been pulled, namely \(T^{(k)}=\sum_{j\in[m_{k}]}T_{(k,j)}\). By definition, it holds that \(T=\sum_{k\in[K]}T^{(k)}\) and \(N_{(k,j)}=T^{(k)}\) for every \(j\in[m_{k}]\). ### Bandit with Graph Feedback A more general way to encode the observability of arms is to use feedback graphs. In this problem, a directed graph \(G=(V,E)\) is given. The vertex set \(V=[N]\) is the collection of all arms. The game proceeds in the way similar to \(\mathbf{m}\)-MAB. The only difference is that when an arm \(A_{t}\) is pulled by the player at a certain round, all arms in \(N_{\text{out}}(A_{t})\) can be observed. As a result, given a vector \(\mathbf{m}=(m_{1},m_{2},\cdots,m_{K})\in\mathbb{Z}_{\geq 1}^{K}\), the \(\mathbf{m}\)-MAB problem is identical to bandit with graph feedback \(G=(V,E)\) where \(G\) is the disjoint union of \(K\) cliques \(G_{1}=(V_{1},E_{1}),G_{2}=(V_{2},E_{2}),\ldots,G_{K}=(V_{K},E_{K})\) with \(m_{k}=|V_{k}|\) and \(E_{k}=V_{k}^{2}\) for every \(k\in[K]\). According to [1], we measure the observability of each vertex in terms of its in-neighbors. If a vertex has no in-neighbor, we call it a _non-observable_ vertex, otherwise it is _observable_. If a vertex \(v\) has a self-loop _or_\(N_{\text{in}}(v)\) exactly equals to \(V\setminus\{v\}\), then \(v\) is _strongly observable_. If an observable vertex is not strongly observable, then it is _weakly observable_. In this work, we assume each vertex is observable. If all the vertices are strongly observable, the graph \(G\) is called a strongly observable graph. If \(G\) contains weakly observable vertices (and does not have non-observable ones), we say \(G\) is a weakly observable graph. We can also define the notion of regret for bandit with graph feedback. 
Assume notations before, the regret of an algorithm \(\mathcal{A}\) with feedback graph \(G\) and time horizon \(T\) on a loss sequence \(\widetilde{L}\) with respect to the arm \(a\) is defined as \(R_{a}(G,T,\mathcal{A},\widetilde{L})=\operatorname{\mathbb{E}}\left[\sum_{t=0 }^{T-1}\ell^{(t)}(A_{t})\right]-\sum_{t=0}^{T-1}\ell^{(t)}(a)\). If there is no ambiguity, we abbreviate \(R_{a}(G,T,\mathcal{A},\vec{L})\) as \(R_{a}(G,T)\) or \(R_{a}(T)\). We also use \(R(T)\) to denote \(R_{a^{*}}(T)\). Then minimax regret is again \[R_{a^{*}}^{*}(G,T)=\inf_{\mathcal{A}}\sup_{\vec{L}}R_{a^{*}}(G,T,\mathcal{A}, \vec{L}).\] When \(G\) is clear from the context, we write it as \(R^{*}(T)\). ## 3 The Upper Bounds In this section, we prove Theorem 1 and Theorem 2. We describe the algorithm for \(\mathbf{m}\)-\(\mathtt{MAB}\) in Section 3.1 and analyze it in Section 3.2. The algorithm for \(\mathbf{m}\)-\(\mathtt{BAI}\) is obtained by a reduction to \(\mathbf{m}\)-\(\mathtt{MAB}\) described in Section 3.3. Finally we discuss how to extend the algorithm to bandit with strongly observable feedback graphs and prove Corollary 5 in Section 3.4. ### The Algorithm As discussed in the introduction, our algorithm basically follows the framework of the two-stage online stochastic mirror descent developed in [10]. However, our updating rules is slightly different from the one in [10] in order to incorporate with our new analysis. Given a \(K\)-dimensional vector \(\mathbf{m}=(m_{1},\ldots,m_{K})\) as input, in each round \(t\), the algorithm proceeds in the following two-stage manner: * A distribution \(Y^{(t)}\) over \([K]\) is maintained, indicating which group of arms the algorithm is going to pick. * For each \(k\in[K]\), a distribution \(X_{k}^{(t)}\) is maintained, indicating which arm in the \(k\)-th group the algorithm will pick conditioned on that the \(k\)-th group is picked in the first stage. * The algorithm then picks the \(j\)-th arm in the \(k\)-group with probability \(Y^{(t)}(k)\cdot X_{k}^{(t)}(j)\). The algorithm is described in Algorithm 1 and we give an explanation for each step below. Assuming \(Y^{(0)}\) and \(X_{k}^{(0)}\) for all \(k\in[K]\) are well initialized, in each time step \(t=0,1,\ldots,T-1\), the player will repeat the following operations: Sampling:For each arm \((k,j)\), the algorithm pulls it with probability \[Z^{(t)}(k,j)=Y^{(t)}(k)\cdot X_{k}^{(t)}(j).\] The arm pulled at this round is denoted by \(A_{t}=(k_{t},j_{t})\). Our algorithm can guarantee that \(Z^{(t)}\) is a distribution over all arms. Observing:Observe partial losses \(\ell_{k_{t}}^{(t)}(j)\) for all \(j\in[m_{k_{t}}]\). Estimating:For each arm \((k,j)\), define the unbiased estimator \(\hat{\ell}_{k}^{(t)}(j)=\frac{1[k-k_{t}]}{\Pr[k=k_{t}]}\cdot\ell_{k}^{(t)}(j)\). It is clear that \(\operatorname{\mathbb{E}}\left[\hat{\ell}_{k}^{(t)}(j)\right]=\ell_{k}^{(t)}(j)\). **Updating:** * For each \(k\in[K]\), update \(X_{k}^{(t)}\) in the manner of standard OSMD: \[\nabla\phi_{k}(\overline{X}_{k}^{(t+1)})=\nabla\phi_{k}(X_{k}^{(t)})-\hat{\ell} _{k}^{(t)};\quad X_{k}^{(t+1)}=\operatorname*{arg\,min}_{\mathbf{x}\in\Delta_ {m_{k}-1}}B_{\phi_{k}}(\mathbf{x},\overline{X}_{k}^{(t+1)}),\] where \(\phi_{k}(\mathbf{x})=\eta_{k}^{-1}\sum_{i}x(i)\log x(i)\) is the negative entropy scaled by the learning rate \(\eta_{k}\). 
* Define \(\overline{Y}^{(t)}\) in the way that \[\frac{1}{\sqrt{\overline{Y}^{(t+1)}}\left(k\right)}=\frac{1}{\sqrt{Y^{(t)} \left(k\right)}}+\sum_{j\in[m_{k}]}\frac{\eta}{\eta_{k}}X_{k}^{(t)}(j)\left(1- \exp\left(-\eta_{k}\cdot\hat{\ell}_{k}^{(t)}\left(j\right)\right)\right), \quad\forall k\in[K]\] (1) where \(\eta\) is the learning rate. Then let \(Y^{(t+1)}\) be the projection of \(\overline{Y}^{(t+1)}\) on \(\Delta_{K-1}\): \[Y^{(t+1)}=\operatorname*{arg\,min}_{\mathbf{y}\in\Delta_{K-1}}B_{\psi}( \mathbf{y},\overline{Y}^{(t+1)}),\] where \(\psi(\mathbf{y})=-2\sum_{i}\sqrt{y(i)}\) for any \(\mathbf{y}=(y(1),\ldots,y(K))\in\mathbb{R}^{K}\), referred to as Tsallis entropy in literature. Note that when \(x\) is small, \(1-\exp\left(-x\right)\approx x\). So when \(\eta_{k}\) is small (and it is so), the updating rule is approximately \[\frac{1}{\sqrt{\overline{Y}^{(t+1)}}\left(k\right)}=\frac{1}{\sqrt{\overline{ Y}^{(t)}\left(k\right)}}+\eta\sum_{j\in[m_{k}]}X_{k}^{(t)}\left(j\right) \cdot\hat{\ell}_{k}^{(t)}(j),\quad\forall k\in[K],\] which is equivalent to \[\nabla\psi(\overline{Y}^{(t+1)})=\nabla\psi(\overline{Y}^{(t)})-\eta\cdot \widehat{L}^{(t)},\] where \(\widehat{L}^{(t)}=(\widehat{L}^{(t)}\left(1\right),\ldots,\widehat{L}^{(t)} \left(K\right))\in\mathbb{R}^{K}\) satisfying \(\widehat{L}^{(t)}(k)=\sum_{j\in[m_{k}]}X_{k}^{(t)}(j)\cdot\hat{\ell}_{k}^{(t) }(j)\). One can think of \(\widehat{L}^{(t)}\left(k\right)\) as the "average loss" of the arms in the \(k\)-th group at round \(t\). Nevertheless, we use rule (1) in the algorithm since it is convenient for our analysis later. In the realization of Algorithm 1, we will choose \(\eta=\frac{1}{\sqrt{T}}\) and \(\eta_{k}=\frac{\log(m_{k}+1)}{\sqrt{T\sum_{k=1}^{K}\log\left(m_{k}+1\right)}}\). ### Analysis We prove the following theorem, which implies Theorem 1. **Theorem 8**.: _For every \(T>0\) and every loss sequence \(\ell^{(0)},\ldots,\ell^{(T-1)}\in[0,1]^{N}\), the regret of Algorithm 1 satisfies_ \[R(T)\leq O\left(\sqrt{T\sum_{k=1}^{K}\log(m_{k}+1)}\right).\] Instead of directly bounding the regret of the sequence of the action distributions \(\left\{Z^{(t)}\right\}_{0\leq t\leq T-1}\), we study an auxiliary _piecewise continuous_ process \(\left\{\mathcal{Z}^{(s)}\right\}_{s\in[0,T)}\). We define and bound the _regret_ of \(\left\{\mathcal{Z}^{(s)}\right\}_{s\in[0,T)}\) in Section 3.2.1, and compare it with the regret of \(\left\{Z^{(t)}\right\}_{0\leq t\leq T-1}\) in Section 3.2.2. Finally, we prove Theorem 8 in Section 3.2.3 #### 3.2.1 The piecewise continuous process Assuming notations in Algorithm 1, the process \(\left\{\mathcal{Z}^{(s)}\right\}_{s\in\left[0,T\right)}\) is defined as \[\mathcal{Z}^{(s)}(k,j)=\mathcal{Y}^{(s)}(k)\cdot\mathcal{X}_{k}^{(s)}(j),\quad \forall k\in[K],\,j\in[m_{k}],\] where \(\left\{\mathcal{Y}^{(s)}\right\}_{s\in\left[0,T\right)}\) and \(\left\{\mathcal{X}_{k}^{(s)}\right\}_{s\in\left[0,T\right)}\) for every \(k\in[K]\) are piecewise continuous processes defined in the following way. * For every integer \(t\in\left\{0,1,\ldots,T-1\right\}\), we let \(\mathcal{Y}^{(t)}=Y^{(t)}\) and \(\mathcal{X}_{k}^{(t)}=\mathcal{X}_{k}^{(t)}\) for every \(k\in[K]\). 
* For every integer \(t\in\left\{0,1,\ldots,T-1\right\}\) and every \(k\in[K]\), the trajectory of \(\left\{\mathcal{X}_{k}^{(s)}\right\}_{s\in\left[t,t+1\right)}\) is a continuous path in \(\mathbb{R}^{m_{k}}\) governed by the ordinary differential equation \[\frac{\mathrm{d}\boldsymbol{\nabla}\phi_{k}(\mathcal{X}_{k}^{(s)})}{\mathrm{d}s }=-\widehat{\ell}_{k}^{(t)}.\] (2) * For every integer \(t\in\left\{0,1,\ldots,T-1\right\}\), the trajectory of \(\left\{\mathcal{Y}^{(s)}\right\}_{s\in\left[t,t+1\right)}\) is a continuous path in \(\mathbb{R}^{K}\) governed by the ordinary differential equation \[\frac{\mathrm{d}\boldsymbol{\nabla}\psi(\mathcal{Y}^{(s)})}{\mathrm{d}s}=- \widehat{L}^{(s)},\] (3) where \(\widehat{L}^{(s)}=\left(\widehat{L}^{(s)}\left(1\right),\ldots,\widehat{L}^{( s)}\left(K\right)\right)\in\mathbb{R}^{K}\) satisfies \(\widehat{L}^{(s)}\left(k\right)=\sum_{j\in\left[m_{k}\right]}\mathcal{X}_{k}^ {(s)}(j)\cdot\hat{\ell}_{k}^{(t)}(j)\). Clearly the trajectories of \(\mathcal{Z}^{(s)}\), \(\mathcal{Y}^{(s)}\) and \(\mathcal{X}^{(s)}_{k}\) for every \(k\in[K]\) are piecewise continuous paths in the time interval \(s\in[0,T)\). An important property is that the end of each piece of the trajectories of \(\mathcal{Y}^{(s)}\) and \(\mathcal{X}^{(s)}_{k}\) coincides with its discrete counterpart _before_ performing projection to the probability simplex. Formally, for every \(t\in[T]\) and \(k\in[K]\), define \(\mathcal{X}^{(t)^{-}}_{k}\coloneqq\lim_{s\to t^{-}}\mathcal{X}^{(s)}_{k}\) and \(\mathcal{Y}^{(t)^{-}}\coloneqq\lim_{s\to t^{-}}\mathcal{Y}^{(s)}\). We have the following lemma. **Lemma 9**.: _For every \(t\in[T]\) and \(k\in[K]\), it holds that \(\mathcal{X}^{(t)^{-}}_{k}=\overline{X}^{(t)}_{k}\) and \(\mathcal{Y}^{(t)^{-}}=\overline{Y}^{(t)}\)._ Proof.: To ease the notation, for any fixed \(t\in\{0,1,\ldots,T-1\}\) and fixed \(k\in[K]\), we now prove that \(\mathcal{X}^{(t+1)^{-}}_{k}=\overline{X}^{(t+1)}_{k}\) and \(\mathcal{Y}^{(t+1)^{-}}=\overline{Y}^{(t+1)}\) respectively. In fact, \(\mathcal{X}^{(t+1)^{-}}_{k}=\overline{X}^{(t+1)}_{k}\) immediately follows by integrating both sides of (2) from \(t\) to \(t+1\) and noting that \(\mathcal{X}^{(t)}_{k}=X^{(t)}_{k}\). More efforts are needed to prove the identity for \(\mathcal{Y}^{(t)}\). Recall \(\phi_{k}(\mathbf{x})=\eta_{k}^{-1}\sum_{j}x(j)\log x(j)\) for every \(\mathbf{x}=\left(x(1),\ldots,x(m_{k})\right)\). It follows from (2) that for every \(s\in[t,t+1)\) every \(k\in[K]\) and every \(j\in[m_{k}]\), \[\mathcal{X}^{(s)}_{k}(j)=\mathcal{X}^{(t)}_{k}(j)\cdot\exp\left(-(s-t)\eta_{k }\hat{t}^{(t)}_{k}(j)\right).\] As a result, we know that \[\widehat{L}^{(s)}(k)=\sum_{j\in[m_{k}]}\mathcal{X}^{(t)}_{k}(j)\cdot\exp\left( -(s-t)\eta_{k}\hat{t}^{(t)}_{k}(j)\right)\cdot\hat{t}^{(t)}_{k}(j).\] Integrating (3) from \(t\) to \(s\), plugging in above and noting that \(\mathcal{Y}^{(t)}=Y^{(t)}\), we obtain \[\frac{1}{\sqrt{\mathcal{Y}^{(s)}(k)}}=\frac{1}{\sqrt{Y^{(t)}(k)}}+\frac{\eta }{\eta_{k}}\sum_{j\in[m_{k}]}X^{(t)}_{k}(j)\left(1-\exp\left(-\eta_{k}\cdot(s- t)\cdot\hat{t}^{(t)}_{k}(j)\right)\right),\] which is exactly our rule to define \(\overline{Y}^{(t+1)}\) in Line 9 of Algorithm 1 (take \(s=t+1\)). We define the regret for the piecewise continuous process as follows. 
**Definition 10**.: _The continuous regret contributed by the process \(\left\{\mathcal{Z}^{(s)}\right\}_{s\in[0,T)}\) with respect to a fixed arm \(a\in[N]\) is defined as_ \[\mathcal{R}_{a}(T)\coloneqq\sum_{t=0}^{T-1}\mathrm{E}\left[\int_{t}^{t+1} \langle\mathcal{Z}^{(s)}-\mathbf{e}^{[N]}_{a},\ell^{(t)}\rangle\,\mathrm{d}s \right].\] Then we are ready to bound \(\mathcal{R}_{a}(T)\). Recall that we may write \(\mathbf{e}^{[N]}_{a}\) as \(\mathbf{e}_{a}\) if the information on \(N\) is clear from the context. **Lemma 11**.: _For any time horizon \(T>0\), any loss sequence \(\ell^{(0)},\ell^{(1)},\ldots,\ell^{(T-1)}\in[0,1]^{N}\), and any arm \(a=(k,j)\), it holds that_ \[\mathcal{R}_{a}(T)\leq B_{\psi}(\mathbf{e}^{[K]}_{k},Y^{(0)})+B_{\phi_{k}}( \mathbf{e}^{[m_{k}]}_{j},X^{(0)}_{k}).\] Proof.: Assume \(a=(k,j)\). For every \(t\in\{0,1,\ldots,T-1\}\), we compute the decreasing rate of the Bregman divergence caused by the evolution of \(\mathcal{Y}^{(s)}\) and \(\mathcal{X}^{(s)}_{k}\) respectively. First consider the change of \(B_{\psi}(\mathbf{e}_{k},\mathcal{Y}^{(s)})\) over time: \[\frac{\mathrm{d}}{\mathrm{d}s}B_{\psi}(\mathbf{e}_{k},\mathcal{Y}^{(s)})=\frac {\mathrm{d}}{\mathrm{d}s}\left(\psi(\mathbf{e}_{k})-\psi(\mathcal{Y}^{(s)})- \langle\mathbf{e}_{k}-\mathcal{Y}^{(s)},\boldsymbol{\nabla}\psi(\mathcal{Y}^{(s )})\rangle\right)\] \[\begin{split}&=\langle\frac{\mathrm{d}\mathbf{\nabla}\psi(\mathbf{J}^{(s)})}{ \mathrm{d}s},\mathbf{J}^{(s)}-\mathbf{e}_{k}\rangle\\ &=-\langle\widetilde{L}^{(s)},\mathbf{J}^{(s)}-\mathbf{e}_{k}\rangle. \end{split}\] Integrating above from \(t\) to \(t+1\), we have \[\int_{t}^{t+1}\langle\widetilde{L}^{(s)},\mathbf{J}^{(s)}-\mathbf{e}_{k}\rangle\, \mathrm{d}s=B_{\psi}(\mathbf{e}_{k},\mathbf{J}^{(t)})-B_{\psi}(\mathbf{e}_{k},\mathbf{J}^{(t+1 )^{-}})=B_{\psi}(\mathbf{e}_{k},Y^{(t)})-B_{\psi}(\mathbf{e}_{k},\overline{Y}^{(t+1)}), \tag{4}\] where the last equality follows from Lemma 9. Note that _projection never increases Bregman divergence_; that is, we have \[\begin{split}&\quad B_{\psi}(\mathbf{e}_{k},\overline{Y}^{(t+1)})-B_{ \psi}(\mathbf{e}_{k},Y^{(t+1)})\\ &=\psi(Y^{(t+1)})-\psi(\overline{Y}^{(t+1)})+\langle\mathbf{\nabla} \psi(Y^{(t+1)}),\mathbf{e}_{k}-Y^{(t+1)}\rangle-\langle\mathbf{\nabla}\psi(\overline{ Y}^{(t+1)}),\mathbf{e}_{k}-\overline{Y}^{(t+1)}\rangle\\ &=\underbrace{\psi(Y^{(t+1)})-\psi(\overline{Y}^{(t+1)})-\langle \mathbf{\nabla}\psi(\overline{Y}^{(t+1)}),Y^{(t+1)}-\overline{Y}^{(t+1)}\rangle}_{ A}+\underbrace{\langle\mathbf{\nabla}\psi(\overline{Y}^{(t+1)})-\mathbf{\nabla}\psi(Y^{(t+1)}),Y^{(t+1)}-\mathbf{e}_{k}\rangle}_{B}.\end{split}\] Since \(\psi\) is convex, we have \(A\geq 0\). By the definition of \(Y^{(t+1)}\), \[Y^{(t+1)}=\operatorname*{arg\,min}_{\mathbf{y}\in\Lambda_{K-1}}B_{\psi}( \mathbf{y},\overline{Y}^{(t+1)})=\operatorname*{arg\,min}_{\mathbf{y}\in \Lambda_{K-1}}\psi(\mathbf{y})-\langle\mathbf{y},\mathbf{\nabla}\psi(\overline{Y}^ {(t+1)})\rangle.\] The first-order optimality condition (see Section 26.5 in [18]) implies that \(B\geq 0\). As a result, \(B_{\psi}(\mathbf{e}_{k},\overline{Y}^{(t+1)})\geq B_{\psi}(\mathbf{e}_{k},Y^{(t+1)})\) and it follows from Equation (4) that \[\int_{t}^{t+1}\langle\widetilde{L}^{(s)},\mathbf{J}^{(s)}-\mathbf{e}_{k}\rangle\, \mathrm{d}s\leq B_{\psi}(\mathbf{e}_{k},Y^{(t)})-B_{\psi}(\mathbf{e}_{k},Y^{(t+1)}). \tag{5}\] Then we consider the change of \(B_{\phi_{k}}(\mathbf{e}_{j},\mathcal{X}_{k}^{(s)})\) over time. 
Likewise we have \[\frac{\mathrm{d}}{\mathrm{d}s}B_{\phi_{k}}(\mathbf{e}_{j},\mathcal{X}_{k}^{(s)})= \langle\frac{\mathrm{d}\mathbf{\nabla}\phi_{k}(\mathcal{X}_{k}^{(s)})}{\mathrm{d}s },\mathcal{X}_{k}^{(s)}-\mathbf{e}_{j}\rangle=-\langle\tilde{\ell}_{k}^{(t)}, \mathcal{X}_{k}^{(s)}-\mathbf{e}_{j}\rangle.\] By an argument similar to the one for \(\mathbf{J}^{(s)}\) above, we can obtain \[\int_{t}^{t+1}\langle\tilde{\ell}_{k}^{(t)},\mathcal{X}_{k}^{(s)}-\mathbf{e}_{j} \rangle\,\mathrm{d}s\leq B_{\phi_{k}}(\mathbf{e}_{j},X_{k}^{(t)})-B_{\phi_{k}}(\bm {e}_{j},X_{k}^{(t+1)}). \tag{6}\] On the other hand, we have for every \(s\in[t,t+1)\) and any arm \(a^{*}=(k^{*},j^{*})\), \[\mathbb{E}\left[\langle\mathcal{Z}^{(s)}-\mathbf{e}_{a^{*}},\ell^{(t)}\rangle \right]=\mathbb{E}\left[\langle\mathcal{Z}^{(s)}-\mathbf{e}_{a^{*}},\tilde{\ell}^ {(t)}\rangle\right]=\mathbb{E}\left[\sum_{k\in[K]}\sum_{j\in[m_{k}]}\mathbf{J}^{( s)}(k)\cdot\mathcal{X}_{k}^{(s)}(j)\cdot\tilde{\ell}_{k}^{(t)}(j)-\tilde{ \ell}^{(t)}(a^{*})\right].\] Recall that for every \(k\in[K]\), it holds that \(\widetilde{L}^{(s)}(k)=\sum_{j\in[m_{k}]}\mathcal{X}_{k}^{(s)}(j)\cdot\tilde{ \ell}_{k}^{(t)}(j)\). Rearranging above yields \[\mathbb{E}\left[\langle\mathcal{Z}^{(s)}-\mathbf{e}_{a^{*}},\ell^{(t)}\rangle \right]=\mathbb{E}\left[\sum_{k\in[K]}\mathbf{J}^{(s)}(k)\cdot\widetilde{L}^{(s)}( k)-\tilde{\ell}^{(t)}(a^{*})\right]\] \[=\mathbb{E}\left[\left\langle\mathcal{Y}^{(s)},\widetilde{L}^{(s)} \right\rangle-\hat{\ell}^{(t)}\left(a^{*}\right)\right]\] \[=\mathbb{E}\left[\left\langle\mathcal{Y}^{(s)}-\mathbf{e}_{k^{*}},\widetilde{L}^{(s)}\right\rangle+\widetilde{L}^{(s)}\left(k^{*}\right)-\hat{ \ell}_{k^{*}}^{(t)}\left(j^{*}\right)\right]\] \[=\mathbb{E}\left[\left\langle\mathcal{Y}^{(s)}-\mathbf{e}_{k^{*}},\widetilde{L}^{(s)}\right\rangle\right]+\mathbb{E}\left[\left\langle\mathcal{ X}_{k^{*}}^{(s)}-\mathbf{e}_{j^{*}},\hat{\ell}_{k^{*}}^{(t)}\right\rangle\right].\] Integrating above from \(t\) to \(t+1\) and plugging in Equations (5) and (6), we obtain \[\int_{t}^{t+1}\mathbb{E}\left[\left\langle\mathcal{Z}^{(s)}- \mathbf{e}_{a^{*}},\ell^{(t)}\right\rangle\right]\mathrm{d}s =\int_{t}^{t+1}\mathbb{E}\left[\left\langle\mathcal{Y}^{(s)}- \mathbf{e}_{k^{*}},\widetilde{L}^{(s)}\right\rangle\right]\ \mathrm{d}s+\int_{t}^{t+1}\mathbb{E}\left[\left\langle\mathcal{X}_{k}^{(s)}- \mathbf{e}_{j^{*}},\hat{\ell}_{k^{*}}^{(t)}\right\rangle\right]\mathrm{d}s\] \[\leq B_{\psi}(\mathbf{e}_{k},Y^{(t)})-B_{\psi}(\mathbf{e}_{k},Y^ {(t+1)})+B_{\phi_{k}}(\mathbf{e}_{j},X_{k}^{(t)})-B_{\phi_{k}}(\mathbf{e}_{j}, X_{k}^{(t+1)}).\] Summing above over \(t\) from \(0\) to \(T-1\) finishes the proof. #### 3.2.2 Comparison of \(R_{a}(T)\) and \(\mathcal{R}_{a}(T)\) For any fixed loss sequence \(\ell^{(0)},\ell^{(1)},\ldots,\ell^{(T-1)}\), we bound the difference between the regret \(R_{a}(T)\) of Algorithm 1 and the continuous regret \(\mathcal{R}_{a}(T)\) for any arm \(a\). 
Formally, we establish the following lemma: **Lemma 12**.: \[R_{a}(T)-\mathcal{R}_{a}(T)\leq\frac{1}{2}\sum_{t=0}^{T-1}\mathbb{E}\left[ \sup_{\xi\in\operatorname{Rect}(Y^{(t)},\overline{Y}^{(t+1)})}\|\widetilde{ L}^{(t)}\|_{\boldsymbol{\nabla}^{-2}\psi(\xi)}^{2}+\sum_{k\in[K]}Y^{(t)}\left(k \right)\cdot\sup_{\xi_{k}\in\operatorname{Rect}(X_{k}^{(t)},\overline{X}_{k }^{(t+1)})}\|\hat{\ell}_{k}^{(t)}\|_{\boldsymbol{\nabla}^{-2}\phi_{k}(\xi_{k}) }^{2}\right].\] Proof.: By the definition of the regret, we have \[R_{a}(T) =\mathbb{E}\left[\sum_{t=0}^{T-1}\langle Z^{(t)}-\mathbf{e}_{a}, \hat{\ell}^{(t)}\rangle\right]\] \[=\sum_{t=0}^{T-1}\mathbb{E}\left[\langle Z^{(t)}-\mathbf{e}_{a}, \hat{\ell}^{(t)}\rangle\right]\] \[=\sum_{t=0}^{T-1}\mathbb{E}\left[\int_{t}^{t+1}\langle\mathcal{Z }^{(s)}-\mathbf{e}_{a},\hat{\ell}^{(t)}\rangle\ \mathrm{d}s+\int_{t}^{t+1}\langle Z^{(t)}-\mathcal{Z}^{(s)},\hat{\ell}^{(t)} \rangle\ \mathrm{d}s\right]\] \[=\mathcal{R}_{a}(T)+\sum_{t=0}^{T-1}\mathbb{E}\left[\int_{t}^{t+1 }\langle\mathcal{Z}^{(t)}-\mathcal{Z}^{(s)},\hat{\ell}^{(t)}\rangle\ \mathrm{d}s\right],\] where the first equality holds due to Fubini's theorem. Therefore, we only need to bound the term \(\sum_{t=0}^{T-1}\mathbb{E}\left[\int_{t}^{t+1}\langle Z^{(t)}-\mathcal{Z}^{(s )},\hat{\ell}^{(t)}\rangle\ \mathrm{d}s\right]\). Fix \(t\in\{0,1,\ldots,T-1\}\). We have shown in the proof of Lemma 9 that \[\mathcal{X}_{k}^{(s)}(j)=X_{k}^{(t)}(j)\cdot\exp\left(-(s-t)\eta_{k}\hat{\ell }_{k}^{(t)}(j)\right)\leq X_{k}^{(t)}(j)\] for any \(s\in[t,t+1)\) and any \(j\in[m_{k}]\). Recall that \(\widetilde{L}^{(s)}(k)=\sum_{j\in[m_{k}]}\mathcal{X}_{k}^{(s)}(j)\cdot\hat{\ell }_{k}^{(t)}(j)\) for every \(k\in[K]\). Then by the discussion above, we have \(\widetilde{L}^{(s)}\leq\widetilde{L}^{(t)}\) for any \(s\in[t,t+1)\). As a result, it follows from (3) that for any \(s\in[t,t+1)\), \[\boldsymbol{\nabla}\psi(\mathcal{Y}^{(s)})-\boldsymbol{\nabla}\psi(Y^{(t)})= \int_{t}^{s}-\widetilde{L}^{(w)}\ \mathrm{d}w\geq-(s-t)\cdot\widetilde{L}^{(t)}. \tag{7}\] Recall that for any two vectors \(\mathbf{x},\mathbf{y}\) of the same dimension, \(\mathtt{Rect}(\mathbf{x},\mathbf{y})\) is the rectangle between \(\mathbf{x}\) and \(\mathbf{y}\). Since our \(\psi\) is a _separable function_ (and therefore \(\boldsymbol{\nabla}^{2}\psi\) is diagonal), we can apply the _mean value theorem_ entrywise and obtain \[\boldsymbol{\nabla}\psi(\boldsymbol{\mathcal{Y}}^{(s)})-\boldsymbol{\nabla} \psi(Y^{(t)})=\boldsymbol{\nabla}^{2}\psi(\xi^{(s)})(\boldsymbol{\mathcal{Y} }^{(s)}-Y^{(t)}) \tag{8}\] for some \(\xi^{(s)}\in\mathtt{Rect}(\boldsymbol{\mathcal{Y}}^{(s)},Y^{(t)})\). By our choice of \(\psi\), it holds that \(\boldsymbol{\nabla}^{2}\psi(\xi^{(s)})>0\) for any \(\xi^{(s)}\in\mathtt{Rect}(\boldsymbol{\mathcal{Y}}^{(s)},Y^{(t)})\). Therefore, combining Equations (7) and (8), we have \[\boldsymbol{\mathcal{Y}}^{(s)}\geq Y^{(t)}-(s-t)\cdot\boldsymbol{\nabla}^{-2 }\psi(\xi^{(s)})\cdot\widehat{L}^{(t)}.\] Similar argument yields that \[\mathcal{X}^{(s)}_{k}\geq X^{(t)}_{k}-(s-t)\cdot\boldsymbol{\nabla}^{-2}\phi_ {k}(\zeta^{(s)}_{k})\cdot\hat{\ell}^{(t)}_{k}\] for some \(\zeta^{(s)}_{k}\in\mathtt{Rect}(\mathcal{X}^{(s)}_{k},X^{(t)}_{k})\). 
Therefore for any \(k\in[K]\), \(j\in[m_{k}]\) and any \(s\in[t,t+1)\), we can bound the difference between \(Z^{(t)}(k,j)\) and \(\mathcal{Z}^{(s)}(k,j)\): \[Z^{(t)}(k,j)-\mathcal{Z}^{(s)}(k,j)=Y^{(t)}(k)\cdot X^{(t)}_{k}(j)-\boldsymbol{\mathcal{Y}}^{(s)}(k)\cdot\mathcal{X}^{(s)}_{k}(j)\] \[\leq Y^{(t)}(k)\cdot X^{(t)}_{k}(j)-\left(Y^{(t)}(k)-(s-t)\cdot\left[\boldsymbol{\nabla}^{-2}\psi(\xi^{(s)})\cdot\widetilde{L}^{(t)}\right](k)\right)\cdot\left(X^{(t)}_{k}(j)-(s-t)\cdot\left[\boldsymbol{\nabla}^{-2}\phi_{k}(\zeta^{(s)}_{k})\cdot\hat{\ell}^{(t)}_{k}\right](j)\right)\] \[=-(s-t)^{2}\cdot\left[\boldsymbol{\nabla}^{-2}\psi(\xi^{(s)})\cdot\widetilde{L}^{(t)}\right](k)\cdot\left[\boldsymbol{\nabla}^{-2}\phi_{k}(\zeta^{(s)}_{k})\cdot\hat{\ell}^{(t)}_{k}\right](j)+(s-t)\cdot X^{(t)}_{k}(j)\cdot\left[\boldsymbol{\nabla}^{-2}\psi(\xi^{(s)})\cdot\widetilde{L}^{(t)}\right](k)\] \[\quad+(s-t)\cdot Y^{(t)}(k)\cdot\left[\boldsymbol{\nabla}^{-2}\phi_{k}(\zeta^{(s)}_{k})\cdot\hat{\ell}^{(t)}_{k}\right](j)\] \[\leq(s-t)\cdot X^{(t)}_{k}(j)\cdot\left[\boldsymbol{\nabla}^{-2}\psi(\xi^{(s)})\cdot\widetilde{L}^{(t)}\right](k)+(s-t)\cdot Y^{(t)}(k)\cdot\left[\boldsymbol{\nabla}^{-2}\phi_{k}(\zeta^{(s)}_{k})\cdot\hat{\ell}^{(t)}_{k}\right](j)\] for some \(\xi^{(s)}\in\mathtt{Rect}(\boldsymbol{\mathcal{Y}}^{(s)},Y^{(t)})\) and \(\zeta^{(s)}_{k}\in\mathtt{Rect}(\mathcal{X}^{(s)}_{k},X^{(t)}_{k})\). We are now ready to bound the gap between \(R_{a}(T)\) and \(\mathcal{R}_{a}(T)\): \[R_{a}(T)-\mathcal{R}_{a}(T)=\sum_{t=0}^{T-1}\mathbb{E}\left[\int_{t}^{t+1}\langle Z^{(t)}-\mathcal{Z}^{(s)},\hat{\ell}^{(t)}\rangle\,\mathrm{d}s\right]\] \[\leq\underbrace{\sum_{t=0}^{T-1}\mathbb{E}\left[\int_{t}^{t+1}(s-t)\sum_{k\in[K]}\sum_{j\in[m_{k}]}X^{(t)}_{k}(j)\cdot\hat{\ell}^{(t)}_{k}(j)\cdot\sup_{\xi\in\mathtt{Rect}(Y^{(t)},\overline{Y}^{(t+1)})}\left[\boldsymbol{\nabla}^{-2}\psi(\xi)\cdot\widetilde{L}^{(t)}\right](k)\,\mathrm{d}s\right]}_{(A)}\] \[\quad+\underbrace{\sum_{t=0}^{T-1}\mathbb{E}\left[\int_{t}^{t+1}(s-t)\sum_{k\in[K]}\sum_{j\in[m_{k}]}Y^{(t)}(k)\cdot\hat{\ell}^{(t)}_{k}(j)\cdot\sup_{\zeta_{k}\in\mathtt{Rect}(X^{(t)}_{k},\overline{X}^{(t+1)}_{k})}\left[\boldsymbol{\nabla}^{-2}\phi_{k}(\zeta_{k})\cdot\hat{\ell}^{(t)}_{k}\right](j)\,\mathrm{d}s\right]}_{(B)}.\] Note that in both expressions (A) and (B) above, only the term \((s-t)\) depends on \(s\).
So we can integrate and obtain: \[(A)=\frac{1}{2}\sum_{t=0}^{T-1}\mathbb{E}\left[\sum_{k\in[K]}\sum_{j\in[m_{k}]}X^{(t)}_{k}(j)\cdot\hat{\ell}^{(t)}_{k}(j)\cdot\sup_{\xi\in\mathtt{Rect}(Y^{(t)},\overline{Y}^{(t+1)})}\left[\boldsymbol{\nabla}^{-2}\psi(\xi)\cdot\widetilde{L}^{(t)}\right](k)\right] \tag{9}\] \[=\frac{1}{2}\sum_{t=0}^{T-1}\mathbb{E}\left[\sum_{k\in[K]}\sup_{\xi\in\mathtt{Rect}(Y^{(t)},\overline{Y}^{(t+1)})}\left[\boldsymbol{\nabla}^{-2}\psi(\xi)\cdot\widetilde{L}^{(t)}\right](k)\cdot\left(\sum_{j\in[m_{k}]}X_{k}^{(t)}(j)\cdot\hat{\ell}_{k}^{(t)}(j)\right)\right]\] \[=\frac{1}{2}\sum_{t=0}^{T-1}\mathbb{E}\left[\sum_{k\in[K]}\sup_{\xi\in\mathtt{Rect}(Y^{(t)},\overline{Y}^{(t+1)})}\left[\boldsymbol{\nabla}^{-2}\psi(\xi)\cdot\widetilde{L}^{(t)}\right](k)\cdot\widetilde{L}^{(t)}(k)\right]\] \[=\frac{1}{2}\sum_{t=0}^{T-1}\mathbb{E}\left[\sup_{\xi\in\mathtt{Rect}(Y^{(t)},\overline{Y}^{(t+1)})}\left\|\widetilde{L}^{(t)}\right\|_{\boldsymbol{\nabla}^{-2}\psi(\xi)}^{2}\right].\] Similarly, \[(B)=\frac{1}{2}\sum_{t=0}^{T-1}\mathbb{E}\left[\sum_{k\in[K]}\sum_{j\in[m_{k}]}Y^{(t)}(k)\cdot\hat{\ell}_{k}^{(t)}(j)\cdot\sup_{\zeta_{k}\in\mathtt{Rect}(X_{k}^{(t)},\overline{X}_{k}^{(t+1)})}\left[\boldsymbol{\nabla}^{-2}\phi_{k}(\zeta_{k})\cdot\hat{\ell}_{k}^{(t)}\right](j)\right] \tag{10}\] \[=\frac{1}{2}\sum_{t=0}^{T-1}\mathbb{E}\left[\sum_{k\in[K]}Y^{(t)}(k)\cdot\sup_{\zeta_{k}\in\mathtt{Rect}(X_{k}^{(t)},\overline{X}_{k}^{(t+1)})}\left\|\hat{\ell}_{k}^{(t)}\right\|_{\boldsymbol{\nabla}^{-2}\phi_{k}(\zeta_{k})}^{2}\right].\] Combining Equations (9) and (10), we have \[R_{a}(T)-\mathcal{R}_{a}(T)\leq\frac{1}{2}\sum_{t=0}^{T-1}\mathbb{E}\left[\sup_{\xi\in\mathtt{Rect}(Y^{(t)},\overline{Y}^{(t+1)})}\left\|\widetilde{L}^{(t)}\right\|_{\boldsymbol{\nabla}^{-2}\psi(\xi)}^{2}+\sum_{k\in[K]}Y^{(t)}(k)\cdot\sup_{\zeta_{k}\in\mathtt{Rect}(X_{k}^{(t)},\overline{X}_{k}^{(t+1)})}\left\|\hat{\ell}_{k}^{(t)}\right\|_{\boldsymbol{\nabla}^{-2}\phi_{k}(\zeta_{k})}^{2}\right]. \tag{11}\] If we apply the "regret decomposition theorem" in [10] and use the standard OSMD bound for each stage, we will get the term \[\sup_{\zeta_{k^{*}}\in\mathtt{Rect}(X_{k^{*}}^{(t)},\overline{X}_{k^{*}}^{(t+1)})}\left\|\hat{\ell}_{k^{*}}^{(t)}\right\|_{\boldsymbol{\nabla}^{-2}\phi_{k^{*}}(\zeta_{k^{*}})}^{2}, \tag{12}\] where \(k^{*}\) is the index of the group containing the optimal arm, instead of the term \[\sum_{k\in[K]}Y^{(t)}(k)\cdot\sup_{\zeta_{k}\in\mathtt{Rect}(X_{k}^{(t)},\overline{X}_{k}^{(t+1)})}\left\|\hat{\ell}_{k}^{(t)}\right\|_{\boldsymbol{\nabla}^{-2}\phi_{k}(\zeta_{k})}^{2}\] in eq. (11). The new \(Y^{(t)}(k)\) term is crucial to our optimal regret bound since it cancels a \(Y^{(t)}(k)\) term hidden in the denominator of \(\|\hat{\ell}_{k}^{(t)}\|_{\boldsymbol{\nabla}^{-2}\phi_{k}(\zeta_{k})}^{2}\). This will be clear in Section 3.2.3. #### 3.2.3 The Regret of Algorithm 1 Note that the regret of Algorithm 1 is composed of the two parts in Lemma 11 and Lemma 12. In this section, we will prove Theorem 8 by providing more specific bounds for the terms in these two lemmas. Proof of Theorem 8.: By definition of the Bregman divergence, \[B_{\psi}(\mathbf{e}_{k},Y^{(0)})=\psi(\mathbf{e}_{k})-\psi(Y^{(0)})-\langle\nabla\psi(Y^{(0)}),\mathbf{e}_{k}-Y^{(0)}\rangle.\] Since we initialize \(Y^{(0)}=\arg\min_{b\in\Delta_{K-1}}\psi(b)\), we have \(Y^{(0)}(k)=\frac{1}{K}\) for \(k\in[K]\), and \(\langle\nabla\psi(Y^{(0)}),\mathbf{e}_{k}-Y^{(0)}\rangle\geq 0\) follows from the first-order optimality condition for \(Y^{(0)}\).
Thus \[B_{\psi}(\mathbf{e}_{k},Y^{(0)})\leq\psi(\mathbf{e}_{k})-\psi(Y^{(0)})=\frac{-2+2\sqrt{K}}{\eta}\leq\frac{2\sqrt{K}}{\eta}.\] Similarly we have \(X_{k}^{(0)}(j)=\frac{1}{m_{k}}\) for \(j\in[m_{k}]\) and \[B_{\phi_{k}}(\mathbf{e}_{j},X_{k}^{(0)})\leq\phi_{k}(\mathbf{e}_{j})-\phi_{k}(X_{k}^{(0)})=\frac{\log m_{k}}{\eta_{k}}.\] Therefore \[\mathcal{R}_{a}(T)\leq\frac{2\sqrt{K}}{\eta}+\frac{\log m_{k}}{\eta_{k}}. \tag{13}\] Recall that \(A_{t}=(k_{t},j_{t})\) is the arm pulled by the algorithm at round \(t\). Now we plug our estimator \(\hat{\ell}_{k}^{(t)}(j)=\frac{\mathbb{1}\left[k_{t}=k\right]}{Y^{(t)}(k)}\ell_{k}^{(t)}(j)\) and \(\boldsymbol{\nabla}^{2}\psi(\xi)=\mathrm{diag}\left(\frac{1}{2\eta\xi(1)^{3/2}},\frac{1}{2\eta\xi(2)^{3/2}},\cdots,\frac{1}{2\eta\xi(K)^{3/2}}\right)\) into the first term on the RHS of Lemma 12: \[\mathbb{E}\left[\sup_{\xi\in\mathtt{Rect}\left(Y^{(t)},\overline{Y}^{(t+1)}\right)}\|\widetilde{L}^{(t)}\|_{\boldsymbol{\nabla}^{-2}\psi(\xi)}^{2}\right]=2\eta\,\mathbb{E}\left[\sup_{\xi\in\mathtt{Rect}\left(Y^{(t)},\overline{Y}^{(t+1)}\right)}\sum_{k\in[K]}\xi(k)^{3/2}\cdot\left(\frac{\mathbb{1}\left[k_{t}=k\right]}{Y^{(t)}(k)}\sum_{j\in[m_{k}]}\ell_{k}^{(t)}(j)X_{k}^{(t)}(j)\right)^{2}\right]\] \[\overset{(a)}{\leq}2\eta\,\mathbb{E}\left[\sum_{k\in[K]}\left(Y^{(t)}(k)\right)^{3/2}\cdot\left(\frac{\mathbb{1}\left[k_{t}=k\right]}{Y^{(t)}(k)}\sum_{j\in[m_{k}]}\ell_{k}^{(t)}(j)X_{k}^{(t)}(j)\right)^{2}\right]\] \[\overset{(b)}{\leq}2\eta\,\mathbb{E}\left[\mathbb{E}\left[\sum_{k\in[K]}\left(Y^{(t)}(k)\right)^{3/2}\cdot\frac{\mathbb{1}\left[k_{t}=k\right]}{\left(Y^{(t)}(k)\right)^{2}}\ \Bigg{|}\ Y^{(t)}\right]\right]=2\eta\,\mathbb{E}\left[\sum_{k\in[K]}\sqrt{Y^{(t)}(k)}\right]\] \[=2\eta\sum_{k=1}^{K}\mathbb{E}\left[\sqrt{Y^{(t)}(k)}\right]\overset{(c)}{\leq}2\eta\sum_{k=1}^{K}\sqrt{\mathbb{E}\left[Y^{(t)}(k)\right]}\leq 2\eta\sqrt{K}.\] In the calculation above: \((a)\) follows from \(\overline{Y}^{(t+1)}(k)\leq Y^{(t)}(k)\); \((b)\) is due to \(\sum_{j\in[m_{k}]}\ell_{k}^{(t)}(j)X_{k}^{(t)}(j)\in[0,1]\) together with the tower property (conditionally on \(Y^{(t)}\), \(\mathbb{E}\left[\mathbb{1}\left[k_{t}=k\right]\right]=Y^{(t)}(k)\)); \((c)\) is due to Jensen's inequality; and the last step is the Cauchy-Schwarz inequality. Similarly we have for the second term, with \(\boldsymbol{\nabla}^{2}\phi_{k}(\zeta_{k})=\mathrm{diag}\left(\frac{1}{\eta_{k}\zeta_{k}(1)},\frac{1}{\eta_{k}\zeta_{k}(2)},\cdots,\frac{1}{\eta_{k}\zeta_{k}(m_{k})}\right)\), \[\mathbb{E}\left[\sum_{k\in[K]}Y^{(t)}(k)\cdot\sup_{\zeta_{k}\in\mathtt{Rect}\left(X_{k}^{(t)},\overline{X}_{k}^{(t+1)}\right)}\|\hat{\ell}_{k}^{(t)}\|_{\boldsymbol{\nabla}^{-2}\phi_{k}(\zeta_{k})}^{2}\right]=\mathbb{E}\left[\sum_{k\in[K]}\eta_{k}Y^{(t)}(k)\cdot\sup_{\zeta_{k}\in\mathtt{Rect}\left(X_{k}^{(t)},\overline{X}_{k}^{(t+1)}\right)}\sum_{j\in[m_{k}]}\zeta_{k}(j)\cdot\left(\frac{\mathbb{1}\left[k_{t}=k\right]}{Y^{(t)}(k)}\ell_{k}^{(t)}(j)\right)^{2}\right]\] \[\overset{(d)}{\leq}\mathbb{E}\left[\sum_{k\in[K]}\eta_{k}Y^{(t)}(k)\cdot\sum_{j\in[m_{k}]}X_{k}^{(t)}(j)\cdot\left(\frac{\mathbb{1}\left[k_{t}=k\right]}{Y^{(t)}(k)}\ell_{k}^{(t)}(j)\right)^{2}\right]\] \[\overset{(e)}{\leq}\mathbb{E}\left[\mathbb{E}\left[\sum_{k\in[K]}\eta_{k}Y^{(t)}(k)\cdot\sum_{j\in[m_{k}]}X_{k}^{(t)}(j)\cdot\frac{\mathbb{1}\left[k_{t}=k\right]}{\left(Y^{(t)}(k)\right)^{2}}\ \Bigg{|}\ Y^{(t)},X^{(t)}\right]\right]=\mathbb{E}\left[\sum_{k\in[K]}\eta_{k}\sum_{j\in[m_{k}]}X_{k}^{(t)}(j)\right]=\sum_{k\in[K]}\eta_{k}.\] In the calculation above: \((d)\) follows from \(\overline{X}_{k}^{(t+1)}(j)\leq X_{k}^{(t)}(j)\), and \((e)\) is due to \(\ell_{k}^{(t)}(j)\in[0,1]\) together with the tower property. Hence, summing up the above two terms over \(t\) from \(0\) to \(T-1\), we obtain \[R_{a}(T)-\mathcal{R}_{a}(T)\leq\eta\sqrt{K}T+\frac{1}{2}T\sum_{k\in[K]}\eta_{k}.
\tag{14}\] Combining Equations (13) and (14) and choosing \(\eta=\frac{1}{\sqrt{T}}\) and \(\eta_{k}=\frac{\log(m_{k}+1)}{\sqrt{T\sum_{k=1}^{K}\log(m_{k}+1)}}\), we obtain for any fixed arm \(a\), \[R_{a}(T)\leq\frac{2\sqrt{K}}{\eta}+\frac{\log m_{k}}{\eta_{k}}+\frac{T}{2}\sum_{k\in[K]}\eta_{k}+\eta T\sqrt{K}\leq O\left(\sqrt{T\sum_{k=1}^{K}\log(m_{k}+1)}\right).\] ### A Reduction from BAI to MAB In this section, we prove an upper bound of \(O\left(\sum_{k=1}^{K}\frac{\log(m_{k}+1)}{\varepsilon^{2}}\right)\) for \(\mathbf{m}\)-BAI. We achieve this by constructing a PAC algorithm for \(\mathbf{m}\)-BAI from an algorithm for \(\mathbf{m}\)-MAB through the following lemma. Let \(r(T,\vec{L})\) be a real-valued function with the time horizon \(T\) and loss sequence \(\vec{L}=\left(\ell^{(1)},\ldots,\ell^{(T)}\right)\) as its input. Let \(\mathcal{H}\) be a BAI instance. With fixed \(T>0\), we use \(\operatorname{\mathbb{E}}_{\mathcal{H}}\left[r(T,\vec{L})\right]\) to denote the expectation of \(r(T,\vec{L})\) where \(\ell^{(t)}\) in \(\vec{L}\) is drawn from \(\mathcal{H}\) independently for every \(t\in[T]\). Below, with a slight abuse of notation, \(\mathcal{H}\) will also denote a set of BAI instances; the meaning will be clear from the context. **Lemma 13**.: _Let \(\mathcal{A}\) be an algorithm for \(\mathbf{m}\)-MAB with regret \(R_{a^{*}}(T,\mathcal{A},\vec{L})\leq r(T,\vec{L})\) for every time horizon \(T\) and every loss sequence \(\vec{L}\). Then there exists an \((\varepsilon,0.05)\)-PAC algorithm \(\mathcal{A}^{\prime}\) for \(\mathbf{m}\)-BAI that terminates in \(T^{*}\) rounds where \(T^{*}\) is the solution of the equation_ \[T^{*}=\frac{2500\cdot\max_{\vec{L}}r(T^{*},\vec{L})}{\varepsilon}.\] _Moreover, if we only care about identifying an \(\varepsilon\)-optimal arm with probability \(0.95\) when the input is chosen from a known family \(\mathcal{H}\), we can construct an algorithm solving this problem that terminates in \(T^{*}_{\mathcal{H}}\) rounds where \(T^{*}_{\mathcal{H}}\) is the solution of the equation_ \[T^{*}_{\mathcal{H}}=\frac{2500\cdot\max_{\mathcal{H}^{\prime}\in\mathcal{H}}\operatorname{\mathbb{E}}_{\mathcal{H}^{\prime}}\left[r(T^{*}_{\mathcal{H}},\vec{L})\right]}{\varepsilon}.\] Proof.: Given an instance \(\mathcal{H}\) of \(\mathbf{m}\)-BAI, we run \(\mathcal{A}\) for \(T^{*}\) rounds. Let \(T_{i}\) be the number of times that the arm \(i\) has been pulled, i.e., \(T_{i}=\sum_{t=0}^{T^{*}-1}\mathbb{1}[A_{t}=i]\). Let \(\overline{Z}=\left(\overline{Z}_{1},\overline{Z}_{2},\ldots,\overline{Z}_{N}\right)=\left(\frac{T_{1}}{T^{*}},\frac{T_{2}}{T^{*}},\ldots,\frac{T_{N}}{T^{*}}\right)\) be a distribution on the \(N\) arms. We construct \(\mathcal{A}^{\prime}\) by simply sampling from \(\overline{Z}\) and outputting the result. Recall that \(p_{i}\) is the mean of the \(i\)-th arm in \(\mathcal{H}\) and arm \(a^{*}\) is the one with the minimum mean. Define the gap vector \(\Delta=(p_{1}-p_{a^{*}},\cdots,p_{N}-p_{a^{*}})\). Note that \(\overline{Z}\) is a random vector; define the conditional expected regret \(R(\overline{Z})=\langle\Delta,\overline{Z}\rangle\cdot T^{*}\) given \(\overline{Z}\). Thus the expected regret satisfies \(\operatorname{\mathbb{E}}_{\overline{Z}}\left[R(\overline{Z})\right]\leq\max_{\vec{L}}r(T^{*},\vec{L})\). By Markov's inequality, \(R(\overline{Z})\leq 100\max_{\vec{L}}r(T^{*},\vec{L})\) with probability at least \(0.99\). Now we only consider \(\overline{Z}\) conditioned on \(R(\overline{Z})\leq 100\max_{\vec{L}}r(T^{*},\vec{L})\).
Let \(B\subseteq[N]\) denote the "bad set" which contains arms that are not \(\varepsilon\)-optimal. Then \(\varepsilon T^{*}\sum_{i\in B}\overline{Z}_{i}\leq 100\max_{\vec{L}}r(T^{*},\vec{L})\). Note that \(T^{*}=\frac{2500\cdot\max_{\vec{L}}r(T^{*},\vec{L})}{\varepsilon}\). Therefore \(\sum_{i\in B}\overline{Z}_{i}\leq 0.04\). In total, this algorithm will make a mistake with probability no more than \(0.05\) by the union bound. When we only care about the input instances chosen from \(\mathcal{H}\), we run \(\mathcal{A}\) for \(T^{*}_{\mathcal{H}}\) rounds and similarly, we output an arm drawn from \(\left(\frac{T_{1}}{T_{\mathcal{H}}},\frac{T_{2}}{T_{\mathcal{H}}},\ldots,\frac {T_{N}}{T_{\mathcal{H}}}\right)\). It is easy to verify via the same arguments that this algorithm can output an \(\varepsilon\)-optimal arm with probability \(0.95\) when the input is chosen from \(\mathcal{H}\). Then we can use the Algorithm 1 and Theorem 8 to give an upper bound for \(\mathbf{m}\)-BAI. Proof of Theorem 2.: We use Algorithm 1 to construct an \((\varepsilon,0.05)\)-PAC algorithm for \(\mathbf{m}\)-BAI as described in Lemma 13. Since the regret satisfies \(R(T)\leq c\sqrt{T\sum_{k=1}^{K}\log(1+m_{k})}\) for some constant \(c\) on every loss sequence by Theorem 1, running Algorithm 1 with \(T^{*}=\frac{(2500c)^{2}\sum_{k=1}^{K}\log(1+m_{k})}{\varepsilon^{2}}\), we can get an \((\varepsilon,0.05)\)-PAC algorithm which always terminates in \(O\left(\sum_{k=1}^{K}\frac{\log(m_{k}+1)}{\varepsilon^{2}}\right)\) rounds. ### The Strongly Observable Graph with Self-loops We can generalize our results to any strongly observable graph \(G=(V,E)\) with each vertex owning a self-loop. Assume \(G\) contains a \((V_{1},\ldots,V_{K})\)-clique cover. We construct a new graph \(G^{\prime}=(V,E^{\prime})\) by ignoring the edges between any two distinct cliques. It is clear that \(R^{*}(G,T)\leq R^{*}(G^{\prime},T)\). Then we can prove Corollary 5 by directly applying Algorithm 1 with feedback graph \(G^{\prime}\). This proves Corollary 5, which asserts that \[R^{*}(G,T)=O\left(\sqrt{T\cdot\sum_{k=1}^{K}\log(m_{k}+1)}\right).\] Although we assume that each vertex contains a self-loop for the sake of simplicity, we note that our algorithm can still be applied to strongly observable graphs that have some vertices without self-loops. In such cases, we can incorporate an additional exploration term into our algorithm, and a similar analysis to that in Section 3.2 still works. There have been several works using the clique cover as the parameter to bound the minimax regret of graph bandit. For example, [1] applies FTRL algorithm with a carefully designed potential function which combines the Tsallis entropy with negative entropy. It achieves a regret of \((\log T)^{O(1)}\cdot O\left(\sqrt{KT}\right)\). Our new bound takes into account the size of each clique and is always superior. ## 4 Lower Bounds for \(\mathbf{m}\)-Bai Let \(\mathcal{A}\) be an algorithm for \(\mathbf{m}\)-BAI where \(\mathbf{m}=(m_{1},\ldots,m_{K})\) is a vector. Given an instance of \(\mathbf{m}\)-BAI, we use \(T\) to denote the number of rounds the algorithm \(\mathcal{A}\) proceeds. Recall that for every group \(k\in[K]\) and \(j\in[m_{k}]\), we use \(T_{(k,j)}\) to denote the number of times that the arm \((k,j)\) has been pulled. For every \(k\in[K]\), let \(T^{(k)}=\sum_{j\in[m_{k}]}T_{(k,j)}\) be the number of rounds the arms in the \(k\)-th group have been pulled. 
We also use \(N_{(k,j)}\) to denote the number of times the arm \((k,j)\) has been observed. Clearly \(N_{(k,j)}=T^{(k)}\). In the following part, we only consider the stochastic environment. That is, \(\ell^{(t)}\) is independently drawn from the same distribution for each \(t\in\mathbb{N}\). Therefore, we omit the superscript \((t)\) and only use \(\ell(i)\) or \(\ell_{k}(j)\) to denote the one-round loss of arm \(i\) or arm \((k,j)\) respectively when the information is clear from the context. In Section 4.1, we lower bound the number of rounds for a PAC algorithm on a specific \(\mathbf{m}\)-BAI instance with \(\mathbf{m}=(m)\) and then prove the result for general \(\mathbf{m}\)-BAI in Section 4.2. We then use these results to prove a regret lower bound for \(\mathbf{m}\)-MAB and bandit problems with general feedback graphs in Section 5. ### An Instance-Specific Lower Bound for \((m)\)-BAI In this section, we study the number of rounds required by an \((\varepsilon,0.05)\)-PAC algorithm for \((m)\)-BAI. In this setting, the pull of any arm observes the losses of all arms. We will establish a lower bound for a specific instance, namely the one where all arms follow \(\mathtt{Ber}(\frac{1}{2})\). This is key to our lower bounds later. We focus on instances of \((m)\)-BAI where each arm is Bernoulli. As a result, each instance can be specified by a vector \((p_{1},\ldots,p_{m-1},p_{m})\in\mathbb{R}^{m}\), meaning that the loss of arm \(i\) follows \(\mathtt{Ber}(p_{i})\) in each round _independently_. Let \(\varepsilon\in\left(0,\frac{1}{2}\right)\). In the following context, when we denote an instance as \(\mathcal{H}^{\mathbf{m}}\), the superscript \(\mathbf{m}\) indicates that it is an \(\mathbf{m}\)-BAI instance. Consider the following \(m+1\) \((m)\)-BAI instances \(\left\{\mathcal{H}^{(m)}_{j}\right\}_{j\in[m]\cup\left\{0\right\}}\): * The instance \(\mathcal{H}^{(m)}_{0}\) is \(\left(\frac{1}{2},\frac{1}{2},\frac{1}{2},\cdots,\frac{1}{2}\right)\). That is, \(p_{i}=\frac{1}{2}\) for every \(i\in[m]\) in \(\mathcal{H}^{(m)}_{0}\); * For \(j\in[m]\), \[\mathcal{H}^{(m)}_{j}=\Big{(}\frac{1}{2},\frac{1}{2},\cdots,\frac{1}{2},\underbrace{\frac{1}{2}-\varepsilon}_{\text{the }j\text{-th arm}},\frac{1}{2},\cdots,\frac{1}{2}\Big{)};\] that is, the instance satisfies \(p_{j}=\frac{1}{2}-\varepsilon\) and \(p_{i}=\frac{1}{2}\) for every \(i\neq j\). We say an algorithm \(\mathcal{A}\) distinguishes \(\left\{\mathcal{H}^{(m)}_{j}\right\}_{j\in[m]\cup\left\{0\right\}}\) with probability \(p\) if \[\Pr\left[\mathcal{A}\text{ outputs }j\ \Big{|}\text{ the input instance is }\mathcal{H}^{(m)}_{j}\right]\geq p,\] and the output can be arbitrary among \(\left\{0,1,\ldots,m\right\}\) when the input is not in \(\left\{\mathcal{H}^{(m)}_{j}\right\}_{j\in[m]\cup\left\{0\right\}}\). The main result of this section is **Lemma 14**.: _Let \(\mathcal{A}\) be an \((\varepsilon,0.05)\)-PAC algorithm. Assume \(m\geq 2\). There exists a universal constant \(c_{1}>0\) such that \(\mathcal{A}\) terminates on \(\mathcal{H}^{(m)}_{0}\) after at least \(\frac{c_{1}}{\varepsilon^{2}}\log(m+1)\) rounds in expectation._ We will prove the lemma in Section 4.1.2 via a reduction from a lower bound for _Gaussian arms_ established in Section 4.1.1. #### 4.1.1 The Gaussian Arms In this section, we relax the constraint on the range of each arm's loss and allow the losses to be arbitrary real numbers. Let \(\varepsilon\in\left(0,\frac{1}{2}\right)\) and \(\sigma\in\left(\frac{1}{2\sqrt{2\pi}},\frac{1}{\sqrt{2\pi}}\right)\).
We construct \(m+1\) instances \(\left\{\mathcal{N}_{j}\right\}_{j\in\left\{0\right\}\cup[m]}\) with Gaussian distributions: * In the instance \(\mathcal{N}_{0}\), for each \(i\in[m]\), \(\ell(i)\) is independently drawn from a Gaussian distribution \(\mathcal{N}(0,\sigma^{2})\); * In the instance \(\mathcal{N}_{j}\) for \(j\in[m]\), \(\ell(j)\sim\mathcal{N}(-\varepsilon,\sigma^{2})\) and \(\ell(i)\sim\mathcal{N}(0,\sigma^{2})\) for each \(i\neq j\) and \(i\in[m]\) independently. **Lemma 15** (Bretagnolle-Huber inequality, see e.g. [15]).: _Let \(\mathbf{P}_{1}\) and \(\mathbf{P}_{2}\) be two probability measures on the same measurable space \((\Omega,\mathcal{F})\), and let \(E\in\mathcal{F}\) be an arbitrary event. Then_ \[\mathbf{P}_{1}[E]+\mathbf{P}_{2}[\overline{E}]\geq\frac{1}{2}e^{-D_{\mathrm{KL}}\left(\mathbf{P}_{1},\mathbf{P}_{2}\right)}.\] Let \(\mathcal{N}_{\text{mix}}\) be the mixture of \(\left\{\mathcal{N}_{j}\right\}_{j\in[m]}\), meaning that the environment chooses \(k\) from \([m]\) uniformly at random and generates losses according to \(\mathcal{N}_{k}\) in the following BAI game. Let \(\mathcal{A}\) be an algorithm distinguishing \(\left\{\mathcal{N}_{j}\right\}_{j\in[m]\cup\{0\}}\). Let \(\Omega\) be the set of all possible outcomes during the first \(t^{*}\) rounds, including the samples according to the input distribution and the output of \(\mathcal{A}\) (if \(\mathcal{A}\) does not terminate after the \(t^{*}\)-th round, we assume its output is \(-1\)). Note that if the algorithm terminates in \(t^{\prime}<t^{*}\) rounds, we can always add \(t^{*}-t^{\prime}\) virtual rounds so that it still produces a certain loss sequence in \(\mathbb{R}^{m\times t^{*}}\). As a result, each outcome \(\omega\in\Omega\) can be viewed as a pair \(\omega=(w,x)\) where \(w\in\mathbb{R}^{m\times t^{*}}\) is the loss sequence and \(x\in\left\{-1,0,1,\ldots,m\right\}\) indicates the output of \(\mathcal{A}\). Thus \(\Omega=W\times\left\{-1,0,1,\ldots,m\right\}\) where \(W=\mathbb{R}^{m\times t^{*}}\). To ease the proof below, we slightly change \(\mathcal{A}\)'s output: if the original output is \(x\in\left\{-1,0,\ldots,m\right\}\), we instead output a uniform real in \(\left[x,x+1\right)\). Therefore, we can let \(\Omega=W\times X\) where \(W=\mathbb{R}^{m\times t^{*}}\) and \(X=\mathbb{R}\). The benefit of doing so is that we can let \(\mathcal{F}\) be the Borel sets in \(\Omega\), which is convenient to work with. Clearly it is sufficient to establish lower bounds for the algorithms after the change. For any instance \(\mathcal{H}^{(m)}\), let \(\mathbf{P}_{\mathcal{H}^{(m)}}\) be the measure of outcomes of \(\mathcal{A}\) in \(t^{*}\) rounds with input instance \(\mathcal{H}^{(m)}\) and \(\mathbf{p}_{\mathcal{H}^{(m)}}\) be the corresponding probability density function (PDF). Then \(\mathbf{P}_{\mathcal{N}_{0}}\) and \(\mathbf{P}_{\mathcal{N}_{\text{mix}}}\) are two probability measures on \((\Omega,\mathcal{F})\) and \(\mathbf{p}_{\mathcal{N}_{\text{mix}}}(\omega)=\frac{1}{m}\sum_{j\in[m]}\mathbf{p}_{\mathcal{N}_{j}}(\omega)\) for any \(\omega=(w,x)\in\Omega=\mathbb{R}^{mt^{*}+1}\). We also let \(\mathbf{p}_{\mathcal{H}^{(m)}}^{W}\) be the PDF of the samples during the first \(t^{*}\) rounds according to the input \(\mathcal{H}^{(m)}\) and \(\mathbf{p}_{\mathcal{H}^{(m)}}^{X}\) be the PDF of \(\mathcal{A}\)'s output. Furthermore, we let \(\mathbf{p}_{\mathcal{H}^{(m)}}^{X|W}\) denote the conditional density function of \(X\) given \(W\).
By definition, we have \(\mathbf{p}_{\mathcal{H}^{(m)}}^{X|W}(x|w)=\frac{\mathbf{p}_{\mathcal{H}^{(m)}}(\omega)}{\mathbf{p}_{\mathcal{H}^{(m)}}^{W}(w)}\). **Lemma 16**.: \[D_{\mathrm{KL}}\left(\mathbf{P}_{\mathcal{N}_{\text{mix}}},\mathbf{P}_{\mathcal{N}_{0}}\right)\leq\log\frac{m-1+\exp\left(\frac{\varepsilon^{2}t^{*}}{\sigma^{2}}\right)}{m}.\] Proof.: For any \(\omega=(w,x)\in\Omega\), let \(w_{j,t}\) denote the \((j,t)^{\text{th}}\) entry of the matrix \(w\) for every \(j\in[m]\) and \(t\in[t^{*}]\). That is, \(w_{j,t}=\ell^{(t)}(j)\), which is the loss of arm \(j\) in the \(t\)-th round. Then for each \(i\in[m]\), \[\mathbf{p}_{\mathcal{N}_{i}}^{W}(w)=\left(2\pi\sigma^{2}\right)^{-\frac{mt^{*}}{2}}\exp\left(-\frac{\sum_{t\in[t^{*}]}\left((w_{i,t}+\varepsilon)^{2}+\sum_{j\neq i}w_{j,t}^{2}\right)}{2\sigma^{2}}\right)\] and \[\mathbf{p}_{\mathcal{N}_{0}}^{W}(w)=\left(2\pi\sigma^{2}\right)^{-\frac{mt^{*}}{2}}\exp\left(-\frac{\sum_{t\in[t^{*}],j\in[m]}w_{j,t}^{2}}{2\sigma^{2}}\right).\] Therefore we have \[\frac{\mathbf{p}_{\mathcal{N}_{i}}(\omega)}{\mathbf{p}_{\mathcal{N}_{0}}(\omega)}=\frac{\mathbf{p}_{\mathcal{N}_{i}}^{W}(w)}{\mathbf{p}_{\mathcal{N}_{0}}^{W}(w)}=\frac{\left(2\pi\sigma^{2}\right)^{-\frac{mt^{*}}{2}}\exp\left(-\frac{\sum_{t\in[t^{*}]}\left((w_{i,t}+\varepsilon)^{2}+\sum_{j\neq i}w_{j,t}^{2}\right)}{2\sigma^{2}}\right)}{\left(2\pi\sigma^{2}\right)^{-\frac{mt^{*}}{2}}\exp\left(-\frac{\sum_{t\in[t^{*}],j\in[m]}w_{j,t}^{2}}{2\sigma^{2}}\right)}=\exp\left(-\frac{\varepsilon^{2}t^{*}+2\varepsilon\sum_{t\in[t^{*}]}w_{i,t}}{2\sigma^{2}}\right).\] From Jensen's inequality, we have \[D_{\mathrm{KL}}\left(\mathbf{P}_{\mathcal{N}_{\text{mix}}},\mathbf{P}_{\mathcal{N}_{0}}\right)=\int_{\Omega}\log\frac{\mathbf{p}_{\mathcal{N}_{\text{mix}}}(\omega)}{\mathbf{p}_{\mathcal{N}_{0}}(\omega)}\,\mathrm{d}\mathbf{P}_{\mathcal{N}_{\text{mix}}}(\omega)\leq\log\int_{\Omega}\frac{\mathbf{p}_{\mathcal{N}_{\text{mix}}}(\omega)}{\mathbf{p}_{\mathcal{N}_{0}}(\omega)}\,\mathrm{d}\mathbf{P}_{\mathcal{N}_{\text{mix}}}(\omega)=\log\int_{\Omega}\frac{1}{m}\sum_{j\in[m]}\mathbf{p}_{\mathcal{N}_{j}}(\omega)\cdot\frac{\frac{1}{m}\sum_{i\in[m]}\mathbf{p}_{\mathcal{N}_{i}}(\omega)}{\mathbf{p}_{\mathcal{N}_{0}}(\omega)}\,\mathrm{d}\omega.\] Note that for \(\omega=(w,x)\) and \(i,j\in[m]\) with \(i\neq j\), \[\int_{\Omega}\mathbf{p}_{\mathcal{N}_{i}}(\omega)\frac{\mathbf{p}_{\mathcal{N}_{j}}(\omega)}{\mathbf{p}_{\mathcal{N}_{0}}(\omega)}\,\mathrm{d}\omega=\int_{W}\int_{X}\mathbf{p}_{\mathcal{N}_{i}}^{W}(w)\cdot\mathbf{p}_{\mathcal{N}_{i}}^{X|W}(x|w)\frac{\mathbf{p}_{\mathcal{N}_{j}}^{W}(w)}{\mathbf{p}_{\mathcal{N}_{0}}^{W}(w)}\,\mathrm{d}x\,\mathrm{d}w=\int_{W}\mathbf{p}_{\mathcal{N}_{i}}^{W}(w)\frac{\mathbf{p}_{\mathcal{N}_{j}}^{W}(w)}{\mathbf{p}_{\mathcal{N}_{0}}^{W}(w)}\,\mathrm{d}w\] \[=\left(2\pi\sigma^{2}\right)^{-\frac{mt^{*}}{2}}\cdot\int_{W}\exp\left(-\frac{\sum_{t\in[t^{*}]}\left((w_{i,t}+\varepsilon)^{2}+(w_{j,t}+\varepsilon)^{2}+\sum_{j^{\prime}\notin\{i,j\}}w_{j^{\prime},t}^{2}\right)}{2\sigma^{2}}\right)\,\mathrm{d}w=1.\] For \(i\in[m]\), \[\int_{\Omega}\mathbf{p}_{\mathcal{N}_{i}}(\omega)\frac{\mathbf{p}_{\mathcal{N}_{i}}(\omega)}{\mathbf{p}_{\mathcal{N}_{0}}(\omega)}\,\mathrm{d}\omega=\int_{W}\int_{X}\mathbf{p}_{\mathcal{N}_{i}}^{W}(w)\cdot\mathbf{p}_{\mathcal{N}_{i}}^{X|W}(x|w)\frac{\mathbf{p}_{\mathcal{N}_{i}}^{W}(w)}{\mathbf{p}_{\mathcal{N}_{0}}^{W}(w)}\,\mathrm{d}x\,\mathrm{d}w=\int_{W}\mathbf{p}_{\mathcal{N}_{i}}^{W}(w)\frac{\mathbf{p}_{\mathcal{N}_{i}}^{W}(w)}{\mathbf{p}_{\mathcal{N}_{0}}^{W}(w)}\,\mathrm{d}w\] \[=\left(2\pi\sigma^{2}\right)^{-\frac{mt^{*}}{2}}\cdot\int_{W}\exp\left(-\frac{\sum_{t\in[t^{*}]}\left((w_{i,t}+2\varepsilon)^{2}+\sum_{j^{\prime}\neq i}w_{j^{\prime},t}^{2}\right)-2\varepsilon^{2}t^{*}}{2\sigma^{2}}\right)\,\mathrm{d}w=\exp\left(\frac{\varepsilon^{2}t^{*}}{\sigma^{2}}\right).\] Therefore, combining the equations above, we get \[\int_{\Omega}\frac{1}{m}\sum_{j\in[m]}\mathbf{p}_{\mathcal{N}_{j}}(\omega)\cdot\frac{\frac{1}{m}\sum_{i\in[m]}\mathbf{p}_{\mathcal{N}_{i}}(\omega)}{\mathbf{p}_{\mathcal{N}_{0}}(\omega)}\,\mathrm{d}\omega=\frac{1}{m^{2}}\sum_{i,j\in[m]}\int_{\Omega}\mathbf{p}_{\mathcal{N}_{i}}(\omega)\frac{\mathbf{p}_{\mathcal{N}_{j}}(\omega)}{\mathbf{p}_{\mathcal{N}_{0}}(\omega)}\,\mathrm{d}\omega=\frac{m(m-1)+m\cdot\exp\left(\frac{\varepsilon^{2}t^{*}}{\sigma^{2}}\right)}{m^{2}}=\frac{m-1+\exp\left(\frac{\varepsilon^{2}t^{*}}{\sigma^{2}}\right)}{m},\] where the first equality follows from Fubini's theorem. This indicates that \(D_{\mathrm{KL}}\left(\mathbf{P}_{\mathcal{N}_{\text{mix}}},\mathbf{P}_{\mathcal{N}_{0}}\right)\leq\log\frac{m-1+\exp\left(\frac{\varepsilon^{2}t^{*}}{\sigma^{2}}\right)}{m}\). Let \(t^{*}=\frac{c_{0}\log(m+1)}{\varepsilon^{2}}\), where \(c_{0}\leq\sigma^{2}\) is a universal constant. We have the following lemma to bound \(\Pr_{\mathcal{N}_{0}}\left[T\geq t^{*}\right]\). Here the randomness comes from the algorithm and the environment when the input instance is \(\mathcal{N}_{0}\). **Lemma 17**.: _For any algorithm distinguishing \(\left\{\mathcal{N}_{j}\right\}_{j\in[m]\cup\left\{0\right\}}\) with probability \(0.925\), we have \(\Pr_{\mathcal{N}_{0}}\left[T\geq t^{*}\right]\geq 0.1\)._ Proof.: Let \(\mathcal{A}\) be an algorithm that can distinguish \(\left\{\mathcal{N}_{j}\right\}_{j\in[m]\cup\left\{0\right\}}\) with probability \(0.925\). Let \(E\) be the event that \(\mathcal{A}\) terminates within \(t^{*}\) rounds and gives answer \(\mathcal{N}_{0}\). Recall that \(T\) is a random variable which represents the number of rounds that \(\mathcal{A}\) runs. Assume \(\Pr_{\mathcal{N}_{0}}\left[T\geq t^{*}\right]<0.1\). Then we have \(\Pr_{\mathcal{N}_{0}}\left[\overline{E}\right]<0.075+0.1\) from the union bound. Combining Lemma 15 and Lemma 16, we get \[\Pr_{\mathcal{N}_{\text{mix}}}\left[E\right]\geq\frac{m}{2\left(m-1+\exp\left(\frac{\varepsilon^{2}t^{*}}{\sigma^{2}}\right)\right)}-\Pr_{\mathcal{N}_{0}}\left[\overline{E}\right]>\frac{m}{2\left(m-1+m+1\right)}-0.1-0.075\geq 0.075\] for every \(m\geq 1\). This indicates the existence of some \(j\in[m]\) such that \(\Pr_{\mathcal{N}_{j}}\left[E\right]>0.075\), which is in contradiction to the promised success probability of \(\mathcal{A}\). Therefore \(\mathcal{A}\) satisfies \[\Pr_{\mathcal{N}_{0}}\left[T\geq t^{*}\right]\geq 0.1.\] #### 4.1.2 From Gaussian to Bernoulli We then show a reduction from Gaussian arms to Bernoulli arms which implies lower bounds for the instances \(\left\{\mathcal{H}_{j}^{(m)}\right\}_{j\in[m]\cup\{0\}}\). Given an input instance from \(\left\{\mathcal{N}_{j}\right\}_{j\in[m]\cup\{0\}}\), we can map it to a corresponding instance among \(\left\{\mathcal{H}_{j}^{(m)}\right\}_{j\in[m]\cup\{0\}}\) by the following rule. In each round, if an arm receives a loss \(\ell\in\mathbb{R}\), let \[\widehat{\ell}=\begin{cases}0,&\text{if}\quad\ell<0;\\ 1,&\text{if}\quad\ell\geq 0.\end{cases} \tag{15}\] Obviously, losses drawn from the Gaussian distribution \(\mathcal{N}(0,\sigma^{2})\) are mapped to \(\text{Ber}\left(\frac{1}{2}\right)\) losses.
For a biased Gaussian \(\mathcal{N}\left(-\varepsilon,\sigma^{2}\right)\), as Figure 1 shows, it holds that \[\Pr\left[\ell<0\right]=\int_{-\infty}^{-\varepsilon}\frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{(x+\varepsilon)^{2}}{2\sigma^{2}}}\,\mathrm{d}x+\int_{-\varepsilon}^{0}\frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{(x+\varepsilon)^{2}}{2\sigma^{2}}}\,\mathrm{d}x=\frac{1}{2}+\int_{-\varepsilon}^{0}\frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{(x+\varepsilon)^{2}}{2\sigma^{2}}}\,\mathrm{d}x\,.\] Let \(f(\sigma)=\int_{-\varepsilon}^{0}\frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{(x+\varepsilon)^{2}}{2\sigma^{2}}}\,\mathrm{d}x\) denote the shaded area in Figure 1. Note that \(f\) is continuous with regard to \(\sigma\) and \[f(\sigma)\in\left(\frac{\varepsilon}{\sqrt{2\pi}\sigma}e^{-\frac{\varepsilon^{2}}{2\sigma^{2}}},\frac{\varepsilon}{\sqrt{2\pi}\sigma}\right).\] Assume that \(\varepsilon<\frac{1}{8}\). Then there exists \(\sigma_{0}\in\left(\frac{1}{2\sqrt{2\pi}},\frac{1}{\sqrt{2\pi}}\right)\) such that \(f(\sigma_{0})=\varepsilon\). Choose \(\sigma=\sigma_{0}\). Then we map \(\mathcal{N}(-\varepsilon,\sigma^{2})\) to \(\text{Ber}\left(\frac{1}{2}-\varepsilon\right)\) and transform the sample space from \(\mathbb{R}^{m\times t^{*}}\) to \(\{0,1\}^{m\times t^{*}}\).

Figure 1: From Gaussian to Bernoulli

**Lemma 18**.: _Let \(\varepsilon\) be a number in \(\left(0,\frac{1}{8}\right)\). For any algorithm distinguishing \(\left\{\mathcal{H}_{j}^{(m)}\right\}_{j\in[m]\cup\left\{0\right\}}\) with probability \(0.925\), we have \(\Pr_{\mathcal{H}_{0}^{(m)}}\left[T\geq t^{*}\right]\geq 0.1\)._ Proof.: Assume that there exists such an algorithm \(\mathcal{A}\) with \(\Pr_{\mathcal{H}_{0}^{(m)}}\left[T\geq t^{*}\right]<0.1\). We then construct an algorithm \(\mathcal{A}^{\prime}\) to distinguish \(\left\{\mathcal{N}_{j}\right\}_{j\in[m]\cup\left\{0\right\}}\). The algorithm \(\mathcal{A}^{\prime}\) proceeds as follows: when \(\mathcal{A}^{\prime}\) receives a loss \(\ell\), it first calculates \(\widehat{\ell}\) as in Equation (15) and feeds \(\widehat{\ell}\) to \(\mathcal{A}\) as the loss. If \(\mathcal{A}\) outputs \(\mathcal{H}_{j}^{(m)}\), \(\mathcal{A}^{\prime}\) outputs \(\mathcal{N}_{j}\). Therefore, \(\mathcal{A}^{\prime}\) also succeeds with probability \(0.925\) while satisfying \(\Pr_{\mathcal{N}_{0}}\left[T\geq t^{*}\right]<0.1\). This violates Lemma 17. We remark that we cannot replace \(\mathcal{H}_{0}^{(m)}\) by \(\mathcal{H}_{j}^{(m)}\) for any \(j\in[m]\) in Lemma 18, since an "\(\mathcal{H}_{j}^{(m)}\) favourite" algorithm exists for every \(j\in[m]\). For example, an "\(\mathcal{H}_{1}^{(m)}\) favourite" algorithm is as follows: one first samples the arms for \(\frac{2\log\frac{1}{6\log 3}}{\varepsilon^{2}}\) rounds. If the empirical mean \(\widehat{p}_{1}<\frac{1}{2}-\frac{\varepsilon}{2}\), terminate and output \(\mathcal{H}_{1}^{(m)}\). Otherwise apply an algorithm which can distinguish \(\left\{\mathcal{H}_{j}^{(m)}\right\}_{j\in[m]\cup\left\{0\right\}}\) with probability \(0.96\). By Hoeffding's inequality, the error probability in the first stage is at most \(0.03\). Therefore, this "\(\mathcal{H}_{1}^{(m)}\) favourite" algorithm has success probability \(0.925\) and, with high probability, it only needs to play \(\frac{2\log\frac{1}{6\log 3}}{\varepsilon^{2}}\) rounds when the input instance is \(\mathcal{H}_{1}^{(m)}\). Then we are ready to prove Lemma 14, which is a direct corollary of Lemma 19 below; a small numerical sketch of the map (15) is given first.
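For readers who wish to experiment, the following is a minimal numerical sketch (our illustration, not part of the proof) of the Gaussian-to-Bernoulli map (15). It locates \(\sigma_{0}\) with \(f(\sigma_{0})=\varepsilon\) by bisection, using the identity \(f(\sigma)=\Phi(\varepsilon/\sigma)-\frac{1}{2}\) with \(\Phi\) the standard normal CDF, and then checks by Monte Carlo that thresholded \(\mathcal{N}(-\varepsilon,\sigma_{0}^{2})\) samples behave like \(\mathtt{Ber}(\frac{1}{2}-\varepsilon)\) losses.

```python
# Sketch of the thresholding reduction (15); our illustration, not the paper's code.
import math, random

def f(sigma, eps):
    # f(sigma) = Phi(eps / sigma) - 1/2, the shaded area in Figure 1
    return 0.5 * math.erf(eps / (sigma * math.sqrt(2.0)))

def find_sigma0(eps):
    # f is decreasing in sigma on (1/(2*sqrt(2*pi)), 1/sqrt(2*pi)); bisect for f = eps
    lo, hi = 1.0 / (2.0 * math.sqrt(2.0 * math.pi)), 1.0 / math.sqrt(2.0 * math.pi)
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if f(mid, eps) < eps else (mid, hi)
    return 0.5 * (lo + hi)

eps = 0.1
sigma0 = find_sigma0(eps)
n = 200_000
# hat_ell = 1 iff the Gaussian loss is >= 0; its empirical mean should be 1/2 - eps
emp = sum(1 for _ in range(n) if random.gauss(-eps, sigma0) >= 0) / n
print(sigma0, emp, 0.5 - eps)
```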
**Lemma 19**.: _Let \(\varepsilon\) be a number in \(\left(0,\frac{1}{8}\right)\) and assume \(m\geq 2\). There exists a constant \(c_{1}>0\) such that for any algorithm \(\mathcal{A}\) which can output an \(\varepsilon\)-optimal arm on any instance among \(\left\{\mathcal{H}_{j}^{(m)}\right\}_{j\in[m]\cup\left\{0\right\}}\) with probability at least \(0.95\), we have \(\operatorname{\mathbb{E}}_{\mathcal{H}_{0}^{(m)}}\left[T\right]\geq\frac{c_{1 }\log(m+1)}{\varepsilon^{2}}\)._ Proof.: We first consider the case \(c_{0}\log(m+1)>4\log 40\) where \(c_{0}\) is the universal constant in the definition of \(t^{*}\). We reduce from the hypothesis testing lower bound in Lemma 18. Assume \(\mathcal{A}\) satisfying \(\Pr_{\mathcal{H}_{0}^{(m)}}\left[T\geq\frac{c_{0}\log(m+1)}{2\varepsilon^{2 }}\right]<0.1\). Then we construct an algorithm \(\mathcal{A}^{\prime}\) to distinguish \(\left\{\mathcal{H}_{j}^{(m)}\right\}_{j\in[m]\cup\left\{0\right\}}\). Given an instance among \(\left\{\mathcal{H}_{j}^{(m)}\right\}_{j\in[m]\cup\left\{0\right\}}\), we first apply \(\mathcal{A}\) to get an output arm \(i\). Then we sample \(\frac{2\log\frac{1}{6\log 3}}{\varepsilon^{2}}\) rounds and check whether the empirical mean \(\widehat{p}_{i}\leq\frac{1}{2}-\frac{\varepsilon}{2}\). If so, output \(\mathcal{H}_{i}^{(m)}\). Otherwise, output \(\mathcal{H}_{0}^{(m)}\). The success probability of at least \(0.925\) is guaranteed by Hoeffding's inequality and the union bound. According to our assumption, with probability larger than \(0.9\), \(\mathcal{A}^{\prime}\) terminates in \(\frac{c_{0}\log(m+1)}{2\varepsilon^{2}}+\frac{2\log\frac{1}{6\log 3}}{\varepsilon^{2}}<\frac{c_{0}\log(m+1)}{ \varepsilon^{2}}\) rounds. This violates Lemma 18. Then we consider the case \(c_{0}\log(m+1)\leq 4\log 40\); that is, when \(m\) is bounded by some constant. It then follows from Lemma 24 that \(\mathcal{A}\) satisfies \(\Pr_{\mathcal{H}_{0}^{(m)}}\left[T\geq\frac{c_{s}}{\varepsilon^{2}}\right]\geq 0.1\) for a universal constant \(c_{s}\) when \(m\geq 2\). Then choosing \(c_{1}=\min\left\{\frac{c_{0}}{20},\frac{c_{s}}{10\log(m_{0}+1)}\right\}\) where \(m_{0}=\lfloor e^{\frac{4\log 40}{c_{0}}}-1\rfloor\), we have \(\operatorname{\mathbb{E}}_{\mathcal{H}_{0}^{(m)}}\left[T\right]\geq\frac{c_{1 }\log(m+1)}{\varepsilon^{2}}\) for any algorithms that can output an \(\varepsilon\)-optimal arm on any instance among \(\left\{\mathcal{H}_{j}^{(m)}\right\}_{j\in[m]\cup\left\{0\right\}}\) with probability at least \(0.95\) when \(m\geq 2\). ### The Lower Bound for m-Bai Recall that in \(\mathbf{m-BAI}\), the \(N\) arms are partitioned into \(K\) groups with size \(m_{1},m_{2},\ldots,m_{K}\) respectively. Each pull of an arm results in an observation of all the arms in its group. Consider an \(\mathbf{m-BAI}\) instance \(\mathcal{H}_{0}^{\mathbf{m}}\) which consists of all fair coins. Recall that we use \(T^{(k)}\) to denote the number of rounds in which the pulled arm belongs to the \(k\)-th group. We then prove the following lemma, which indicates the result of Theorem 3 directly. **Lemma 20**.: _Let \(\varepsilon\) be a number in \(\left(0,\frac{1}{8}\right)\). 
For every \((\varepsilon,0.05)\)-PAC algorithm of \(\mathbf{m}\)-\(\mathsf{BAI}\), we have \(\mathbf{E}_{\mathcal{H}_{0}^{\mathbf{m}}}\left[T^{(k)}\right]\geq\frac{c_{1} \log\left(m_{k}+1\right)}{\varepsilon^{2}}\) for every \(k\in[K]\) with \(m_{k}\geq 2\) and \(\mathbf{E}_{\mathcal{H}_{0}^{\mathbf{m}}}\left[T\right]\geq\sum_{k=1}^{K} \frac{c_{1}\log\left(m_{k}+1\right)}{2\varepsilon^{2}}\) if the total number of arms \(\sum_{k=1}^{K}m_{k}\geq 2\), where \(c_{1}\) is the constant in Lemma 19._ _Moreover, these lower bounds still hold even the algorithm can identify the \(\varepsilon\)-optimal arm with probability \(0.95\) only when the input arms have losses drawn from either \(\mathsf{Ber}\left(\frac{1}{2}\right)\) or \(\mathsf{Ber}\left(\frac{1}{2}-\varepsilon\right)\)._ Proof.: We only prove the latter case which is stronger. Let \(\mathcal{H}\) be the set of all \(\mathbf{m}\)-\(\mathsf{BAI}\) instances where the input arms have losses drawn from either \(\mathsf{Ber}\left(\frac{1}{2}\right)\) or \(\mathsf{Ber}\left(\frac{1}{2}-\varepsilon\right)\). Let \(\mathcal{A}\) be an algorithm that identifies the \(\varepsilon\)-optimal arm with probability \(0.95\) when the input instance is in \(\mathcal{H}\). Assume \(\mathcal{A}\) satisfies \(\mathbf{E}_{\mathcal{H}_{0}^{\mathbf{m}}}\left[T^{(k)}\right]<\frac{c_{1} \log\left(m_{k}+1\right)}{\varepsilon^{2}}\) for some \(k\in[K]\). In the following, we construct an algorithm \(\mathcal{A}^{\prime}\) to find an \(\varepsilon\)-optimal arm given instances in \(\left\{\mathcal{H}_{j}^{(m_{k})}\right\}_{j\in[m]\cup\left\{0\right\}}\). Given any \((m_{k})\)-\(\mathsf{BAI}\) instance \(\mathcal{H}^{(m_{k})}\in\left\{\mathcal{H}_{j}^{(m_{k})}\right\}_{j\in[m]\cup \left\{0\right\}}\), we construct an \(\mathbf{m}\)-\(\mathsf{BAI}\) instance: set \(\mathcal{H}^{(m_{k})}\) to be the \(k\)-th group and all remaining arms are fair ones. Then we apply \(\mathcal{A}\) on this instance. The output of \(\mathcal{A}^{\prime}\) is as follows: \[\text{Output of }\mathcal{A}^{\prime}=\left\{\begin{array}{ll}\text{arm }j,&\text{if the output of }\mathcal{A}\text{ is arm }(k,j);\\ \text{an arbitrary arm,}&\text{otherwise.}\end{array}\right.\] Clearly, the correct probability of \(\mathcal{A}^{\prime}\) is at least \(0.95\). However, \(\mathcal{A}^{\prime}\) satisfies \(\mathbf{E}_{\mathcal{H}_{0}^{(m_{k})}}\left[T\right]<\frac{c_{1}\log\left(m_{ k}+1\right)}{\varepsilon^{2}}\), which violates Lemma 19. Therefore, we have \(\mathbf{E}_{\mathcal{H}_{0}^{\mathbf{m}}}\left[T^{(k)}\right]\geq\frac{c_{1} \log\left(m_{k}+1\right)}{\varepsilon^{2}}\) for every \(k\in[K]\) with \(m_{k}\geq 2\) and thus have proved \(\mathbf{E}_{\mathcal{H}_{0}^{\mathbf{m}}}\left[T\right]\geq\sum_{k=1}^{K} \frac{c_{1}\log\left(m_{k}+1\right)}{\varepsilon^{2}}\) as long as each \(m_{k}\geq 2\). For those groups of size one, we can pair and merge them so that each group contains at least two arms (in case there are odd number of singleton groups, we merge the remaining one to any other groups). Notice that this operation only makes the problem easier (since one can observe more arms in each round) and only affects the lower bound by a factor of at most \(2\). Therefore, we still have \[\mathbf{E}_{\mathcal{H}_{0}^{\mathbf{m}}}\left[T\right]\geq\sum_{k=1}^{K} \frac{c_{1}\log\left(m_{k}+1\right)}{2\varepsilon^{2}}.\] ## 5 Regret Lower Bounds In this section we prove lower bounds for minimax regrets in various settings. 
All lower bounds for regrets in this section are based on the lower bounds for \(\mathbf{m}\)-\(\mathsf{BAI}\) established in Section 4. ### Regret Lower Bound for \(\mathbf{m}\)-\(\mathsf{MAB}\) Let us fix \(\mathbf{m}=(m_{1},\ldots,m_{K})\). We then derive a regret lower bound for \(\mathbf{m}\)-\(\mathsf{MAB}\) and thus prove Theorem 4. Let \(T\) be the time horizon and \(c_{1}\) be the constant in Lemma 19. Consider a set of \(\mathbf{m}\)-\(\mathsf{BAI}\) instances where each arm has losses drawn from either \(\mathsf{Ber}\left(\frac{1}{2}\right)\) or \(\mathsf{Ber}\left(\frac{1}{2}-\varepsilon\right)\), where \(\varepsilon=\sqrt{\frac{c_{1}\sum_{k=1}^{K}\log\left(m_{k}+1\right)}{8T}}\). Denote this set by \(\mathcal{H}\). **Lemma 21**.: _For any algorithm \(\mathcal{A}\) of \((m_{1},\ldots,m_{K})\)-\(\mathsf{MAB}\) and any sufficiently large \(T>0\), there exists \(\mathcal{H}\in\mathcal{H}\) such that the expected regret of \(\mathcal{A}\) satisfies_ \[\operatorname{\mathbb{E}}_{\mathcal{H}}\left[R(T)\right]\geq c^{\prime}\cdot\sqrt{T\cdot\sum_{k=1}^{K}\log(m_{k}+1)},\] _where \(c^{\prime}>0\) is a universal constant. Here the expectation is taken over the randomness of the losses, which are drawn from \(\mathcal{H}\) independently in each round._ Proof.: Assume \(\mathcal{A}\) satisfies \[\operatorname{\mathbb{E}}_{\mathcal{H}}\left[R(T)\right]<\frac{\sqrt{T\cdot\frac{1}{2}\sum_{k=1}^{K}c_{1}\log(m_{k}+1)}}{5000}\] for every \(\mathcal{H}\in\mathcal{H}\), where \(c_{1}\) is the constant in Lemma 19. Lemma 13 shows that \(\mathcal{A}\) implies an algorithm that identifies an \(\varepsilon\)-optimal arm for the \(\mathbf{m}\)-\(\mathsf{BAI}\) instances in \(\mathcal{H}\) with probability \(0.95\) and terminates in \(c_{1}\cdot\frac{\sum_{k=1}^{K}\log(m_{k}+1)}{8\varepsilon^{2}}\) rounds. We can assume \(\varepsilon<\frac{1}{8}\) since \(T\) is sufficiently large. However, according to Lemma 20, any such algorithm needs at least \(\frac{c_{1}\sum_{k=1}^{K}\log(m_{k}+1)}{2\varepsilon^{2}}\) rounds on some instance in \(\mathcal{H}\). This is a contradiction and thus indicates a regret lower bound of \(\Omega\left(\sqrt{T\cdot\sum_{k=1}^{K}\log(m_{k}+1)}\right)\). Theorem 4 is a direct corollary of Lemma 21. ### Regret Lower Bounds for Strongly Observable Graphs Let \(G=(V,E)\) be a strongly observable graph with a self-loop on each vertex. Let \(N=|V|\). Assume that there exist \(K\) _disjoint_ sets \(S_{1},\ldots,S_{K}\subseteq V\) such that there is no edge between \(S_{i}\) and \(S_{j}\) for any \(i\neq j\). For every \(k\in[K]\), let \(m_{k}=|S_{k}|\). Let \(S=\bigcup_{k\in[K]}S_{k}\). Proof of Theorem 6.: We present a reduction from \(\mathbf{m}\)-\(\mathsf{MAB}\) to bandit with feedback graph \(G\) where \(\mathbf{m}=(m_{1},\ldots,m_{K})\). Let \(\mathcal{A}\) be an algorithm for bandit with feedback graph \(G\). Consider a set of instances where the loss of each arm is drawn from either \(\operatorname{\mathsf{Ber}}\left(\frac{1}{2}\right)\) or \(\operatorname{\mathsf{Ber}}\left(\frac{1}{2}-\varepsilon\right)\), where \(\varepsilon=\sqrt{\frac{c_{1}\sum_{k=1}^{K}\log(m_{k}+1)}{8T}}\) (here \(c_{1}\) is the constant in Lemma 19). Denote this set by \(\mathcal{H}\). When we say the input of \(\mathsf{MAB}\) is an instance in \(\mathcal{H}\), we mean that the loss sequence is drawn from this instance independently in each round. Then we design an algorithm \(\mathcal{A}^{\prime}\) for \(\mathbf{m}\)-\(\mathsf{MAB}\) to deal with instances in \(\mathcal{H}\) as follows.
For an \(\mathbf{m}\)-\(\mathsf{MAB}\) instance \(\mathcal{H}^{\mathsf{m}}\) in \(\mathcal{H}\), we construct a bandit instance with feedback graph \(G\): the losses of arms in \(S_{k}\) correspond to the losses of arms in the \(k\)-th group of \(\mathcal{H}^{\mathsf{m}}\) in the \(\mathbf{m}\)-\(\mathsf{MAB}\) game and the losses of arms in \(V\setminus S\) are always equal to \(1\). The algorithm \(\mathcal{A}^{\prime}\) actually makes decisions according to \(\mathcal{A}\). If \(\mathcal{A}\) pulls an arm in \(S\), \(\mathcal{A}^{\prime}\) pulls the corresponding arm in the \(\mathbf{m}\)-\(\mathsf{MAB}\) game. Otherwise, when \(\mathcal{A}\) requests to pull an arm \(A_{t}\in V\setminus S\), we replace this action by letting \(\mathcal{A}^{\prime}\) pull the first arm in each group once and then feed the information that \(A_{t}\) should have observed back to \(\mathcal{A}\) (Note that all arms outside \(S\) have fixed loss \(1\)). We force \(\mathcal{A}^{\prime}\) to terminate after pulling exactly \(T\) arms. Note that \(\varepsilon\ll\frac{1}{K}\) since \(T\) is sufficiently large. If we use \(R(T)\) and \(R^{\prime}(T)\) to denote the regret of \(\mathcal{A}\) and \(\mathcal{A}^{\prime}\) respectively, then by our choice of \(\varepsilon\), we have \[\operatorname{\mathsf{E}}\left[R(T)\right]\geq\operatorname{\mathsf{E}}\left[R^ {\prime}(T)\right]\] where the expectation is taken over the randomness of loss sequences specified above. Lemma 21 shows that there exists \(\mathcal{H}\in\mathcal{H}\) such that \[\mathbb{E}_{\mathcal{H}}\left[R^{\prime}(T)\right]\geq c^{\prime}\sqrt{T\cdot \sum_{k=1}^{K}\log(m_{k}+1)}\] Therefore, there exist some loss sequences on which \(\mathcal{A}\) needs to suffer a regret of \(\Omega\left(\sqrt{T\cdot\sum_{k=1}^{K}\log(m_{k}+1)}\right)\). **Remark**.: _Although we assume each vertex has a self-loop in Theorem 6, it is easy to verify that this result also holds for strongly observable graphs which contain some vertices without self-loops, as long as we can find legal \(\{S_{k}\}_{k\in[K]}\). For example, for the loopless clique, we can also apply Theorem 6 with \(K=1\) and \(S_{1}=V\). It gives a minimax regret lower bound of \(\Omega\left(\sqrt{T\log N}\right)\), which matches the previous best upper bound in [1]._ Theorem 6 gives a general regret lower bound for bandit with arbitrary feedback graphs. Intuitively, it allows us to partition the graph and consider the hardness of each single part respectively. For example, consider the graph shown in Figure 2: The feedback graph is the disjoint union of \(K_{1}\) cliques and \(K_{2}=K-K_{1}\) cycles where each clique contains \(m_{1}\) vertices and each cycle contains \(m_{2}\) vertices. Note that the clique cover of this graph contains \(K_{1}\) cliques of size \(m_{1}\) and \(\lceil\frac{K_{2}m_{2}}{2}\rceil\) cliques of constant size. According to Theorem 8, our Algorithm 1 gives a regret upper bound of \(O\left(\sqrt{T\left(K_{1}\log m_{1}+K_{2}m_{2}\right)}\right)\), which matches the lower bound given in Theorem 6. The previous best lower bound ([1]) on this feedback graph is \(\Omega\left(\sqrt{\left(K_{1}+K_{2}m_{2}\right)T}\right)\). When \(K_{1}\) and \(m_{1}\) are large, our result wins by a factor of \(\Theta\left(\sqrt{\log m_{1}}\right)\). ### Regret Lower Bounds for Weakly Observable Graphs Let \(G=(V,E)\) be a weakly observable graph. 
Assume that \(V\) can be partitioned into \(K\) disjoint sets \(V=V_{1}\cup V_{2}\cup\cdots\cup V_{K}\) and each \(G[V_{k}]\) contains a \(t_{k}\)-packing independent set \(S_{k}\) such that every vertex in \(S_{k}\) does not have a self-loop. Assume there are no edges from \(V_{j}\) to \(S_{i}\) for any \(i\neq j\). Let \(m_{k}=|S_{k}|\) and \(S=\bigcup_{k\in[K]}S_{k}\). Without loss of generality, we assume in the following proof that each \(m_{k}\geq 2\). When there exists some \(m_{k}=1\), we can pair and merge such sets into new sets of size at least \(2\) (in case there is an odd number of singleton sets, we merge the remaining one into any other set). This merging process only affects the result by at most a constant factor. Let \(\mathbf{m}=(m_{1},\ldots,m_{K})\). Our proof idea is to embed a certain \(\mathbf{m}^{\prime}\)-BAI instance in \(G\) so that the lower bound follows from the lower bound of \(\mathbf{m}^{\prime}\)-BAI.

Figure 2: A Feedback Graph Example

Proof of Theorem 7.: Let \[\xi_{k}=\max\left\{c_{1}\log(m_{k}+1),\frac{c_{2}m_{k}}{t_{k}}\right\}\] for every \(k\in[K]\), where \(c_{1}>0\) is the constant in Lemma 20 and \(c_{2}=\frac{c_{1}\log 3}{4}\). Assume there exists an algorithm \(\mathcal{A}\) such that \[R(T)<\frac{1}{2\cdot 1250^{\frac{2}{3}}}\left(\sum_{k=1}^{K}\xi_{k}\right)^{\frac{1}{3}}\cdot T^{\frac{2}{3}} \tag{16}\] for every loss sequence. We will construct an \(\mathbf{m}^{\prime}\)-BAI game for some \(\mathbf{m}^{\prime}=\left(m_{1}^{\prime},m_{2}^{\prime},\ldots,m_{K^{\prime}}^{\prime}\right)\) and reduce this BAI game to the bandit problem with feedback graph \(G\). The vector \(\mathbf{m}^{\prime}\) is obtained from \(\mathbf{m}\) in the following way. For every \(k\in[K]\), we distinguish between two cases: * Case 1: if \(c_{1}\log(m_{k}+1)\geq\frac{c_{2}m_{k}}{t_{k}}\), we let the arms in \(S_{k}\) form a group in the \(\mathbf{m}^{\prime}\)-BAI instance; * Case 2: if \(c_{1}\log(m_{k}+1)<\frac{c_{2}m_{k}}{t_{k}}\), we divide \(S_{k}\) into \(\lfloor\frac{m_{k}}{2}\rfloor\) small sets, each of size at least two. Each small set becomes a group in the \(\mathbf{m}^{\prime}\)-BAI instance. In other words, each group in the \(\mathbf{m}^{\prime}\)-BAI instance is either one of the \(S_{k}\) (Case 1) or a subset of a certain \(S_{k}\) (Case 2). Given an \(\mathbf{m}^{\prime}\)-BAI instance and a time horizon \(T>0\), we now define the loss sequence for the bandit problem with feedback graph \(G\): the losses of the arms in \(S\) in each round are sampled independently from the distributions of the corresponding arms in the \(\mathbf{m}^{\prime}\)-BAI instance, and the losses of the arms in \(V\setminus S\) are always equal to \(1\). We then design an algorithm \(\mathcal{A}^{\prime}\) for the \(\mathbf{m}^{\prime}\)-BAI game by simulating \(\mathcal{A}\) on this graph bandit problem. If \(\mathcal{A}\) pulls an arm in \(V\setminus S\) and observes arms in \(S_{k}\), we again consider two cases: * Case 1: if \(c_{1}\log(m_{k}+1)\geq\frac{c_{2}m_{k}}{t_{k}}\), we let \(\mathcal{A}^{\prime}\) pull an arbitrary arm in the corresponding group of the \(\mathbf{m}^{\prime}\)-BAI instance; * Case 2: if \(c_{1}\log(m_{k}+1)<\frac{c_{2}m_{k}}{t_{k}}\), for each arm in \(S_{k}\) that will be observed, \(\mathcal{A}^{\prime}\) pulls the corresponding arm in the \(\mathbf{m}^{\prime}\)-BAI instance once. Otherwise, if \(\mathcal{A}\) pulls an arm in \(S\), \(\mathcal{A}^{\prime}\) does nothing and just skips this round.
Note that \(\mathcal{A}^{\prime}\) can always observe more information about the feedback of the arms in \(S\) than \(\mathcal{A}\). So \(\mathcal{A}^{\prime}\) can well simulate \(\mathcal{A}\) just by feeding the information it observed to \(\mathcal{A}\) and making decisions according to the behavior of \(\mathcal{A}\) as described above. Let \(T_{i}\) be the number of times that arm \(i\) has been pulled by \(\mathcal{A}\). At the end of the game, \(\mathcal{A}^{\prime}\) samples an arm in \(V\) according to the distribution \(\left(\frac{T_{1}}{T},\frac{T_{2}}{T},\ldots,\frac{T_{N}}{T}\right)\). If the sampled arm is in \(V\setminus S\), \(\mathcal{A}^{\prime}\) outputs a random arm. Otherwise \(\mathcal{A}^{\prime}\) outputs the sampled arm. Choose \(\varepsilon=1250^{\frac{1}{3}}\left(\frac{\sum_{k=1}^{K}\xi_{k}}{T}\right)^{\frac{1}{3}}\). We can verify that \(\mathcal{A}^{\prime}\) is an \((\varepsilon,0.05)\)-PAC algorithm through an argument similar to the one in our proof of Lemma 13. Let \(T^{(k)}\) be the number of times that the arms in group \(k\) have been pulled by \(\mathcal{A}^{\prime}\) in the \(\mathbf{m}^{\prime}\)-BAI game. According to Lemma 20, for each \(k\in[K^{\prime}]\), \[\operatorname{\mathbb{E}}_{\mathcal{H}_{0}^{\mathbf{m}^{\prime}}}\left[T^{(k)}\right]\geq\frac{c_{1}\log\left(m_{k}^{\prime}+1\right)}{\varepsilon^{2}},\] where \(\mathcal{H}^{\mathbf{m}^{\prime}}_{0}\) is the \(\mathbf{m}^{\prime}\)-BAI instance with all fair coins. Let \(\mathcal{I}_{0}\) denote the graph bandit instance constructed from the above rules based on \(\mathcal{H}^{\mathbf{m}^{\prime}}_{0}\). Recall that one pull of \(\mathcal{A}\) corresponds to at most \(t_{k}\) pulls of \(\mathcal{A}^{\prime}\) in Case 2. Therefore, when the input is \(\mathcal{I}_{0}\), \(\mathcal{A}\) must pull the arms in \(V_{k}\setminus S_{k}\) at least \(\frac{c_{1}\left\lfloor\frac{m_{k}}{2}\right\rfloor\log 3}{t_{k}\varepsilon^{2}}\geq\frac{c_{2}m_{k}}{t_{k}\varepsilon^{2}}\) times if \(k\) is in Case 2 and at least \(\frac{c_{1}\log\left(m_{k}+1\right)}{\varepsilon^{2}}\) times if \(k\) is in Case 1. In other words, \(\mathcal{A}\) must pull the arms in \(V_{k}\setminus S_{k}\) at least \(\frac{\xi_{k}}{\varepsilon^{2}}\) times for every \(k\in[K]\). Plugging in our choice of \(\varepsilon\), \(\mathcal{A}\) needs to pull the arms in \(V\setminus S\) more than \(\frac{1}{1250^{\frac{2}{3}}}\cdot\left(\sum_{k=1}^{K}\xi_{k}\right)^{\frac{1}{3}}T^{\frac{2}{3}}\) times in total on \(\mathcal{I}_{0}\). These pulls contribute a regret of at least \(\frac{1}{2\cdot 1250^{\frac{2}{3}}}\left(\sum_{k=1}^{K}\xi_{k}\right)^{\frac{1}{3}}\cdot T^{\frac{2}{3}}\), which contradicts the assumption in Equation (16). Therefore, there exist loss sequences such that \(\mathcal{A}\) satisfies \[R(T)=\Omega\left(T^{\frac{2}{3}}\cdot\left(\sum_{k=1}^{K}\max\left\{\log m_{k},\frac{m_{k}}{t_{k}}\right\}\right)^{\frac{1}{3}}\right).\] Theorem 7 confirms a conjecture in [1]. It also generalizes the previous lower bound \(\Omega\left(T^{\frac{2}{3}}\max\left\{\log|S|,\frac{|S|}{t}\right\}^{\frac{1}{3}}\right)\) for weakly observable graphs in [1], which is recovered by applying Theorem 7 with \(K=1\) and \(V_{1}=V\), where \(S\subseteq V\) is a \(t\)-packing independent set of \(G\). As a consequence, Theorem 7 provides tight lower bounds for several feedback graphs.
For example, when \(G\) is the disjoint union of \(K\) complete bipartite graphs of size \(m_{1},m_{2},\ldots,m_{K}\) respectively, it implies a lower bound of \(\Omega\left(\left(\sum_{k\in[K]}\log m_{k}\right)^{\frac{1}{3}}T^{\frac{2}{3}}\right)\), which matches the upper bound in [1].
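To make the two-level structure of Algorithm 1 concrete, here is a minimal numerical sketch. It is our reconstruction, not the authors' code: the potentials are read off from the Hessians used in the proof of Theorem 8, namely the Tsallis-\(\frac{1}{2}\) entropy \(\psi(y)=-\frac{2}{\eta}\sum_{k}\sqrt{y(k)}\) over groups and the negative entropy \(\phi_{k}(x)=\frac{1}{\eta_{k}}\sum_{j}x(j)\log x(j)\) within each group, with the loss estimator \(\hat{\ell}_{k}^{(t)}(j)=\frac{\mathbb{1}[k_{t}=k]}{Y^{(t)}(k)}\ell_{k}^{(t)}(j)\) and the learning rates from Theorem 8; the Bregman projection routine is our own implementation detail.

```python
# Two-level OSMD sketch (our reconstruction of Algorithm 1, under the stated assumptions).
import numpy as np

rng = np.random.default_rng(0)

def project_tsallis(c):
    """Solve y(k) = (c(k) + lam)^(-2) with sum_k y(k) = 1 by bisection on lam."""
    lo, hi = -c.min() + 1e-12, -c.min() + 1e6
    for _ in range(100):
        lam = 0.5 * (lo + hi)
        s = np.sum((c + lam) ** -2.0)
        lo, hi = (lam, hi) if s > 1.0 else (lo, lam)
    y = (c + 0.5 * (lo + hi)) ** -2.0
    return y / y.sum()                       # guard against bisection round-off

def run(losses, m, T):
    """losses[t][k]: array with the [0,1]-losses of the arms in group k at round t."""
    K = len(m)
    eta = 1.0 / np.sqrt(T)                   # learning rates as in Theorem 8
    eta_k = np.log(np.array(m) + 1.0) / np.sqrt(T * np.sum(np.log(np.array(m) + 1.0)))
    Y = np.full(K, 1.0 / K)                  # Y^(0): uniform over groups
    X = [np.full(mk, 1.0 / mk) for mk in m]  # X_k^(0): uniform within each group
    total = 0.0
    for t in range(T):
        k = rng.choice(K, p=Y)               # sample a group k_t ~ Y^(t)
        j = rng.choice(m[k], p=X[k])         # sample an arm j_t ~ X_k^(t)
        total += losses[t][k][j]
        ell_hat = np.asarray(losses[t][k]) / Y[k]   # importance weight at group level only
        L_tilde = np.zeros(K)
        L_tilde[k] = X[k] @ ell_hat          # group-level estimated loss
        # inner step: multiplicative weights (negative-entropy OSMD), stabilized
        w = np.exp(-eta_k[k] * (ell_hat - ell_hat.min()))
        X[k] = X[k] * w / (X[k] @ w)
        # outer step: Tsallis-1/2 mirror step, then Bregman projection onto the simplex
        Y = project_tsallis(Y ** -0.5 + eta * L_tilde)
    return total

# usage on uniformly random losses
m, T = [8, 2, 2], 5000
losses = [[rng.random(mk) for mk in m] for _ in range(T)]
print(run(losses, m, T))
```

The outer mirror step follows from \(\boldsymbol{\nabla}\psi(y)(k)=-\frac{1}{\eta}y(k)^{-1/2}\), so the unconstrained update is \(y(k)=(Y^{(t)}(k)^{-1/2}+\eta\widetilde{L}^{(t)}(k))^{-2}\) and the projection onto the simplex only shifts the argument by a scalar, found here by bisection.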
2305.09430
A BSDE approach to the asymmetric risk-sensitive optimization and its applications
This paper is devoted to proposing a new asymmetric risk-sensitive criterion involving different risk attitudes toward varying risk sources. The criterion can only be defined through the initial value of the minimal solutions of quadratic backward stochastic differential equations (BSDEs). Before uncovering the mean-variance representation for the introduced criterion by the variational approach, some axioms are given for the first time to characterize a variance decomposition of square integrable random variables. The stochastic control problems under this criterion are described as a kind of stochastic recursive control problems that includes controlled quadratic BSDEs. An asymmetric risk-sensitive global stochastic maximum principle is derived when the quadratic BSDEs are equipped with bounded data. A closed-form solution of a stochastic linear-quadratic risk-sensitive control problem is obtained by introducing a novel completion-of-squares technique for controlled quadratic BSDEs. In addition, a dynamic portfolio optimization problem featuring a stochastic return rate is provided as an application of the asymmetric risk-sensitive control.
Mingshang Hu, Shaolin Ji, Rundong Xu, Xiaole Xue
2023-05-16T13:42:36Z
http://arxiv.org/abs/2305.09430v3
# A BSDE approach to the asymmetric risk-sensitive optimization and its applications Mingshang Hu Zhongtai Securities Institute for Financial Studies, Shandong University, Jinan, Shandong 250100, PR China. [email protected]. Research supported by NSF (No. 11671231) and Young Scholars Program of Shandong University (No. 2016WLJH10). Shaolin Ji Zhongtai Securities Institute for Financial Studies, Shandong University, Jinan, Shandong 250100, PR China. [email protected]. Research supported by NSF (No. 11571203). Rundong Xu Zhongtai Securities Institute for Financial Studies, Shandong University, Jinan, Shandong 250100, PR China. [email protected] Xiaole Xue School of Management, Shandong University, Jinan 250100, PR China. [email protected]. Research supported by NSF (No.12001316), "The Fundamental Research Funds of Shandong University". All the authors share equal contribution to this paper. **Abstract**. In this paper, we propose a formulation to describe a risk-sensitive criterion involving asymmetric risk attitudes toward different risk sources. The introduced criterion can only be defined through the initial value of the minimal solutions of quadratic backward stochastic differential equations (BSDEs). Before uncovering the mean-variance representation for the introduced asymmetric risk-sensitive criterion by a variational approach, some axioms to characterize a variance decomposition of square integrable random variables are given for the first time. The control problems under the asymmetric risk-sensitive criterion are characterized as a kind of stochastic recursive control problem that includes quadratic BSDEs. Under bounded and unbounded (linear-quadratic case) conditions, the asymmetric risk-sensitive control problems are investigated. In addition, a dynamic portfolio optimization problem featuring a stochastic return rate is provided as an application. **Key words**. asymmetric risk-sensitive control, forward-backward stochastic differential equations, linear-quadratic control problem, quadratic backward stochastic differential equations **AMS subject classifications.** 93E20, 60H10, 35K15 ## 1 Introduction In finance and economics, not all behaviors can be described by risk-neutral cost functions. One way of capturing risk sensitivity (including risk-seeking and risk-averse behavior) is to replace the linear expectation with the following nonlinear one: for a random variable \(\xi\) and a constant \(\theta\), consider \[\mathcal{E}_{\theta}[\xi]:=\frac{1}{\theta}\log\mathbb{E}[e^{\theta\xi}], \tag{1.1}\] which is the well-known risk-sensitive criterion [21]. It is obvious that \(\mathcal{E}_{\theta}\) is a nonlinear expectation, viewed as an operator preserving monotonicity and constants (see [9] and references therein). Performing a second-order Taylor expansion of \(\mathcal{E}_{\theta}[\xi]\) with respect to \(\theta\) around \(\theta=0\) approximates the criterion (1.1) as \[\mathcal{E}_{\theta}[\xi]=\mathbb{E}\left[\xi\right]+\frac{\theta}{2}\text{Var}\left[\xi\right]+O(\theta^{2}), \tag{1.2}\] where \(\text{Var}[\xi]\) is the variance of \(\xi\). The criterion thus involves the variance of \(\xi\), which worsens (resp. improves) the risk situation of the criterion if \(\theta>0\) (resp. \(\theta<0\)). Therefore, a decision maker is risk-averse if \(\theta>0\) and, on the contrary, risk-seeking if \(\theta<0\).
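As a quick numerical illustration of (1.1) and (1.2) (ours, not from the paper): for a Gaussian random variable \(\xi\sim\mathcal{N}(\mu,s^{2})\), the expansion is exact with no remainder, since \(\log\mathbb{E}[e^{\theta\xi}]=\theta\mu+\frac{\theta^{2}s^{2}}{2}\) and hence \(\mathcal{E}_{\theta}[\xi]=\mu+\frac{\theta}{2}s^{2}\). A short Monte Carlo check:

```python
# Sanity check of (1.1)-(1.2) for Gaussian xi; our illustration only.
import math, random

mu, s, theta = 0.3, 0.7, 0.5
xs = [random.gauss(mu, s) for _ in range(200_000)]
crit = math.log(sum(math.exp(theta * x) for x in xs) / len(xs)) / theta
print(crit, mu + 0.5 * theta * s * s)    # the two values should nearly agree
```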
The risk-neutral attitude of the decision maker corresponds to \(\mathcal{E}_{\theta}[\xi]=\mathbb{E}[\xi]\), since \(\mathcal{E}_{\theta}[\xi]\rightarrow\mathbb{E}[\xi]\) as \(\theta\to 0\) (see e.g. [5]). Risk sensitivity has been incorporated into control problems since the early work of Jacobson [20] and Whittle [33], and many researchers have studied this subject ever since (see [2, 3, 15, 23, 25, 27, 34] and references therein). Risk-sensitive optimal control problems aim at minimizing \[J(u(\cdot)):=\mathbb{E}\left[\exp\left\{\theta\left(\Phi(X(T))+\int_{0}^{T}f( t,X(t),u(t))dt\right)\right\}\right] \tag{1.3}\] over all admissible controls, where \(\theta>0\) is the risk-sensitive parameter and the state \(X(\cdot)\) satisfies the controlled stochastic differential equation (SDE): \[\left\{\begin{array}{rl}dX(t)=&b(t,X(t),u(t))dt+\sigma(t,X(t),u(t))dW(t),\\ X(0)=&x_{0}.\end{array}\right. \tag{1.4}\] The SDE (1.4) is driven by a \(d\)-dimensional (\(d>1\)) Brownian motion \(W=\left(W_{1}(t),W_{2}(t),...,W_{d}(t)\right)_{0\leq t\leq T}^{\intercal}\) with the initial data \(x_{0}\); the coefficients \(b\), \(\sigma\), \(\Phi\), \(f\) are measurable, deterministic functions of suitable dimensions.

In the existing literature, it is always assumed that the decision maker possesses the same risk attitude toward different risk sources; in more detail, the decision maker has the identical risk-sensitive parameter \(\theta\) when confronting the different risk sources \(W_{i}(t),i=1,...,d\). However, this is rarely the case in reality. From the perspective of stochastic differential utility, the authors in [22] introduced asymmetry in risk aversion. To differentiate the attitudes toward risks depending on their sources, they assume that each component of the \(d\)-dimensional standard Brownian motion \(W\) is an independent source of consumption shocks. For example, if \(d=2\) then \(W_{1}\) may represent weather shocks while \(W_{2}\) may represent health shocks, and a consumer usually shows different risk aversion towards different risk sources. It is therefore an interesting issue how to formulate stochastic control problems under asymmetric risk sensitivity.

It is obvious that the risk-sensitive criterion (1.3) cannot characterize asymmetric risk sensitivity towards different risk sources. Therefore, the risk-sensitive criterion must be reconstructed for the asymmetric case. Note that in [11] risk-sensitive control for a BSDE objective functional with a coefficient of quadratic growth was considered; that is, (1.3) can be equivalently described by the solution \(Y(\cdot)\) of the following BSDE at time \(0\): \[\left\{\begin{array}{rl}dY(t)=&-\left[Z^{\intercal}(t)\Gamma Z(t)+f(t,X(t),u(t) )\right]dt+Z^{\intercal}(t)dW(t),\\ Y(T)=&\Phi(X(T))\end{array}\right. \tag{1.5}\] with \(\Gamma=\frac{\theta}{2}\mathrm{I}_{d\times d}\). Under mild conditions on \(\Phi\) and \(f\), following the argument of Theorem 3.1 in [8], it can be proved that the finiteness of the right-hand side of (1.3) is sufficient for constructing a solution \((Y(\cdot),Z(\cdot))\) of (1.5) such that \(Y(0)=\frac{1}{\theta}\log J(u(\cdot))\). Furthermore, \(Y(\cdot)\) is minimal among the solutions in a specific space (see [6] for more details about the minimal solution of quadratic BSDEs). Define the risk-sensitive criterion by \[\tilde{J}(u(\cdot))=Y(0), \tag{1.6}\] where \(Y(\cdot)\) is the minimal solution of (1.5) (the minimal solution becomes the unique solution when (1.5) is well posed).
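In the symmetric case \(\Gamma=\frac{\theta}{2}\mathrm{I}_{d\times d}\), the identity \(Y(0)=\frac{1}{\theta}\log\mathbb{E}[e^{\theta\xi}]\) and the expansion (1.2) are easy to check numerically. The following minimal Monte Carlo sketch is our own illustration, not part of the original paper (all parameter values are ours): it works directly with a Gaussian terminal value \(\xi\sim N(\mu,\sigma^{2})\), for which \(\mathcal{E}_{\theta}[\xi]=\mu+\frac{\theta}{2}\sigma^{2}\) holds exactly.

```python
import numpy as np

# Monte Carlo check of the risk-sensitive criterion (1.1) and the
# mean-variance expansion (1.2) for a Gaussian xi ~ N(mu, sigma^2).
rng = np.random.default_rng(0)
mu, sigma, theta = 0.2, 0.5, 0.4      # illustrative values (our choice)
xi = rng.normal(mu, sigma, size=1_000_000)

criterion = np.log(np.mean(np.exp(theta * xi))) / theta   # (1/theta) log E[e^{theta xi}]
expansion = xi.mean() + 0.5 * theta * xi.var()            # E[xi] + (theta/2) Var[xi]
closed_form = mu + 0.5 * theta * sigma**2                 # exact for Gaussian xi

print(criterion, expansion, closed_form)
```

For Gaussian \(\xi\) the remainder in (1.2) vanishes, so the three printed numbers agree up to Monte Carlo error; for non-Gaussian \(\xi\) the gap between the first two numbers is exactly the \(O(\theta^{2})\) term.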
Inspired by the above analysis, if \(\Gamma\) is a diagonal matrix, for example, \(\Gamma=\mathrm{diag}\{\gamma_{1},...,\gamma_{d}\}\), then \(\Gamma\) might characterize the asymmetric risk sensitivity. Under this setting, it is worth pointing out that we cannot obtain a representation similar to (1.3) through the exponential transformation for \(Y(\cdot)\). In this sense, it seems that the asymmetric risk-sensitive criterion can only be defined by the quadratic BSDE (1.5).

To further explore the meaning of \(\Gamma\) in terms of risk sensitivity, as in (1.2), the Taylor expansion for the asymmetric risk-sensitive criterion is a key issue. Once \(\Gamma\neq\frac{\theta}{2}\mathrm{I}_{d\times d}\), the Taylor expansion with respect to \(\theta\) does not work, due to the failure of the exponential transformation. When \(d=2\) and \(\Gamma\) is a diagonal matrix whose entries \(\gamma_{1},\gamma_{2}>0\) and \(\gamma_{1}\neq\gamma_{2}\), the Taylor expansion for the risk-sensitive criterion \(\tilde{J}(u(\cdot))=Y(0)\) is obtained by regarding \((\gamma_{1},\gamma_{2})\) as control variables and applying the convex perturbation method to \((\gamma_{1},\gamma_{2})\) around \((0,0)\). More than that, for any given strictly positive definite matrix \(\Gamma\), (1.6) is essentially a nonlinear expectation, denoted by \(\mathcal{E}_{\Gamma}\), of the random variable \(\xi=\Phi(X(T))+\int_{0}^{T}f(t,X(t),u(t))dt\), i.e. \(\mathcal{E}_{\Gamma}[\xi]=Y(0)\). Indeed, \(\mathcal{E}_{\Gamma}\) is actually a quadratic filtration-consistent nonlinear expectation (see [19, 24] for more details) induced by (1.5). Based on this observation, the Taylor expansion is ultimately expressed as \[\mathcal{E}_{\gamma_{1},\gamma_{2}}[\xi]=\mathbb{E}[\xi]+\gamma_{1}\mathrm{D }_{1}[\xi]+\gamma_{2}\mathrm{D}_{2}[\xi]+o\left(\sqrt{\gamma_{1}^{2}+\gamma_{ 2}^{2}}\right), \tag{1.7}\] where the functionals \((\mathrm{D}_{1},\mathrm{D}_{2})\), inheriting some axiomatic properties that \(\mathrm{Var}[\cdot]\) possesses, are called a variance decomposition on the domain of \(\mathcal{E}_{\gamma_{1},\gamma_{2}}\); they satisfy \(\mathrm{Var}[\xi]=\mathrm{D}_{1}[\xi]+\mathrm{D}_{2}[\xi]\), and the decomposition is unique under some mild conditions (see subsection 2.1 for the details). The result illustrates the asymmetry of the risk attitudes \(\gamma_{i}\), \(i=1,2\), toward different risks carrying the weights \(\mathrm{D}_{i}[\xi]\), \(i=1,2\). In particular, (1.7) degenerates into (1.2) when \(\gamma_{1}=\gamma_{2}=\frac{\theta}{2}\).

In this paper, we introduce a new framework that takes asymmetric risk sensitivity into account; its formulation as a stochastic control problem is as follows: the goal is to minimize the asymmetric risk-sensitive criterion (1.6) subject to the controlled forward-backward stochastic differential equation (FBSDE) (see [11, 13, 32] for more details about controlled FBSDEs) \[\left\{\begin{array}{rl}dX(t)=&b(t,X(t),u(t))dt+\sigma(t,X(t),u(t))dW(t),\\ dY(t)=&-\left[Z^{\intercal}(t)\Gamma Z(t)+f(t,X(t),u(t))\right]dt+Z^{\intercal}( t)dW(t),\\ Y(T)=&\Phi(X(T)),\end{array}\right. \tag{1.8}\] which we call the asymmetric (resp. symmetric) risk-sensitive control problem when \(\Gamma\neq\frac{\theta}{2}\mathrm{I}_{d\times d}\) (resp. \(\Gamma=\frac{\theta}{2}\mathrm{I}_{d\times d}\)), because when \(\Gamma=\frac{\theta}{2}\mathrm{I}_{d\times d}\) it degenerates into minimizing (1.3) subject to (1.4).
In addition, when \(\theta=0\) (of course, \(\Gamma=0\)), (1.6) together with (1.8) degenerates into the classical (risk-neutral) case studied by Peng [31].

Before solving (1.6) and (1.8), we first review the solution to the classical risk-sensitive control problem (1.3)-(1.4). Note that the cost functional (1.3) is not of the classical, risk-neutral form, so the classical results of stochastic control theory cannot be directly applied, whether from the perspective of the maximum principle (MP) or of the dynamic programming principle (DPP). To overcome this technical difficulty, researchers resort to approaches that extend the state variables in order to apply the classical MP or DPP, and introduce a logarithmic transformation to derive the adjoint system or the Hamilton-Jacobi-Bellman (HJB) equation for (1.3)-(1.4) [1, 16, 23, 27]. Under the assumption that \((\Phi,f)\) is uniformly bounded and the corresponding value function is sufficiently smooth, by extending the state variables Lim and Zhou [23] rewrote (1.3)-(1.4) in a risk-neutral form and, after introducing a logarithmic transformation, obtained a new risk-sensitive MP for (1.3)-(1.4) with a nonconvex control domain. The DPP has also been employed to cope with (1.3)-(1.4) when \((\Phi,f)\) is uniformly bounded [1, 16, 27], where the related value function is defined by minimizing the logarithmic transformation of (1.3) and is characterized by the solution of the associated HJB equation. The existence of a smooth solution to this kind of nonlinear parabolic partial differential equation (PDE) is proved under restrictive regularity conditions imposed on the coefficients, both in the case of control-independent diffusions [27] and in the case of control-dependent diffusions [16]. Relying on classical but deep results on parabolic PDEs, weaker regularity of solutions was obtained by Bensoussan, Frehse, and Nagai [3] in the case of control-independent diffusions. We emphasize that in [26] Moon studied the risk-sensitive control (1.6) and (1.8) with \(\Gamma=\frac{\mu}{2}\mathrm{I}_{d\times d}\) for a risk-sensitive parameter \(\mu>0\) and control-dependent diffusions by adopting the DPP approach. Moreover, when \((\Phi,f)\) is no longer bounded, Lim and Zhou [23] also studied a linear-quadratic risk-sensitive problem; the optimal control is obtained in feedback form by using the new MP they established earlier in that paper, under some convexity conditions. Due to the linear-quadratic setting, Duncan [12] solved (1.3)-(1.4) by applying a completion-of-squares approach instead of the MP or DPP. Nevertheless, in the case of asymmetric risk-sensitive linear-quadratic control, one cannot follow the ideas in [12] by carrying out the completion-of-squares approach inside the exponential in (1.3), because the logarithmic (equivalently, exponential) transformation fails, as mentioned earlier.

Due to the failure of the logarithmic transformation in the asymmetric risk-sensitive case, several further contributions follow immediately from our new formulation. First, if \((\Phi,f)\) is uniformly bounded, then by a simple application of our earlier work [18] to problem (1.6) and (1.8) with a strictly positive definite \(\Gamma\), we obtain a new asymmetric risk-sensitive MP that covers the results obtained by Lim and Zhou [23]. As long as we take \(\Gamma=\frac{\theta}{2}\mathrm{I}_{d\times d}\) and assume that the value function is sufficiently smooth, our first- and second-order adjoint equations degenerate into those given in [23].
For unbounded \((\Phi,f)\), the linear-quadratic counterpart is considered; it is of great importance because none of the results obtained in [18] are applicable in this unbounded case. Under the asymmetric risk-sensitive linear-quadratic setting, we introduce a new Riccati differential equation containing an additional quadratic term, which determines the optimal candidate control of feedback type. The optimality of this candidate is discussed within the set of admissible controls, which is chosen so as to keep the solution \(Y(\cdot)\) of the BSDE in (1.8) from exploding at \(t=0\), so that (1.6) is well defined. In particular, as an application of asymmetric risk-sensitive control, note that by taking \(\xi=\log V(T)\) for some portfolio value \(V(\cdot)\) over a finite time horizon \(T\), (1.1) becomes an important risk-sensitive criterion widely applied in dynamic portfolio optimization [4, 21, 28, 29]. Applying the proposed new formulation (1.6) and (1.8), a dynamic portfolio optimization problem is also investigated in the case of asymmetric risk sensitivity, in which the logarithmic transformation does not work.

The rest of the paper is organized as follows. In section 2, we give some preliminaries, uncover a mean-variance representation for the asymmetric risk-sensitive criterion by applying a variational approach, and formulate a nonlinear asymmetric risk-sensitive control problem. In section 3, the asymmetric risk-sensitive linear-quadratic control problem is investigated. In section 4, the asymmetric risk-sensitive control under bounded conditions is studied. As an application of the asymmetric risk-sensitive control problem, a dynamic portfolio optimization problem is given in section 5.

## 2 Variance decomposition and asymmetric risk-sensitive control

Let \((\Omega,\mathcal{F},\mathbb{P})\) be a complete probability space on which a standard \(d\)-dimensional Brownian motion \(W=(W_{1}(t),W_{2}(t),...W_{d}(t))_{0\leq t\leq T}^{\intercal}\) is defined. Assume that \(\mathbb{F}=\{\mathcal{F}_{t},0\leq t\leq T\}\) is the \(\mathbb{P}\)-augmentation of the natural filtration of \(W\), where \(\mathcal{F}_{0}\) contains all \(\mathbb{P}\)-null sets of \(\mathcal{F}\). Denote by \(\mathbb{R}^{n}\) the \(n\)-dimensional real Euclidean space and \(\mathbb{R}^{k\times n}\) the set of \(k\times n\) real matrices. Let \(\langle\cdot,\cdot\rangle\) (resp. \(|\cdot|\)) denote the usual scalar product (resp. usual norm) of \(\mathbb{R}^{n}\) and \(\mathbb{R}^{k\times n}\). The scalar product (resp. norm) of \(A=(a_{ij})\), \(B=(b_{ij})\in\mathbb{R}^{k\times n}\) is denoted by \(\langle A,B\rangle=\mathrm{tr}\{AB^{\intercal}\}\) (resp. \(|A|=\sqrt{\mathrm{tr}\{AA^{\intercal}\}}\)), where the superscript \({}^{\intercal}\) denotes the transpose of vectors or matrices. Denote by \(\mathbb{S}^{n\times n}\) the set of all \(n\times n\) real symmetric matrices and by \(\mathrm{I}_{n\times n}\) the \(n\times n\) identity matrix. For each given \(p\geq 1\), we introduce the following spaces.

\(L^{p}(\mathcal{F}_{T};\mathbb{R}^{n})\): the space of \(\mathcal{F}_{T}\)-measurable \(\mathbb{R}^{n}\)-valued random vectors \(\eta\) such that \[\mathbb{E}[|\eta|^{p}]<+\infty;\]

\(L^{\infty}(\mathcal{F}_{T};\mathbb{R}^{n})\): the space of \(\mathcal{F}_{T}\)-measurable \(\mathbb{R}^{n}\)-valued random vectors \(\eta\) such that, \(\mathbb{P}\)-a.s.
\[\text{ess sup}_{\omega\in\Omega}|\eta(\omega)|<+\infty;\]

\(L^{\infty}([0,T];\mathbb{R}^{n})\): the space of \(\mathbb{R}^{n}\)-valued measurable functions \(f(\cdot)\) on \([0,T]\) such that \[||f(\cdot)||_{\infty}:=\sup_{t\in[0,T]}|f(t)|<+\infty;\]

\(L^{\infty}_{\mathbb{F}}([0,T];\mathbb{R}^{n})\): the space of \(\mathbb{F}\)-adapted \(\mathbb{R}^{n}\)-valued stochastic processes \(f(\cdot)\) on \([0,T]\) such that, \(\lambda\otimes\mathbb{P}\)-a.e. \[\text{ess sup}_{(t,\omega)\in[0,T]\times\Omega}|f(t,\omega)|<+\infty,\] where \(\lambda\) represents the Lebesgue measure on \([0,T]\);

\(L^{p,q}_{\mathbb{F}}([0,T];\mathbb{R}^{n})\): the space of \(\mathbb{F}\)-adapted \(\mathbb{R}^{n}\)-valued stochastic processes \(f(\cdot)\) on \([0,T]\) such that \[\mathbb{E}\left[\left(\int_{0}^{T}|f(t)|^{p}dt\right)^{\frac{q}{p}}\right]<+\infty;\] when \(p=q\), we simply write \(L^{p}_{\mathbb{F}}([0,T];\mathbb{R}^{n})\) rather than \(L^{p,q}_{\mathbb{F}}([0,T];\mathbb{R}^{n})\);

\(L^{p}_{\mathbb{F}}(\Omega;C([0,T],\mathbb{R}^{n}))\): the space of \(\mathbb{F}\)-adapted, \(\mathbb{R}^{n}\)-valued continuous stochastic processes \(f(\cdot)\) on \([0,T]\) such that \[\mathbb{E}\left[\sup_{t\in[0,T]}|f(t)|^{p}\right]<+\infty.\]

Our main goal in this section is to obtain the mean-variance representation for the asymmetric risk-sensitive criterion. To achieve this goal, the first task is to provide a characterization of a variance decomposition on the space of all square integrable random variables.

### A variance decomposition for square integrable random variables

Let \(m,n\) be two positive integers. Let \(\{\mathcal{F}^{i},i=1,\ldots,m\}\) be sub-\(\sigma\)-fields of \(\mathcal{F}\) that are mutually independent. Set \(\mathcal{G}=\bigvee_{i=1}^{m}\mathcal{F}^{i}\); it is well known that \(L^{2}(\mathcal{G};\mathbb{R}^{n})=\mathbb{R}^{n}\oplus L^{2}_{0}(\mathcal{G}; \mathbb{R}^{n})\), \(L^{2}(\mathcal{F}^{i};\mathbb{R}^{n})=\mathbb{R}^{n}\oplus L^{2}_{0}(\mathcal{ F}^{i};\mathbb{R}^{n})\), where \[L^{2}_{0}(\mathcal{G};\mathbb{R}^{n}):=\left\{\xi\in L^{2}(\mathcal{G}; \mathbb{R}^{n}):\mathbb{E}[\xi]=0\right\},\] \[L^{2}_{0}(\mathcal{F}^{i};\mathbb{R}^{n}):=\left\{\xi\in L^{2}(\mathcal{F}^{i} ;\mathbb{R}^{n}):\mathbb{E}[\xi]=0\right\},i=1,\ldots,m.\] We simply write \(L^{2}(\mathcal{G};\mathbb{R}^{n})\), \(L^{2}(\mathcal{F}^{i};\mathbb{R}^{n})\), \(L^{2}_{0}(\mathcal{G};\mathbb{R}^{n})\), \(L^{2}_{0}(\mathcal{F}^{i};\mathbb{R}^{n})\) as \(L^{2}(\mathcal{G})\), \(L^{2}(\mathcal{F}^{i})\), \(L^{2}_{0}(\mathcal{G})\), \(L^{2}_{0}(\mathcal{F}^{i})\) respectively, unless the dimension of the space needs to be indicated. In addition, for any closed linear subspace \(\mathcal{L}\subset L^{2}(\mathcal{G})\), we denote by \(P_{\mathcal{L}}\) the projection operator from \(L^{2}(\mathcal{G})\) onto \(\mathcal{L}\) and by \(\mathcal{L}^{\perp}\) the orthogonal complement of \(\mathcal{L}\) with respect to \(L^{2}(\mathcal{G})\).

**Definition 2.1**.: _A set of functionals \(\{\mathrm{D}_{i},i=1,\ldots,m\}\) is called a variance decomposition on \(L^{2}(\mathcal{G})\) if, for any \(\xi\in L^{2}(\mathcal{G})\),_ \[\mathrm{Var}[\xi]=\sum_{i=1}^{m}\mathrm{D}_{i}\left[\xi\right], \tag{2.1}\] _where \(\mathrm{D}_{i}:L^{2}(\mathcal{G})\longmapsto\mathbb{R}\) satisfies the following axiomatic assumptions:_

* _(A1)_ \(\mathrm{D}_{i}[a\xi+c]=a^{2}\mathrm{D}_{i}[\xi],\forall a\in\mathbb{R},c\in \mathbb{R}^{n}\)_;_
* _(A2) for all_ \(\{\xi_{k}\}_{k\in\mathbb{N}_{+}}\subset L^{2}_{0}(\mathcal{G})\) _and_ \(\xi\in L^{2}_{0}(\mathcal{G})\)_, if_ \(\lim_{k\rightarrow\infty}\mathbb{E}\left[\left|\xi_{k}-\xi\right|^{2}\right]=0\) _then_ \(\lim_{k\rightarrow\infty}\mathrm{D}_{i}[\xi_{k}]=\mathrm{D}_{i}[\xi]\)_;_
* _(A3)_ \(\mathrm{D}_{i}[\xi]=\mathrm{Var}[\xi],\forall\xi\in L^{2}_{0}(\mathcal{F}^{i})\)_;_
* _(A4)_ \(\mathrm{D}_{i}[\xi]=0,\forall\xi\in L^{2}_{0}(\mathcal{F}^{j}),j\neq i\)_;_
* _(A5) there exists a closed linear subspace_ \(\mathcal{L}_{i}\supset L^{2}_{0}(\mathcal{F}^{i})\) _with_ \(\mathcal{L}^{\perp}_{i}\supset\bigcup_{j\neq i}L^{2}_{0}(\mathcal{F}^{j})\) _such that_ \[\forall\xi,\eta\in L^{2}_{0}(\mathcal{G}),\ \ \text{if}\ \ \mathrm{Cov}\left[P_{\mathcal{L}_{i}}(\xi),P_{ \mathcal{L}_{i}}(\eta)\right]=0\ \ \text{then}\ \ \mathrm{D}_{i}[\xi+\eta]=\mathrm{D}_{i}[\xi]+\mathrm{D}_{i}[\eta].\]

**Remark 2.2**.: _From Definition 2.1, if \(\{\mathrm{D}_{i},i=1,\ldots,m\}\) is a variance decomposition on \(L^{2}(\mathcal{G})\) then it is easy to check that the axiomatic assumptions (A2), (A3), (A4) also hold when \(L^{2}_{0}(\mathcal{G})\), \(L^{2}_{0}(\mathcal{F}^{i})\), \(L^{2}_{0}(\mathcal{F}^{j})\) are replaced by \(L^{2}(\mathcal{G})\), \(L^{2}(\mathcal{F}^{i})\), \(L^{2}(\mathcal{F}^{j})\) respectively, and that (A5) holds for any \(\xi,\eta\in L^{2}(\mathcal{G})\). In particular, it follows from (A1), (A4), (A5) that_ \[\mathrm{D}_{i}[\xi+\eta]=\mathrm{D}_{i}[\xi],\ \ \forall\xi\in L^{2}(\mathcal{G}), \eta\in L^{2}(\mathcal{F}^{j}),j\neq i.\]

The following simple example illustrates why we make the axiomatic assumptions (A1)-(A5) in Definition 2.1. Recall that the positive integer \(d\) is the dimension of the standard Brownian motion \(W\) and \(\mathbb{F}=\{\mathcal{F}_{t},t\in[0,T]\}\) is the \(\mathbb{P}\)-augmentation of the natural filtration of \(W\).

**Example 2.3**.: _Put \(n=d=1\) and \(m=2\). Any \(\xi\in L^{2}(\mathcal{F}_{T};\mathbb{R})\) admits a unique Brownian-martingale representation, which can be rewritten as_ \[\xi=\mathbb{E}[\xi]+\int_{0}^{\frac{T}{2}}\varphi(s)dW(s)+\int_{\frac{T}{2}}^ {T}\varphi(s)dW(s),\ \ \varphi\in L^{2}_{\mathbb{F}}([0,T];\mathbb{R}). \tag{2.2}\] _Taking \(\mathrm{Var}[\cdot]\) on both sides of (2.2) and applying the Ito isometry yield_ \[\mathrm{Var}[\xi]=\mathbb{E}\left[\int_{0}^{\frac{T}{2}}\left|\varphi(s) \right|^{2}ds\right]+\mathbb{E}\left[\int_{\frac{T}{2}}^{T}\left|\varphi(s) \right|^{2}ds\right].\] _We define two functionals on \(L^{2}(\mathcal{F}_{T};\mathbb{R})\) such that_ \[\mathrm{D}_{1}[\xi]:=\mathbb{E}\left[\int_{0}^{\frac{T}{2}}\left|\varphi(s) \right|^{2}ds\right],\ \mathrm{D}_{2}[\xi]:=\mathbb{E}\left[\int_{\frac{T}{2}}^{T}\left|\varphi(s) \right|^{2}ds\right], \tag{2.3}\] _and claim that \(\{\mathrm{D}_{i},i=1,2\}\) is a variance decomposition on \(L^{2}(\mathcal{F}_{T};\mathbb{R})\)._

_To show this, we first notice that \(\{\mathrm{D}_{i},i=1,2\}\) are well defined due to the uniqueness of the martingale representation of any \(\xi\in L^{2}(\mathcal{F}_{T};\mathbb{R})\). Then (A1) is obvious and (A2) follows from_ \[\mathbb{E}\left[\int_{0}^{\frac{T}{2}}\varphi(s)dW(s)\cdot\int_{\frac{T}{2}}^ {T}\varphi(s)dW(s)\right]=0,\ \ \forall\varphi\in L^{2}_{\mathbb{F}}([0,T];\mathbb{R}).
\tag{2.4}\]

_As for (A3)-(A5), let_ \[\mathbb{F}^{1}=\{\mathcal{F}_{t}:0\leq t\leq T/2\},\ \ \mathbb{F}^{2}=\{ \mathcal{F}_{t}^{T/2}:T/2\leq t\leq T\},\] _where_ \[\mathcal{F}_{t}:=\sigma(W(s):0\leq s\leq t),\ \ \mathcal{F}_{t}^{T/2}:=\sigma(W(s)-W(T/2):T/2 \leq s\leq t).\] _For \(i=1,2\), denote by \(L^{2}_{\mathbb{F}^{i}}([0,T];\mathbb{R})\) the subspace of \(L^{2}_{\mathbb{F}}([0,T];\mathbb{R})\) consisting of \(\mathbb{F}^{i}\)-adapted processes. Take \(\mathcal{F}^{1}=\mathcal{F}_{T/2}\) and \(\mathcal{F}^{2}=\mathcal{F}_{T}^{T/2}\). Obviously \(\mathcal{F}^{1}\) is independent of \(\mathcal{F}^{2}\) and \(\mathcal{G}:=\mathcal{F}^{1}\vee\mathcal{F}^{2}=\mathcal{F}_{T}\). From the martingale representation theorem, it is well known that_ \[\begin{array}{rl}L^{2}_{0}(\mathcal{F}^{1})=&\left\{\int_{0}^{\frac{T}{2}} \varphi(s)dW(s),\ \varphi\in L^{2}_{\mathbb{F}^{1}}([0,T/2];\mathbb{R})\right\},\\ L^{2}_{0}(\mathcal{F}^{2})=&\left\{\int_{\frac{T}{2}}^{T}\psi(s)dW(s),\ \psi\in L^{2}_{ \mathbb{F}^{2}}([T/2,T];\mathbb{R})\right\},\end{array} \tag{2.5}\] _so it is easy to verify (A3) and (A4). In particular, for any \(\xi\in L_{0}^{2}(\mathcal{F}_{T})\) we have_ \[\xi=\int_{0}^{\frac{T}{2}}\varphi(s)dW(s)+\int_{\frac{T}{2}}^{T}\varphi(s)dW(s), \ \ \varphi\in L_{\mathbb{F}}^{2}([0,T];\mathbb{R}). \tag{2.6}\] _Thanks to the uniqueness of the martingale representation, (2.4) and (2.6) imply that \(L_{0}^{2}(\mathcal{F}_{T})=\mathcal{L}_{1}\oplus\mathcal{L}_{2}\) with_ \[\mathcal{L}_{1}= \left\{\int_{0}^{\frac{T}{2}}\varphi(s)dW(s),\ \varphi\in L_{ \mathbb{F}}^{2}([0,T/2];\mathbb{R})\right\},\] \[\mathcal{L}_{2}= \left\{\int_{\frac{T}{2}}^{T}\psi(s)dW(s),\ \psi\in L_{\mathbb{F}}^{2}([T/2,T];\mathbb{R})\right\}.\] _Noticing (2.5), we have \(L_{0}^{2}(\mathcal{F}^{i})\subset\mathcal{L}_{i},i=1,2\), from which one can check (A5) without any difficulty._

**Remark 2.4**.: _In Example 2.3, if we further assume that for any variance decomposition \(\{\tilde{\mathrm{D}}_{i},i=1,2\}\) on \(L^{2}(\mathcal{F}_{T};\mathbb{R})\),_ \[\tilde{\mathrm{D}}_{i}[\eta\xi]=\mathbb{E}[\eta^{2}]\cdot\tilde{\mathrm{D}}_{ i}[\xi],\ \forall\eta\in L^{\infty}(\mathcal{F}^{1};\mathbb{R}),\ \xi\in L_{0}^{2}(\mathcal{F}^{2}), \tag{2.7}\] _then one can prove that \(\tilde{\mathrm{D}}_{i}[\xi]=\mathrm{D}_{i}[\xi],\forall\xi\in L^{2}(\mathcal{ F}_{T};\mathbb{R})\). In other words, there exists a unique variance decomposition on \(L^{2}(\mathcal{F}_{T};\mathbb{R})\). In fact, one only needs to note that \(\mathcal{L}_{1}=L_{0}^{2}(\mathcal{F}^{1})\), and that for any \(\xi\in\mathcal{L}_{2}\) satisfying_ \[\xi=\int_{\frac{T}{2}}^{T}\psi(s)dW(s),\ \ \psi\in L_{\mathbb{F}}^{2}([T/2,T]; \mathbb{R}),\] _there exist_ \[\xi_{k}=\int_{0}^{T}\psi_{k}(s)dW(s),\ \ \psi_{k}(s)=\sum_{i=1}^{k}1_{A_{k}^{(i)}} \psi_{k}^{(i)}(s),\] _such that \(\lim_{k\to\infty}\mathbb{E}\left[\left|\xi_{k}-\xi\right|^{2}\right]=0\), where \(\{A_{k}^{(i)},i=1,\ldots,k\}\) is an \(\mathcal{F}^{1}\)-partition of \(\Omega\) and \(\{\psi_{k}^{(i)},i=1,\ldots,k\}\subset L_{\mathbb{F}^{2}}^{2}([T/2,T];\mathbb{ R})\), \(k\in\mathbb{N}_{+}\)._

In fact, Example 2.3 provides us with an idea for constructing a variance decomposition on \(L^{2}(\mathcal{G})\).
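Before turning to the general construction, here is a quick Monte Carlo sanity check of (2.3) (our own illustration, not part of the paper). For \(\xi=W(T)^{2}\) the representation (2.2) holds with \(\varphi(s)=2W(s)\), so \(\mathrm{D}_{1}[\xi]=\int_{0}^{T/2}4s\,ds=T^{2}/2\), \(\mathrm{D}_{2}[\xi]=3T^{2}/2\), and \(\mathrm{Var}[\xi]=2T^{2}\).

```python
import numpy as np

# Monte Carlo check of the variance decomposition (2.3) in Example 2.3
# for xi = W(T)^2, whose martingale representation has phi(s) = 2 W(s).
rng = np.random.default_rng(1)
T, N, M = 1.0, 200, 50_000                      # horizon, time steps, sample paths
h = T / N
W = np.vstack([np.zeros((1, M)),
               rng.normal(0.0, np.sqrt(h), (N, M)).cumsum(axis=0)])

phi_sq = (2.0 * W[:-1]) ** 2                    # |phi(t_k)|^2 at the left endpoints
D1 = np.mean(phi_sq[: N // 2].sum(axis=0) * h)  # E int_0^{T/2} |phi|^2 ds  ~ T^2/2
D2 = np.mean(phi_sq[N // 2 :].sum(axis=0) * h)  # E int_{T/2}^T |phi|^2 ds ~ 3T^2/2
print(D1, D2, D1 + D2, np.var(W[-1] ** 2))      # D1 + D2 ~ Var[xi] = 2 T^2
```

Up to discretization and sampling error, the printed values reproduce \(T^{2}/2\), \(3T^{2}/2\) and their sum matches the sample variance of \(\xi\), as (2.1) requires.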
**Proposition 2.5**.: _Suppose that \(L_{0}^{2}(\mathcal{G})\) admits an orthogonal direct sum_ \[L_{0}^{2}(\mathcal{G})=\mathcal{L}_{1}\oplus\cdots\oplus\mathcal{L}_{m}\] _of \(m\) closed linear subspaces such that \(\mathcal{L}_{1}\supset L_{0}^{2}(\mathcal{F}^{1}),\ldots,\mathcal{L}_{m} \supset L_{0}^{2}(\mathcal{F}^{m})\). For \(i=1,\ldots,m\), define a functional \(\mathrm{D}_{i}:L^{2}(\mathcal{G})\longmapsto\mathbb{R}\) such that for any \(\xi\in L^{2}(\mathcal{G})\) with the orthogonal decomposition \(\xi=\mathbb{E}[\xi]+\sum_{i=1}^{m}\xi_{i}\),_ \[\mathrm{D}_{i}\left[\xi\right]:=\mathrm{Var}\left[\xi_{i}\right],\ \xi_{i}\in\mathcal{L}_{i}; \tag{2.8}\] _then \(\{\mathrm{D}_{i},i=1,\ldots,m\}\) is a variance decomposition on \(L^{2}(\mathcal{G})\)._

Proof.: For any \(\xi\in L^{2}(\mathcal{G})\), (2.1) is obvious due to (2.8). It is not difficult to verify (A1)-(A4), so we only check (A5). Take any \(\xi,\eta\in L_{0}^{2}(\mathcal{G})\) with \[\xi=\sum_{i=1}^{m}\xi_{i},\quad\eta=\sum_{i=1}^{m}\eta_{i},\quad\xi_{i},\eta_ {i}\in\mathcal{L}_{i},\] and suppose \(\mathrm{Cov}\left[P_{\mathcal{L}_{i}}(\xi),P_{\mathcal{L}_{i}}(\eta)\right]=0\). Since \(\mathrm{Cov}\left[\xi_{i},\eta_{i}\right]=\mathrm{Cov}\left[P_{\mathcal{L}_{i} }(\xi),P_{\mathcal{L}_{i}}(\eta)\right]=0\), it follows from (2.8) immediately that \[\mathrm{D}_{i}[\xi+\eta]=\mathrm{Var}[\xi_{i}+\eta_{i}]=\mathrm{Var}[\xi_{i}]+ \mathrm{Var}[\eta_{i}]=\mathrm{D}_{i}[\xi]+\mathrm{D}_{i}[\eta], \tag{2.9}\] which completes the proof.

Thanks to Proposition 2.5, the following example helps to interpret, in the next subsection, how a decision maker measures the risks stemming from different risk sources and weights each of them with her asymmetric risk-sensitive parameters.

**Example 2.6**.: _Put \(m=d\). For \(i=1,\ldots,d\), let \(\mathbb{F}^{i}=\{\mathcal{F}^{i}_{t}:0\leq t\leq T\}\), where \(\mathcal{F}^{i}_{t}:=\sigma(W_{i}(s):0\leq s\leq t)\). Denote by \(L^{2}_{\mathbb{F}^{i}}([0,T];\mathbb{R}^{n})\) the subspace of \(L^{2}_{\mathbb{F}}([0,T];\mathbb{R}^{n})\) consisting of \(\mathbb{F}^{i}\)-adapted processes. For \(i=1,\ldots,d\), take \(\mathcal{F}^{i}=\mathcal{F}^{i}_{T}\). It is obvious that \(\mathcal{F}^{i},i=1,\ldots,d\) are mutually independent and \(\mathcal{G}:=\bigvee_{i=1}^{d}\mathcal{F}^{i}=\mathcal{F}_{T}\). On the one hand, for any \(\xi\in L^{2}_{0}(\mathcal{F}_{T})\), thanks to the martingale representation theorem, there exist unique \(\varphi_{i}(\cdot)\in L^{2}_{\mathbb{F}}([0,T];\mathbb{R}^{n}),i=1,\ldots,d\) such that_ \[\xi=\sum_{i=1}^{d}\int_{0}^{T}\varphi_{i}(s)dW_{i}(s). \tag{2.10}\] _Since \(L^{2}_{\mathbb{F}}([0,T];\mathbb{R}^{n})\) is complete and \(W_{i}\) is independent of \(W_{j}\) when \(1\leq i\neq j\leq d\), we have, for any \(\varphi(\cdot),\psi(\cdot)\in L^{2}_{\mathbb{F}}([0,T];\mathbb{R})\),_ \[\mathbb{E}\left[\int_{0}^{T}\varphi(s)dW_{i}(s)\cdot\int_{0}^{T}\psi(s)dW_{j} (s)\right]=0,1\leq i\neq j\leq d. \tag{2.11}\] _Then it follows from (2.10) and (2.11) that \(L^{2}_{0}(\mathcal{F}_{T})\) admits an orthogonal direct sum_ \[L^{2}_{0}(\mathcal{F}_{T})=\mathcal{L}_{1}\oplus\cdots\oplus\mathcal{L}_{d}\] _such that_ \[\mathcal{L}_{i}=\left\{\int_{0}^{T}\varphi(s)dW_{i}(s):\varphi(\cdot)\in L^{2} _{\mathbb{F}}([0,T];\mathbb{R}^{n})\right\},i=1,\ldots,d.
\tag{2.12}\]

_On the other hand, for \(i=1,\ldots,d\), applying the martingale representation theorem to any \(\xi\in L^{2}_{0}(\mathcal{F}^{i}_{T})\) yields_ \[L^{2}_{0}(\mathcal{F}^{i}_{T})=\left\{\int_{0}^{T}\varphi(s)dW_{i}(s):\varphi( \cdot)\in L^{2}_{\mathbb{F}^{i}}([0,T];\mathbb{R}^{n})\right\},i=1,\ldots,d. \tag{2.13}\] _Obviously \(L^{2}_{0}(\mathcal{F}^{i}_{T})\subset\mathcal{L}_{i},i=1,\ldots,d\). Due to Proposition 2.5, we can construct a variance decomposition \(\{\mathrm{D}_{i},i=1,\ldots,m\}\) on \(L^{2}(\mathcal{F}_{T})\) such that for any \(\xi\in L^{2}(\mathcal{F}_{T})\) with the orthogonal decomposition_ \[\xi=\mathbb{E}[\xi]+\sum_{i=1}^{d}\int_{0}^{T}\varphi_{i}(s)dW_{i}(s),\ \ \varphi_{i}(\cdot)\in L^{2}_{\mathbb{F}}([0,T];\mathbb{R}^{n}),i=1,\ldots,d,\] _we have_ \[\mathrm{D}_{i}[\xi]=\mathbb{E}\left[\int_{0}^{T}|\varphi_{i}(s)|^{2}\,ds \right],\ i=1,\ldots,d,\] _and \(\mathrm{Var}[\xi]=\sum_{i=1}^{d}\mathrm{D}_{i}[\xi]\). Furthermore, similar to (2.7), letting \(\{\tilde{\mathrm{D}}_{i},i=1,\ldots,m\}\) be any variance decomposition on \(L^{2}(\mathcal{F}_{T})\), if for any fixed \(i\in\{1,\ldots,m\}\),_ \[\tilde{\mathrm{D}}_{i}[\eta\xi]=\mathbb{E}[\eta^{2}]\cdot\tilde{\mathrm{D}}_{i }[\xi],\ \forall\xi\in L^{2}_{0}(\mathcal{F}^{j}_{T}),\ \eta\in L^{\infty}\left(\bigvee_{1\leq l\leq m,l\neq j}\mathcal{F}^{l}_{T}; \mathbb{R}\right),\ j=1,\ldots,m,\] _then \(\tilde{\mathrm{D}}_{i}[\xi]=\mathrm{D}_{i}[\xi],\forall\xi\in L^{2}(\mathcal{F}_{T})\), which implies that there exists a unique variance decomposition on \(L^{2}(\mathcal{F}_{T})\). The main idea of the proof is similar to that in Remark 2.4: one can find a linear subspace \(\mathcal{L}\) dense in \(L^{2}(\mathcal{F}_{T})\) and show that \(\tilde{\mathrm{D}}_{i}[\xi]=\mathrm{D}_{i}[\xi],\forall\xi\in\mathcal{L}\), by the above condition and the axiomatic assumptions in Definition 2.1. The uniqueness of the variance decomposition then follows from the fact that \(\mathcal{L}\) is dense in \(L^{2}(\mathcal{F}_{T})\)._

### The mean-variance representation for the asymmetric risk-sensitive criterion

As mentioned in the introduction, the risk-sensitive criterion (1.1) can be regarded as a nonlinear expectation \(\mathcal{E}_{\theta}\) admitting a Taylor expansion around \(\theta=0\), that is, \[\mathcal{E}_{\theta}[\xi]=\mathbb{E}\left[\xi\right]+\frac{\theta}{2}\mathrm{ Var}\left[\xi\right]+O(\theta^{2}), \tag{2.14}\] where \(\xi=\Phi(X(T))+\int_{0}^{T}f(t,X(t),u(t))dt\) for any admissible control \(u(\cdot)\). A natural question is whether we can perform a Taylor expansion for the criterion (1.6) in the asymmetric risk-sensitive case and, if we can, to what extent such an expansion generalizes (2.14). To answer this question, we adopt a variational method, usually used in deriving the stochastic MP, to perform the Taylor expansion. Without loss of generality, we consider the diagonal matrix \(\Gamma=\mathrm{diag}\{\gamma_{1},\ldots,\gamma_{d}\}\) for some \((\gamma_{1},\ldots,\gamma_{d})\in\mathbb{R}^{d}\) such that \(\gamma_{i}>0,i=1,\ldots,d\). For simplicity, we put \(d=2\); the analysis in the case \(d>2\) is similar. In the following, the constant \(C\) may change from line to line in the proofs.
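Before setting up the perturbation, it is worth recording one explicitly solvable case (our own illustration, not part of the original argument), which previews the expansion for the quadratic BSDE (2.15) introduced just below. Let \(\xi=aW_{1}(T)+bW_{2}(T)\) with constants \(a,b\in\mathbb{R}\). Then one checks directly that
\[Y(t)=aW_{1}(t)+bW_{2}(t)+(\gamma_{1}a^{2}+\gamma_{2}b^{2})(T-t),\qquad Z_{1}\equiv a,\ \ Z_{2}\equiv b,\]
satisfies \(dY(t)=-\left[\gamma_{1}|Z_{1}(t)|^{2}+\gamma_{2}|Z_{2}(t)|^{2}\right]dt+Z_{1}(t)dW_{1}(t)+Z_{2}(t)dW_{2}(t)\) with \(Y(T)=\xi\), hence
\[Y^{\gamma_{1},\gamma_{2}}(0)=(\gamma_{1}a^{2}+\gamma_{2}b^{2})T=\mathbb{E}[\xi]+\gamma_{1}\mathrm{D}_{1}[\xi]+\gamma_{2}\mathrm{D}_{2}[\xi],\]
since \(\mathbb{E}[\xi]=0\), \(\mathrm{D}_{1}[\xi]=a^{2}T\) and \(\mathrm{D}_{2}[\xi]=b^{2}T\) by Example 2.6. In this linear case the mean-variance representation established below holds with vanishing remainder.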
Consider the BSDE \[\left\{\begin{array}{rl}dY^{v_{1},v_{2}}(t)=&-\left[v_{1}\left|Z_{1}^{v_{1}, v_{2}}(t)\right|^{2}+v_{2}\left|Z_{2}^{v_{1},v_{2}}(t)\right|^{2}\right]dt+Z_{1}^ {v_{1},v_{2}}(t)dW_{1}(t)+Z_{2}^{v_{1},v_{2}}(t)dW_{2}(t),\\ Y^{v_{1},v_{2}}(T)=&\xi,\end{array}\right. \tag{2.15}\] where \((v_{1},v_{2})\in[0,1]\times[0,1]\). The objective is to minimize \[J(v_{1},v_{2}):=Y^{v_{1},v_{2}}(0) \tag{2.16}\] over \((v_{1},v_{2})\in[0,1]\times[0,1]\).

**Assumption 2.7**.: \(\xi\) _is an \(\mathcal{F}_{T}\)-measurable random variable such that \(\mathbb{E}\left[e^{16|\xi|}\right]<+\infty\)._

The following lemma is an application of Corollary 4 in [7] and Theorem 3.3 in [10] to (2.15).

**Lemma 2.8**.: _Let Assumption 2.7 hold. Then, for any \((v_{1},v_{2})\in[0,1]\times[0,1]\), the state equation (2.15) admits a unique solution \((Y^{v_{1},v_{2}}(\cdot),Z^{v_{1},v_{2}}(\cdot))\) such that \(\mathbb{E}\left[e^{16\sup_{t\in[0,T]}|Y^{v_{1},v_{2}}(t)|}\right]<+\infty\) and \(Z^{v_{1},v_{2}}(\cdot)=(Z_{1}^{v_{1},v_{2}}(\cdot),Z_{2}^{v_{1},v_{2}}(\cdot) )\in L_{\mathbb{F}}^{2}([0,T];\mathbb{R}^{2})\). Moreover, there exists a \(C>0\) such that_ \[\mathbb{E}\left[\exp\left\{16\sup_{t\in[0,T]}|Y^{v_{1},v_{2}}(t)|\right\}+ \left(\int_{0}^{T}\left|Z^{v_{1},v_{2}}(t)\right|^{2}dt\right)^{4}\right] \leq C\mathbb{E}\left[e^{16|\xi|}\right], \tag{2.17}\] _where \(C\) depends only on \(T\)._

On the one hand, for any given \((v_{1},v_{2})\in[0,1]\times[0,1]\), Lemma 2.8 guarantees the well-posedness of the quadratic BSDE (2.15). Then, thanks to Definition 3.3 and Example 3.4 in [19], the unique solution \(Y^{v_{1},v_{2}}(\cdot)\) to (2.15) actually induces a quadratic \(\mathbb{F}\)-consistent nonlinear expectation \(\mathcal{E}_{v_{1},v_{2}}\) with domain \(\mathrm{Dom}(\mathcal{E}_{v_{1},v_{2}})\) such that \[\mathcal{E}_{v_{1},v_{2}}[\xi]:= Y^{v_{1},v_{2}}(0),\ \forall\xi\in\mathrm{Dom}(\mathcal{E}_{v_{1},v_{2}}), \tag{2.18}\] \[\mathrm{Dom}(\mathcal{E}_{v_{1},v_{2}}):= \left\{\xi\in L^{2}(\mathcal{F}_{T};\mathbb{R}),\ \mathbb{E}[e^{16|\xi|}]<+\infty\right\}.\] On the other hand, one can observe that the couple \((\bar{v}_{1},\bar{v}_{2})=(0,0)\) minimizes (2.16) uniquely, and the corresponding optimal trajectory, denoted by \(\left(\bar{Y}(\cdot),\bar{Z}_{1}(\cdot),\bar{Z}_{2}(\cdot)\right)\), satisfies the following BSDE: \[\left\{\begin{array}{rl}d\bar{Y}(t)=&\bar{Z}_{1}(t)dW_{1}(t)+\bar{Z}_{2}(t) dW_{2}(t),\\ \bar{Y}(T)=&\xi.\end{array}\right. \tag{2.19}\]

Since \([0,1]\times[0,1]\) is closed and convex, we adopt the convex perturbation around \((0,0)\) to deduce the variational equation for (2.15). For any \(\gamma_{1},\gamma_{2}\in[0,1]\), set \(v^{\gamma_{i}}=\bar{v}_{i}+\gamma_{i}(1-\bar{v}_{i}),i=1,2\). Obviously, \((v^{\gamma_{1}},v^{\gamma_{2}})=(\gamma_{1},\gamma_{2})\in[0,1]\times[0,1]\). Let \((Y^{\gamma_{1},\gamma_{2}}(\cdot),Z_{1}^{\gamma_{1},\gamma_{2}}(\cdot),Z_{2} ^{\gamma_{1},\gamma_{2}}(\cdot))\) be the solution to (2.15) corresponding to the admissible control \((\gamma_{1},\gamma_{2})\). Combining (2.19), we have \[\begin{array}{rl}Y^{\gamma_{1},\gamma_{2}}(t)-\bar{Y}(t)=&\int_{t}^{T}\left(\gamma_{1}\left|Z_{1}^{\gamma_{1},\gamma_{2}}( s)\right|^{2}+\gamma_{2}\left|Z_{2}^{\gamma_{1},\gamma_{2}}(s)\right|^{2} \right)ds\\ &-\int_{t}^{T}\left(Z_{1}^{\gamma_{1},\gamma_{2}}(s)-\bar{Z}_{1}( s)\right)dW_{1}(s)-\int_{t}^{T}\left(Z_{2}^{\gamma_{1},\gamma_{2}}(s)-\bar{Z}_{2}(s) \right)dW_{2}(s).\end{array} \tag{2.20}\] We provide the estimate for (2.20) in the following lemma.
**Lemma 2.9**.: _Let Assumption 2.7 hold. Then_ \[\mathbb{E}\left[\sup_{t\in[0,T]}\left|Y^{\gamma_{1},\gamma_{2}}(t)-\bar{Y}(t) \right|^{4}+\left(\int_{0}^{T}\left|Z^{\gamma_{1},\gamma_{2}}(t)-\bar{Z}(t) \right|^{2}dt\right)^{2}\right]=O\left(\left(\gamma_{1}^{2}+\gamma_{2}^{2} \right)^{2}\right),\] _where \(Z^{\gamma_{1},\gamma_{2}}(\cdot)-\bar{Z}(\cdot)=\left(Z_{1}^{\gamma_{1}, \gamma_{2}}(\cdot)-\bar{Z}_{1}(\cdot),Z_{2}^{\gamma_{1},\gamma_{2}}(\cdot)- \bar{Z}_{2}(\cdot)\right)\)._ Proof.: Applying a standard BSDE estimate (please refer to [13, 36]) to (2.20), we have \[\mathbb{E}\left[\sup_{t\in[0,T]}\left|Y^{\gamma_{1},\gamma_{2}} (t)-\bar{Y}(t)\right|^{4}+\left(\int_{0}^{T}\left|Z^{\gamma_{1},\gamma_{2}}(t )-\bar{Z}(t)\right|^{2}dt\right)^{2}\right]\] \[\leq C\mathbb{E}\left[\left(\int_{0}^{T}\left(\gamma_{1}\left|Z_ {1}^{\gamma_{1},\gamma_{2}}(t)\right|^{2}+\gamma_{2}\left|Z_{2}^{\gamma_{1}, \gamma_{2}}(t)\right|^{2}\right)dt\right)^{4}\right]\] \[\leq C\mathbb{E}\left[\left(\int_{0}^{T}\left|Z^{\gamma_{1}, \gamma_{2}}(t)\right|^{2}dt\right)^{4}\right]\left(\gamma_{1}^{2}+\gamma_{2} ^{2}\right)^{2},\] where \(Z^{\gamma_{1},\gamma_{2}}(\cdot)=(Z_{1}^{\gamma_{1},\gamma_{2}}(\cdot),Z_{2}^{ \gamma_{1},\gamma_{2}}(\cdot))\), and \(C\) depends only on \(T\). By Lemma 2.8, we conclude that \[\sup_{\gamma_{1},\gamma_{2}\in[0,1]}\mathbb{E}\left[\left(\int_{0}^{T}\left|Z^ {\gamma_{1},\gamma_{2}}(t)\right|^{2}dt\right)^{4}\right]\leq C\mathbb{E} \left[e^{16|\xi|}\right],\] where \(C\) depends only on \(T\). Therefore, we finally obtain \[\mathbb{E}\left[\sup_{t\in[0,T]}\left|Y^{\gamma_{1},\gamma_{2}}(t)-\bar{Y}(t) \right|^{4}+\left(\int_{0}^{T}\left|Z^{\gamma_{1},\gamma_{2}}(t)-\bar{Z}(t) \right|^{2}dt\right)^{2}\right]\leq C\left(\gamma_{1}^{2}+\gamma_{2}^{2} \right)^{2},\] where \(C\) depends only on \(T\) and \(\xi\). For \(i=1,2\), let \((Y_{i}(\cdot),Z_{i1}(\cdot),Z_{i2}(\cdot))\) be respectively the solution to the following BSDEs: \[\left\{\begin{array}{rl}dY_{i}(t)=&-\left|\bar{Z}_{i}(t)\right|^{2}dt+Z_{i1} (t)dW_{1}(t)+Z_{i2}(t)dW_{2}(t),\\ Y_{i}(T)=&0.\end{array}\right. \tag{2.21}\] Under Assumption 2.7, the well-posedness of (2.21) can be guaranteed by the classical theory of the BSDEs (please refer to [13, 36]) and the estimate (2.17) holds. Now we can state the main result of this subsection. **Theorem 2.10**.: _Let Assumption 2.7 hold. Then_ \[\mathbb{E}\left[\sup_{t\in[0,T]}\left|Y^{\gamma_{1},\gamma_{2}}(t )-\bar{Y}(t)-\gamma_{1}Y_{1}(t)-\gamma_{2}Y_{2}(t)\right|^{2}\right.\] \[+\left.\int_{0}^{T}\left(\sum_{i=1}^{2}\left|Z_{i}^{\gamma_{1}, \gamma_{2}}(t)-\bar{Z}_{i}(t)-\gamma_{1}Z_{1i}(t)-\gamma_{2}Z_{2i}(t)\right|^{ 2}\right)dt\right]=o\left(\gamma_{1}^{2}+\gamma_{2}^{2}\right).\] Proof.: Denote \(\eta(\cdot)=Y^{\gamma_{1},\gamma_{2}}(\cdot)-\bar{Y}(\cdot)-\gamma_{1}Y_{1}( \cdot)-\gamma_{2}Y_{2}(\cdot)\) and \(\zeta_{i}(\cdot)=Z_{i}^{\gamma_{1},\gamma_{2}}(\cdot)-\bar{Z}_{i}(\cdot)- \gamma_{1}Z_{1i}(\cdot)-\gamma_{2}Z_{2i}(\cdot)\) for \(i=1,2\). 
From (2.20) and (2.21), we get \[\eta(t)= \int_{t}^{T}\left(\sum_{i=1}^{2}\gamma_{i}(Z_{i}^{\gamma_{1}, \gamma_{2}}(s)+\bar{Z}_{i}(s))(Z_{i}^{\gamma_{1},\gamma_{2}}(s)-\bar{Z}_{i}(s) )\right)ds \tag{2.22}\] \[-\int_{t}^{T}\zeta_{1}(s)dW_{1}(s)-\int_{t}^{T}\zeta_{2}(s)dW_{2} (s).\] Similar to the estimate (2.20), by using a standard BSDE estimate, we have \[\mathbb{E}\left[\sup_{t\in[0,T]}\left|\eta(t)\right|^{2}+\int_{0 }^{T}\left(\left|\zeta_{1}(t)\right|^{2}+\left|\zeta_{2}(t)\right|^{2}\right) dt\right]\] \[\leq C\mathbb{E}\left[\left(\int_{0}^{T}\sum_{i=1}^{2}\gamma_{i} \left|Z_{i}^{\gamma_{1},\gamma_{2}}(t)+\bar{Z}_{i}(t)\right|\left|Z_{i}^{ \gamma_{1},\gamma_{2}}(t)-\bar{Z}_{i}(t)\right|dt\right)^{2}\right]\] \[\leq C\sum_{i=1}^{2}\mathbb{E}\left[\left(\int_{0}^{T}\left|Z_{i }^{\gamma_{1},\gamma_{2}}(t)+\bar{Z}_{i}(t)\right|\left|Z_{i}^{\gamma_{1}, \gamma_{2}}(t)-\bar{Z}_{i}(t)\right|dt\right)^{2}\right]\left(\gamma_{1}^{2}+ \gamma_{2}^{2}\right).\] For \(i=1,2\), by Holder's inequality, the estimate (2.17) and Lemma 2.9, we have \[\mathbb{E}\left[\left(\int_{0}^{T}\left|Z_{i}^{\gamma_{1},\gamma_{2} }(t)+\bar{Z}_{i}(t)\right|\left|Z_{i}^{\gamma_{1},\gamma_{2}}(t)-\bar{Z}_{i}(t) \right|dt\right)^{2}\right]\] \[\leq\mathbb{E}\left[\left(\int_{0}^{T}\left|Z_{i}^{\gamma_{1}, \gamma_{2}}(t)+\bar{Z}_{i}(t)\right|^{2}dt\right)\left(\int_{0}^{T}\left|Z_{i} ^{\gamma_{1},\gamma_{2}}(t)-\bar{Z}_{i}(t)\right|^{2}dt\right)\right]\] \[\leq 2\sqrt{2}\left(\mathbb{E}\left[\left(\int_{0}^{T}\left|Z_{i} ^{\gamma_{1},\gamma_{2}}(t)\right|^{2}dt\right)^{2}\right]+\mathbb{E}\left[ \left(\int_{0}^{T}\left|\bar{Z}_{i}(t)\right|^{2}dt\right)^{2}\right]\right)^ {\frac{1}{2}}\left(\mathbb{E}\left[\left(\int_{0}^{T}\left|Z_{i}^{\gamma_{1}, \gamma_{2}}(t)-\bar{Z}_{i}(t)\right|^{2}dt\right)^{2}\right]\right)^{\frac{1}{ 2}}\] \[\leq C\left(\gamma_{1}^{2}+\gamma_{2}^{2}\right).\] Hence, we finally obtain \[\mathbb{E}\left[\sup_{t\in[0,T]}\left|\eta(t)\right|^{2}+\int_{0}^{T}\left( \left|\zeta_{1}(t)\right|^{2}+\left|\zeta_{2}(t)\right|^{2}\right)dt\right] \leq C\left(\gamma_{1}^{2}+\gamma_{2}^{2}\right)^{2},\] where \(C\) depends only on \(T\) and \(\xi\). The proof is complete. Thanks to Theorem 2.10, the following Taylor expansion for \(Y^{\gamma_{1},\gamma_{2}}(0)\) holds: \[Y^{\gamma_{1},\gamma_{2}}(0)=\bar{Y}(0)+\gamma_{1}Y_{1}(0)+\gamma_{2}Y_{2}(0) +o\left(\sqrt{\gamma_{1}^{2}+\gamma_{2}^{2}}\right), \tag{2.23}\] where \(\bar{Y}(\cdot)\), \(Y_{i}(\cdot),i=1,2\) are the unique solutions to (2.19), (2.21) respectively. **Remark 2.11**.: _A higher order Taylor expansion (e.g., second order) of \(Y^{\gamma_{1},\gamma_{2}}(0)\) can also be obtained if stronger integrability is imposed on the exponential of the terminal value \(\xi\). 
For example, suppose \(\mathbb{E}\left[e^{32|\xi|}\right]<+\infty\). Then, similar to the proof of (2.23), we have_ \[\begin{array}{rl}Y^{\gamma_{1},\gamma_{2}}(0)=&\bar{Y}(0)+\gamma_{1}Y_{1}(0)+\gamma_{2}Y_{2}(0)\\ &+\frac{1}{2}\left(\gamma_{1},\gamma_{2}\right)\left(\begin{array} []{cc}Y_{11}(0)&Y_{12}(0)\\ Y_{21}(0)&Y_{22}(0)\end{array}\right)\left(\begin{array}{c}\gamma_{1}\\ \gamma_{2}\end{array}\right)+o\left(\gamma_{1}^{2}+\gamma_{2}^{2}\right),\end{array} \tag{2.24}\] _where \(\bar{Y}(\cdot)\), \(Y_{i}(\cdot)\), \(i=1,2\) satisfy (2.19), (2.21) respectively, and \(Y_{ij}(\cdot)\), \(i,j=1,2\) satisfy the following BSDEs:_ \[\left\{\begin{array}{rl}dY_{ij}(t)=&-\left[\bar{Z}_{i}(t)Z_{ji}(t)+\bar{Z}_{ j}(t)Z_{ij}(t)\right]dt+Z_{ij1}(t)dW_{1}(t)+Z_{ij2}(t)dW_{2}(t),\\ Y_{ij}(T)=&0,\ \ i,j=1,2.\end{array}\right.\] _Here \(\bar{Z}(\cdot)=(\bar{Z}_{1}(\cdot),\bar{Z}_{2}(\cdot))\) satisfies (2.19). Furthermore, if \(\xi\) has exponential moments of all orders, then a Taylor expansion of \(Y^{\gamma_{1},\gamma_{2}}(0)\) of any order, like (2.23) and (2.24), can be obtained. We omit the proof for lack of space._

Let \(\mathcal{E}_{\gamma_{1},\gamma_{2}}\) be the quadratic \(\mathbb{F}\)-consistent nonlinear expectation satisfying (2.18) when \((v_{1},v_{2})=(\gamma_{1},\gamma_{2})\), with domain \(\mathrm{Dom}(\mathcal{E}_{\gamma_{1},\gamma_{2}})\). Thanks to Proposition 2.5 and Example 2.6, we benefit from (2.18) and (2.23) to obtain the mean-variance representation, the asymmetric risk-sensitive counterpart of (2.14).

**Theorem 2.12**.: _Suppose that Assumption 2.7 holds. Then for any \(\xi\in\mathrm{Dom}(\mathcal{E}_{\gamma_{1},\gamma_{2}})\) we have_ \[\mathcal{E}_{\gamma_{1},\gamma_{2}}[\xi]=\mathbb{E}[\xi]+\gamma_{1}\mathrm{D}_{ 1}[\xi]+\gamma_{2}\mathrm{D}_{2}[\xi]+o\left(\sqrt{\gamma_{1}^{2}+\gamma_{2}^{ 2}}\right), \tag{2.25}\] _where \(\{\mathrm{D}_{i},i=1,2\}\) is a variance decomposition on \(L^{2}(\mathcal{F}_{T};\mathbb{R})\) such that \(\mathrm{Var}[\xi]=\mathrm{D}_{1}[\xi]+\mathrm{D}_{2}[\xi]\)._

Proof.: For any \(\xi\in\mathrm{Dom}(\mathcal{E}_{\gamma_{1},\gamma_{2}})\), it follows from (2.18) that \(\mathcal{E}_{\gamma_{1},\gamma_{2}}[\xi]=Y^{\gamma_{1},\gamma_{2}}(0).\) On the one hand, according to (2.19) and (2.21), we have \[\bar{Y}(0)=\mathbb{E}[\xi],\ Y_{1}(0)=\mathbb{E}\left[\int_{0}^{T}\bar{Z}_{1} ^{2}(t)dt\right],\ Y_{2}(0)=\mathbb{E}\left[\int_{0}^{T}\bar{Z}_{2}^{2}(t)dt \right],\] and \[\xi=\mathbb{E}[\xi]+\int_{0}^{T}\bar{Z}_{1}(t)dW_{1}(t)+\int_{0}^{T}\bar{Z}_{ 2}(t)dW_{2}(t).\] On the other hand, from Example 2.6 and the uniqueness of the martingale representation of \(\xi\), we have \[\mathrm{D}_{1}[\xi]=\mathbb{E}\left[\int_{0}^{T}\bar{Z}_{1}^{2}(t)dt\right],\ \mathrm{D}_{2}[\xi]=\mathbb{E}\left[\int_{0}^{T}\bar{Z}_{2}^{2}(t)dt\right].\] Combining the above relationships with (2.23) yields (2.25).

We now interpret (2.25) from a financial perspective. As mentioned in [19], the left-hand side of (2.25), \(\mathcal{E}_{\gamma_{1},\gamma_{2}}[\xi]\), can be understood as a convex risk measure of the derivative \(\xi\) (say, a futures or option contract) written on the underlying asset \(X\), which is adapted to the filtration \(\mathbb{F}\) generated by \((W_{1},W_{2})\). The total risk measure of \(\xi\) is decomposed into three main parts, namely the right-hand side of (2.25).
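A rough numerical sanity check of (2.25) is possible with a least-squares Monte Carlo discretization of (2.15); the following sketch is our own and not part of the paper (the scheme, basis choice and all parameter values are ours). For the linear terminal value \(\xi=aW_{1}(T)+bW_{2}(T)\) recorded in the worked example at the start of this subsection, the exact value \(Y^{\gamma_{1},\gamma_{2}}(0)=(\gamma_{1}a^{2}+\gamma_{2}b^{2})T\) is available for comparison.

```python
import numpy as np

# Least-squares Monte Carlo for the quadratic BSDE (2.15), backward in time:
#   Z_i(t_k) ~ E[Y_{k+1} dW_i / h | F_{t_k}],
#   Y_k      = E[Y_{k+1} | F_{t_k}] + (g1 |Z_1|^2 + g2 |Z_2|^2) h.
rng = np.random.default_rng(2)
T, N, M = 1.0, 20, 100_000
h = T / N
a, b, g1, g2 = 1.0, 2.0, 0.3, 0.1        # xi = a W1(T) + b W2(T); (gamma_1, gamma_2)

dW = rng.normal(0.0, np.sqrt(h), size=(2, N, M))
W = np.concatenate([np.zeros((2, 1, M)), dW.cumsum(axis=1)], axis=1)

Y = a * W[0, -1] + b * W[1, -1]          # terminal condition Y_N = xi
for k in range(N - 1, -1, -1):
    X = np.stack([np.ones(M), W[0, k], W[1, k]], axis=1)  # affine regression basis
    Z1 = X @ np.linalg.lstsq(X, Y * dW[0, k] / h, rcond=None)[0]
    Z2 = X @ np.linalg.lstsq(X, Y * dW[1, k] / h, rcond=None)[0]
    Y = X @ np.linalg.lstsq(X, Y, rcond=None)[0] + (g1 * Z1**2 + g2 * Z2**2) * h

print(Y.mean(), (g1 * a**2 + g2 * b**2) * T)  # LSMC value vs exact value 0.7
```

Since \(\mathbb{E}[\xi]=0\), \(\mathrm{D}_{1}[\xi]=a^{2}T\) and \(\mathrm{D}_{2}[\xi]=b^{2}T\), the printed value also equals \(\mathbb{E}[\xi]+\gamma_{1}\mathrm{D}_{1}[\xi]+\gamma_{2}\mathrm{D}_{2}[\xi]\), in line with (2.25).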
For a decision maker, as \(\gamma_{1}\neq\gamma_{2}\) represents her asymmetric risk-sensitive attitudes toward the two different risk sources \(W_{1}\) and \(W_{2}\), in her criterion she needs to distinguish the risks stemming from \(W_{i},i=1,2\); accordingly, \(\mathrm{Var}[\xi]\) is decomposed into \(\mathrm{D}_{1}[\xi]\) and \(\mathrm{D}_{2}[\xi]\), and for \(i=1,2\) she weights \(\mathrm{D}_{i}[\xi]\) with \(\gamma_{i}\).

**Corollary 2.13**.: _If \(\gamma_{1}=\gamma_{2}=\frac{\theta}{2}>0\), then (2.25) becomes_ \[\mathcal{E}_{\theta}[\xi]=\mathbb{E}[\xi]+\frac{\theta}{2}\mathrm{Var}[\xi]+o \left(\theta\right),\ \forall\xi\in\mathrm{Dom}(\mathcal{E}_{\theta}).\] _This is in accordance with the mean-variance representation (2.14) in the symmetric risk-sensitive problem._

### Formulation of asymmetric risk-sensitive control problems

In the preceding subsection, we have seen that (1.6) and (1.8) are indeed more suitable for describing risk-sensitive control problems in the asymmetric case, where \(\Gamma\) is only assumed to be strictly positive definite. Now we can formulate the asymmetric risk-sensitive stochastic control problems as follows. Consider the control system \[\left\{\begin{array}{rl}dX(t)=&b\left(t,X(t),u(t)\right)dt+\sigma(t,X(t),u( t))dW(t),\\ dY(t)=&-[Z^{\intercal}(t)\Gamma Z(t)+f(t,X(t),u(t))]dt+Z^{\intercal}(t)dW(t), \\ X(0)=&x_{0},\ Y(T)=\Phi(X(T)),\end{array}\right. \tag{2.26}\] where \(x_{0}\in\mathbb{R}^{n}\), \(U\subset\mathbb{R}^{k}\) is a non-empty set, the \(U\)-valued process \(u(\cdot)\) is the control process that will be defined later, and the coefficients \[b:[0,T]\times\mathbb{R}^{n}\times\mathbb{R}^{k}\to\mathbb{R}^{n},\sigma:[0,T] \times\mathbb{R}^{n}\times\mathbb{R}^{k}\to\mathbb{R}^{n\times d},f:[0,T] \times\mathbb{R}^{n}\times\mathbb{R}^{k}\to\mathbb{R},\Phi:\mathbb{R}^{n}\to \mathbb{R}\] are measurable functions. \(\Gamma\in\mathbb{S}^{d\times d}\) is strictly positive definite. In (2.26), an \(\mathbb{F}\)-progressively measurable process \(u(\cdot)\) is called an admissible control if the SDE admits a unique solution \(X(\cdot)\) and the BSDE admits a minimal solution \((Y(\cdot),Z(\cdot))\) in specific spaces. Denote by \(\mathcal{U}[0,T]\) the set of all admissible controls. The cost functional is defined by \[J(u(\cdot)):=Y(0),\ u(\cdot)\in\mathcal{U}[0,T]. \tag{2.27}\] The objective is to find \(\bar{u}(\cdot)\in\mathcal{U}[0,T]\) (if it exists) such that \[J(\bar{u}(\cdot))=\inf_{u(\cdot)\in\ \mathcal{U}[0,T]}J(u(\cdot)). \tag{2.28}\]

Let us review some pioneering works on risk-sensitive control problems in detail. Consider the risk-sensitive control problem with the cost functional (1.3) and state equation (1.4). The problem of minimizing (1.3) subject to (1.4) was studied by Lim and Zhou [23] in two cases. On the one hand, under the assumption that \((\Phi,f)\) is uniformly bounded and the corresponding value function defined from (1.3) is sufficiently smooth, they obtained a new risk-sensitive MP for (1.3)-(1.4). On the other hand, when \((\Phi,f)\) is no longer bounded, the authors studied the risk-sensitive linear-quadratic problems by taking \[\begin{array}{rl}b(t,x,u)=&A(t)x+B(t)u,\\ \sigma(t,x,u)=&\Sigma(t),\\ \Phi(x)=&\frac{1}{2}x^{\intercal}Hx,\\ f(t,x,u)=&\frac{1}{2}x^{\intercal}M(t)x+\frac{1}{2}u^{\intercal}N(t)u,\end{array} \tag{2.29}\] where \(A\), \(B\), \(\Sigma\), \(H\), \(M\), \(N\) are matrices or matrix-valued, deterministic functions on \([0,T]\) of suitable sizes.
The authors obtained the optimal control in feedback form by verifying sufficient conditions for optimality. After that, using neither the stochastic MP nor the DPP, a completion-of-squares approach was adopted by Duncan in [12] to solve this kind of problem, so that one can dispense with the assumption that the value function is sufficiently smooth in what [12] calls the linear-exponential-quadratic Gaussian case.

As an application of risk-sensitive control in mathematical finance, continuous-time portfolio optimization problems with identical risk-sensitive attitudes towards different risk sources are well studied. Let \(d=m+n\) with two positive integers \(m,n\), let \(a\in\mathbb{R}^{m}\), \(b\in\mathbb{R}^{n}\), let \(A\), \(B\), \(\Lambda\), \(\Sigma\) be respectively \(m\times n\), \(n\times n\), \(n\times d\), \(m\times d\) constant matrices, and let \(r(t)\) be a nonnegative, deterministic function of \(t\). If \(\Gamma=\frac{\theta}{4}\mathrm{I}_{d\times d}\) for some \(\theta>0\), then (2.26)-(2.28) is closely related to the portfolio optimization problem studied in [21], where the objective is to maximize the risk-sensitized expected growth rate up to the time horizon \(T\): \[I(u(\cdot)):=-\frac{2}{\theta}\log\mathbb{E}\left[\exp\left\{-\frac{\theta}{2} \log V(T)\right\}\right], \tag{2.30}\] where \(V(\cdot)\) represents the investor's wealth process, described by \[\frac{dV(t)}{V(t)}=r(t)dt+u^{\intercal}(t)\left(a+A\tilde{X}(t)-r(t)\mathbf{1} \right)dt+u^{\intercal}(t)\Sigma dW(t) \tag{2.31}\] with \(\mathbf{1}:=\overbrace{(1,\ldots,1)}^{m}\). Here the factor process \(\tilde{X}\) satisfies the SDE \[\left\{\begin{array}{rl}d\tilde{X}(t)=&(b+B\tilde{X}(t))dt+\Lambda dW(t),\\ \tilde{X}(0)=&x_{0},\end{array}\right. \tag{2.32}\] which is interpreted as an exogenous macroeconomic, microeconomic or statistical process driving asset returns. Then for any \(u(\cdot)\in\mathcal{U}[0,T]\) we have \(I(u(\cdot))=-J(u(\cdot))\) if we put \[\begin{array}{rl}b(t,x,u)=&\left(\begin{array}{cc}0&\mathbf{0}_{1\times n }\\ \mathbf{0}_{n\times 1}&B\end{array}\right)\left(\begin{array}{c}x_{1}\\ x_{2}\end{array}\right)+\left(\begin{array}{c}0\\ b\end{array}\right),\\ \sigma(t,x,u)=&\left(\begin{array}{c}-u^{\intercal}\Sigma\\ \Lambda\end{array}\right),\ \ \Phi(x)=x_{1},\\ f(t,x,u)=&\frac{1}{2}u^{\intercal}\Sigma\Sigma^{\intercal}u-u^{\intercal}(a+ Ax_{2}-r(t)\mathbf{1})-r(t)\end{array} \tag{2.33}\] in (2.26) with \(x=(x_{1},x_{2})\in\mathbb{R}\times\mathbb{R}^{n}\), where \(\mathbf{0}_{n\times 1}^{\intercal}=\mathbf{0}_{1\times n}=\overbrace{(0, \ldots,0)}^{n}\). Therefore maximizing \(I(\cdot)\) over \(\mathcal{U}[0,T]\) is equivalent to minimizing \(J(\cdot)\) over \(\mathcal{U}[0,T]\).

As we pointed out in the introduction, the risk-sensitive control problems studied in the above literature are equivalent to the stochastic recursive optimal control problem (1.6) and (1.8) with \(\Gamma\) taken to be a scalar matrix. For any given strictly positive definite \(\Gamma\), the model can characterize asymmetric risk aversion for a decision maker, but the exponential transformation no longer holds. We emphasize that \(\mathcal{U}[0,T]\) will be specified according to the different cases studied in the following context.

## 3 Asymmetric linear-quadratic risk-sensitive control problems

In this section, we consider a kind of unbounded \((\Phi,f)\) in (2.26).
Similar to [23], we are interested in the asymmetric linear-quadratic risk-sensitive control problems where \((\Phi,f)\) possesses the forms in (2.29). Consider the following stochastic control system: \[\left\{\begin{array}{rl}dX(t)=&[A(t)X(t)+B(t)u(t)]dt+\Sigma(t)dW(t),\\ dY(t)=&-[Z^{\intercal}(t)\Gamma Z(t)+\frac{1}{2}X^{\intercal}(t)M(t)X(t)+ \frac{1}{2}u^{\intercal}(t)N(t)u(t)]dt\\ &+Z^{\intercal}(t)dW(t),\ \ t\in[0,T],\\ X(0)=&x_{0},\ Y(T)=\frac{1}{2}X^{\intercal}(T)HX(T),\end{array}\right. \tag{3.1}\] where \(A(\cdot)\in L^{\infty}([0,T];\mathbb{R}^{n\times n})\), \(B(\cdot)\in L^{\infty}([0,T];\mathbb{R}^{n\times k})\), \(\Sigma(\cdot)\in L^{2}([0,T];\mathbb{R}^{n\times d})\), \(M(\cdot)\in L^{\infty}([0,T];\mathbb{S}^{n\times n})\), \(N(\cdot)\in L^{\infty}([0,T];\mathbb{S}^{k\times k})\) are deterministic matrix-valued functions; \(H\in\mathbb{S}^{n\times n}\) and \(H\geq 0\); \(M(t)\geq 0\), \(N(t)\geq\delta\mathrm{I}_{k\times k}\) for some \(\delta>0\) and all \(t\in[0,T]\); \((\Sigma\Sigma^{\intercal})(t)>0\), \(t\in[0,T]\). Set \(\Delta=\int_{0}^{T}(\Sigma\Sigma^{\intercal})(s)ds\) and denote by the two positive numbers \(\gamma_{\max}\), \(\gamma_{\min}\) the maximal and minimal eigenvalues of \(\Gamma\), respectively. For any \(u(\cdot)\in L^{2}_{\mathbb{F}}([0,T];\mathbb{R}^{k})\), under the above conditions the SDE in (3.1) admits a unique solution \(X(\cdot)\in L^{2}_{\mathbb{F}}(\Omega;C([0,T],\mathbb{R}^{n}))\) according to the standard theory.

To describe our goal in the asymmetric linear-quadratic setting, we introduce the set of all admissible controls.

**Definition 3.1**.: _The admissible control set \(\mathcal{U}_{LQ}[0,T]\) is given by_ \[\mathcal{U}_{LQ}[0,T]:= \left\{u(\cdot)\in L^{2}_{\mathbb{F}}([0,T];\mathbb{R}^{k}):\text {the BSDE in (3.1) admits a minimal solution}\right.\] \[\left.\left(Y(\cdot),Z(\cdot)\right)\in L^{2}_{\mathbb{F}}(\Omega ;C([0,T],\mathbb{R}))\times L^{2}_{\mathbb{F}}([0,T];\mathbb{R}^{d})\right\}.\]

The cost functional is defined by \[J(u(\cdot)):=Y(0),\ u(\cdot)\in\mathcal{U}_{LQ}[0,T]. \tag{3.2}\] The objective is to find \(\bar{u}(\cdot)\in\mathcal{U}_{LQ}[0,T]\) (if it exists) such that \[J(\bar{u}(\cdot))=\inf_{u(\cdot)\in\,\mathcal{U}_{LQ}[0,T]}J(u(\cdot)).\]

To determine the optimal feedback control, as in the classical linear-quadratic case, we first introduce the Riccati differential equation \[\left\{\begin{array}{rl}dP(t)=&-\left[A^{\intercal}(t)P(t)+P(t)A(t)+M(t) \right.\\ &+\left.P(t)\left(2\Sigma(t)\Gamma\Sigma^{\intercal}(t)-B(t)N^{-1}(t)B^{ \intercal}(t)\right)P(t)\right]dt,\\ P(T)=&H.\end{array}\right. \tag{3.3}\]

**Lemma 3.2**.: _Assume \(\Sigma(t)\), \(N(t)\) are both continuous in \(t\) and_ \[2\Sigma(t)\Gamma\Sigma^{\intercal}(t)-B(t)N(t)^{-1}B^{\intercal}(t)<0,\ \forall t\in[0,T]. \tag{3.4}\] _Then (3.3) admits a unique solution \(P(\cdot)\in C([0,T],\mathbb{S}^{n\times n})\) such that \(P(t)\geq 0\) for all \(t\in[0,T]\) and \(\left\|P\right\|_{\infty}\leq B_{P}\), where \(B_{P}:=e^{2\left\|A\right\|_{\infty}T}\left(\left\|H\right\|_{\infty}+\left\| M\right\|_{\infty}T\right)\)._

Proof.: According to the continuity of \(\Sigma(\cdot)\) and \(N(\cdot)\), if (3.4) holds then (3.3) admits a unique solution by the classical Riccati theory (we refer the reader to [35] for more details).
To prove the last claim, let \(\tilde{P}(\cdot)\in C([0,T],\mathbb{S}^{n\times n})\) be the unique solution to the linear ordinary differential equation \[\left\{\begin{array}{rl}d\tilde{P}(t)=&-\left[A^{\intercal}(t)\tilde{P}(t) +\tilde{P}(t)A(t)+M(t)\right]dt,\\ \tilde{P}(T)=&H.\end{array}\right. \tag{3.5}\] Thanks to Gronwall's lemma, one can easily deduce that \(\left\|\tilde{P}\right\|_{\infty}\leq B_{P}\). On the other hand, since \(2\Sigma(t)\Gamma\Sigma^{\intercal}(t)-B(t)N(t)^{-1}B^{\intercal}(t)<0\) for all \(t\in[0,T]\) and \(H\geq 0\), by Theorem 2.2 in [14] we have \(0\leq P(t)\leq\tilde{P}(t),\forall t\in[0,T]\). Then, from the definition of the Frobenius norm for real matrices, we obtain \(\left\|P\right\|_{\infty}\leq\left\|\tilde{P}\right\|_{\infty}\leq B_{P}\).

The following main result indicates that if the Riccati differential equation (3.3) admits a unique solution, then \(\mathcal{U}_{LQ}[0,T]\) is not empty and there exists an admissible control of feedback type optimizing the problem (3.1)-(3.2).

**Theorem 3.3**.: _If \(P(\cdot)\in C([0,T],\mathbb{S}^{n\times n})\) uniquely solves (3.3), then the feedback control_ \[\bar{u}(t)=-N^{-1}(t)B^{\intercal}(t)P(t)\bar{X}(t),\ \ t\in[0,T] \tag{3.6}\] _belongs to \(\mathcal{U}_{LQ}[0,T]\) and is optimal for the problem (3.1)-(3.2). The optimal value of the objective function is_ \[J(\bar{u}(\cdot))=\frac{1}{2}x_{0}^{\intercal}P(0)x_{0}+\frac{1}{2}\int_{0}^{T }\operatorname{tr}\left\{P(t)\left(\Sigma\Sigma^{\intercal}\right)(t)\right\}dt.\]

Proof.: Plugging (3.6) into the SDE in (3.1), it admits a unique solution \(\bar{X}(\cdot)\in\bigcap_{p>1}L_{\mathbb{F}}^{p}(\Omega;C([0,T],\mathbb{R}^{ n}))\) as it is Gaussian, which implies \(\bar{u}(\cdot)\in L_{\mathbb{F}}^{2}([0,T];\mathbb{R}^{k})\). To prove that \(\bar{u}(\cdot)\in\mathcal{U}_{LQ}[0,T]\), we have to find the minimal solution in \(L_{\mathbb{F}}^{2}(\Omega;C([0,T],\mathbb{R}))\times L_{\mathbb{F}}^{2}([0,T] ;\mathbb{R}^{d})\) corresponding to \(\bar{u}(\cdot)\). Applying Ito's formula to \(\bar{X}^{\intercal}(t)P(t)\bar{X}(t)\) yields \[\tfrac{1}{2}\bar{X}^{\intercal}(t)P(t)\bar{X}(t)+\tfrac{1}{2} \int_{t}^{T}\operatorname{tr}\left\{P(s)\left(\Sigma\Sigma^{\intercal}\right)( s)\right\}ds\] \[= \tfrac{1}{2}\bar{X}^{\intercal}(T)H\bar{X}(T)+\int_{t}^{T}\left[ \bar{X}^{\intercal}(s)P(s)\Sigma(s)\Gamma\Sigma^{\intercal}(s)P(s)\bar{X}(s)\right.\] \[+\left.\tfrac{1}{2}\bar{X}^{\intercal}(s)M(s)\bar{X}(s)+\tfrac{1} {2}\bar{u}^{\intercal}(s)N(s)\bar{u}(s)\right]ds-\int_{t}^{T}\bar{X}^{\intercal }(s)P(s)\Sigma(s)dW(s),\] which implies that \[\bar{Y}(t):= \tfrac{1}{2}\bar{X}^{\intercal}(t)P(t)\bar{X}(t)+\tfrac{1}{2} \int_{t}^{T}\operatorname{tr}\left\{P(s)\left(\Sigma\Sigma^{\intercal}\right) (s)\right\}ds, \tag{3.8}\] \[\bar{Z}(t):= \Sigma^{\intercal}(t)P(t)\bar{X}(t)\] solves the BSDE in (3.1) when \(u(\cdot)=\bar{u}(\cdot)\), and it belongs to \(L_{\mathbb{F}}^{2}(\Omega;C([0,T],\mathbb{R}))\times L_{\mathbb{F}}^{2}([0,T] ;\mathbb{R}^{d})\) as \(\bar{X}(\cdot)\in L_{\mathbb{F}}^{4}(\Omega;C([0,T],\mathbb{R}^{n}))\). We claim that \((\bar{Y}(\cdot),\bar{Z}(\cdot))\) is minimal.
Actually, if \((\bar{Y}^{\prime}(\cdot),\bar{Z}^{\prime}(\cdot))\in L_{\mathbb{F}}^{2}( \Omega;C([0,T],\mathbb{R}))\times L_{\mathbb{F}}^{2}([0,T];\mathbb{R}^{d})\) is another solution corresponding to \(\bar{u}(\cdot)\), then we have, for any \(t\in[0,T]\), \[\bar{Y}^{\prime}(t)-\bar{Y}(t)\geq\int_{t}^{T}2\left[\bar{Z}^{\prime}(s)-\bar {Z}(s)\right]^{\intercal}\Gamma\bar{Z}(s)ds-\int_{t}^{T}\left[\bar{Z}^{\prime }(s)-\bar{Z}(s)\right]^{\intercal}dW(s).\] Consider a new probability measure \(\bar{\mathbb{P}}\) defined by the stochastic exponential \[d\bar{\mathbb{P}}=\exp\left\{2\int_{0}^{T}\bar{Z}^{\intercal}(s)\Gamma dW(s)- 2\int_{0}^{T}\left|\Gamma\bar{Z}(s)\right|^{2}ds\right\}d\mathbb{P}. \tag{3.9}\] Because \(\bar{X}(\cdot)\) is a Gaussian process, the above stochastic exponential is a Radon-Nikodym derivative that integrates to one, according to (3.8) and the argument in [17] that the Girsanov exponential with Gaussian integrand is an exponential martingale. According to Girsanov's theorem, we deduce \[\bar{Y}^{\prime}(t)-\bar{Y}(t)\geq-\int_{t}^{T}\left[\bar{Z}^{\prime}(s)-\bar {Z}(s)\right]^{\intercal}d\bar{W}(s), \tag{3.10}\] where \[\bar{W}(t)=W(t)-2\int_{0}^{t}\Gamma\bar{Z}(s)ds,\ \ t\in[0,T]\] is a \(d\)-dimensional Brownian motion under \(\bar{\mathbb{P}}\). Denote by \(\mathbb{E}_{\bar{\mathbb{P}}}\left[\cdot\right]\) the mathematical expectation corresponding to \(\bar{\mathbb{P}}\). It will be proved later that the right-hand side of inequality (3.10) is a true martingale. Taking the conditional expectation \(\mathbb{E}_{\bar{\mathbb{P}}}\left[\cdot\mid\mathcal{F}_{t}\right]\) on both sides of (3.10) yields \(\bar{Y}^{\prime}(t)\geq\bar{Y}(t)\), \(\bar{\mathbb{P}}\)-a.s. (of course, \(\mathbb{P}\)-a.s.) Therefore \(\bar{u}(\cdot)\in\mathcal{U}_{LQ}[0,T]\) and \[J(\bar{u}(\cdot))=\bar{Y}(0). \tag{3.11}\]

Now we prove the optimality of \(\bar{u}(\cdot)\). For any \(u(\cdot)\in\mathcal{U}_{LQ}[0,T]\), if \((Y(\cdot),Z(\cdot))\in L^{2}_{\mathbb{F}}(\Omega;C([0,T],\mathbb{R}))\times L^ {2}_{\mathbb{F}}([0,T];\mathbb{R}^{d})\) is the minimal solution to the BSDE in (3.1) corresponding to \(u(\cdot)\), then the convexity of the quadratic function leads to \[Y(0)-\bar{Y}(0)\] \[\geq \bar{X}^{\intercal}(T)H\left[X(T)-\bar{X}(T)\right]+\int_{0}^{T} \left\{2\left[Z(s)-\bar{Z}(s)\right]^{\intercal}\Gamma\bar{Z}(s)\right.\] \[+\left.\bar{X}^{\intercal}(s)M(s)\left[X(s)-\bar{X}(s)\right]+ \bar{u}^{\intercal}(s)N(s)\left[u(s)-\bar{u}(s)\right]\right\}ds\] \[-\int_{0}^{T}\left[Z(s)-\bar{Z}(s)\right]^{\intercal}dW(s)\] and then applying Ito's formula to \(\bar{X}^{\intercal}(t)P(t)(X(t)-\bar{X}(t))\) over \([0,T]\) yields \[Y(0)-\bar{Y}(0)\] \[\geq \int_{0}^{T}\left\{2\left[Z(s)-\bar{Z}(s)-\Sigma^{\intercal}(s)P( s)\left(X(s)-\bar{X}(s)\right)\right]^{\intercal}\Gamma\bar{Z}(s)\right.\] \[+\left.\left[\bar{X}^{\intercal}(s)P(s)B(s)+\bar{u}^{\intercal}( s)N(s)\right]\left[u(s)-\bar{u}(s)\right]\right\}ds\] \[-\int_{0}^{T}\left[Z(s)-\bar{Z}(s)-\Sigma^{\intercal}(s)P(s) \left(X(s)-\bar{X}(s)\right)\right]^{\intercal}dW(s). \tag{3.12}\] Noting \[\bar{X}^{\intercal}(s)P(s)B(s)+\bar{u}^{\intercal}(s)N(s)=0,\] by Girsanov's theorem, (3.12) implies that \[Y(0)-\bar{Y}(0)\geq-\int_{0}^{T}\left[Z(s)-\bar{Z}(s)-\Sigma^{\intercal}(s)P( s)\left(X(s)-\bar{X}(s)\right)\right]^{\intercal}d\bar{W}(s). \tag{3.13}\] It will be proved later that the right-hand side of inequality (3.13) is a true martingale.
Taking \(\mathbb{E}_{\bar{\mathbb{P}}}\left[\cdot\right]\) in both sides of (3.13), we have \(Y(0)-\bar{Y}(0)\geq 0\), which means that \[J(u(\cdot))\geq J(\bar{u}(\cdot)) \tag{3.14}\] due to (3.2) and (3.11). The optimality of \(\bar{u}(\cdot)\) follows from (3.14) and the arbitrariness of \(u(\cdot)\) chosen from \(\mathcal{U}_{LQ}[0,T]\). Combining (3.8) and (3.11) results in the optimal value \[J(\bar{u}(\cdot))=\bar{Y}(0)=\frac{1}{2}x_{0}^{\intercal}P(0)x_{0}+\frac{1}{2}\int_{0}^{T}\operatorname{tr}\left\{P(s)\left(\Sigma\Sigma^{\intercal}\right)(s)\right\}ds. \tag{3.15}\]

It remains to prove that the right-hand side of (3.10) (resp. (3.13)) is a true martingale under \(\bar{\mathbb{P}}\) so we can take \(\mathbb{E}_{\bar{\mathbb{P}}}\left[\cdot\mid\mathcal{F}_{t}\right]\) (resp. \(\mathbb{E}_{\bar{\mathbb{P}}}\left[\cdot\right]\)) to eliminate the stochastic integral. Since we have \[\mathbb{E}_{\bar{\mathbb{P}}}\left[\left(\int_{0}^{T}\left|\bar{Z}(s)\right|^{2}ds\right)^{\frac{p}{2}}\right]<+\infty,\;\forall p>1\] due to the fact that \(\bar{X}(\cdot)\) is also Gaussian under \(\bar{\mathbb{P}}\) and \(\bar{Z}(s)=\Sigma^{\intercal}(s)P(s)\bar{X}(s)\), it comes down to proving that if \((Y(\cdot),Z(\cdot))\in L^{2}_{\mathbb{F}}(\Omega;C([0,T],\mathbb{R}))\times L^{2}_{\mathbb{F}}([0,T];\mathbb{R}^{d})\) is a solution of the BSDE in (3.1) corresponding to a given \(u(\cdot)\in L^{2}_{\mathbb{F}}([0,T];\mathbb{R}^{k})\), then \(\{\int_{0}^{t}Z(s)dW(s),t\in[0,T]\}\) is a true martingale under \(\bar{\mathbb{P}}\). To this end, for \(m\geq 1\), let \(\tau_{m}\) be the following stopping time \[\tau_{m}=\inf\left\{t\geq 0:\int_{0}^{t}|Z(s)|^{2}ds\geq m\right\}\wedge T.\] As \(Y(t)\geq 0\), \(t\in[0,T]\), \(\mathbb{P}\)-a.s. (of course \(\bar{\mathbb{P}}\)-a.s.), then for each \(m\) we have \[\begin{split}Y(0)\geq&\ Y(t\wedge\tau_{m})+\int_{0}^{t\wedge\tau_{m}}Z^{\intercal}(s)\Gamma Z(s)ds-\int_{0}^{t\wedge\tau_{m}}Z^{\intercal}(s)dW(s)\\ \geq&\int_{0}^{t\wedge\tau_{m}}\left[Z^{\intercal}(s)\Gamma Z(s)-2Z^{\intercal}(s)\Gamma\bar{Z}(s)\right]ds-\int_{0}^{t\wedge\tau_{m}}Z^{\intercal}(s)d\bar{W}(s)\\ \geq&\int_{0}^{t\wedge\tau_{m}}\left[\tfrac{\gamma_{\min}}{2}\left|Z(s)\right|^{2}-\tfrac{2\gamma_{\max}^{2}}{\gamma_{\min}}\left|\bar{Z}(s)\right|^{2}\right]ds-\int_{0}^{t\wedge\tau_{m}}Z^{\intercal}(s)d\bar{W}(s),\end{split}\] which implies \[\int_{0}^{\tau_{m}}\left|Z(s)\right|^{2}ds\leq\frac{2}{\gamma_{\min}}Y(0)+4\left(\frac{\gamma_{\max}}{\gamma_{\min}}\right)^{2}\int_{0}^{T}\left|\bar{Z}(s)\right|^{2}ds+\frac{2}{\gamma_{\min}}\sup_{t\in[0,T]}\left|\int_{0}^{t\wedge\tau_{m}}Z^{\intercal}(s)d\bar{W}(s)\right|.\] Taking \(\mathbb{E}_{\bar{\mathbb{P}}}\left[\cdot\right]\) in both sides of the last inequality and applying the Burkholder-Davis-Gundy inequality yield \[\mathbb{E}_{\bar{\mathbb{P}}}\left[\int_{0}^{\tau_{m}}\left|Z(s)\right|^{2}ds\right]\leq\frac{2}{\gamma_{\min}}Y(0)+4\left(\frac{\gamma_{\max}}{\gamma_{\min}}\right)^{2}\mathbb{E}_{\bar{\mathbb{P}}}\left[\int_{0}^{T}\left|\bar{Z}(s)\right|^{2}ds\right]+\frac{6}{\gamma_{\min}}\mathbb{E}_{\bar{\mathbb{P}}}\left[\left(\int_{0}^{\tau_{m}}\left|Z(s)\right|^{2}ds\right)^{\frac{1}{2}}\right].\] Consequently, putting \(a=\left(\int_{0}^{\tau_{m}}\left|Z(s)\right|^{2}ds\right)^{\frac{1}{2}}\) and from the fundamental inequality \(a\leq\frac{1}{2}(\frac{\gamma_{\min}}{6}a^{2}+\frac{6}{\gamma_{\min}})\), we obtain \[\mathbb{E}_{\bar{\mathbb{P}}}\left[\int_{0}^{\tau_{m}}\left|Z(s)\right|^{2}ds\right]\leq\frac{4}{\gamma_{\min}}Y(0)+8\left(\frac{\gamma_{\max}}{\gamma_{\min}}\right)^{2}\mathbb{E}_{\bar{\mathbb{P}}}\left[\int_{0}^{T}\left|\bar{Z}(s)\right|^{2}ds\right]+\frac{36}{\gamma_{\min}^{2}}<+\infty,\] which yields \(\mathbb{E}_{\bar{\mathbb{P}}}\left[\int_{0}^{T}\left|Z(s)\right|^{2}ds\right]<+\infty\) immediately from Fatou's lemma. Thus the stochastic integrals on the right-hand sides of (3.10) and (3.13) are both true martingales under \(\bar{\mathbb{P}}\). The proof is complete.

To conclude this section, we emphasize that the admissible control set \(\mathcal{U}_{LQ}[0,T]\) and the cost functional (3.2) provide a natural perspective for tackling the linear-quadratic risk-sensitive control problem with identical risk-sensitive attitudes towards different risk sources. To illustrate this, putting \(\Gamma=\frac{\theta}{2}\mathrm{I}_{d\times d}\) for some \(\theta>0\), we find that a process \(u(\cdot)\in L^{2}_{\mathbb{F}}([0,T];\mathbb{R}^{k})\) belongs to \(\mathcal{U}_{LQ}[0,T]\) if and only if \[\mathbb{E}\left[\exp\left\{\theta\left(\frac{1}{2}X^{\intercal}(T)HX(T)+\frac{1}{2}\int_{0}^{T}g(t)dt\right)\right\}\right]<+\infty, \tag{3.16}\] where \(g(t)=X^{\intercal}(t)M(t)X(t)+u^{\intercal}(t)N(t)u(t),t\in[0,T]\). Actually, on the one hand, benefiting from the proof of Theorem 3.1 in [8] and the \(L^{1}\)-martingale representation theorem (see Theorem 2.46 in [30]), (3.16) is sufficient to construct a solution \((Y(\cdot),Z(\cdot))\in L^{2}_{\mathbb{F}}(\Omega;C([0,T],\mathbb{R}))\times L^{2}_{\mathbb{F}}([0,T];\mathbb{R}^{d})\) of the BSDE in (3.1) such that \[e^{\theta Y(0)}=\mathbb{E}\left[\exp\left\{\theta\left(\frac{1}{2}X^{\intercal}(T)HX(T)+\frac{1}{2}\int_{0}^{T}g(t)dt\right)\right\}\right]. \tag{3.17}\] On the other hand, according to Theorem 3.1 in [8], (3.16) is also necessary to guarantee that the BSDE in (3.1) admits at least one solution in \(L^{2}_{\mathbb{F}}(\Omega;C([0,T],\mathbb{R}))\times L^{2}_{\mathbb{F}}([0,T];\mathbb{R}^{d})\) since \[\mathbb{E}\left[\exp\left\{\theta\left(\frac{1}{2}X^{\intercal}(T)HX(T)+\frac{1}{2}\int_{0}^{T}g(t)dt\right)\right\}\right]\leq e^{\theta Y^{\prime}(0)}<+\infty \tag{3.18}\] for any solution \((Y^{\prime}(\cdot),Z^{\prime}(\cdot))\in L^{2}_{\mathbb{F}}(\Omega;C([0,T],\mathbb{R}))\times L^{2}_{\mathbb{F}}([0,T];\mathbb{R}^{d})\). It follows from (3.17) and (3.18) that (3.2) can be expressed by \[J(u(\cdot))=Y(0)=\frac{1}{\theta}\log\mathbb{E}\left[\exp\left\{\theta\left(\frac{1}{2}X^{\intercal}(T)HX(T)+\frac{1}{2}\int_{0}^{T}g(t)dt\right)\right\}\right],\] which is nothing but the cost functional of the linear-quadratic risk-sensitive control problem with identical risk-sensitive attitudes towards different risk sources studied by Lim and Zhou [23] and Duncan [12]. Resorting to Theorem 3.3, we can obtain the same feedback control. Thus (3.16) completely characterizes \(\mathcal{U}_{LQ}[0,T]\).

## 4 Asymmetric risk-sensitive control under bounded conditions

From our problem formulation, if \((\Phi,f)\) is uniformly bounded in (2.26), then it can be addressed by applying the results in our earlier work [18]. We adopt the spike variation approach to obtain a global stochastic MP for the optimality of (2.26)-(2.28). For \(\psi=b\), \(\sigma\), \(f\), \(\Phi\), denote \(\psi(t)=\psi(t,\bar{X}(t),\bar{u}(t)),\psi_{x}(t)=\psi_{x}(t,\bar{X}(t),\bar{u}(t))\), \(t\in[0,T]\).

**Assumption 4.1**.: _(i) \(b\), \(\sigma\) are twice continuously differentiable with respect to \(x\). The derivatives \(b_{x}\), \(b_{xx}\), \(\sigma_{x}\), \(\sigma_{xx}\) are continuous in \((x,u)\) and uniformly bounded.
\(b,\sigma\) are bounded by \(C(1+|x|+|u|)\);_

_(ii) \(f\), \(\Phi\) are twice continuously differentiable with respect to \(x\). The derivatives \(f_{x},f_{xx}\) are continuous in \((x,u)\); \(f\), \(\Phi\), \(f_{x}\), \(\Phi_{x}\), \(f_{xx}\), \(\Phi_{xx}\) are bounded._

We further take the set of admissible controls to be \[\mathcal{U}_{BD}[0,T]=\{u:[0,T]\times\Omega\to U|\sup_{0\leq t\leq T}\mathbb{E}[|u(t)|^{p}]<+\infty,\ \forall p>0\}.\] Under Assumption 4.1, it follows from Theorem 2.3 in [18] that for any \(u(\cdot)\in\mathcal{U}_{BD}[0,T]\), (2.26) admits a unique solution \((X(\cdot),Y(\cdot),Z(\cdot))\in L^{2}_{\mathbb{F}}(\Omega;C([0,T],\mathbb{R}^{n}))\times L^{\infty}_{\mathbb{F}}(\Omega;C([0,T],\mathbb{R}))\times L^{2}_{\mathbb{F}}([0,T];\mathbb{R}^{d})\) such that the stochastic integral \(\{\int_{0}^{t}Z(s)dW(s),t\in[0,T]\}\) is a bounded mean oscillation martingale. Therefore (2.28) is well defined.

**Theorem 4.2**.: _Let Assumption 4.1 hold, \(\bar{u}(\cdot)\) be an optimal control, and \((\bar{X}(\cdot),\bar{Y}(\cdot),\bar{Z}(\cdot))\) be the corresponding optimal state trajectory. Then the stochastic maximum principle for the optimal control problem (2.26)-(2.28) is_ \[\begin{split}&\mathcal{H}(t,\bar{X}(t),\bar{Y}(t),\bar{Z}(t),u,p(t),q(t),P(t))\\ &\geq\mathcal{H}(t,\bar{X}(t),\bar{Y}(t),\bar{Z}(t),\bar{u}(t),p(t),q(t),P(t)),\ \forall u\in U\text{, }dt\otimes d\mathbb{P}\text{-a.e.},\end{split} \tag{4.1}\] _where \(\mathcal{H}\) is defined by_ \[\begin{split}&\mathcal{H}(t,x,y,z,u,p,q,P)\\ =&\ p^{\intercal}b(t,x,u)+\mathrm{tr}\left\{q^{\intercal}\sigma(t,x,u)\right\}+f(t,x,u)\\ &+\tfrac{1}{2}\mathrm{tr}\left\{\left(\sigma(t,x,u)-\sigma(t,\bar{X}(t),\bar{u}(t))\right)^{\intercal}P\left(\sigma(t,x,u)-\sigma(t,\bar{X}(t),\bar{u}(t))\right)\right\}\\ &+2p^{\intercal}\sigma(t,x,u)\Gamma z+p^{\intercal}\left(\sigma(t,x,u)-\sigma(t,\bar{X}(t),\bar{u}(t))\right)\Gamma\left(\sigma(t,x,u)-\sigma(t,\bar{X}(t),\bar{u}(t))\right)^{\intercal}p,\end{split}\] _and \((p(\cdot),q(\cdot))\), \((P(\cdot),Q(\cdot))\) satisfy the first-order adjoint equation_ \[\left\{\begin{array}{ll}dp(t)=&-\left\{f_{x}(t)+2\sum\limits_{i=1}^{d}\left(\Gamma\bar{Z}(t)\right)_{i}\left[\sigma_{i,x}^{\intercal}(t)p(t)+q_{i}(t)\right]\right.\\ &\quad+\left.b_{x}^{\intercal}(t)p(t)+\sum\limits_{i=1}^{d}\sigma_{i,x}^{\intercal}(t)q_{i}(t)\right\}dt+\sum\limits_{i=1}^{d}q_{i}(t)dW_{i}(t),\ \ t\in[0,T],\\ p(T)=&\Phi_{x}(\bar{X}(T))\end{array}\right. \tag{4.3}\] _and the second-order adjoint equation_ \[\left\{\begin{array}{ll}dP(t)=&-\left\{2\sum\limits_{i=1}^{d}\left(\Gamma\bar{Z}(t)\right)_{i}\left[\sigma_{i,x}^{\intercal}(t)P(t)+P(t)\sigma_{i,x}(t)+Q_{i}(t)\right]\right.\\ &\quad+b_{x}^{\intercal}(t)P(t)+P(t)b_{x}(t)+\sum\limits_{i=1}^{d}\sigma_{i,x}^{\intercal}(t)P(t)\sigma_{i,x}(t)+\sigma_{i,x}^{\intercal}(t)Q_{i}(t)+Q_{i}(t)\sigma_{i,x}(t)\\ &\quad+\sum\limits_{i=1}^{d}b_{i,xx}(t)p_{i}(t)+\sum\limits_{i=1}^{d}\sum\limits_{j=1}^{n}\sigma_{ji,xx}(t)\left[2\left(\Gamma\bar{Z}(t)\right)_{i}p_{i}(t)+q_{ji}(t)\right]+f_{xx}(t)\\ &\quad+\left.2\sum\limits_{i,l=1}^{d}\Gamma_{il}\left(\sigma_{i,x}^{\intercal}(t)p(t)+q_{i}(t)\right)\left(\sigma_{l,x}^{\intercal}(t)p(t)+q_{l}(t)\right)^{\intercal}\right\}dt\\ &\quad+\sum\limits_{i=1}^{d}Q_{i}(t)dW_{i}(t),\ \ t\in[0,T],\\ P(T)=&\Phi_{xx}(\bar{X}(T))\end{array}\right. \tag{4.4}\] _respectively, where \(\sigma_{i,x}\), \(i=1,\ldots,d\), is the Jacobian matrix of the \(i\)th column of \(\sigma\); \(b_{i,xx}\) is the Hessian matrix of the \(i\)th entry of \(b\); \((\Gamma\bar{Z})_{i}\) is the \(i\)th entry of the \(d\)-dimensional vector \(\Gamma\bar{Z}\); and \(\Gamma_{il}\), \(i,l=1,\ldots,d\), is the entry of \(\Gamma\) on the \(i\)th row and \(l\)th column._

Proof.: The proof is almost the same as that of Theorem 3.16 in [18], so we omit it.

**Remark 4.3**.: _When \(\Gamma=\frac{\theta}{2}\mathrm{I}_{d\times d}\), if \(\bar{Z}(t)=\sigma^{\intercal}(t,\bar{X}(t),\bar{u}(t))p(t)\), then we find that \((p(t),q(t))=(-\bar{p}(t),-\bar{q}(t))\), \((P(t),Q(t))=(-\bar{P}(t),-\bar{Q}(t))\), where \((\bar{p}(\cdot),\bar{q}(\cdot))\), \((\bar{P}(\cdot),\bar{Q}(\cdot))\) is the first-order, second-order adjoint process in [23] respectively._

A simple calculation shows that, compared with the risk-neutral MP, the asymmetric risk sensitivity leads to the additional term \[p^{\intercal}(t)\left[\sigma(t,\bar{X}(t),u)-\sigma(t,\bar{X}(t),\bar{u}(t))\right]\Gamma\left\{2\bar{Z}(t)+\left[\sigma(t,\bar{X}(t),u)-\sigma(t,\bar{X}(t),\bar{u}(t))\right]^{\intercal}p(t)\right\} \tag{4.5}\] in (4.1). When \(\Gamma=\frac{\theta}{2}\mathrm{I}_{d\times d}\), under the same assumption in [23] that the value function is sufficiently smooth, we have \(\bar{Z}(t)=\sigma^{\intercal}(t,\bar{X}(t),\bar{u}(t))p(t)\), and then the maximum principle with the new term (4.5) reduces to the MP in [23]. Therefore, without the smoothness assumption imposed on the value function, it is more convenient and straightforward to use (1.6) and (1.8) to formulate the risk-sensitive control problem in [23], so that one does not have to introduce the auxiliary state equation or use the logarithmic transformation.

**Corollary 4.4**.: _Assume the coefficients are differentiable with respect to \(u\) and the control domain \(U\subseteq\mathbb{R}^{k}\) is a convex set. Then the stochastic maximum principle (4.1) implies_ \[H_{u}(t,\bar{X}(t),\bar{Z}(t),u,p(t),q(t))|_{u=\bar{u}(t)}(u-\bar{u}(t))\geq 0,\forall u\in U\text{, }dt\otimes d\mathbb{P}\text{-a.e.}, \tag{4.6}\] _where_ \[H(t,x,z,u,p,q)=p^{\intercal}b(t,x,u)+\operatorname{tr}\left\{q^{\intercal}\sigma(t,x,u)\right\}+2p^{\intercal}\sigma(t,x,u)\Gamma z+f(t,x,u). \tag{4.7}\]

The maximum principle (4.6) is also a sufficient condition for optimality, as we show in the following theorem.

**Theorem 4.5**.: _Let Assumption 4.1 hold. Assume the coefficients are differentiable with respect to \(u\) and \(U\subseteq\mathbb{R}^{k}\) is a convex set. Let \((\bar{u}(\cdot),\bar{X}(\cdot),\bar{Y}(\cdot),\bar{Z}(\cdot))\) be an admissible quadruple with a pair of adjoint processes \((p(\cdot),q(\cdot))\) satisfying (4.3), and let_ \[H(t,x,z,u,p(t),q(t))\ \text{ be convex with respect to }(x,u) \tag{4.8}\] _for any \(z\in\mathbb{R}^{d}\), \(dt\otimes d\mathbb{P}-a.e.\), where \(H\) is defined by (4.7).
Suppose \((\bar{u}(\cdot),\bar{X}(\cdot),\bar{Y}(\cdot),\bar{Z}(\cdot),p(\cdot),q(\cdot))\) satisfies (4.6) and \(\Phi(\cdot)\) satisfies the condition_ \[\Phi(X(T))-\Phi(\bar{X}(T))\geq\left(\Phi_{x}(\bar{X}(T))\right)^{\intercal}(X(T)-\bar{X}(T)),\ \ \mathbb{P}\text{-a.s..} \tag{4.9}\] _Then \(\bar{u}(\cdot)\) is an optimal control of the problem (2.26)-(2.28)._

Proof.: Define \(\tilde{\mathcal{H}}:[0,T]\times\mathbb{R}^{n}\times\mathbb{R}^{d}\times U\times\mathbb{R}^{n}\times\mathbb{R}^{n\times d}\to\mathbb{R}\) as \[\tilde{\mathcal{H}}(t,x,z,u,p,q)=p^{\intercal}b(t,x,u)+\operatorname{tr}\left\{q^{\intercal}\sigma(t,x,u)\right\}+z^{\intercal}\Gamma z+f(t,x,u).\] For any \((x,z,u)\), \((\bar{x},\bar{z},\bar{u})\in\mathbb{R}^{n}\times\mathbb{R}^{d}\times U\), notice that \[H(t,x,\bar{z},u,p,q)+z^{\intercal}\Gamma z=\tilde{\mathcal{H}}(t,x,z,u,p,q)+2p^{\intercal}\sigma(t,x,u)\Gamma\bar{z},\] and \(\tilde{\mathcal{H}}_{z}(t,x,\bar{z},u,p,q)=\tilde{\mathcal{H}}_{z}(t,\bar{x},\bar{z},\bar{u},p,q)=2\Gamma\bar{z}\). As \(\Gamma>0\), it follows from the above relationship and (4.8) that \[\begin{split}&\tilde{\mathcal{H}}(t,x,z,u,p(t),q(t))-\tilde{\mathcal{H}}(t,\bar{x},\bar{z},\bar{u},p(t),q(t))\\ &\geq\tilde{\mathcal{H}}_{x}^{\intercal}(t,\bar{x},\bar{z},\bar{u},p(t),q(t))(x-\bar{x})+\tilde{\mathcal{H}}_{u}^{\intercal}(t,\bar{x},\bar{z},\bar{u},p(t),q(t))(u-\bar{u})\\ &\quad+\tilde{\mathcal{H}}_{z}^{\intercal}(t,\bar{x},\bar{z},\bar{u},p(t),q(t))(z-\bar{z})\\ &\quad-\sum_{i=1}^{d}\tilde{\mathcal{H}}_{z_{i}}(t,\bar{x},\bar{z},\bar{u},p(t),q(t))p^{\intercal}(t)\left[\sigma_{i}(t,x,u)-\sigma_{i}(t,\bar{x},\bar{u})\right.\\ &\quad-\left.\sigma_{i,x}(t,\bar{x},\bar{u})(x-\bar{x})-\sigma_{i,u}(t,\bar{x},\bar{u})(u-\bar{u})\right],\quad dt\otimes d\mathbb{P}-a.e.\end{split} \tag{4.10}\] Now (4.6), (4.9), and (4.10) verify the conditions of Theorem 4.1 in [18]. Hence, \(\bar{u}(\cdot)\) is an optimal control of the problem (2.26)-(2.28).

## 5 An application to dynamic portfolio optimization

In this section, we take into account the coefficients in (2.26) that satisfy (2.33), from which an asymmetric risk-sensitive portfolio optimization problem arises. For notational simplicity, we write the factor process \(\tilde{X}\) determined by (2.32) as \(X\); this causes no ambiguity. By the standard theory, it is easy to show that (2.32) admits a unique solution \(X(\cdot)\in\bigcap_{p>1}L^{p}_{\mathbb{F}}(\Omega;C([0,T],\mathbb{R}^{n}))\) since it is Gaussian. It follows from (2.33) that the controlled BSDE in (2.26) can be rewritten as \[\left\{\begin{array}{rl}dY(t)=&-\left[(Z^{\intercal}(t)-u^{\intercal}(t)\Sigma)\Gamma(Z(t)-\Sigma^{\intercal}u(t))\right.\\ &\quad+\left.\frac{1}{2}u^{\intercal}(t)\Sigma\Sigma^{\intercal}u(t)-u^{\intercal}(t)(a+AX(t)-r(t)\mathbf{1})-r(t)\right]dt\\ &\quad+Z^{\intercal}(t)dW(t),\\ Y(T)=&0,\end{array}\right. \tag{5.1}\] where \(\Gamma\) is strictly positive definite. The following assumption is necessary.

**Assumption 5.1**.: _The matrix \(\Sigma\Sigma^{\intercal}\) is strictly positive definite._

**Remark 5.2**.: _Assumption 5.1 implies that one cannot replicate the risk structure of one of the \(m\) assets by setting up a portfolio of the other \(m-1\) assets. As a result, there is no risk-induced arbitrage opportunity on the market._

Noting that (2.32) is linear and Gaussian, we are inspired by the method adopted in Section 3 to determine the optimal investment strategy.
Let \(\Pi(\cdot)\in C([0,T];\mathbb{S}^{n\times n})\) be the unique solution to the Riccati differential equation \[\left\{\begin{array}{rl}d\Pi(t)=&-\left[(B^{\intercal}-A^{\intercal}\Theta^{-1}\Xi)\Pi(t)+\Pi(t)(B-\Xi^{\intercal}\Theta^{-1}A)\right.\\ &\quad+\left.\Pi(t)(\Psi-\Xi^{\intercal}\Theta^{-1}\Xi)\Pi(t)-A^{\intercal}\Theta^{-1}A\right]dt,\ \ t\in[0,T],\\ \Pi(T)=&0,\end{array}\right. \tag{5.2}\] and let \(\varphi(\cdot)\in C([0,T];\mathbb{R}^{n})\) be the unique solution to the linear ordinary differential equation \[\left\{\begin{array}{rl}d\varphi(t)=&-\left\{[B^{\intercal}-\Pi(t)(\Psi-\Xi^{\intercal}\Theta^{-1}\Xi)-A^{\intercal}\Theta^{-1}\Xi]\varphi(t)\right.\\ &\quad+\left.\Pi(t)[b-\Xi^{\intercal}\Theta^{-1}(a-r(t)\mathbf{1})]+A^{\intercal}\Theta^{-1}(a-r(t)\mathbf{1})\right\}dt,\\ \varphi(T)=&0,\end{array}\right. \tag{5.3}\] where \[\Theta=\Sigma(2\Gamma+\mathrm{I}_{d\times d})\Sigma^{\intercal},\ \ \Xi=2\Sigma\Gamma\Lambda^{\intercal},\ \ \Psi=2\Lambda\Gamma\Lambda^{\intercal}. \tag{5.4}\] Obviously \(\Theta\in\mathbb{S}^{m\times m}\) and \(\Theta\geq\Sigma\Sigma^{\intercal}>0\) so it is invertible.

**Lemma 5.3**.: _If \(\Psi-\Xi^{\intercal}\Theta^{-1}\Xi>0\), then (5.2) admits a unique solution \(\Pi(t)\geq 0,t\in[0,T]\) and \(\left\|\Pi\right\|_{\infty}\leq B_{\Pi}\), where \(B_{\Pi}:=\exp\left\{2\left(\left|B\right|+\left|\Xi\right|\left|\Theta^{-1}\right|\left|A\right|\right)T\right\}\left|A\right|^{2}\left|\Theta^{-1}\right|T\)._

Proof.: Since \(\Pi(T)=0\) and \(A^{\intercal}\Theta^{-1}A\geq 0\), according to Theorem 7.5 in [36], (5.2) admits a unique solution \(\Pi(\cdot)\in C([0,T];\mathbb{S}^{n\times n})\) such that \(\Pi(t)\geq 0,\ t\in[0,T]\). Similar to the proof of Lemma 3.2, we can deduce \(\left\|\Pi\right\|_{\infty}\leq B_{\Pi}\).

**Remark 5.4**.: _When \(\Gamma=\frac{\theta}{4}\mathrm{I}_{d\times d}\), we have_ \[\Psi-\Xi^{\intercal}\Theta^{-1}\Xi= \frac{\theta}{2}\Lambda\left[\mathrm{I}_{d\times d}-\frac{\theta}{\theta+2}\Sigma^{\intercal}\left(\Sigma\Sigma^{\intercal}\right)^{-1}\Sigma\right]\Lambda^{\intercal}= \frac{\theta}{2}\Lambda\left[\left(\mathrm{I}_{d\times d}-\Sigma^{\intercal}\left(\Sigma\Sigma^{\intercal}\right)^{-1}\Sigma\right)+\frac{2}{\theta+2}\Sigma^{\intercal}\left(\Sigma\Sigma^{\intercal}\right)^{-1}\Sigma\right]\Lambda^{\intercal}.\] _Note that \(\Sigma^{\intercal}\left(\Sigma\Sigma^{\intercal}\right)^{-1}\Sigma\) is the projection on the column space of \(\Sigma\) and therefore \(\mathrm{I}_{d\times d}-\Sigma^{\intercal}\left(\Sigma\Sigma^{\intercal}\right)^{-1}\Sigma\) is an orthogonal projection. As \(\theta>0\), the term \(\frac{\theta}{2}\Lambda\left[\mathrm{I}_{d\times d}-\frac{\theta}{\theta+2}\Sigma^{\intercal}\left(\Sigma\Sigma^{\intercal}\right)^{-1}\Sigma\right]\Lambda^{\intercal}>0\), so (5.2) naturally admits a unique solution \(\Pi(t)\geq 0\) defined for all \(t\in[0,T]\)._

**Definition 5.5**.: _An investment strategy \(u(\cdot)\in L^{2}_{\mathbb{F}}([0,T];\mathbb{R}^{m})\) is called admissible if it satisfies the following conditions._

1. _The BSDE in (5.1) admits a minimal solution_ \((Y(\cdot),Z(\cdot))\in L^{2}_{\mathbb{F}}(\Omega;C([0,T],\mathbb{R}))\times L^{2}_{\mathbb{F}}([0,T];\mathbb{R}^{d})\)_._

2. _For any solution_ \((Y^{\prime}(\cdot),Z^{\prime}(\cdot))\in L^{2}_{\mathbb{F}}(\Omega;C([0,T],\mathbb{R}))\times L^{2}_{\mathbb{F}}([0,T];\mathbb{R}^{d})\) _of the BSDE in (5.1),_ \[\mathbb{E}_{\bar{\mathbb{P}}}\left[\left(\int_{0}^{T}\left|Z^{\prime}(s)\right|^{2}ds\right)^{\frac{1}{2}}\right]<+\infty,\] _where_ \(\mathbb{E}_{\bar{\mathbb{P}}}\left[\cdot\right]\) _is the mathematical expectation corresponding to the reference probability_ \(\bar{\mathbb{P}}\) _defined by_ \[d\bar{\mathbb{P}}:= \exp\left\{-2\int_{0}^{T}\chi^{\intercal}(s)dW(s)-2\int_{0}^{T}\left|\chi(s)\right|^{2}ds\right\}d\mathbb{P},\] (5.5) _where_ \(\Pi(\cdot)\) _and_ \(\varphi(\cdot)\) _are the unique solutions to (5.2) and (5.3), respectively, and_ \[\chi(s)=\Gamma\left\{\left[\Lambda^{\intercal}\Pi(s)+\Sigma^{\intercal}\Theta^{-1}\left(A-\Xi\Pi(s)\right)\right]X(s)+\left(\Lambda^{\intercal}-\Sigma^{\intercal}\Theta^{-1}\Xi\right)\varphi(s)+\Sigma^{\intercal}\Theta^{-1}\left(a-r(s)\mathbf{1}\right)\right\}.\]

The set of all admissible strategies will be denoted by \(\mathcal{U}_{PO}[0,T]\).

**Remark 5.6**.: _Since \(X(\cdot)\) is Gaussian and the Girsanov exponential with Gaussian integrand is an exponential martingale, the stochastic exponential in (5.5) is a Radon-Nikodym derivative._

The cost functional is defined by \[J(u(\cdot)):=Y(0),\ u(\cdot)\in\mathcal{U}_{PO}[0,T]. \tag{5.6}\] The objective is to find \(\bar{u}(\cdot)\in\mathcal{U}_{PO}[0,T]\) (if it ever exists) such that \[J(\bar{u}(\cdot))=\inf_{u(\cdot)\in\mathcal{U}_{PO}[0,T]}J(u(\cdot)).\]

**Theorem 5.7**.: _Let Assumption 5.1 hold. Assume \(\Pi(\cdot)\in C([0,T];\mathbb{S}^{n\times n})\) uniquely solves (5.2) and \(\varphi(\cdot)\in C([0,T];\mathbb{R}^{n})\) uniquely solves (5.3). Then the state feedback strategy_ \[\bar{u}(t):=\Theta^{-1}[(A-\Xi\Pi(t))X(t)-\Xi\varphi(t)+(a-r(t)\mathbf{1})],\ \ t\in[0,T] \tag{5.7}\] _belongs to \(\mathcal{U}_{PO}[0,T]\) and is optimal for the problem (5.1), (5.6). The corresponding optimal value of the objective function is_ \[J(\bar{u}(\cdot))=-\frac{1}{2}x_{0}^{\intercal}\Pi(0)x_{0}-\varphi^{\intercal}(0)x_{0}-\kappa(0),\] _where the time-dependent coefficient \(\kappa(\cdot)\in C([0,T];\mathbb{R})\) is defined as \(\kappa(t)=\int_{t}^{T}l(s)ds,t\in[0,T]\) with_ \[\begin{array}{ll}l(t)=&-\frac{1}{2}\left[\operatorname{tr}\left\{\Lambda\Lambda^{\intercal}\Pi(t)\right\}+2r(t)+2b^{\intercal}\varphi(t)-\varphi^{\intercal}(t)(\Psi-\Xi^{\intercal}\Theta^{-1}\Xi)\varphi(t)\right.\\ &-\left.2\varphi^{\intercal}(t)\Xi^{\intercal}\Theta^{-1}(a-r(t)\mathbf{1})+(a-r(t)\mathbf{1})^{\intercal}\Theta^{-1}(a-r(t)\mathbf{1})\right].\end{array} \tag{5.8}\]

Proof.: We first show that \(\bar{u}(\cdot)\in\mathcal{U}_{PO}[0,T]\). Due to (5.7), it can be verified that \(\bar{u}(\cdot)\in L^{2}_{\mathbb{F}}([0,T];\mathbb{R}^{m})\).
Applying Ito's formula to \(\frac{1}{2}X^{\intercal}(t)\Pi(t)X(t)\) and \(\varphi^{\intercal}(t)X(t)\), and using (5.8), we get \[\begin{array}{ll}&\frac{1}{2}X^{\intercal}(t)\Pi(t)X(t)+\varphi^{\intercal}(t)X(t)+\kappa(t)\\ &=\int_{t}^{T}\left[-(X^{\intercal}(s)\Pi(s)+\varphi^{\intercal}(s))\Lambda\Gamma\Lambda^{\intercal}(\Pi(s)X(s)+\varphi(s))+\frac{1}{2}\bar{u}^{\intercal}(s)\Theta\bar{u}(s)+r(s)\right]ds\\ &\quad-\int_{t}^{T}(X^{\intercal}(s)\Pi(s)+\varphi^{\intercal}(s))\Lambda dW(s)\\ &=\int_{t}^{T}\left\{-\left[(X^{\intercal}(s)\Pi(s)+\varphi^{\intercal}(s))\Lambda+\bar{u}^{\intercal}(s)\Sigma\right]\Gamma\left[\Lambda^{\intercal}(\Pi(s)X(s)+\varphi(s))+\Sigma^{\intercal}\bar{u}(s)\right]\right.\\ &\quad-\left.\frac{1}{2}\bar{u}^{\intercal}(s)\Sigma\Sigma^{\intercal}\bar{u}(s)+\bar{u}^{\intercal}(s)(a+AX(s)-r(s)\mathbf{1})+r(s)\right\}ds\\ &\quad-\int_{t}^{T}(X^{\intercal}(s)\Pi(s)+\varphi^{\intercal}(s))\Lambda dW(s).\end{array}\] Therefore, when \(u(\cdot)=\bar{u}(\cdot)\), the BSDE in (5.1) admits a solution \[\begin{array}{ll}\bar{Y}(t)=&-\frac{1}{2}X^{\intercal}(t)\Pi(t)X(t)-\varphi^{\intercal}(t)X(t)-\kappa(t),\\ \bar{Z}(t)=&-\Lambda^{\intercal}(\Pi(t)X(t)+\varphi(t))\end{array} \tag{5.9}\] in \(L^{2}_{\mathbb{F}}(\Omega;C([0,T],\mathbb{R}))\times L^{2}_{\mathbb{F}}([0,T];\mathbb{R}^{d})\) as \(X(\cdot)\in L^{4}_{\mathbb{F}}(\Omega;C([0,T];\mathbb{R}^{n}))\). Moreover, it can be checked that \(X(\cdot)\) is also Gaussian under \(\bar{\mathbb{P}}\), so we have \(\mathbb{E}_{\bar{\mathbb{P}}}\left[\left(\int_{0}^{T}\left|\bar{Z}(s)\right|^{2}ds\right)^{\frac{1}{2}}\right]<+\infty\), which verifies (ii) in Definition 5.5.

We now prove that \((\bar{Y}(\cdot),\bar{Z}(\cdot))\) is minimal. If \((\bar{Y}^{\prime}(\cdot),\bar{Z}^{\prime}(\cdot))\in L^{2}_{\mathbb{F}}(\Omega;C([0,T],\mathbb{R}))\times L^{2}_{\mathbb{F}}([0,T];\mathbb{R}^{d})\) is another solution corresponding to \(\bar{u}(\cdot)\), then we have for any \(t\in[0,T]\), \[\begin{array}{ll}\bar{Y}^{\prime}(t)-\bar{Y}(t)\geq&\int_{t}^{T}2\left[\bar{Z}^{\prime}(s)-\bar{Z}(s)\right]^{\intercal}\Gamma\left[\bar{Z}(s)-\Sigma^{\intercal}\bar{u}(s)\right]ds\\ &\quad-\int_{t}^{T}\left[\bar{Z}^{\prime}(s)-\bar{Z}(s)\right]^{\intercal}dW(s).\end{array}\] From (5.5), (5.7), and (5.9), applying Girsanov's theorem yields that \[\bar{Y}^{\prime}(t)-\bar{Y}(t)\geq-\int_{t}^{T}\left[\bar{Z}^{\prime}(s)-\bar{Z}(s)\right]^{\intercal}d\bar{W}(s), \tag{5.10}\] where \[\bar{W}(t)=W(t)+2\int_{0}^{t}\Gamma\left[\bar{Z}(s)-\Sigma^{\intercal}\bar{u}(s)\right]ds\] is a \(d\)-dimensional Brownian motion under \(\bar{\mathbb{P}}\). As \(\mathbb{E}_{\bar{\mathbb{P}}}\left[\left(\int_{0}^{T}\left|\bar{Z}^{\prime}(s)\right|^{2}ds\right)^{\frac{1}{2}}\right]<+\infty\) due to (ii) in Definition 5.5, we deduce \(\bar{Y}^{\prime}(t)\geq\bar{Y}(t)\), \(t\in[0,T]\), \(\bar{\mathbb{P}}\)-a.s. (of course, \(\mathbb{P}\)-a.s.) by taking the conditional expectation \(\mathbb{E}_{\bar{\mathbb{P}}}\left[\cdot\mid\mathcal{F}_{t}\right]\) in both sides of (5.10). Hence \(\bar{u}(\cdot)\in\mathcal{U}_{PO}[0,T]\) and we obtain \[J(\bar{u}(\cdot))=\bar{Y}(0). \tag{5.11}\]

Now we prove the optimality of \(\bar{u}(\cdot)\).
Taking any admissible strategy \(u(\cdot)\in\mathcal{U}_{PO}[0,T]\) and the corresponding minimal solution \((Y(\cdot),Z(\cdot))\in L^{2}_{\mathbb{F}}(\Omega;C([0,T],\mathbb{R}))\times L^{2}_{\mathbb{F}}([0,T];\mathbb{R}^{d})\), we obtain \[\begin{array}{ll}&Y(0)-\bar{Y}(0)\\ \geq&\int_{0}^{T}\left\{2\left[Z(s)-\bar{Z}(s)\right]^{\intercal}\Gamma\left[\bar{Z}(s)-\Sigma^{\intercal}\bar{u}(s)\right]\right.\\ &+\left[u(s)-\bar{u}(s)\right]^{\intercal}\left[\Theta\bar{u}(s)-2\Sigma\Gamma\bar{Z}(s)-(a+AX(s)-r(s)\mathbf{1})\right]\right\}ds\\ &-\int_{0}^{T}\left[Z(s)-\bar{Z}(s)\right]^{\intercal}dW(s).\end{array} \tag{5.12}\] From (5.7) and (5.9), we get \[\Theta\bar{u}(s)-2\Sigma\Gamma\bar{Z}(s)-(a+AX(s)-r(s)\mathbf{1})=0\] and therefore \[Y(0)-\bar{Y}(0)\geq-\int_{0}^{T}\left[Z(s)-\bar{Z}(s)\right]^{\intercal}d\bar{W}(s) \tag{5.13}\] by Girsanov's theorem. Since \(\bar{u}(\cdot)\in\mathcal{U}_{PO}[0,T]\) and \(\mathbb{E}_{\bar{\mathbb{P}}}\left[\left(\int_{0}^{T}\left|Z(s)\right|^{2}ds\right)^{\frac{1}{2}}\right]<+\infty\) according to (ii) in Definition 5.5, the right-hand side in (5.13) is a true martingale with mean zero under \(\bar{\mathbb{P}}\). Taking \(\mathbb{E}_{\bar{\mathbb{P}}}\left[\cdot\right]\) in both sides of (5.13), we have \(Y(0)-\bar{Y}(0)\geq 0\), which means that \[J(u(\cdot))\geq J(\bar{u}(\cdot)). \tag{5.14}\] The optimality of \(\bar{u}(\cdot)\) follows from (5.14) and the arbitrariness of \(u(\cdot)\) chosen from \(\mathcal{U}_{PO}[0,T]\). Finally, combining the relationship (5.9) with (5.11), we deduce the optimal value \[J(\bar{u}(\cdot))=\bar{Y}(0)=-\frac{1}{2}x_{0}^{\intercal}\Pi(0)x_{0}-\varphi^{\intercal}(0)x_{0}-\kappa(0),\] which completes the proof.

**Remark 5.8**.: _As the original problem is to maximize the expected growth rate \(-J(\cdot)\) over \(\mathcal{U}_{PO}[0,T]\), it follows from Theorem 5.7 that the optimal growth rate is \(\frac{1}{2}x_{0}^{\intercal}\Pi(0)x_{0}+\varphi^{\intercal}(0)x_{0}+\kappa(0)\). Furthermore, when \(\Gamma=\frac{\theta}{4}\mathrm{I}_{d\times d}\) for some given \(\theta>0\), this result reduces to Theorem 2.1 in [21] since it can be verified that \((\Pi(\cdot),\varphi(\cdot),\kappa(\cdot))\) satisfy (2.16), (2.17), and (2.18), respectively, on pages 316-317 in [21]._
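To make the construction in Theorem 5.7 concrete, here is a minimal numerical sketch in Python. It integrates (5.2) and (5.3) backward from their zero terminal conditions, accumulates \(\kappa(0)\) from (5.8), and assembles the feedback strategy (5.7) and the optimal growth rate of Remark 5.8. The model data \(A,a,B,b,\Lambda,\Sigma,\Gamma,r\) below are hypothetical placeholders, not taken from the paper; they are chosen only so that Assumption 5.1 holds and, with \(\Gamma=\frac{\theta}{4}\mathrm{I}_{d\times d}\) for \(\theta=1\), Remark 5.4 guarantees solvability of (5.2). A crude explicit Euler scheme stands in for a proper ODE solver.

```python
# A sketch of Theorem 5.7 with hypothetical model data: n = 2 factors,
# m = 2 assets, d = 3 Brownian motions. All matrices are placeholders.
import numpy as np

n, m, d, T = 2, 2, 3, 1.0
rng = np.random.default_rng(0)
A = 0.3 * rng.normal(size=(m, n))          # excess-return loading on factors
a = np.array([0.05, 0.04])                 # baseline mean returns
B = -0.5 * np.eye(n)                       # mean reversion of factors
b = np.zeros(n)
Lam = 0.2 * rng.normal(size=(n, d))        # factor volatility (Lambda)
Sig = 0.2 * np.hstack([np.eye(m), np.zeros((m, 1))])  # asset volatility; Sig Sig^T > 0
Gam = 0.25 * np.eye(d)                     # Gamma = (theta/4) I with theta = 1
r = lambda t: 0.01                         # short rate

# Coefficients from (5.4)
Th = Sig @ (2 * Gam + np.eye(d)) @ Sig.T   # Theta
Xi = 2 * Sig @ Gam @ Lam.T                 # Xi
Ps = 2 * Lam @ Gam @ Lam.T                 # Psi
Thi = np.linalg.inv(Th)

def dPi(Pi):                               # right-hand side of the Riccati equation (5.2)
    M = B.T - A.T @ Thi @ Xi
    return -(M @ Pi + Pi @ M.T + Pi @ (Ps - Xi.T @ Thi @ Xi) @ Pi - A.T @ Thi @ A)

def dphi(t, Pi, ph):                       # right-hand side of the linear ODE (5.3)
    ar = a - r(t) * np.ones(m)
    lin = B.T - Pi @ (Ps - Xi.T @ Thi @ Xi) - A.T @ Thi @ Xi
    return -(lin @ ph + Pi @ (b - Xi.T @ Thi @ ar) + A.T @ Thi @ ar)

def ell(t, Pi, ph):                        # integrand l(t) from (5.8)
    ar = a - r(t) * np.ones(m)
    return -0.5 * (np.trace(Lam @ Lam.T @ Pi) + 2 * r(t) + 2 * b @ ph
                   - ph @ (Ps - Xi.T @ Thi @ Xi) @ ph
                   - 2 * ph @ Xi.T @ Thi @ ar + ar @ Thi @ ar)

# Integrate backward from Pi(T) = 0, phi(T) = 0, accumulating
# kappa(0) = int_0^T l(s) ds as a left Riemann sum.
N = 2000
dt = T / N
Pi, ph, kap = np.zeros((n, n)), np.zeros(n), 0.0
for k in range(N):
    t = T - k * dt
    kap += ell(t, Pi, ph) * dt
    ph = ph - dt * dphi(t, Pi, ph)
    Pi = Pi - dt * dPi(Pi)
    Pi = 0.5 * (Pi + Pi.T)                 # keep Pi symmetric against roundoff

x0 = np.array([0.1, -0.2])
growth = 0.5 * x0 @ Pi @ x0 + ph @ x0 + kap           # optimal growth rate (Remark 5.8)
u0 = Thi @ ((A - Xi @ Pi) @ x0 - Xi @ ph + (a - r(0.0) * np.ones(m)))  # (5.7) at t = 0
print("Pi(0) eigenvalues (should be >= 0):", np.linalg.eigvalsh(Pi))
print("optimal growth rate:", growth)
print("initial strategy u(0):", u0)
```

The eigenvalue printout checks the conclusion \(\Pi(t)\geq 0\) of Lemma 5.3 on this example; any production use would of course replace the fixed-step Euler loop with an adaptive stiff solver.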
2308.06927
The algorithmic second law of thermodynamics
G\'acs' coarse-grained algorithmic entropy leverages universal computation to quantify the information content of any given physical state. Unlike the Boltzmann and Shannon-Gibbs entropies, it requires no prior commitment to macrovariables or probabilistic ensembles, rendering it applicable to settings arbitrarily far from equilibrium. For Markovian coarse-grainings, we prove a number of algorithmic fluctuation inequalities. The most important of these is a very general formulation of the second law of thermodynamics. In the presence of a heat and work reservoir, it implies algorithmic versions of Jarzynski's equality and Landauer's principle. Finally, to demonstrate how a deficiency of algorithmic entropy can be used as a resource, we model an information engine powered by compressible strings.
Aram Ebtekar
2023-08-14T04:13:14Z
http://arxiv.org/abs/2308.06927v3
# The algorithmic second law of thermodynamics

###### Abstract

Gacs' coarse-grained algorithmic entropy leverages universal computation to quantify the information content of any given physical state. Unlike the Boltzmann and Shannon-Gibbs entropies, it requires no prior commitment to macrovariables or probabilistic ensembles. Whereas earlier work had made loose connections between the entropy of thermodynamic systems and information-processing systems, the algorithmic entropy formally unifies them both. After adapting Gacs' definition to Markov processes, we prove a very general second law of thermodynamics, and discuss its advantages over previous formulations. Finally, taking inspiration from Maxwell's demon, we model an information engine powered by compressible data.

keywords: algorithmic entropy, information, Markov processes, nonequilibrium thermodynamics, stochastic thermodynamics, Kolmogorov complexity

## 1 Introduction

The second law of thermodynamics is one of the most profound facts of nature, and yet one of the most difficult to state in precise terms. It concerns the **entropy** of an **isolated system**. An isolated system is simply a piece of the Universe that is not in contact with anything outside it. The entropy is a measure of complexity or disorder. Informally speaking, a system is considered to have high entropy if its microscopic configuration "looks the same", from a macroscopic point of view, as a large number of other configurations. For example, a messy room has higher entropy than a tidy room, because most possible arrangements of a room's contents would be considered messy, while only a few are tidy. The need to unambiguously classify arrangements as "messy" or "tidy" illustrates the difficulty of making this intuition precise.

The second law of thermodynamics then states that an isolated system's entropy can either increase or stay the same, but never decrease. We know from experience that rooms tend toward becoming messy, unless there is a concerted effort to clean them. Whether or not we make such a concerted effort, our interactions with a room involve heat-generating metabolic processes, which increase the ambient air's entropy by far more than any decrease due to cleaning. We might still ask whether an idealized cleaner, acting on an idealized room, could in principle violate the second law. According to modern analyses of such "Maxwell's demons", the minimum entropy production required to tidy a room comes from information-processing considerations, and exactly equals the decrease in the arrangement's entropy [1]. In addition to rescuing the second law, this finding hints at its fundamentally information-theoretic nature.

Historically, such arguments have been fairly ad hoc. It's hard to rigorously prove a statement of the second law that's both precise and general. We propose that a satisfactory formulation should meet the following criteria:

1. The law should forbid entropy from decreasing appreciably, except perhaps extremely rarely.
2. The law should permit entropy to increase at an appreciable rate. While it need not increase in all situations, it should be easy to construct examples where it does.
3. The entropy should be well-defined up to a tiny margin of error, independent of choice parameters such as macrovariable-induced partitions or probabilistic ensembles.
4. The entropy should continue to be well-defined in nonequilibrium and information-processing settings, without ad hoc extensions.
5.
The law should not merely predict a long-term maximum-entropy outcome; rather, it should predict monotonic increases throughout a system's evolution. Criterion 1 is just a mild relaxation of the second law's usual informal statement. Criterion 2, however, is violated by most fine-grained notions of entropy. These are notions that consider the full microscopic configuration with infinite precision. For example, in classical mechanics, a consequence of Liouville's theorem is that the differential Shannon-Gibbs entropy is exactly conserved under Hamiltonian evolutions [2]. Likewise, in quantum mechanics, the von Neumann entropy is conserved under unitary evolutions. Later, we will discuss the algorithmic entropy; however, it too is unsuitable in its fine-grained version. The algorithmic second law suggested in Appendix C of Zurek [3], and rigorously proved by Janzing et al. [4], presents conditions under which the fine-grained algorithmic entropy can increase, albeit much too slowly to meet Criterion 2. Gacs [5] explains that since the fine-grained state evolves deterministically, its increase in entropy is bounded by the tiny amount of information that specifies its evolution: essentially, the laws of physics and the elapsed time. Only by coarse-graining the state, so that the effective dynamics become stochastic, does it become possible to generate entropy at a positive rate, called the Kolmogorov-Sinai entropy per unit time [6, 7, 8]. Unfortunately, even in their coarse-grained forms, the classic entropies of Boltzmann, Shannon, and Gibbs violate Criteria 3 and 4. The Boltzmann entropy depends on how a state space is partitioned into cells. By definition, it's proportional to \(\log W\), where \(W\) is some measure of the current cell's size. In our room example, we would need to partition all possible states by their tidiness, and then count the number of states belonging to the cell corresponding to each level of tidiness. In equilibrium thermodynamic systems, it seems natural to partition based on macroscopic variables such as temperature and pressure. It's less clear how to partition other kinds of objects, such as digital data. Meanwhile, the Shannon-Gibbs entropy is a function not of a particular physical state, but rather, of a probability distribution, i.e., an _ensemble_ of possible states. In stochastic thermodynamics [9, 10, 11] and information thermodynamics [12, 13], equilibrium systems (typically heat baths) are coupled to other systems, taking the latter along Markovian trajectories, to nonequilibrium probability distributions. With enough care, the Shannon-Gibbs entropy can thus be made useful in a variety of settings, including some mesoscopic and information-processing systems. However, this approach requires a lot of physical intuition and is hard to put on a simple foundation. One difficulty is that there exist realistic ensembles for which the Shannon-Gibbs entropy loses its physical meaning. Consider a device that flips a coin to decide whether to immediately discharge a large battery. The battery being full or empty corresponds to two ensembles with distinct Shannon-Gibbs entropies. After the coin flip, the battery is modeled by a half-half mixture of both ensembles, and its Shannon-Gibbs entropy closely approximates their average, equaling that of a battery at half capacity. From this value, we would calculate that we have half a charge worth of free energy at our disposal. Averages are useful when the law of large numbers applies. 
However, with only a single battery, this result makes no practical sense: in reality, either the battery can be consumed in full or not at all, even if we don't yet know which. In general, a choice of partition or ensemble may be motivated by considerations of equilibrium or subjective uncertainty. We find neither option appealing: the Universe as a whole is certainly not in equilibrium, and subjectivity raises serious questions regarding the source of inductive prior. Inspired by a thought experiment in which compressible data is used to do physical work, Bennett [14] argues that a fully general definition of entropy should leverage universal computation to _infer_ the best description for any given state. He defines the **algorithmic entropy** to be the length of a state's shortest programmatic description. Zurek [3] develops this idea much further, but retains a dependence on measurements and ensembles. Gacs [5] refines the definition to remove this dependence. While his algorithmic entropy still depends on a choice of universal computer, the dependence is bounded by the length of a compiler or interpreter from the code of one computer to another; for realistic microprocessors, this length appears to be small. To quantify "small", recall that entropy is measured in logarithmic units, such as bits, nats, or Joules per Kelvin [15]. The conversion between them is \[1\,\mathrm{bit}=k_{B}\ln 2=9.57\times 10^{-24}\,\mathrm{J}\,\mathrm{K}^{-1},\] where \(k_{B}\) is Boltzmann's constant, equivalent to \(1\) nat. Pessimistically, consider a pair of languages, for which a compressed interpreter of each language in the other takes up to \(12\,\mathrm{GiB}\) (i.e., \(12\times 2^{33}\) bits); these languages would agree on the entropy of every system to within a picojoule per Kelvin. For macroscopic systems, this is a negligible difference. Of course, our notion of a "realistic microprocessor" should include physical size and resource constraints, to prevent directly embedding a large interpreter program. Regarding Criterion 4, most analyses of information-processing settings, such as Maxwell's demon, need to distinguish between the "physical" entropy of thermodynamic systems, and the "computational" entropy of information-processing systems. We will rely on the insights of Gacs [5], who unified the two with his coarse-grained algorithmic entropy. Finally, Criterion 5 distinguishes the second law from many ergodic-like theorems. These are long-term entropy maximization laws; see Gacs [5] for a version that applies to algorithmic entropy. Attempts to interpret an ergodic-like theorem as a second law give rise to Loschmidt's paradox: since they imply maximum entropy in both the infinite past and the infinite future, they cannot yield a true time asymmetry such as the second law. In particular, they say nothing about how the entropy of yesterday should compare to the entropy of tomorrow. The fundamental laws of physics are widely believed to obey CPT symmetry: if one reverses time, the dynamics are unchanged (aside from a reversal of charge and parity) [16]. This can be seen, for example, in the parabolic trajectories of objects in free fall, which look equally plausible backward as forward. It's possible for irreversibility to emerge at the macroscopic level, but only if an initial condition breaks the symmetry. Albert [17] argues that the Big Bang might provide the starting point, from which entropy began to increase for all subsequent isolated systems. This idea is extremely difficult to prove. 
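As a quick check of the arithmetic in the preceding paragraph, the following snippet converts the pessimistic 12 GiB interpreter length into entropy units; the figure of 12 GiB is the placeholder from the text above, not a measured quantity.

```python
# Convert a 12 GiB interpreter into entropy units, using 1 bit = k_B ln 2.
import math

k_B = 1.380649e-23               # Boltzmann's constant, in J/K
bit = k_B * math.log(2)          # ~9.57e-24 J/K per bit
interpreter_bits = 12 * 2**33    # 12 GiB, expressed in bits
print(f"1 bit          = {bit:.3e} J/K")
print(f"12 GiB of code = {interpreter_bits * bit:.2e} J/K")  # ~1e-12 J/K
```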
Given a suitable initial condition, one wishes to show that the evolution forward in time has "typical" statistics forever after. That is, the coarse-grained description should evolve in a time-homogeneous and Markovian manner. Gaspard [18], Nicolis and Nicolis [19], and Werndl [20] present some deterministic time-reversible continuous-state dynamical systems, whose coarse-grainings are shown to emulate time-homogeneous discrete-state Markov chains, provided that their initial condition includes microscopic randomization. The network multibaker maps of Altaner and Vollmer [21] also have this property, and can be customized to emulate a wider variety of Markov chains. To demonstrate their idea, we present a simplified version in Appendix A. It remains an open question to characterize precisely which dynamical systems exhibit time-homogeneous or Markovian coarse-grained evolutions. The present article takes time-homogeneous Markovian evolution for granted, substituting physical systems with their Markov process counterparts. An advantage of this approach is that it abstracts away most details of the physics, producing simple rigorous statements under minimal assumptions. As such, our algorithmic second law can be seen as an ensemble-free version of the classic KL divergence decrease law of Markov chains [22, §4.4].

_Paper outline._ To start, Section 2 reviews some notation and definitions. Then, Section 3 presents our main contribution: the second law of thermodynamics, for an algorithmic entropy very similar to that of Gacs [5]. Since Gacs himself did not prove a second law, this is to our knowledge the first rigorous second law that meets all five of our criteria. After specializing it to doubly stochastic processes, we derive an algorithmic version of the Helmholtz free energy by coupling the processes to heat baths. Our formulation of the second law inspires useful modeling principles that focus attention on the information-theoretic aspects of physical phenomena. Section 4 applies them to simplify the analysis of Maxwell's demon and of Landauer's principle. Building upon these thought experiments, we then proceed to model an information engine powered by compressible data. Finally, Section 5 concludes with some possible directions for further research.

## 2 Preliminaries

We start with some notation. \(\mathbb{R}^{+}\) denotes the non-negative real numbers, \(\mathbb{Z}^{+}\) the non-negative integers, and \(\mathbb{Z}_{m}:=\{0,1,\ldots,m-1\}\) the first \(m\) elements of \(\mathbb{Z}^{+}\). Let \(\mathcal{B}:=\mathbb{Z}_{2}=\{0,1\}\), so that \(\mathcal{B}^{*}\) is the set of binary strings. For a string \(x\in\mathcal{B}^{*}\), \(|x|\) denotes its length in bits. For a set \(A\), \(|A|\) denotes its cardinality. Probabilities of events are denoted by \(\Pr(\cdot)\). Expectations of random variables are denoted by angled brackets \(\langle\cdot\rangle\).

### Markov processes

Fix a finite or countably infinite set of possible states \(\mathcal{X}\). An \(\mathcal{X}\)-valued **stochastic process** is a collection of \(\mathcal{X}\)-valued random variables indexed by continuous time \((X_{t})_{t\in\mathbb{R}^{+}}\), or discrete time \((X_{t})_{t\in\mathbb{Z}^{+}}\). We say it is a **time-homogeneous Markov process** if \[s\leq t\implies\Pr(X_{t}\mid X_{\leq s})=\Pr(X_{t}\mid X_{s})=P_{t-s}(X_{s},\,X_{t}),\] where \(P_{\Delta t}:\mathcal{X}\times\mathcal{X}\to\mathbb{R}^{+}\) is the **transition matrix** for time steps of length \(\Delta t\).
The first equality is called the **Markov property**, while the second expresses time-homogeneity. A discrete-time Markov process is also called a **Markov chain**. Note that a Markov chain's joint probability distribution is uniquely determined by \(P_{1}\), along with the distribution of \(X_{0}\). These two pieces are often called the **dynamics** and **initial condition**, respectively. A function \(\pi:\mathcal{X}\rightarrow\mathbb{R}^{+}\) can be interpreted as a discrete **measure** on \(\mathcal{X}\) in the obvious way. It is a **semimeasure** if \[\sum_{x\in\mathcal{X}}\pi(x)\leq 1,\] and a **probability measure** if equality holds. A transition matrix \(P\) is called \(\pi\)**-stochastic** if \[\forall x\in\mathcal{X},\;\sum_{y\in\mathcal{X}}P(x,\,y) =1, \tag{1}\] \[\forall y\in\mathcal{X},\;\sum_{x\in\mathcal{X}}\pi(x)P(x,\,y) =\pi(y). \tag{2}\] In this case, we also say that \(\pi\) is **stationary** for \(P\). **Doubly stochastic** is a common synonym for \(\sharp\)-stochastic, where \(\sharp\) denotes the counting measure, i.e., \(\sharp(x):=1\). ### Description complexity Fix a universal prefix machine \(U\). Briefly put, a prefix machine is a computer whose set of valid halting programs \(\mathcal{P}\subset\mathcal{B}^{*}\) has no element that is a proper prefix of another; hence, we say the programs are **self-delimiting**. Let \(U(p)\) denote the output of \(U\) on \(p\in\mathcal{P}\), and write \(U(p)=\emptyset\) for \(p\in\mathcal{B}^{*}\setminus\mathcal{P}\). Universality means that, for every prefix machine \(T\), there exists \(x_{T}\in\mathcal{B}^{*}\), such that for all \(y,p\in\mathcal{B}^{*}\), \[U(y,x_{T},p)=T(y,p).\] We omit some details covered by standard references [23; 24], such as the means to unambiguously encode a tuple of strings as a single string input to \(T\) or \(U\). The Solomonoff-Kolmogorov-Chaitin **description complexity** of a string \(x\in\mathcal{B}^{*}\), given side information \(y\in\mathcal{B}^{*}\), is \[K(x\mid y):=\min_{p}\{|p|:\,U(y,p)=x\}.\] When \(y\) is the empty string, \(K(x\mid y)\) becomes the unconditional description complexity \(K(x)\). When a finite set is written in place of \(x\) or \(y\), we mean a lexicographic listing of its elements. Abusing notation, we may write a function to mean any fixed program that computes it; since the resulting complexity depends on which program is chosen, we will only do this in statements that hold for all possible choices. Complexities and entropies in this article are measured in units of bits; correspondingly, all logarithms are assumed to have base 2. (In)equalities that hold up to constant additive terms or multiplicative factors are expressed by writing a \(+\) or \(\times\) on top of the (in)equality sign. For example, \(f(x)\stackrel{{+}}{{<}}g(x)\) and \(f(x)\stackrel{{\times}}{{<}}g(x)\) mean \(f(x)<c+g(x)\) and \(f(x)<c\cdot g(x)\), respectively, for some constant \(c\). By "constant", we mean that \(c\) is only a function of parameters that we've explicitly declared as fixed, such as the universal computer \(U\). We also assume a fixed string encoding for the states in \(\mathcal{X}\), so that \(K(x)\) is well-defined for \(x\in\mathcal{X}\). Whereas the Shannon-Gibbs entropy measures the mean information content per independent sample of a probability distribution [22], the description complexity measures the information content of an individual string without reference to any distribution. Perhaps unsurprisingly, many of its properties are analogous [25]. 
Here, we list just a few: \[\sum_{x}2^{-K(x|y)}<1,\qquad K(x)\stackrel{{+}}{{<}}|x|+ K(|x|)\stackrel{{+}}{{<}}|x|+2\log|x|,\] \[K(x\mid y)\stackrel{{+}}{{<}}K(x)\stackrel{{ +}}{{<}}K(x,y)\stackrel{{+}}{{=}}K(y,x)\stackrel{{+} }{{=}}K(y)+K(x\mid y,K(y))\stackrel{{+}}{{<}}K(y)+K(x\mid y),\] \[K(x\mid A)\stackrel{{+}}{{=}}\log|A|\text{ for ``most'' $x$ in any finite set $A\subset\mathcal{B}^{*}$},\] \[\mu(x)\stackrel{{\times}}{{<}}2^{-K(x|\mu)}\text{ for any lower semicomputable discrete semimeasure $\mu$}.\] The last property motivates referring to \(2^{-K(x)}\) as a **universal semimeasure**. The algorithmic **mutual information** between \(x\) and \(y\) is defined by \[I(x;\,y):=K(x)+K(y)-K(x,y)\stackrel{{+}}{{=}}K(x)-K(x\mid y,K(y)). \tag{3}\] Since we can identify \(\mathcal{B}^{*}\) with other countably infinite sets, such as \(\mathbb{Z}^{+}\), \(\mathbb{Z}\times\mathbb{Z}\), \(\mathbb{Q}\), and \(\mathcal{X}\), computable functions between these sets are simply functions of the form \(x\mapsto U(f,x)\), that halt for all \(x\). The description complexity \(K\) is not computable, but it's possible to compute a sequence that approaches it from above, so we say it is upper semicomputable. We can extend these concepts to real-valued functions. \(f:\mathbb{Z}^{+}\rightarrow\mathbb{R}\) is said to be **lower (upper) semicomputable** if there exists a computable function \(g:\mathbb{Z}^{+}\times\mathbb{Z}^{+}\rightarrow\mathbb{Q}\), such that \(g(n,\cdot)\) is monotonically increasing (decreasing), and \[\lim_{m\rightarrow\infty}g(n,m)=f(n).\] The real-valued function \(f\) is **computable** if it is both lower and upper semicomputable. ## 3 Theoretical results In the analogy to classical mechanics, we think of each discrete state \(x\in\mathcal{X}\) as a coarse-grained cell in some underlying Hamiltonian system's phase space. If \(\pi(x)\) is the Liouville volume of the corresponding cell, then \(\pi\) is a stationary measure of the coarse-grained process. With this intuition in mind, we now adapt Gacs [5]' coarse-grained entropy to Markov processes. Define the **algorithmic entropy**\(S_{\pi,P}\), relative to the computable measure \(\pi\) and the computable \(\pi\)-stochastic matrix \(P\), by \[S_{\pi,P}(x):=K(x\mid\pi,P)+\log\pi(x). \tag{4}\] Recall that \(K(\cdot\mid\pi,P)\) is a conditional complexity whose side information consists of programs that compute \(\pi\) and \(P\). When the subscripts are clear from context, we may omit them and write simply \(S(x)\). Under \(\pi\)-stochastic transitions, the algorithmic entropy satisfies a very general second law of thermodynamics. To understand why, consider an arbitrary state transition \(x\to y\). The resulting change in various state functions is bounded in terms of the transition probability \(P(x,\,y)\). **Lemma 1** (Bounds on individual transitions).: _Let \(P:\mathcal{X}\times\mathcal{X}\to\mathbb{R}^{+}\) be a computable \(\pi\)-stochastic matrix, with \(\pi\) computable and strictly positive. 
Then, for all \(x,y\in\mathcal{X}\), \(z\in\mathcal{B}^{*}\),_ \[K(y\mid P)-K(x\mid P) \stackrel{{+}}{{<}}\log\frac{1}{P(x,\,y)}-K(x\mid y^{*},P), \tag{5}\] \[\log\pi(x)-\log\pi(y) \stackrel{{+}}{{<}}\log\frac{1}{P(x,\,y)}-K(x\mid y,\pi,P),\] (6) \[S_{\pi,P}(x)-S_{\pi,P}(y) \stackrel{{+}}{{<}}\log\frac{1}{P(x,\,y)}-K(y\mid x_{\pi}^{*},\pi,P),\] (7) \[I(y;\,z\mid P)-I(x;\,z\mid P) \stackrel{{+}}{{<}}\log\frac{1}{P(x,\,y)}-K(y\mid x^{*},z_{x^{*}}^{*},P), \tag{8}\] _where we've used the shorthand \(a_{b}^{*}:=(a,\,K(a\mid b,P))\) with an optional subscript \(b\)._

Proof.: Define the computable matrix \[\widetilde{P}(y,\,x):=\frac{\pi(x)}{\pi(y)}P(x,\,y). \tag{9}\] Since \(P\) is \(\pi\)-stochastic, directly verifying Equations (1) and (2) reveals that \(\widetilde{P}\) is as well. The discrete probability measure \(y\mapsto P(x,\,y)\) can be computed by a constant-sized program along with \((x,P)\), whereas \(x\mapsto\widetilde{P}(y,\,x)\) can be computed using \((y,\pi,P)\). Hence, \[P(x,\,y) \stackrel{{\times}}{{<}}2^{-K(y\mid x,P)},\] \[\widetilde{P}(y,\,x) \stackrel{{\times}}{{<}}2^{-K(x\mid y,\pi,P)}.\] Now, we verify the inequalities one at a time. Equation (5) follows from \[K(x\mid y^{*},P) \stackrel{{+}}{{=}}K(x,y\mid P)-K(y\mid P)\] \[\stackrel{{+}}{{<}}K(y\mid x,P)+K(x\mid P)-K(y\mid P)\] \[\stackrel{{+}}{{<}}\log\frac{1}{P(x,\,y)}+K(x\mid P)-K(y\mid P).\] Similarly, Equation (6) follows from \[K(x\mid y,\pi,P) \stackrel{{+}}{{<}}\log\frac{1}{\widetilde{P}(y,\,x)}\] \[=\log\frac{1}{P(x,\,y)}+\log\pi(y)-\log\pi(x).\] Next, Equation (7) follows from \[K(y\mid x_{\pi}^{*},\pi,P) \stackrel{{+}}{{=}}K(x,y\mid\pi,P)-K(x\mid\pi,P)\] \[\stackrel{{+}}{{<}}K(x\mid y,\pi,P)+K(y\mid\pi,P)-K(x\mid\pi,P)\] \[\stackrel{{+}}{{<}}\log\frac{1}{P(x,\,y)}+S_{\pi,P}(y)-S_{\pi,P}(x),\] where the last step used Equations (4) and (6). Finally, the proof of Equation (8) is due to Gacs et al. [26].

**Remark 1**.: If Lemma 1 applies to a given \((x,y,\pi,P)\), then it also applies when \(x\) and \(y\) are swapped, or \(P\) is replaced by \(\widetilde{P}\), or both. Thus, we obtain additional bounds for free. The choice of representation for \(\pi\) or \(P\) is not too important, as the bounds remain valid when conditioned on additional information.

The matrix \(\widetilde{P}\) from Equation (9) was first introduced by Kolmogorov [27], and is called a _dual_ or _reverse_ of \(P\). At equilibrium, when the state is distributed proportionally to \(\pi\), Bayes' rule implies that \(\widetilde{P}\) gives the transition probabilities backward in time. Away from equilibrium, this is generally no longer the case. However, for some systems, a microscopic perturbation restores agreement between \(\widetilde{P}\) and the backward transition probabilities, thus also restoring a kind of symmetry under time reversal [28]. For a microscopic view, see Appendix A.

Now, Lemma 1 says that if a transition \(x\to y\) occurs with substantial probability \(P(x,\,y)\), then it cannot substantially _decrease_ the Liouville measure \(\pi\) or the algorithmic entropy \(S\), nor can it substantially _increase_ the description complexity \(K\) or the algorithmic mutual information \(I\) (with respect to any fixed object \(z\)). Nonetheless, the evolution of \(\pi\) and \(K\) can be highly non-monotonic when a large number of unlikely transitions sum to a high enough probability. To decrease \(\pi\) and/or increase \(K\), one only has to randomize. In contrast, both \(S\) and \(I\) are essentially monotonic. Gacs et al. [26] use Equation (8) to prove it for the algorithmic mutual information (see also Levin [29], who proved it earlier by a different method). Now, we use Equation (7) to prove monotonicity for the algorithmic entropy, establishing its role as a thermodynamic potential.

**Theorem 1** (Second law of thermodynamics).: _Let \(X\) and \(Y\) be \(\mathcal{X}\)-valued random variables satisfying_ \[\forall x,y\in\mathcal{X},\quad\Pr(Y=y\mid X=x)=P(x,\,y),\] _for some computable \(\pi\)-stochastic matrix \(P\), with \(\pi\) computable and strictly positive. Then,_ \[\left\langle 2^{S_{\pi,P}(X)-S_{\pi,P}(Y)}\right\rangle\overset{\times}{<}1. \tag{10}\] _Therefore, for \(\delta>0\), with probability at least \(1-\delta\),_ \[S_{\pi,P}(X)-S_{\pi,P}(Y)\overset{+}{<}\log\frac{1}{\delta}. \tag{11}\]

Proof.: By Lemma 1, omitting subscripts for \(S\), \[\log P(x,\,y)+S(x)-S(y)\overset{+}{<}-K(y\mid x_{\pi}^{*},\pi,P).\] Consequently, \[\sum_{y\in\mathcal{X}}P(x,\,y)\cdot 2^{S(x)-S(y)}\overset{\times}{<}\sum_{y\in\mathcal{X}}2^{-K(y\mid x_{\pi}^{*},\pi,P)}<1.\] We conclude that \[\left\langle 2^{S(X)-S(Y)}\right\rangle=\sum_{x,y\in\mathcal{X}}\Pr(X=x)\cdot P(x,\,y)\cdot 2^{S(x)-S(y)}\overset{\times}{<}1.\] Finally, let \(\delta>0\). Markov's inequality implies that with probability at least \(1-\delta\), \[2^{S(X)-S(Y)}\overset{\times}{<}\frac{1}{\delta}.\] Taking logarithms now yields Equation (11).

If Equation (10) held with equality, it would be an integral fluctuation theorem [9]. To see that this inequality must be strict, consider the case where \(\mathcal{X}\) is large but finite, \(\pi=\sharp\), and the transition probabilities are uniform. Then, even if \(X\) is fixed to the state with the lowest entropy, \(Y\) would distribute uniformly over \(\mathcal{X}\), making its entropy never less, and typically much greater, than that of \(X\). This example also confirms that Theorem 1 meets our Criterion 2.

In fact, we argue that it meets all five of our criteria for a second law of thermodynamics. To verify Criterion 1, note that the entropy increase in Equation (11) is bounded by a constant plus \(\log(1/\delta)\). The constant in the proof came from basic properties of \(K\), and can be made small by an appropriate choice of universal computer \(U\) [23, §3.9]. In addition, even for exponentially small \(\delta\), \(\log(1/\delta)\) amounts to a modest number of bits, negligible in terms of physical units. Criterion 3 relies on the existence of short interpreters between all "reasonable" choices of universal computer and state encoding. While we do not prove it rigorously, Appendix B of Zurek [3] provides arguments for its plausibility. Criterion 4 is met as well: while Gacs [5] does not prove a proper second law, he demonstrates how algorithmic entropy simplifies the analyses of Maxwell's demon and Landauer's principle. Now that we have a second law, Section 4 revisits these thought experiments. Finally, Criterion 5 is met because our law applies to a finite transition, without resorting to long-term asymptotics.

To apply Theorem 1 to a real system, we must choose a coarse-graining. Suppose we choose the elements of \(\mathcal{X}\) to correspond to the Boltzmann macrostates. In other words, we measure a few macrovariables (e.g., temperature, pressure, and chemical composition) to a reasonable number of significant figures, and divide the phase space into cells corresponding to every possible measurement. In this case, \(K(x\mid\pi,P)\) is the macrovariables' description length, which is negligible.
Returning to the Boltzmann coarse-graining: the algorithmic entropy is dominated by the remaining term \(\log\pi(x)\), which is precisely the Boltzmann entropy of the macrostate \(x\). The problem with the Boltzmann coarse-graining is that it depends on macrovariables. Instead, let's coarse-grain directly in terms of the microvariables. In an \(N\)-particle Hamiltonian system, elements of \(\mathcal{X}\) correspond to tiny cells in \(6N\)-dimensional phase space, formed by specifying every particle's position and momentum to a high level of precision. Every cell has equal Liouville volume (about \(h^{3N}\) in the formulation of Gaspard [8, §2.7], \(h\) being Planck's constant); hence, \(\pi\) is constant. By normalizing, we can assume \(\pi=\sharp\), so that the term \(\log\pi(x)\) vanishes. Thus, the algorithmic entropy is precisely \(K(x\mid\sharp,P)\). In fact, we need not condition on \(\sharp\), since \(K(\sharp)\stackrel{{+}}{{=}}0\). Treating the laws of physics as fixed, we might be tempted to say \(K(P)\stackrel{{+}}{{=}}0\) as well. There are two caveats: first, to keep the implied constant small, \(U\) should be chosen such that it can compactly describe the laws of physics. Second, when the laws of physics are allowed to run for arbitrary lengths of time, \(P\)'s description consists not only of the laws of physics, but also the elapsed time. With that in mind, we now specialize our second law of thermodynamics to the doubly stochastic setting. **Theorem 2** (Second law for doubly stochastic processes).: _Fix a uniformly computable collection of doubly stochastic matrices \((P_{\Delta t})_{\Delta t\in\mathcal{T}}\), with either \(\mathcal{T}=\mathbb{R}^{+}\) or \(\mathcal{T}=\mathbb{Z}^{+}\). Let \((X_{t})_{t\in\mathcal{T}}\) be a stochastic process, and \(s,t\in\mathcal{T}\) be a pair of times with \(s\leq t\), satisfying_ \[\forall x,y\in\mathcal{X},\quad\Pr(X_{t}=y\mid X_{s}=x)=P_{t-s}(x,\,y).\] _Then, for \(\delta>0\), with probability at least \(1-\delta\),_ \[K(X_{s})-K(X_{t})\stackrel{{+}}{{<}}K(t-s)+\log\frac{1}{\delta}. \tag{12}\] Proof.: The first premise means that some fixed program computes the transition matrix \(P_{t-s}\) as a function of the elapsed time \(t-s\). Hence, \[K(P_{t-s})\stackrel{{+}}{{<}}K(t-s).\] Meanwhile, the obvious identities \(\log\sharp(x)=0\) and \(K(\sharp)\stackrel{{+}}{{=}}0\) yield \[S_{\sharp,P_{t-s}}(x)=K(x\mid\sharp,P_{t-s})\stackrel{{+}}{{=}}K (x\mid P_{t-s}).\] From now on, we omit the subscripts for \(S\). Theorem 1 says that with probability at least \(1-\delta\), \[S(X_{s})\stackrel{{+}}{{<}}S(X_{t})+\log\frac{1}{\delta}.\] Putting it all together, \[K(X_{s}) \stackrel{{+}}{{<}}K(X_{s}\mid P_{t-s})+K(P_{t-s})\] \[\stackrel{{+}}{{=}}S(X_{s})+K(P_{t-s})\] \[\stackrel{{+}}{{<}}S(X_{t})+K(t-s)+\log\frac{1}{\delta}\] \[\stackrel{{+}}{{=}}K(X_{t}\mid P_{t-s})+K(t-s)+\log \frac{1}{\delta}\] \[\stackrel{{+}}{{<}}K(X_{t})+K(t-s)+\log\frac{1}{ \delta}.\] **Remark 2**.: Strictly speaking, Theorem 2 does not require the stochastic process to be Markovian. If it is Markovian, then the transition probabilities are invariant to conditioning on past events, so that the theorem continues to apply after updating on knowledge of the past. In Equation (12), the complexity of an incomputable real number must be taken as infinity. Fortunately, for realistic systems whose evolution is not too fast, \(K(t-s)\) can be made negligible by a sufficiently close approximation of \(t\). Even at the granularity of Planck times, the age of the Universe is only about \(2^{202}\). 
All integers in the range \(1<n<2^{202}\) have \[K(n)\stackrel{{+}}{{<}}\log n+2\log\log n<202+2\cdot 8=218\text{ bits}<2.1 \times 10^{-21}\operatorname{J}\operatorname{K}^{-1}.\] Thus, up to microscopic fudge terms, the state of a doubly stochastic process has non-decreasing description complexity \(K\). Even if entropy is produced only a few bits at a time, a sum of such events can easily dominate the fudge terms over a larger span of space or time. In order to meet Theorem 2's hypothesis of double stochasticity, we had to abandon the Boltzmann coarse-graining and equalize the cell volumes. Fortunately, \(K\) can still be related to the Boltzmann entropy. With our refined coarse-graining, each Boltzmann macrostate is represented not by a single cell, but by a finite set of them \(B\subset\mathcal{X}\). The Boltzmann entropy is \(\log|B|\). Meanwhile, by standard properties of \(K\), most cells \(x\in B\) have \[K(x)\stackrel{{+}}{{=}}K(x\mid B)\stackrel{{+}}{{=}} \log|B|,\] where the first equality is due to \(B\) having a short description in terms of a few macrovariables. More generally, \(B\) need not be a classical Boltzmann macrostate; any simply describable finite set \(B\subset\mathcal{X}\) containing \(x\) will do. Indeed, for any fixed small \(c\) and all \(x\in\mathcal{X}\), \[K(x)\stackrel{{+}}{{<}}\min_{B\ni x}\{\log|B|:\,K(B)\leq c\}, \tag{13}\] since \(x\) can be identified by a short description of \(B\), along with a numerical index of size \(\log|B|\). In particular, the Boltzmann entropy is only an upper bound on \(K\). \(K(x)\) may be lower if the state \(x\) has additional structure not captured by the Boltzmann macrovariables. Generalizing the fixed-size index to a variable-size Shannon-Fano code [22], we also have \[K(x)\stackrel{{+}}{{<}}\min_{\mu:\mathcal{X}\to\mathbb{R}^{+}} \{\log\frac{1}{\mu(x)}:\,K(\mu)\leq c,\,\,\sum_{y\in\mathcal{X}}\mu(y)\leq 1\}. \tag{14}\] In algorithmic statistics [23, §5.5] [26; 30], Equations (13) and (14) are seen as ways to _infer_ which set \(B\) or ensemble \(\mu\) best describes \(x\). This flexibility extends the applicability of entropy to settings that are not adequately characterized by the usual thermodynamic macrovariables or ensembles. For example, to compare the entropy between a room in a messy vs. tidy state, we no longer need to design ad hoc classifications of messiness. Instead, \(K\) automatically takes into account every computable manner in which the room may be organized. Thus, from this point onward, let's assume an equal-size coarse-graining and take \(K\) as the definition of entropy. ### Heat reservoirs So far, we've assumed our system to be isolated. In practice, it's often more realistic to model our system as being coupled to a heat bath, treating their union as a larger isolated system. While it would be perfectly valid to apply an equal-size coarse-graining to the combined system, it's more convenient to "inflate" our system's standard coarse-graining with the heat bath's contribution. To be specific, suppose our system, with coarse-grained state space \(\mathcal{X}\), is coupled to a large heat bath at a constant temperature \(T\). Then, although the elements \(x\in\mathcal{X}\) represent equal Liouville volumes in the original system, the Liouville volume corresponding to \(x\) in the combined phase space is no longer constant. Instead, it's proportional to the number of heat bath configurations compatible with \(x\). 
By conservation of energy, the lower our system's energy, the higher the heat bath's energy, and the more ways it can distribute its energy. Up to a normalization factor, the joint Liouville volume at energy \(E(x)\) is \[\pi(x)=e^{-E(x)/k_{B}T}.\] We can compute the minimum heat transfer \(Q(x\to y):=E(x)-E(y)\) required to transition from a state \(x\) to another state \(y\). By Lemma 1, \[\frac{Q(x\to y)}{k_{B}T\ln 2}=\log\pi(y)-\log\pi(x)\stackrel{{+}}{{> }}K(x\mid y,\pi,P)-\log\frac{1}{P(x,\,y)}.\] This is precisely the bound derived by Kolchinsky [31], generalizing earlier work by Zurek [32]. \(Q(x\to y)\) is the immediate energy cost of the transition \(x\to y\); however, it does not account for the change in complexity between \(x\) and \(y\), which may incur an additional cost later. To get a proper thermodynamic potential, take the joint system's algorithmic entropy: \[S_{\pi,P}(x):=K(x\mid\pi,P)+\log\pi(x)=K(x\mid\pi,P)-\frac{E(x)}{k_{B}T\ln 2}. \tag{15}\] By Theorem 1, the algorithmic entropy is non-decreasing. To express it in units of energy, multiply by \(-k_{B}T\ln 2\). The result is an algorithmic version of the **Helmholtz free energy**: \[F(x):=E(x)-K(x\mid\pi,P)\cdot k_{B}T\ln 2. \tag{16}\] The free energy is non-increasing. It serves as a convenient accounting mechanism to track changes in the global entropy, as a function of only the local state \(x\) of our system. Other thermodynamic potentials can be derived in a similar manner; in general, the contribution from \(\pi(x)\) depends on the compatible configurations of an environment that's kept at equilibrium. ## 4 Discussion ### Markovian dynamics The predominant approach to modeling the thermodynamics of information is to present some concrete physical process, and set its initial condition in such a way as to render its coarse-grained evolution Markovian. In order to derive general principles, we've abstracted away the physical setup, working directly in the general setting of Markov processes subject to a stationary measure. Now, we demonstrate how this setting can be used to construct and analyze thought experiments, offering a great deal of clarity and precision. Since we'd like Theorem 2 to hold, its premises will be our guide. Recall that we've fixed a binary string encoding of the states in \(\mathcal{X}\). By identifying \(\mathcal{X}\) with its image in \(\mathcal{B}^{*}\), and examining the state at periodic time intervals, we may view physical systems as large arrays of bits that evolve according to some simply computable doubly stochastic transition matrix \(P\). By the Birkhoff-von Neumann theorem and its generalization by Revesz [33], transitioning by a doubly stochastic matrix is precisely the same as by a probabilistic mixture of bijective mappings. This offers an alternative to writing an explicit transition matrix: we can instead describe the dynamics as replacing \(x\) with \(F(x)\), where the bijection \(F:\mathcal{X}\to\mathcal{X}\) is sampled from some distribution with low description complexity, independently at each time step. If we only care to specify \(F\) on a subset of \(\mathcal{X}\), then it need only be a random _injection_ whose domain and range have complements of equal cardinality, since these can be extended to bijections on all of \(\mathcal{X}\). For example, on the set \(\mathcal{X}:=\mathcal{B}=\{0,1\}\), there are exactly two bijections: identity and negation. Therefore, these are the only permitted deterministic transitions. 
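The Birkhoff-von Neumann decomposition is also easy to compute. The following sketch is our own illustration (the function name and the greedy strategy are our choices, not taken from the cited works): it peels permutation matrices off a doubly stochastic matrix using SciPy's assignment solver:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def birkhoff_decompose(D, tol=1e-12):
    """Greedily peel permutation matrices off a doubly stochastic D,
    returning (weight, column-map) pairs whose weights sum to ~1."""
    D = np.array(D, dtype=float)
    terms = []
    while D.max() > tol:
        # choose a permutation supported on strictly positive entries
        cost = np.where(D > tol, -np.log(np.maximum(D, tol)), 1e12)
        rows, cols = linear_sum_assignment(cost)
        w = D[rows, cols].min()          # largest weight we can remove
        terms.append((w, cols.copy()))
        D[rows, cols] -= w
    return terms

D = [[0.50, 0.25, 0.25],
     [0.25, 0.50, 0.25],
     [0.25, 0.25, 0.50]]
for w, perm in birkhoff_decompose(D):
    print(round(w, 3), perm)
```

Sampling the permutation `perm` with probability `w` at each time step then realizes the doubly stochastic transition as a random bijection, exactly as described above.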
For this two-state example, the full set of allowed transitions consists of the mixtures of identity and negation, parametrized by a probability of negation \(\alpha\in[0,1]\). One-to-one mappings on subsets of \(\mathcal{X}\) are also allowed: for example, we can specify that \(0\) maps to \(1\), without caring what \(1\) maps to (though in this case, negation would be the only option). It may seem odd to allow ourselves to choose the distribution of \(F\); after all, the laws of physics should uniquely determine the dynamics of every isolated system. On the other hand, when systems interact, a fixed global dynamics may emulate a variety of local dynamics. To demonstrate, consider a composite system with state space \(\mathcal{P}\times\mathcal{X}\), whose first component is static. In other words, the system evolves according to \[(p,\,x)\mapsto(p,\,F_{p}(x)),\] where \(F_{p}:\mathcal{X}\to\mathcal{X}\) is a random function whose distribution depends on the **control**\(p\in\mathcal{P}\). It's easy to see that the composite system's evolution is doubly stochastic (i.e., a mixture of bijections) if, and only if, all of the possible local evolutions \(\{F_{p}:\,p\in\mathcal{P}\}\) are. If an engineer sets \(p\) at initialization, then she effectively decides which dynamics \(F_{p}\) apply to the second subsystem. Since \(p\) is kept fixed, it seems natural to omit it, treating the second subsystem as if it were isolated and evolving according to the chosen dynamics \[x\mapsto F_{p}(x).\] The problem is that, even if the composite mapping is simply distributed, each individual \(F_{p}\) might not be, since it depends on \(p\). Hence, the proof of Theorem 2 no longer applies directly to \(K(\cdot)\); instead, it applies to the conditional complexity \(K(\cdot\mid p)\). Only when \(K(p)\) is small, is it safe to omit all mention of \(p\). When \(K(p)\) is large, we must either use the conditional complexity, or include \(p\) as an explicit part of the state. One way that \(K(p)\) might become large is if the engineer's choice \(p\) depends on \(x\). In such cases, it's worthwhile to explicitly model the computation responsible for setting \(p\). In addition to a fixed control, we can use a reversible program counter (i.e., a clock) \(c\in\mathbb{Z}_{m}\): \[(c,\,x)\mapsto(c+1,\,F_{c}(x)),\] where the addition is modulo \(m\). If \(m\) is small, then the entropy of \(c\) is small enough to omit. Therefore, we propose the following model: let's allow ourselves the freedom to choose a small number of random bijections \(F_{c}:\mathcal{X}\to\mathcal{X}\) (or random injections on subsets of \(\mathcal{X}\)), with simply computable distributions, to apply in cyclic order. This framework abstracts away details of the underlying physics, focusing our attention on its information-theoretic aspects. ### Maxwell's demon Let's specialize our model to a classic thought experiment made to test the second law [1]. **Maxwell's demon** has a memory that starts in a low-entropy "clear" state \(0\in\mathcal{X}\). It interacts with a system that starts in some high-entropy state \(x\in\mathcal{X}\). In our stylized version, let's allow the demon to reversibly perform a complete measurement, copying the system's state into memory: \[(0,\,x)\mapsto(x,\,x).\] Using its measurement as a control, the demon proceeds to reversibly erase the system's entropy: \[(x,\,x)\mapsto(x,\,0).\] Both of these mappings are one-to-one on their respective domains, \(\{(0,x):\,x\in\mathcal{X}\}\) and \(\{(x,x):\,x\in\mathcal{X}\}\). 
Therefore, they can be extended to bijections on \(\mathcal{X}\times\mathcal{X}\). Since the bijections are deterministic, Theorem 2 applies both forward and backward in time; therefore, the composite system's entropy cannot change substantially. Indeed, \[K(0,x)\stackrel{{+}}{{=}}K(x,x)\stackrel{{+}}{{=}}K( x,0)\stackrel{{+}}{{=}}K(x).\] The erasure stage's control \(x\) has high entropy, so we must not omit it. If we do, there would appear to be a violation of the second law, as the system undergoes the erasure \[x\mapsto 0.\] We can generalize the thought experiment to a demon who performs only a partial measurement \(m(x)\), where \(m\) is a (possibly random, not necessarily bijective) function, whose distribution has low description complexity. The demon first takes the measurement, and then uses it as a control to transition the system from \(x\) to some new (possibly random) state \(y\): \[(0,\,x)\mapsto(m(x),\,x)\mapsto(m(x),\,y).\] The specifics of \(y\)'s computation are not important. As long as our rules are followed, Theorem 2 says that with high probability, \(K(m(x),\,x)\stackrel{{+}}{{<}}K(m(x),\,y)\). Therefore, \[K(y)\stackrel{{+}}{{>}}K(m(x),\,y)-K(m(x))\stackrel{{ +}}{{>}}K(m(x),\,x)-K(m(x))=K(x)-I(m(x);\,x).\] In other words, although the second law forbids a decrease in the _total_ entropy, it permits the measured system to lose as much entropy as was measured from it! That said, the demon cannot then proceed to clear its measurement using \[(m(x),\,y)\mapsto(0,\,y),\] nor overwrite it with a new measurement \[(m(x),\,y)\mapsto(m(y),\,y),\] as these mappings are generally not one-to-one if defined to work for all \(x\). Nor can we tailor them for a specific \(x\), as a map that depends on \(x\) may have high description complexity. In summary, when \(K\) is taken as the definition of entropy, we see that Maxwell's demon is entirely straightforward. There is no violation of the second law, nor any need to distinguish the entropy in the memory as being of a different type than usual. Our analysis is generic, applying to every physical implementation that meets our coarse-grained modeling assumptions. ### Landauer's principle Landauer [34] discovered, as Bennett [14] later elaborated, that a computer memory cannot be cleared without dissipating an equivalent amount of heat. The importance of Landauer's principle is that it puts a fundamental physical limit on the energy-efficiency of irreversible computers. For a modern treatment, see Frank [35]. The analysis of Landauer's principle depends on some subtle details of the setup, so care is needed to ensure we arrive at the right conclusions. We've seen that entropy inside a memory is not fundamentally different from physical entropy. Since the total entropy cannot decrease, clearing a memory requires that its entropy be moved to its environment. When the entropy moves, it typically takes the form of heat energy, because most environments have the property that the number and complexity of available configurations increase with the amount of energy. In particular, Equation (15) and Theorem 1 imply an energy cost of \(k_{B}T\ln 2\) per bit of entropy dumped into a heat bath. See also Neyman [36] for a larger cost that applies when a memory is cleared by irreversible thermal equilibration. On the other hand, it's possible to erase data without zeroing it; that is, without clearing its entropy. 
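Before elaborating on that, here is a toy numerical check of the demon's complexity bookkeeping, again using zlib-compressed length as a crude, computable stand-in for the incomputable \(K\) (an assumption of this sketch, not a claim of the formal development):

```python
import os, zlib

def klen(*parts: bytes) -> int:
    """Crude proxy for K: zlib-compressed length of the joint state, in bits."""
    return 8 * len(zlib.compress(b"".join(parts), 9))

x    = os.urandom(4096)   # high-entropy system state
y    = os.urandom(4096)   # independent random data
zero = bytes(4096)        # cleared demon memory

print(klen(zero, x))  # before measurement,  (0, x):  ~ K(x)
print(klen(x, x))     # after measurement,   (x, x):  ~ K(x)  (the copy is free)
print(klen(x, zero))  # after erasure,       (x, 0):  ~ K(x)
print(klen(y, x))     # memory randomized,   (y, x):  ~ 2 K(x)
```

The first three quantities come out nearly equal, while the last — the memory overwritten with independent random data — is roughly twice as large. That randomized-copy case is precisely the situation analyzed next.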
By keeping the entropy inside the memory rather than dumping it in a heat bath, Sagawa [37] demonstrates that erasure on its own need not result in heat dissipation. To see this in our model, suppose \(\mathcal{X}\) is finite, and then sample one of the \(|\mathcal{X}|!\) bijections uniformly at random. The old data \(x\) is replaced with new random data \(y\), leaving no hint of what \(x\) was. Although \(x\) is destroyed, its entropy is replaced by that of \(y\), satisfying the second law without dissipating heat. The analysis changes if \(x\) has a distant copy somewhere. This might occur, for instance, if one copy was obtained by measuring the other. Then, although it's possible to destroy our copy of \(x\) without any immediate dissipation, doing so would necessarily increase the joint entropy of the two systems taken together. For example, the uniformly random bijection can be applied locally to replace our copy of \(x\) with some random \(y\): \[(x,\,x)\mapsto(y,\,x).\] The fact that \(K(x,x)<K(y,x)\) becomes physically relevant if the two copies are later brought together to interact: after applying the joint mapping \[(x,\,x)\mapsto(0,\,x),\] the remaining copy requires less dissipation to clear, than for the pair \((y,\,x)\). Therefore, a rigorous variant of Landauer's principle concerns the cost of forgetting _mutual information_ between systems during periods of _non-interaction_. Consider the general setting of two systems with respective stationary measures \(\pi_{1},\pi_{2}\). Then, their joint dynamics are stationary with respect to \(\pi(x,\,y):=\pi_{1}(x)\pi_{2}(y)\). Equations (3) and (4) imply that \[S(x,\,y)\stackrel{{+}}{{=}}S(x)+S(y)-I(x;\,y).\] As long as the systems don't interact, Theorem 1 says that their respective algorithmic entropies \(S(x)\) and \(S(y)\) are non-decreasing, and the algorithmic data processing inequality [26; 29] says that \(I(x;\,y)\) is non-increasing. We conclude that the change in \(S(x,\,y)\) decomposes into a sum of three essentially non-negative terms: \[\Delta S(x,\,y)\stackrel{{+}}{{=}}\Delta S(x)+\Delta S(y)+\Delta (-I(x;\,y)).\] In particular, the price of discarding mutual information is an increase in the total entropy. In all cases, the algorithmic entropy acts as an anti-resource: once produced, it can never be destroyed. On the other hand, we may be able to prevent its production. A string \(x\) computed by a small deterministic program will always have low \(K(x)\), no matter how random or complex it may appear. Only when we ignore the origins of \(x\) and toss it into a stochastic environment, is entropy produced [35]. A **reversible computer** could in principle avoid dissipation, clearing \(x\) by running its computation in reverse [38; 39; 40]. ### A data compression engine In applied thermodynamics, we are often interested in transferring energy between different systems or different forms. The rate of change of a system's entropy, with respect to its energy, depends on the form of energy under consideration: it's high for heat energy in a cold body, moderate for heat energy in a hot body, and nearly zero for mechanical (kinetic plus potential) energy in a macroscopic rigid body. Defining the **temperature** of a system's energy-bearing degrees of freedom by the inverse of this derivative, we may thus think of mechanical energy as an especially "hot" form of energy. 
The second law of thermodynamics permits energy transfers that result in a net entropy increase, such as a mechanical source heating a hot body via friction, or a hot body diffusing heat into a cold body. Thus, mechanical energy is a very versatile _source_ to transfer from, while a cold body is a very versatile energy _sink_ to transfer to. It's more difficult to do the opposite, e.g., to further cool an already cold body. To compensate for the decrease in entropy when heat transfers from a cold body to a warmer environment, a refrigeration cycle is typically engineered to supply additional heat from a mechanical source. From these examples, we see that energy is most useful in its "hot", i.e., low-entropy, forms. This is because it's only possible to reallocate energy in ways that either maintain or increase the total entropy. To be more general, we need not speak of energy at all: the primary resource of thermodynamics is absence of entropy. In algorithmic terms, define the **entropy deficiency** by \(J(x):=\log\left(\sum_{y\in\mathcal{X}}\pi(y)\right)-S(x)\). For an \(m\)-bit memory system, this is approximately \(m-K(x)\), a measure of how well \(x\) can be compressed. To illustrate this resource view, we model a generic "information engine" powered by compressible strings. It has an \(m\)-bit memory with state space \(\mathcal{B}^{m}\). Using a computable self-delimiting encoding \(n^{\prime}\in\mathcal{B}^{*}\) of the non-negative integers \(n\in\mathbb{Z}^{+}\), any string \(x\in\mathcal{B}^{*}\) with \(||x|^{\prime}|+|x|\leq m\) can be encoded in memory as the concatenation of \(|x|^{\prime}\), \(x\), and a padding of zeros: \[\mathrm{e}(x):=|x|^{\prime}x0^{m-||x|^{\prime}|-|x|}\in\mathcal{B}^{m}.\] The self-delimiting prefix allows \(\mathrm{e}(x)\) to be uniquely parsed into its three parts. The engine is based on a **lossless compression algorithm**: a computable one-to-one function \(f:\mathcal{B}^{*}\to\mathcal{B}^{*}\), whose worst-case blowup \[c:=\max_{x\in\mathcal{B}^{*}}\left\{||f(x)|^{\prime}|+|f(x)|-||x|^{\prime}|-|x|\right\}\] is much less than \(m\). Since \(\mathrm{e}\) and \(f\) are one-to-one, the mapping \(\mathrm{e}(x)\mapsto\mathrm{e}(f(x))\), defined on the range of \(\mathrm{e}\), is also one-to-one. If \(f\) is not simple, we can take it to be programmed onto a read-only section of memory, so that the joint mapping \(g:(f,\,x)\mapsto(f,\,f(x))\) is simple. Thus, no generality is lost in assuming \(K(f)\stackrel{{+}}{{=}}0\). Since \(|f(x)|^{\prime}f(x)\) serves as a self-delimiting encoding of \(f(x)\), \[||f(x)|^{\prime}|+|f(x)|\stackrel{{+}}{{>}}K(f(x))\stackrel{{ +}}{{=}}K(x).\] Therefore, \(K(x)\) sets an optimistic bound on how well \(f\) may compress \(x\). When the compression succeeds, the padding of zeros lengthens, serving as a store of refined fuel for later use. We are now ready to describe the engine's operation. It cycles between three modes: 1. Consume ("burn") refined fuel to perform some task, producing waste. 2. Expel waste, and gather ("eat") raw fuel in its place. 3. Refine ("digest") fuel, producing waste as a byproduct. The corresponding transitions to the memory state are summarized as follows: \[\mathrm{e}(x)\xrightarrow{\mathrm{burn}}\mathrm{e}(xy)\xrightarrow{\mathrm{ eat}}\mathrm{e}(z)\xrightarrow{\mathrm{digest}}\mathrm{e}(f(z))\] Here, \(x\) is a small string, perhaps compressed from the previous cycle. Thus, \(\mathrm{e}(x)\) has a large zero padding to serve as refined fuel. 
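The memory layout and the digest stage are straightforward to prototype. In the sketch below (our own illustration), the self-delimiting prefix \(|x|'\) is an Elias-gamma-style code, and zlib stands in for \(f\); as written, the stand-in is not carefully one-to-one, whereas a real \(f\) must be injective with bounded blowup \(c\):

```python
import zlib

def gamma(n):                  # Elias-gamma-style self-delimiting code, n >= 1
    b = bin(n)[2:]
    return "0" * (len(b) - 1) + b

def e(x, m):                   # e(x) = |x|' x 0^(m - ||x|'| - |x|)
    body = gamma(len(x) + 1) + x       # encode |x|+1 so x may be empty
    assert len(body) <= m, "x too long for the m-bit memory"
    return body + "0" * (m - len(body))

def parse(mem):                # recover x from the memory image
    k = mem.index("1")                 # leading zeros locate the length field
    n = int(mem[k:2 * k + 1], 2) - 1
    return mem[2 * k + 1:2 * k + 1 + n]

def toy_f(x):                  # stand-in compressor (not carefully injective!)
    bits = "".join(format(b, "08b") for b in zlib.compress(x.encode(), 9))
    return bits if len(bits) < len(x) else x

m = 4096
mem = e("01" * 800, m)                 # eat: highly regular raw fuel z
print(m - len(mem.rstrip("0")))        # refined fuel (trailing zeros), roughly
mem = e(toy_f(parse(mem)), m)          # digest: z -> f(z)
print(m - len(mem.rstrip("0")))        # the padding has grown substantially
```

After digesting a highly regular payload, most of the memory is zero padding — refined fuel, in the engine's terms.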
We'll see shortly how that's used to perform useful tasks. In doing so, a portion of the padding is replaced with some string \(y\). If we expect \(x\) and \(y\) to be incompressible, we treat them as waste. At the second stage, we identify a location in the environment where we hope to find compressible strings; these act as raw fuel. With one reversible swap, we expel \(xy\), and gather the (hopefully) compressible string \(z\) in its place, with \(|z|=|x|+|y|\). Finally, the third stage refines the fuel \(z\) by compressing it, yielding additional zeros alongside the byproduct \(f(z)\), which takes the role of \(x\) when the cycle resets. If we are calling strings of zeros a "refined fuel", then they had better have some uses; what are they? One is that they pay for the processing of bad fuel: if the string \(z\) turns out not to be compressible after all, then \(f(z)\) may actually be longer than \(z\), overwriting up to \(c\) of the zeros. If this happens so often as to deplete the supply of zeros, the engine's behavior becomes ill-defined; in that event, we might as well consider it to have "starved to death". Otherwise, zero padding is primarily used to convert irreversible (many-to-one) operations into reversible (one-to-one) operations, which we can then apply. Some relevant operations include irreversible computation, error-correction, sensing, healing, and repair. Each of these maps a larger number of "bad" states into a smaller number of "good" states. In Section 4.2, Maxwell's demon demonstrates the use of a cleared memory to make the transition \((0,\,x)\mapsto(m(x),\,y)\), when the second law would forbid directly mapping \(x\mapsto y\). Bennett [14] offers the more concrete example of adiabatic demagnetization, consuming zeros to perform energy conversions that would otherwise violate the second law, i.e., to do work. If the engine is part of a living organism, then it can support the organism's growth and reproduction. These are normally irreversible, because they overwrite pieces of the environment with copies of the organism's data. To make an overwrite reversible, we first swap a chunk of zero padding from the engine onto a desired target location. With the target location now cleared, we can reversibly copy onto it. ### When to use probability? The reader might be disappointed to see that, after all the trouble we went through to remove probabilities from the definition of entropy, we nonetheless retained probabilities in the Markovian dynamics. The fine-grained view in Appendix A restores determinism in the dynamics, only to reintroduce randomness in the initial condition. The giant of probability and information theory, Kolmogorov [41], argued that: _"Information theory must precede probability theory, and not be based on it. By the very essence of this discipline, the foundations of information theory have a finite combinatorial character."_ Indeed, algorithmic information theory enables us to describe nature without granting probability a fundamental role. Instead of describing a _distribution_ over state trajectories, we can speak of an _individual_ trajectory that appears random in the sense of Martin-Lof [42]. Roughly speaking, this means it passes all simply computable statistical tests of randomness. For certain well-behaved distributions, Martin-Lof randomness implies membership in a **typical set**: an event accounting for the overwhelming majority of the distribution's probability, over which the characteristics of interest are confined to a very narrow range [22]. 
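To make typicality concrete: for an i.i.d. Bernoulli(\(p\)) bit sequence, the compressed length per bit should land near the Shannon entropy \(H(p)\). A quick check (our own sketch; zlib only upper-bounds \(K\), and loosely):

```python
import math, random, zlib

def H(p):                      # binary Shannon entropy, in bits
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

random.seed(0)
n, p = 2**20, 0.1
bits = [1 if random.random() < p else 0 for _ in range(n)]
packed = bytes(                # pack bits into bytes so zlib sees raw data
    sum(bits[i + j] << j for j in range(8)) for i in range(0, n, 8)
)
kx = 8 * len(zlib.compress(packed, 9))
print(kx / n, "bits/bit; Shannon bound H(p) =", H(p))   # ~0.47 for p = 0.1
```

zlib overshoots the bound \(nH(p)\) by a modest factor; a sharper compressor would land closer, as the theory for typical sequences predicts.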
When a given sequence is a member of a typical set, it's often convenient to describe it in terms of the corresponding probabilistic ensemble. Time-homogeneous Markovian dynamics are described fairly well by their typical statistics, because their trajectories collect very many independent samples. Physical systems whose _states_ belong to a typical set are also good candidates for an ensemble description. For example, consider the canonical ensemble for a mechanically isolated ideal gas container at thermodynamic equilibrium. A consequence of the gas particles' independence and large number is that, on the ensemble's typical set, the algorithmic entropy approximately equals the ensemble's Shannon-Gibbs entropy [3; 43]. Moreover, unlike the algorithmic entropy, the Shannon-Gibbs entropy is straightforward to compute. On the other hand, the algorithmic entropy is far more general. We propose that it should take conceptual priority, especially in settings whose states are not so nicely described as being typical of some simple ensemble. ## 5 Conclusions and future work In summary, we stated and proved a second law of thermodynamics for the algorithmic entropy. Unlike other entropies, the algorithmic entropy takes into account arbitrary computable structure in the state. This removes the need to specify macrovariables or ensembles, and allows information-processing systems to be handled on an equal footing with thermodynamic systems. Unlike ergodic theorems that predict an asymptotic tendency toward maximum entropy in both the infinite future and past, our law applies to real incremental changes in entropy. To our knowledge, our formulation is the first to satisfy our full wishlist of criteria for a second law of thermodynamics. Perhaps the premise of Theorem 2 is even more fundamental than its conclusion. In the generic setting of Markov chains subject to a modest set of rules, we found that it's possible to analyze Maxwell's demon and Landauer's principle with remarkable clarity and precision. This setting gave us the tools to model an engine powered by compressible data, demonstrating that a deficiency of algorithmic entropy really is the same as "free energy". This is further evidenced by Equation (16), which derives the free energy as a negative multiple of the algorithmic entropy. So far, all predictions derived from this setting seem consistent with known results in classical statistical mechanics. The simplicity and flexibility of these modeling tools should make them amenable to both theory and simulation, with the potential to yield further insights in all applications of nonequilibrium thermodynamics, ranging from reversible computing to biology. In particular, we hope that Lemma 1 can be used to derive additional fluctuation theorems. While this article focuses on the second law of thermodynamics, Markov chains are known to satisfy additional laws. We've already mentioned the data processing inequality, which likewise has both a probabilistic [22, §2.8] and an algorithmic [26; 29] formulation. It implies that if two isolated Markov processes evolve independently, then their mutual information cannot increase. In an unpublished manuscript, Ebtekar [28] argues this inequality plays an essential role in the perceptual, psychological, and causal arrows of time. Causality, not in the time-symmetric sense studied in Einstein's relativity, but in the asymmetric sense required by Bell [44], is defined by Pearl [45] as a graphical extension of the Markov property. 
See also Janzing and Scholkopf [46] for an algorithmic formulation. There is yet another general law to consider. As an alternative measure of complexity that seems more attuned to the intricate structures of living organisms, Bennett [47; 48] defines the **logical depth** of \(x\) to be, roughly speaking, the minimum runtime of a shortest program that outputs \(x\). He proved that logical depth, if it increases, can only do so slowly. Thus, logically deep objects, such as genomes, are only created by very gradual processes over a long span of time. Both mutual information and logical depth describe a sense in which the free energy of a system becomes harder to extract, demanding that separated systems be reunited in the former case, and that a long computation be rewound in the latter. In future work, it would be interesting to study the interactions between the entropy non-decrease, mutual information non-increase, and logical depth slow-increase laws, or any related laws that are as yet undiscovered. Together, they seem to define the _arrow of time_, mediating the role of information in physics, computation, and intelligent life [48]. Since intelligence evolved by natural selection, a reductionist view of its purpose might be that of optimizing data compression engines of the kind in Section 4.4. In light of the known connections between data compression, inductive learning, and intelligence [24], it would be interesting to see whether this drastic oversimplification predicts any nontrivial properties of intelligent agents. Finally, extending algorithmic thermodynamics to incorporate quantum information remains a wide open problem. As a promising start, several quantum analogues of the description complexity have been proposed, each with different properties [49; 50; 51; 52]. ## Acknowledgements This article benefited significantly from proofreading by Xianda Sun. ## Appendix A Markovian coarse-grainings No discussion of the second law would be complete without addressing the fundamental modeling assumptions responsible for the asymmetry between past and future. The premise of Theorem 2 is a time-homogeneous doubly stochastic process. Doubly stochastic matrices that are deterministic are simply permutation matrices; clearly, their dynamics are reversible. Applying Theorem 2 both forward and backward, it follows that the entropy of a _deterministic_ doubly stochastic process can neither decrease nor increase at an appreciable rate. Randomness is therefore necessary to increase entropy. Since the Markov property can be phrased in a time-symmetric manner [45], one might ask why randomness invalidates a backward application of Theorem 2. The crucial difference between the forward and backward processes is that the latter is generally neither time-homogeneous nor doubly stochastic [28]. This seems to match our real-life macroscopic experience, where forward evolutions follow localized statistical laws, but backward evolutions do not. For example, a glass in free fall will shatter at a predictable time; and while the final arrangement of its pieces is chaotic and hard to predict, we can expect it to follow a certain statistical distribution. Moreover, our statistical prediction would not depend on any concurrent happenings at the neighbor's house. In contrast, consider the reverse situation, where we see a broken glass and want to retrodict its time of impact. It's hard to make even a meaningful statistical prediction: if we try, it will be based on principles beyond the localized physics. 
For example, we might take into account the conversation at the neighbor's house, telling of the accident. Moreover, a reversed view would have distant shards begin to converge simultaneously, in apparent violation of locality. This example supports the claim that time-homogeneous Markov processes are good models of real macroscopic systems. On the other hand, the fundamental microscopic laws of nature are widely believed to be deterministic and CPT symmetric [16]. How, then, can nature's coarse-grained evolution violate this symmetry? To demonstrate its plausibility, let's construct an example of this emergent behavior. Gaspard [18] defines the **multibaker map**: a deterministic time-reversible dynamical system that, when suitably coarse-grained, emulates a random walk. Altaner and Vollmer [21] generalize the multibaker map to emulate almost arbitrary Markov chains. To convey the idea simply, we now present multibaker maps at an intermediate level of generality. Fix an integer \(m>1\). We augment the coarse-grained state space \(\mathcal{X}\) with a bi-infinite sequence of \(\mathbb{Z}_{m}\)-valued microvariables, so that the complete fine-grained state space is \(\mathcal{X}\times(\mathbb{Z}_{m})^{\mathbb{Z}}\). Every individual fine-grained state can be written in the form \[(x,\;(\ldots,\,r_{-2},\,r_{-1},\,r_{0},\,r_{1},\,r_{2},\,\ldots)),\] where \(x\in\mathcal{X}\) is the coarse-grained part, and the \(r_{i}\in\mathbb{Z}_{m}\) collect the remaining fine-grained information. Alternatively, we can rearrange the variables and punctuation as follows: \[(x.r_{-1}r_{-2}r_{-3}\ldots,\;0.r_{0}r_{1}r_{2}\ldots).\] The "0." here is symbolic. If we were to identify \(\mathcal{X}\) with \(\mathbb{Z}\), the latter notation is suggestive of the base \(m\) representation of a point in the two-dimensional "phase space" \(\mathbb{R}\times[0,1]\). There is an extensive literature that studies symbolic representations as proxies for continuous chaotic dynamical systems; for theory and examples, see Lind and Marcus [53]. At each discrete time step, the system evolves by a deterministic and reversible two-stage transformation. The first stage shifts all of the \(r_{i}\) by one index; we think of it as emulating microscopic chaos. The second stage applies a fixed bijection of \(\mathcal{X}\times\mathbb{Z}_{m}\) to the pair \((x,\,r_{0})\); we think of it as emulating the coarse-grained physics. In summary: \[(x.r_{-1}r_{-2}r_{-3}\ldots,\;0.r_{0}r_{1}r_{2}\ldots)\] \[\xrightarrow{\text{shift}}(x.r_{0}r_{-1}r_{-2}\ldots,\;0.r_{1}r_ {2}r_{3}\ldots)\] \[\xrightarrow{\text{permute}}(x^{\prime}.r_{0}^{\prime}r_{-1}r_{-2 }\ldots,\;0.r_{1}r_{2}r_{3}\ldots).\] The system's only source of randomness is its initial condition. At the start time \(t=0\), \(x\) can have any distribution we choose, but the \(r_{i}\) are uniformly distributed, and all of the variables are independent. We can think of \(r_{i}\in\mathbb{Z}_{m}\) as an \(m\)-sided die used to emulate a stochastic transition of \(x\) at the \(i\)'th time step. In the coarse-grained view, where we ignore all of the \(r_{i}\), it's easy to verify that \(x\)'s trajectory is a time-homogeneous doubly stochastic Markov chain, whose transition matrix entries are all multiples of \(1/m\). In fact, the multibaker map can emulate _all_ such Markov chains, by a suitable choice of the bijection \(T:(x,\,r_{0})\mapsto(x^{\prime},\,r_{0}^{\prime})\). 
Indeed, recall that a Markov chain's distribution is uniquely determined by its initial condition and transition matrix. We already allow the initial distribution of \(x\) to be arbitrary, so let's focus on the doubly stochastic matrix \(P\) that we want to emulate. Since its entries are multiples of \(1/m\), we only need to assign each pair \(x,y\in\mathcal{X}\) to each other with multiplicity \(m\cdot P(x,\,y)\). One way to accomplish this is to fix any total order \(<\) on \(\mathcal{X}\), and let \[T\left(x,\;i+m\sum_{z<y}P(x,\,z)\right):=\left(y,\;i+m\sum_{z<x}P(z,\,y) \right)\quad\forall x,y\in\mathcal{X},\;i\in\mathbb{Z}_{m\cdot P(x,\,y)}.\] Therefore, Theorem 2 applies to the coarse-grained state \(x\). We conclude that, aside from very rare fluctuations, \(K(x)\) increases monotonically over time. The full construction by Altaner and Vollmer [21] relaxes the requirement that \(P\) be doubly stochastic, or that its entries have a common denominator \(m\). The unpublished manuscript by Ebtekar [28] does this in a different manner, and provides further extensions to model local causality. In addition, it somewhat relaxes the requirement that the \(r_{i}\) be uniform and independent. As long as the fine-grained state starts with a continuous distribution, it's shown that the dynamics eventually stabilize to become time-homogeneous and Markovian. Thus, a continuous initial distribution may serve as Albert [17]'s _Past Hypothesis_. While these conclusions are only proven for our symbolic systems, they are highly suggestive of techniques that we might hope to extend to realistic systems. In particular, it appears that we should seek a short-term ergodic property of the state's microscopic part, which occurs on a much faster time scale than macroscopic ergodicity. The goal would be to obtain fast convergence to Markovian behavior, long before the slower but better-understood convergence to maximum entropy. In this manner, we hope to establish the second law of thermodynamics as a mathematically rigorous property of real, CPT-symmetric systems.
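These constructions are simple enough to simulate directly. The sketch below is our own illustration, with the bi-infinite digit sequence truncated to finitely many pre-sampled digits (which suffices for finitely many steps): it builds the bijection \(T\) displayed above and runs the shift-and-permute dynamics, whose coarse-grained trajectory then reproduces the Markov chain \(P\):

```python
import random
from collections import Counter

def build_T(P, m):
    """The bijection T displayed above, for a doubly stochastic P
    (states 0..n-1) whose entries are all multiples of 1/m."""
    n, T = len(P), {}
    for x in range(n):
        for y in range(n):
            for i in range(round(m * P[x][y])):
                a = i + sum(round(m * P[x][z]) for z in range(y))
                b = i + sum(round(m * P[z][y]) for z in range(x))
                T[(x, a)] = (y, b)
    return T

P = [[0.50, 0.25, 0.00, 0.25],      # lazy walk on four states,
     [0.25, 0.50, 0.25, 0.00],      # entries in multiples of 1/4
     [0.00, 0.25, 0.50, 0.25],
     [0.25, 0.00, 0.25, 0.50]]
m, steps = 4, 20_000
T = build_T(P, m)

random.seed(1)
right = [random.randrange(m) for _ in range(steps)]  # uniform microvariables
left, x, traj = [], 0, []
for t in range(steps):
    traj.append(x)
    x, r0p = T[(x, right[t])]   # shift r_0 next to x, then permute (x, r_0)
    left.append(r0p)            # the permuted digit joins the left stack

pairs = Counter(zip(traj, traj[1:]))
print(pairs[(0, 1)] / traj.count(0))   # ~ P[0][1] = 0.25, up to noise
```

Tallying consecutive pairs in `traj` recovers the entries of \(P\) up to sampling noise, even though the dynamics themselves are deterministic and reversible given the digits.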
2303.05184
Mechanisms of SiO oxidation: Implications for dust formation
Reactions of SiO molecules have been postulated to initiate efficient formation of silicate dust particles in outflows around dying (AGB) stars. Both OH radicals and H$_2$O molecules can be present in these environments and their reactions with SiO and the smallest SiO cluster, Si$_2$O$_2$, affect the efficiency of eventual dust formation. Rate coefficients of gas-phase oxidation and clustering reactions of SiO, Si$_2$O$_2$ and Si$_2$O$_3$ have been calculated using master equation calculations based on density functional theory calculations. The calculations show that the reactions involving OH are fast. Reactions involving H$_2$O are not efficient routes to oxidation but may under the right conditions lead to hydroxylated species. The reaction of Si$_2$O$_2$ with H$_2$O, which has been suggested as an efficient route to producing Si$_2$O$_3$, is therefore not as efficient as previously thought. If H$_2$O molecules dissociate to form OH radicals, oxidation of SiO and dust formation could be accelerated. Kinetics simulations of oxygen-rich circumstellar environments using our proposed reaction scheme suggest that under typical conditions only small amounts of SiO$_2$ and Si$_2$O$_2$ are formed and that most of the silicon remains as molecular SiO.
Stefan Andersson, David Gobrecht, Rosendo Valero
2023-03-09T11:27:05Z
http://arxiv.org/abs/2303.05184v1
# Mechanisms of SiO oxidation: Implications for dust formation ###### Abstract Reactions of SiO molecules have been postulated to initiate efficient formation of silicate dust particles in outflows around dying (AGB) stars. Both OH radicals and H\({}_{2}\)O molecules can be present in these environments and their reactions with SiO and the smallest SiO cluster, Si\({}_{2}\)O\({}_{2}\), affect the efficiency of eventual dust formation. Rate coefficients of gas-phase oxidation and clustering reactions of SiO, Si\({}_{2}\)O\({}_{2}\) and Si\({}_{2}\)O\({}_{3}\) have been calculated using master equation calculations based on density functional theory calculations. The calculations show that the reactions involving OH are fast. Reactions involving H\({}_{2}\)O are not efficient routes to oxidation but may under the right conditions lead to hydroxylated species. The reaction of Si\({}_{2}\)O\({}_{2}\) with H\({}_{2}\)O, which has been suggested as an efficient route to producing Si\({}_{2}\)O\({}_{3}\), is therefore not as efficient as previously thought. If H\({}_{2}\)O molecules dissociate to form OH radicals, oxidation of SiO and dust formation could be accelerated. Kinetics simulations of oxygen-rich circumstellar environments using our proposed reaction scheme suggest that under typical conditions only small amounts of SiO\({}_{2}\) and Si\({}_{2}\)O\({}_{2}\) are formed and that most of the silicon remains as molecular SiO. SiO, circumstellar, dust, DFT, rate coefficients, kinetics ## 1 Introduction The SiO molecule has been observed in the interstellar medium and in stellar outflows and is believed to be important for the formation of interstellar dust, which to a large extent consists of silicates (Hartquist et al., 1980; Clegg et al., 1983; Herbst et al., 1989; Langer and Glassgold, 1990; Sternberg and Dalgarno, 1995; Schilke et al., 1997; Gail and Sedlmayr, 1999; Smith et al., 2004; Gusdorf et al., 2008; Reber et al., 2008; Goumans and Bromley, 2012; Chakraborty et al., 2013; Plane, 2013; Krasnokutski et al., 2014; Bromley et al., 2016; Gobrecht et al., 2016). SiO is also found in terrestrial environments such as the upper atmosphere (from meteoric ablation; see Plane et al., 2016), in combustion of silicon compounds (Jachimowski and McLain, 1983; Britten et al., 1990; Tokuhashi et al., 1990; Chagger et al., 1996; Lindackers et al., 1997; Wooldridge, 1998; Moore et al., 2006) and in industrial silicon production processes (Johansen et al., 1998; Schei et al., 1998; Ravary and Johansen, 1999; Gradahl et al., 2007; Ravary et al., 2007; Kamjford et al., 2012; Ness et al., 2014). SiO can react with oxygen-bearing species to form SiO\({}_{2}\) (Gomez Martin et al., 2009; Chakraborty et al., 2013). The reactions \[\text{SiO}+\text{OH}\,\rightarrow\,\text{H}+\text{SiO}_{2}\] and \[\text{SiO}+\text{O}_{2}\,\rightarrow\,\text{O}+\text{SiO}_{2}\]
2307.01615
Testing Complex Singlet Scalar Cosmology at the Large Hadron Collider
The Standard Model extended with a complex singlet scalar (cxSM) can admit a strong first order electroweak phase transition (SFOEWPT) as needed for electroweak baryogenesis and provide a dark matter (DM) candidate. The presence of both a DM candidate and a singlet-like scalar that mixes with the Standard Model Higgs boson leads to the possibility of a $b\bar{b}+\text{MET}$ final state in $pp$ collisions. Focusing on this channel, we analyze the prospective reach at the Large Hadron Collider (LHC) for a heavy singlet-like scalar in regions of cxSM parameter space compatible with a SFOEWPT and DM phenomenology. We identify this parameter space while implementing current constraints from electroweak precision observable and Higgs boson property measurements as well as those implied by LHC heavy resonance searches.
Wenxing Zhang, Yizhou Cai, Michael J. Ramsey-Musolf, Lei Zhang
2023-07-04T10:02:07Z
http://arxiv.org/abs/2307.01615v2
# Testing Complex Singlet Scalar Cosmology at the Large Hadron Collider ###### Abstract The Standard Model extended with a complex singlet scalar (cxSM) can admit a strong first order electroweak phase transition (SFOEWPT) as needed for electroweak baryogenesis and provide a dark matter (DM) candidate. The presence of both a DM candidate and a singlet-like scalar that mixes with the Standard Model Higgs boson leads to the possibility of a \(b\bar{b}+\)MET final state in \(pp\) collisions. Focusing on this channel, we analyze the prospective reach at the Large Hadron Collider (LHC) for a heavy singlet-like scalar in regions of cxSM parameter space compatible with a SFOEWPT and DM phenomenology. We identify this parameter space while implementing current constraints from electroweak precision observable and Higgs boson property measurements as well as those implied by LHC heavy resonance searches. Implementing a proposed search strategy, we find that the heavy scalar and DM candidate can be probed up to 1 TeV and 400 GeV at \(2\sigma\) level respectively. + Footnote †: preprint: ACFI T23-03 ## I Introduction The origin of the cosmic baryon asymmetry is one of the long-standing puzzles in particle physics. Electroweak baryogenesis [1; 2; 3; 4] provides a promising solution and can be tested at current collider experiments [4; 5]. In general, a baryogenesis mechanism should meet Sakharov's three conditions [6; 7]: * Baryon number violating interactions. * C and CP violation. * Departure from thermal equilibrium (or CPT violation). The baryon number violating processes could appear in the Standard Model (SM) via the non-perturbative effects caused by sphaleron transitions [8; 9]. In principle, the requisite CP violation also appears in the SM via the Cabibbo-Kobayashi-Maskawa matrix, though the strength is found to be insufficient to generate the observed matter-antimatter asymmetry. In the SM, a possible departure from thermodynamic equilibrium could happen via a first order electroweak phase transition (FOEWPT) at the electroweak temperature, \(T_{\rm EW}\sim 140\) GeV, that marks the onset of electroweak symmetry-breaking [5]. To ensure preservation of any baryon asymmetry produced during this transition, the latter must be sufficiently strong. The occurrence of a FOEWPT requires the mass of the Higgs boson to lie below \(\sim 70\) GeV [7; 10; 11; 12; 13; 14; 15; 16; 17], which is inconsistent with the experimental observation [18; 19]. Therefore, electroweak baryogenesis can only be realised in extensions of the SM that accommodate a strongly first order electroweak phase transition (SFOEWPT). The most widely considered scenarios include the real singlet extensions (xSM) [15; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51; 52; 53; 54; 55; 56; 57; 58; 59; 60; 61], complex singlet extensions [70; 62; 63; 64; 65; 66; 67; 68; 69; 71; 72; 73; 74; 75; 76; 77; 78; 79], Higgs doublet extensions [71; 72; 73; 74; 75; 76; 77; 78; 79], and supersymmetric extensions [84; 85; 86; 87; 88; 89]. Among the distinctive signatures of such mixing is resonant di-Higgs production, where the heavy resonance is a mixed singlet-doublet state [5]. The possibility of probing the SFOEWPT-viable parameter space in the xSM has been studied extensively (for example, see [5] and references therein). 
In the cxSM after electroweak symmetry-breaking, the model yields both a viable DM candidate (A) and two real neutral scalars \(h_{1}\) and \(h_{2}\) that are mixtures of the SM Higgs boson and the real part of the complex singlet. In this case, the cxSM provides more collider phenomenological signatures than the xSM, such as the presence of missing transverse energy (MET) associated with pair production of A, in conjunction with decay products of one of the mixed doublet-singlet states, \(h_{1,2}\). When the DM mass is below half that of the SM-like state \(h_{1}\), resonant di-Higgs production may be the dominant underlying process. However, for heavier DM, there exist a variety of other subprocesses that play an important role. Thus, the SFOEWPT-viable cxSM admits a richer collider phenomenology than the xSM. In what follows, we analyze the \(b\bar{b}+\text{MET}\) final state and find that it provides a powerful probe of the realization of the cxSM consistent with a SFOEWPT and DM phenomenology. We consider both the resonant di-Higgs portion of parameter space, wherein \(m_{A}<m_{h_{1}}/2\), as well as the heavier \(m_{A}\) regime. Present experimental constraints on \(h_{1}\) invisible decays render the \(b\bar{b}+\text{MET}\) signal rather weak in the \(m_{A}<m_{h_{1}}/2\) regime. Consequently, we focus on the heavier \(m_{A}\) region. We find that there exist promising prospects for cxSM discovery for DM and \(h_{2}\) masses up to 400 GeV and 1 TeV, respectively. The discussion of our analysis is organized as follows. Section II introduces the framework of the cxSM. Section III discusses the experimental constraints on the mixing angles. Section IV describes the requirements to realise the SFOEWPT together with the cold DM candidate. Section V discusses the remaining parameter space allowed by the measurements of the DM relic density and the Higgs boson invisible decay. Section VI discusses the exclusion of the parameter space from the latest LHC experiments. In section VII, we discuss the Monte Carlo simulation of b-jets plus DM candidates in the cxSM and propose a search strategy for the corresponding signals at the HL-LHC. Section VIII is the conclusion. ## II The Cxsm Model The cxSM extends the SM by introducing a complex SU(2) singlet scalar S that transforms under a global U(1) group as \(S\to Se^{i\alpha}\). The DM candidate emerges in two ways: (a) spontaneous breaking of the global U(1) symmetry, yielding a massless Nambu-Goldstone boson; (b) inclusion of explicit, soft U(1) breaking terms in the potential, as needed to generate a DM mass. One of the two degrees of freedom in \(S\) behaves like the real singlet of the xSM, and could mix with the SM Higgs boson and potentially catalyze a SFOEWPT. The other one becomes the cold DM candidate. We consider a technically natural soft symmetry-breaking and minimal renormalizable cxSM model that does not generate additional soft symmetry-breaking terms through renormalization. The scalar potential at the tree-level is [62] \[V_{0}(H,S) =\frac{\mu^{2}}{2}(H^{\dagger}H)+\frac{\lambda}{4}(H^{\dagger}H) ^{2}+\frac{\delta_{2}}{2}H^{\dagger}H|S|^{2}\] \[+\frac{b_{2}}{2}|S|^{2}+\frac{d_{2}}{4}|S|^{4}\] \[+a_{1}S+\frac{b_{1}}{4}S^{2}+h.c..\] (II.1) The first two lines in Eq. (II.1) are invariant under the U(1) transformation. The \(a_{1}\) and \(b_{1}\) terms in the third line break the U(1) symmetry explicitly. In general, \(a_{1}\) or \(b_{1}\) can be complex numbers. 
Under a redefinition of \(S\), the quantity \(\phi_{S}\equiv\text{Arg}(b_{1}a_{1}^{*2})\) is a rephasing-invariant complex phase. However, to obtain a viable DM candidate, mixing between the real and imaginary parts of the singlet should be avoided, which requires \(\phi_{S}=0\). Therefore we fix \(a_{1}\) and \(b_{1}\) to be real numbers in the following studies. Spontaneous symmetry-breaking (SSB) is implemented via \[S =\frac{1}{\sqrt{2}}(v_{s}+s+iA),\] (II.2) \[H =\begin{pmatrix}G^{+}\\ \frac{1}{\sqrt{2}}(v_{0}+h+iG^{0})\end{pmatrix}\,\] (II.3) where \(v_{s}\) and \(v_{0}\) denote the vacuum expectation values, \(G^{0,\pm}\) are the usual Higgs doublet would-be Goldstone bosons, and \(s\) and \(A\) denote the real and imaginary parts of the fluctuation around the singlet vacuum expectation value (vev). Based on the U(1) symmetry breaking schemes, the model can be classified into four cases [62]: * \(v_{s}\neq 0\) and \(a_{1}\neq 0,\ b_{1}\neq 0\). The U(1) symmetry is both spontaneously and explicitly broken. We may take Im(\(S\)) to be the pseudo-Goldstone boson that is no longer massless, with its mass depending on the extent of explicit breaking via the values of \(a_{1}\) and \(b_{1}\). Note that the domain wall problem would appear if \(a_{1}\) vanishes, since a discrete \(Z_{2}\) symmetry breaks spontaneously in this case. * \(v_{s}=0\) and \(a_{1}=b_{1}=0\). The U(1) symmetry is kept. \(A\) and \(s\) are identical, massive particles, such that the model is degenerate with the xSM. Since the U(1) symmetry is preserved, the singlet does not mix with the SM Higgs, and its two components become stable particles. In this case, we have two DM candidates. Compared with the xSM, the DM relic density is twice that of the xSM case. * \(v_{s}=0\) with \(\ b_{1}\neq 0\). The U(1) symmetry is explicitly broken. The scalar \(S\) has no mixing with the SM Higgs, such that \(s\) and \(A\) are both stable massive particles. Note that the \(a_{1}\) term is mainly to avoid a potential domain wall problem for the case when \(v_{s}\neq 0\), as in the first case. Here we can set it to be zero since we do not have SSB and, thus, no domain wall problem in this case. * \(v_{s}\neq 0\) and \(a_{1}=b_{1}=0\). The U(1) symmetry is spontaneously broken, yielding a massless Nambu-Goldstone boson, which we may take to be Im(\(S\)) and which becomes a possible warm DM candidate. However, such a candidate has been ruled out for a warm DM candidate mass in the \(\mathcal{O}(1)\) GeV range [62]. In the following studies, we will focus on the most general scenario where \(v_{s}\neq 0\) and \(a_{1}\neq 0,\ b_{1}\neq 0\). 
By using the minimization conditions of the potential in SSB, we get \[\mu^{2} =\frac{1}{2}(-v_{s}^{2}\delta_{2}-v_{0}^{2}\lambda)\] (II.4) \[\Sigma_{12} =\frac{-4\sqrt{2}a_{1}-d_{2}v_{s}^{3}-v_{0}^{2}v_{s}\delta_{2}}{ 2v_{s}},\] (II.5) where \(\Sigma_{12}\) is defined as \[\Sigma_{12}=b_{1}+b_{2}.\] (II.6) Hence we can write down the scalar masses \[m_{A}^{2}=-\frac{\sqrt{2}a_{1}}{v_{s}}-b_{1},\] (II.7) and \[\mathcal{M}_{h}^{2}\equiv\begin{pmatrix}M_{h}^{2}&M_{hs}\\ M_{sh}&M_{s}^{2}\end{pmatrix}=\begin{pmatrix}\frac{1}{2}\lambda v_{0}^{2}& \frac{\delta_{2}}{2}v_{0}v_{s}\\ \frac{\delta_{2}}{2}v_{0}v_{s}&\frac{1}{2}d_{2}v_{s}^{2}-\frac{\sqrt{2}a_{1}}{ v_{s}}\end{pmatrix},\] (II.8) which can be diagonalized by an orthogonal matrix \(O(\theta)\): \[O(\theta)^{T}\mathcal{M}_{h}^{2}O(\theta)=\begin{pmatrix}m_{h_{1}}^{2}&0\\ 0&m_{h_{2}}^{2}\end{pmatrix},\quad O(\theta)=\begin{pmatrix}\cos\theta&-\sin \theta\\ \sin\theta&\cos\theta\end{pmatrix}.\] (II.9) Specifically, the fields are expressed in terms of mass eigenstates and the mixing angle as \[h =\cos\theta\ h_{1}-\sin\theta\ h_{2},\] (II.10) \[s =\sin\theta\ h_{1}+\cos\theta\ h_{2}.\] (II.11) The diagonal matrix, \(O(\theta)^{T}\mathcal{M}_{h}^{2}O(\theta)\), gives three equations that express \(\lambda\), \(\delta_{2}\) and \(d_{2}\) in terms of \(m_{h_{1}}\), \(m_{h_{2}}\), \(a_{1}\), \(v_{0}\), \(v_{s}\) and \(\theta\). \[\delta_{2} =\frac{\sin 2\theta\ \left(m_{h1}^{2}-m_{h2}^{2}\right)}{v_{0}v_{s}}\] (II.12) \[\lambda =\frac{2\left(m_{h_{1}}^{2}\cos^{2}\theta+m_{h_{2}}^{2}\sin^{2} \theta\right)}{v_{0}^{2}}\] (II.13) \[d_{2} =\frac{2\left(\sqrt{2}a_{1}+m_{h_{2}}^{2}v_{s}\cos^{2}\theta+m_{h _{1}}^{2}v_{s}\sin^{2}\theta\right)}{v_{s}^{3}}\] (II.14) Meanwhile, the parameters \(b_{1}\) and \(b_{2}\) are related to the input parameters above and the DM mass, \(m_{A}^{2}\). \[b_{1}=\frac{-\sqrt{2}a_{1}-m_{A}^{2}v_{s}}{v_{s}},\quad b_{2}=\Sigma_{12}-b_{1}\] (II.15) So far, we have two known parameters, \(v_{0}\) and \(m_{h_{1}}\), and five free parameters: \(m_{A}^{2}\), \(m_{h_{2}}^{2}\), \(a_{1}\), \(\theta\) and \(v_{s}\). Moreover, the quartic couplings must be such that the potential is _bounded from below_. We express the scalar fields as \(h=\varphi\sin\alpha\) and \(s=\varphi\cos\alpha\). It is convenient to express the effective potential for general values of \(\alpha\) and \(\varphi\) as [22] \[V_{eff}(\varphi,\alpha,T)=A\varphi^{4}+\bar{B}\varphi^{2}+\bar{C}T^{2}\varphi+ D\varphi+const.,\] (II.16) where \(A\), \(\bar{B}\), \(\bar{C}\) and \(D\) are massive couplings related to \(v_{s}\), \(v_{0}\) and \(T\). The bar indicates that the quantity is obtained from the high-T approximation. Here, the tree-level quartic coupling is \[A=\frac{1}{16}\bigg{(}\lambda\cos^{4}\alpha+2\delta_{2}\cos^{2}\alpha\sin^{2} \alpha+d_{2}\sin^{4}\alpha\bigg{)}.\] (II.17) To guarantee that the potential is _bounded from below_ in any direction of the \(h-s\) plane, it must satisfy \(\lambda>0\), \(d_{2}>0\) and \(\delta_{2}>-\sqrt{\lambda d_{2}}\) for negative \(\delta_{2}\). In addition, the requirement of positive eigenvalues of the mass-squared matrix in Eq. (II.8) leads to \(\lambda\big{(}d_{2}-\frac{2\sqrt{2}a_{1}}{v_{s}^{3}}\big{)}>\delta_{2}^{2}\) for non-zero \(a_{1}\) [62]. 
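As a quick numerical cross-check of the relations above (our own sketch; the parameter values are illustrative placeholders, not benchmark points of this analysis), one can invert Eqs. (II.12)-(II.15) and verify that the mass matrix of Eq. (II.8) rediagonalizes to the input masses and that the boundedness conditions hold:

```python
import numpy as np

# Known inputs and free parameters; the numbers are illustrative
# placeholders only, not benchmark points of this analysis.
v0, mh1 = 246.0, 125.0                       # GeV
mh2, mA, a1, theta, vs = 400.0, 200.0, -1.0e6, 0.2, 100.0

# Eqs. (II.12)-(II.15)
delta2 = np.sin(2 * theta) * (mh1**2 - mh2**2) / (v0 * vs)
lam    = 2 * (mh1**2 * np.cos(theta)**2 + mh2**2 * np.sin(theta)**2) / v0**2
d2     = 2 * (np.sqrt(2) * a1 + mh2**2 * vs * np.cos(theta)**2
              + mh1**2 * vs * np.sin(theta)**2) / vs**3
b1     = (-np.sqrt(2) * a1 - mA**2 * vs) / vs

# Rebuild the mass matrix of Eq. (II.8); its eigenvalues must be mh1^2, mh2^2
M2 = np.array([[0.5 * lam * v0**2,            0.5 * delta2 * v0 * vs],
               [0.5 * delta2 * v0 * vs, 0.5 * d2 * vs**2 - np.sqrt(2) * a1 / vs]])
assert np.allclose(np.sort(np.linalg.eigvalsh(M2)), sorted([mh1**2, mh2**2]))

# Bounded-from-below and positive mass-squared conditions
assert lam > 0 and d2 > 0 and delta2 > -np.sqrt(lam * d2)
assert lam * (d2 - 2 * np.sqrt(2) * a1 / vs**3) > delta2**2
```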
It is useful to show the field-dependent scalar masses that will be used in the calculation of the high-temperature Lagrangian. Before electroweak symmetry breaking, the field-dependent masses are \[m_{G^{\pm,0}}^{2}=\frac{\partial^{2}V_{0}}{\partial G^{\pm,0}{}^{2}}=\frac{1}{4}\left(2\mu^{2}+s^{2}\delta_{2}+\lambda h^{2}\right),\] (II.18) \[m_{A}^{2}=\frac{\partial^{2}V_{0}}{\partial A^{2}}=\frac{1}{4}\left(-4b_{1}+d_{2}s^{2}+2\Sigma_{12}+h^{2}\delta_{2}\right),\] (II.19) \[\mathcal{M}_{h}^{2}=\left(\begin{matrix}\frac{1}{4}\left(2\mu^{2}+s^{2}\delta_{2}+3h^{2}\lambda\right)&\frac{hs\delta_{2}}{2}\\ \frac{hs\delta_{2}}{2}&\frac{1}{4}\left(3d_{2}s^{2}+2\Sigma_{12}+h^{2}\delta_{2}\right)\end{matrix}\right).\] (II.20) Combining all the field-dependent terms and ignoring the field-independent ones, such as the \(\mu^{2}T^{2}\) and \(b_{1,2}\) terms, we obtain the high-T approximate potential discussed in detail in Sec. IV.

## III Constraints on parameters and benchmarks

We first discuss the constraints on the mixing angle \(\theta\) in Eq. (II.10), since it is an essential parameter of the cxSM for the dark matter candidate, the EWPT and collider phenomenology. The mixing angle \(\theta\) is constrained by the electroweak precision observables (EWPO) and by the global Higgs measurements at the LHC. Note that while this paper was being written, the CDF experiment reported a new W mass measurement, \(m_{W}=80.4335\pm 0.0094\) GeV [100], which is about \(7\sigma\) away from the SM prediction. Given the tension between this result and other experimental results, e.g. from the ATLAS experiment [101], we prefer not to include an analysis of its implications in this paper and defer it to a dedicated future study.

### Electroweak Precision Observables (EWPO)

The limits on the scalar mixing angle from precision electroweak measurements can be studied by assuming that the extra scalar contributes mainly to the gauge boson self-energy functions. Modifications of the oblique parameters S, T and U [102; 103] are induced by the difference between the \(h_{1}VV\) coupling and the SM coupling \(hVV\), and by additional contributions arising from \(h_{2}\) via mixing. Indeed, since the new BSM particle is a gauge singlet, no further contributions come from the gauge sector except those associated with the mixing angle \(\sin\theta\). Therefore, the deviation of the EWPO operators can be expressed as [24] \[\Delta\mathcal{O} =\mathcal{O}_{cxSM}-\mathcal{O}_{SM}\] \[=\cos^{2}\theta\ \mathcal{O}(m_{h_{1}})+\sin^{2}\theta\ \mathcal{O}(m_{h_{2}})-\mathcal{O}(m_{h_{1}})\] \[=\sin^{2}\theta\left[\mathcal{O}(m_{h_{2}})-\mathcal{O}(m_{h_{1}})\right]\] (III.1) where \(m_{h_{1}}\) and \(m_{h_{2}}\) are the masses of the two mass eigenstates in Eq. (II.9) and \(h_{1}\) is the observed Higgs boson with \(m_{h_{1}}\approx 125\) GeV. Hence the deviation of a given oblique parameter \(\mathcal{O}\) from its SM value, including \(\Delta S\), \(\Delta T\) and \(\Delta(U+S)\), depends on two free parameters: \(\theta\) and \(m_{h_{2}}\). For completeness, we provide explicit expressions in terms of Passarino-Veltman functions in Appendix A. The best-fit values of S, T and U with respect to the SM prediction [104] are \[S-S_{SM} =0.04\pm 0.11\] \[T-T_{SM} =0.09\pm 0.14\] \[U-U_{SM} =-0.02\pm 0.11\] (III.2) To perform the parameter scan with these experimental constraints, the \(\chi^{2}\) is constructed as \[\chi^{2}=(X-\hat{X})_{i}(\sigma^{2})^{-1}_{ij}(X-\hat{X})_{j},\] (III.3) where \((X-\hat{X})_{i}=(\Delta S,\ \Delta T,\ \Delta U)\) denotes the difference between the model predictions of Eq. (III.1) and the corresponding central values of the shifts from the SM predictions in Eq. (III.2).
The quantity \(\sigma^{2}\) is the error matrix, which can be expressed as \(\sigma^{2}_{ij}=\sigma_{i}\rho_{ij}\sigma_{j}\). Here, \(\sigma_{i}\) is the uncertainty of \((X-\hat{X})_{i}\) in Eq. (III.2), and \(\rho_{ij}\) is the correlation matrix [104], with \[\rho_{ij}=\begin{pmatrix}1&0.92&-0.68\\ 0.92&1&-0.87\\ -0.68&-0.87&1\end{pmatrix}.\] (III.4) Fig. 1 shows the \(\chi^{2}\) distribution of the 2-D parameter scan. The pink solid curve indicates the upper limit on the mixing angle \(\sin\theta\) at 95% C.L. as a function of \(m_{h_{2}}\). From the plot, we see that \(|\sin\theta|\) is excluded above 0.35 for \(m_{h_{2}}\leq 400\) GeV and above 0.25 for \(m_{h_{2}}\geq 600\) GeV. In the following sections, we focus on absolute values of the mixing angle below 0.35.

Figure 1: The color bar represents the \(\chi^{2}\) value for the EWPO. Points with \(\chi^{2}\) larger than 5.99 are excluded by the EWPO. The black line corresponds to the 95% C.L. exclusion limit; the region above it is excluded.

### Measurements of the Higgs boson couplings

The mixing angle \(\theta\) between \(h_{1}\) and \(h_{2}\) controls the couplings of the SM-like Higgs boson to the other SM particles and is thus constrained by experimental measurements of the Higgs boson couplings. This section derives the 95% C.L. upper limit on \(\sin^{2}\theta\) by performing a global fit to the latest ATLAS measurements [105]. To characterize the impact of the cxSM on properties of the 125 GeV Higgs-like boson, it is useful to consider the signal strength, defined as \[\mu_{pp\to h_{1}\to XX}=\frac{\sigma_{pp\to h_{1}}\ BR(h_{1}\to XX)}{\sigma_{pp\to h}^{SM}\ BR(h\to XX)_{SM}},\] (III.5) where \(\sigma_{pp\to h_{1}}=\cos^{2}\theta\times\sigma_{pp\to h}^{SM}\) at tree level. Using the decay-width relationship between the SM-like Higgs and the SM Higgs, \(\Gamma_{h_{1}\to XX}=\cos^{2}\theta\ \Gamma_{h\to XX}\), the branching ratio of the SM-like Higgs boson decay can be expressed as \[BR(h_{1}\to XX)=BR(h\to XX)_{SM}.\] This relation is valid in the parameter space relevant to the present study, where \(m_{h_{2}}\) is greater than \(m_{h_{1}}\) and \(m_{A}\) is greater than \(m_{h_{1}}/2\). In this case, both \(\Gamma_{h_{1}\to AA}\) and \(\Gamma_{h_{1}\to h_{2}h_{2}}\) vanish, and therefore \(\mu_{pp\to h_{1}\to XX}=\cos^{2}\theta\). To quantify cxSM-induced deviations from the SM Higgs boson properties, we construct the \(\chi^{2}\) function for \(\mu_{i\to h_{1}\to f}\), where the subscript "i" stands for the production mode (_e.g._, gluon-gluon fusion) and "f" indicates the decay mode: \[\chi^{2}=\sum_{i,f}\frac{(\mu_{i\to h_{1}\to f}^{cxSM}-\mu_{i\to h_{1}\to f}^{obs})^{2}}{\sigma_{\mu_{i\to h-f}}^{2}},\] (III.6) where all the channels tested at the current LHC are considered. Requiring \(\Delta\chi^{2}\leq 3.841\) translates into a 95% C.L. upper bound of \(\sin^{2}\theta<0.125\), calculated from the current global Higgs fit results summarised in Tab. 1.
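As an illustration of how Eqs. (III.2)-(III.4) combine, the short Python sketch below evaluates the EWPO \(\chi^{2}\) of Eq. (III.3) for a given set of model shifts \((\Delta S,\Delta T,\Delta U)\); computing those shifts themselves requires the Passarino-Veltman expressions of Appendix A, which are not reproduced here.

```python
import numpy as np

sigma = np.array([0.11, 0.14, 0.11])            # uncertainties, Eq. (III.2)
rho = np.array([[1.00, 0.92, -0.68],
                [0.92, 1.00, -0.87],
                [-0.68, -0.87, 1.00]])          # correlation matrix, Eq. (III.4)
cov = np.outer(sigma, sigma) * rho              # sigma^2_ij = sigma_i rho_ij sigma_j
central = np.array([0.04, 0.09, -0.02])         # measured shifts, Eq. (III.2)

def chi2_ewpo(delta_S, delta_T, delta_U):
    """chi^2 of Eq. (III.3) for model shifts relative to the measured ones."""
    d = np.array([delta_S, delta_T, delta_U]) - central
    return float(d @ np.linalg.solve(cov, d))

# 95% C.L. for the 2-D (theta, m_h2) scan corresponds to chi^2 <= 5.99:
print(chi2_ewpo(0.0, 0.0, 0.0) <= 5.99)
```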
## IV SFOEWPT and numerical results

In this section, we consider the gauge-independent \(\mathcal{O}(T^{2})\) high-temperature (high-T) approximation of the finite-temperature effective potential. We start with the expansion \[V_{eff}(h,s,T)=V_{0}(h,s)+V_{CW}^{T=0}(h,s)+V_{\mathrm{T}\neq 0}(h,s,T).\] (IV.1) \(V_{CW}^{T=0}\) is the zero-temperature Coleman-Weinberg effective potential, with the general form \[V_{CW}=\sum_{k}\frac{(-1)^{2s_{k}}}{64\pi^{2}}g_{k}\ [M_{k}^{2}]^{2}\left(\log\frac{M_{k}^{2}}{\mu^{2}}+c_{k}\right),\] (IV.2) where \(s_{k}\) is the spin of the \(k\)-th particle; \(g_{k}\) indicates the number of degrees of freedom; and \(c_{k}\) equals \(3/2\) for scalars and fermions, and \(5/6\) for vector gauge bosons. The quantity \(V_{\mathrm{T}\neq 0}\) is the finite-temperature contribution at leading order in the finite-temperature effective theory. It can be obtained from the conventional one-loop thermal potential \[V_{T}^{1-\mathrm{loop}}=\frac{T^{4}}{2\pi^{2}}\sum_{k}n_{k}J_{B,F}(m_{k}^{2}/T^{2})\] (IV.3) with \[J_{B}(\frac{m_{k}^{2}}{T^{2}}) =-\frac{\pi^{4}}{45}+\frac{\pi^{2}}{12}\frac{m_{k}^{2}}{T^{2}}-\frac{\pi}{6}\left(\frac{(m_{k}^{2})^{3/2}}{T^{3}}\right)\] \[\quad-\frac{m_{k}^{4}}{32T^{4}}\,\log\left(\frac{m_{k}^{2}}{c_{B}T^{2}}\right)\] \[J_{F}(\frac{m_{k}^{2}}{T^{2}}) =-\frac{7\pi^{4}}{360}-\frac{\pi^{2}}{24}\frac{m_{k}^{2}}{T^{2}}-\frac{m_{k}^{4}}{32T^{4}}\,\log\left(\frac{m_{k}^{2}}{c_{F}T^{2}}\right),\] where \(\log\,c_{B}=5.4076\) and \(\log\,c_{F}=2.6351\). Field-dependent logarithms in \(V_{\mathrm{T}\neq 0}\) are cancelled by \(V_{CW}\), leaving a factor of the form \(\ln(T^{2}/\mu^{2})\); in principle, one can choose the renormalization scale \(\mu\propto T\), so that this log term is temperature independent. Moreover, in the high-temperature limit the leading term of \(V_{\mathrm{T}\neq 0}\) is field independent and thus ignored, so we keep the second-order term, which is proportional to \(T^{2}\). In this limit the Coleman-Weinberg potential is proportional to \(M_{k}^{4}\) and hence negligible in the high-T approximation, \(T\gg M_{k}\). In this paper, we therefore use the high-T approximated potential without the subleading Coleman-Weinberg contribution, \[V^{High-T}(h,s,T)\] \[=V_{0}(h,s)+\frac{T^{2}}{48}\left(12m_{t}^{2}\right)\] \[+\frac{T^{2}}{24}\left(3m_{G}^{2}+m_{h}^{2}+m_{s}^{2}+m_{A}^{2}+6M_{W}^{2}+3M_{Z}^{2}\right)\] \[=V_{0}(h,s)+\frac{1}{2}\left(\frac{\lambda}{8}+\frac{\delta_{2}}{24}+\frac{3g_{2}^{2}+g_{1}^{2}}{16}+\frac{y_{t}^{2}}{4}\right)h^{2}T^{2}\] \[\quad+\frac{\delta_{2}+d_{2}}{48}s^{2}T^{2}.\] (IV.4) Here \(m_{G}^{2}\), \(m_{s}^{2}\), \(m_{A}^{2}\) and \(m_{h}^{2}\) are the field-dependent masses of the fields that interact with the scalar fields \(h\) or \(s\), as given in Eqs. (II.18)-(II.20). Note that Eq. (IV.4) is gauge independent thanks to the gauge-invariant thermal masses [106]; thus the critical temperature defined in the high-T approximation is also gauge independent. In the presence of the additional neutral scalar and the portal interaction, spontaneous symmetry breaking can take place in multiple ways [5]: (a) a single-step transition from the symmetric phase to the present pure Higgs vacuum at \(T=T_{EW}\); (b) the universe first lands in a phase with a non-zero \(v_{s}\) at \(T>T_{EW}\), followed by a transition to the current Higgs vacuum at \(T_{EW}\); (c) a one-step transition to a phase in which both the SM Higgs and the real singlet obtain vevs. A first-order EWPT can be induced at tree level in the high-T approximated Lagrangian under certain conditions, classified according to the number of transition steps. We discuss these possibilities below.
In so doing, we first observe that a first-order EWPT in scenario (a) requires that thermal loops containing the singlet scalar sufficiently enhance the term in \(V_{\mathrm{eff}}\) proportional to \(Th^{3}\). We do not consider this possibility here; for a discussion, see, _e.g._, Ref. [5] and references therein.

\begin{table} \begin{tabular}{|c||c|c|c|c|c|c|} \hline Production mode & ggF\(+b\bar{b}H\) & VBF & WH & ZH & \(t\bar{t}\)H & \(t\)H \\ \hline \(\sum_{f}\mu_{i\to h_{1}\to ff}\) & \(1.03^{+0.07}_{-0.07}\) & \(1.10^{+0.13}_{-0.12}\) & \(1.16^{+0.23}_{-0.22}\) & \(0.96^{+0.22}_{-0.21}\) & \(0.74^{+0.24}_{-0.24}\) & \(6.61^{+4.24}_{-3.76}\) \\ \hline \end{tabular} \end{table} Table 1: Combined signal strengths from the global Higgs measurement for different production channels [105]. The subscript "i" in \(\mu_{i\to h_{1}}\) denotes the production mode. The second line corresponds to the sum of the signal strengths over all decay products within one production mode.

For the two-step phase transition, as shown in Fig. 2, the singlet scalar vev first moves from \(O^{\prime}\) to \(A\), where \(\langle s\rangle=v_{S}^{A}/\sqrt{2}\) and \(\langle h\rangle=0\); the SM Higgs then also obtains its vev in the second step from \(A\) to \(B\), where \(\langle s\rangle=v_{S}^{B}/\sqrt{2}\) and \(\langle h\rangle=v_{C}/\sqrt{2}\). For the second step, we denote the critical temperature as \(T_{C}\), such that a strong first-order electroweak phase transition can be characterized by \(v_{C}/T_{C}\gtrsim 1\), with [64] \[v_{C}\simeq\sqrt{\frac{2\delta_{2}v_{S}^{A}}{\lambda}\left(v_{S}^{A}(T_{C})-v_{S}^{B}(T_{C})\right)}\] (IV.5) \[T_{C}\simeq\sqrt{\frac{1}{2\Sigma_{H}}\left(-\mu^{2}-\frac{v_{S}^{A}(T_{C})^{2}}{2}\delta_{2}\right)},\] (IV.6) where \(\Sigma_{H}=\frac{\lambda}{8}+\frac{\delta_{2}}{24}+\frac{3g_{2}^{2}+g_{1}^{2}}{16}+\frac{y_{t}^{2}}{4}\). In addition, \(\delta_{2}\) can be expressed as \[\delta_{2}=\frac{2}{v_{0}v_{s}}\left(m_{h_{1}}^{2}-m_{h_{2}}^{2}\right)\sin\theta\cos\theta.\] (IV.7) A positive \(\delta_{2}\) can generate a barrier between the two minima and therefore induce a first-order EWPT, where a positive \(v_{S}^{A}-v_{S}^{B}\) is required by Eq. (IV.5). For the purpose of collider phenomenology, we will focus on the heavy Higgs search at the HL-LHC in the following sections, so a heavy scalar resonance with \(m_{h_{2}}^{2}>m_{h_{1}}^{2}\) is considered. Thus, as implied by Eq. (IV.7), the heavy-scalar condition requires a negative mixing angle \(\theta\) in order to have \(\delta_{2}>0\). Moreover, as shown in Eq. (IV.6), the requirement of a real \(T_{C}\) implies an upper limit on a positive \(\delta_{2}\). For a one-step phase transition, in which \(v_{0}\) and \(v_{S}\) go from zero to nonzero simultaneously, the situation is more complex. If we consider the high-T effective theory without the thermal loop-induced cubic term, as done above, such a one-step transition cannot be first order, since \(v_{C}\) is always zero; this can be seen from Eq. (IV.5) with \(v_{S}^{A}\) replaced by zero. In principle, introducing the thermal cubic term can generate a first-order phase transition [62; 64; 22]. With the foregoing considerations in mind, we focus in this paper on the two-step phase transition. The CosmoTransitions [107] package is used to numerically evaluate the EWPT quantities, e.g. \(T_{C}\) and the corresponding vevs, and to locate the parameter space feasible for a strong first-order EWPT.
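The criterion of Eqs. (IV.5)-(IV.6) can be evaluated directly once the singlet vevs at the critical temperature are known (e.g. from CosmoTransitions). The sketch below is a minimal Python rendition under that assumption; the SM gauge and Yukawa values are approximate inputs of ours, not numbers quoted in the paper.

```python
import numpy as np

g1, g2, yt = 0.36, 0.65, 0.99     # approximate SM couplings at the weak scale

def sfoewpt_check(lam, delta2, mu2, vSA, vSB):
    """Evaluate v_C and T_C of Eqs. (IV.5)-(IV.6); return None if the
    two-step first-order transition is not realized for these inputs."""
    Sigma_H = lam / 8 + delta2 / 24 + (3 * g2**2 + g1**2) / 16 + yt**2 / 4
    vC2 = 2 * delta2 * vSA / lam * (vSA - vSB)          # Eq. (IV.5): needs vSA > vSB
    TC2 = (-mu2 - vSA**2 * delta2 / 2) / (2 * Sigma_H)  # Eq. (IV.6)
    if vC2 <= 0 or TC2 <= 0:
        return None
    vC, TC = np.sqrt(vC2), np.sqrt(TC2)
    return vC, TC, bool(vC / TC >= 1.0)                 # strength criterion
```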
## V Constraints on the dark matter candidate

Since the pseudoscalar \(A\) does not mix with the other scalars, due to its CP-odd nature, it is stable and can be regarded as a dark matter candidate. However, the \(\delta_{2}\) term in the Lagrangian generates an \(h_{1}AA\) interaction with coupling \(g_{h_{1}AA}\), which can contribute to the Higgs invisible decay if \(m_{A}\) is less than half the Higgs mass. Given that no significant Higgs invisible decay is observed, either the coupling strength, \[g_{h_{1}AA}=\frac{\sqrt{2}a_{1}+m_{h_{1}}^{2}v_{s}}{2v_{s}^{2}}\sin\theta,\] (V.1) is highly suppressed, or \(m_{A}\) is close to or even larger than \(m_{h_{1}}/2\). To be specific, introducing \(\gamma\) and \(\beta\) via \(a_{1}=\gamma^{3}m_{h_{1}}^{3}\) and \(v_{s}=\beta m_{h_{1}}\), the invisible decay width can be expressed as \[\begin{split}\Gamma_{h_{1}\to AA}&=\frac{g_{h_{1}AA}^{2}}{8\pi m_{h_{1}}}\sqrt{1-\frac{4m_{A}^{2}}{m_{h_{1}}^{2}}}\\ &=\frac{m_{h_{1}}}{8\pi}\left(\frac{\sqrt{2}\gamma^{3}+\beta}{2\beta^{2}}\right)^{2}\sqrt{1-\frac{4m_{A}^{2}}{m_{h_{1}}^{2}}}\ \sin^{2}\theta\\ &\sim\left(\frac{\sqrt{2}\gamma^{3}+\beta}{2\beta^{2}}\right)^{2}\sqrt{1-\frac{4m_{A}^{2}}{m_{h_{1}}^{2}}}\left(\frac{\sin\theta}{0.1}\right)^{2}\times 50\ \text{[MeV]},\end{split}\] (V.2) where the approximation in the last line is obtained by taking \(|\sin\theta|=0.1\). The currently observed upper bound on the branching ratio of the Higgs invisible decay at the LHC is about 13% for ATLAS [105] and 16% for CMS [108], while the total decay width of the SM Higgs is about 4.1 MeV [109]. Noting that the factor \(\left(\frac{\sqrt{2}\gamma^{3}+\beta}{2\beta^{2}}\right)^{2}\sim\mathcal{O}(1)\) in Eq. (V.2) for \(\beta\sim\mathcal{O}(1)\) and \(|\gamma|\sim\mathcal{O}(1)\), satisfying this bound requires either a narrow window of DM masses around \(m_{h_{1}}/2\) or a delicate fine-tuned cancellation between \(a_{1}\) and \(v_{s}\). Taking the above considerations into account, and without loss of generality, we consider the dark matter particle \(A\) in the range 60 GeV \(\leq m_{A}\leq\) 1 TeV. The dark matter relic density and the rescaled spin-independent cross section are also taken into account.

Figure 2: Two-step symmetry breaking at finite temperature for \(a_{1}\neq 0\). The first transition is a continuous phase transition from \(O^{\prime}\) to A. The second transition is from A to B, where a barrier can be generated between A and B for a positive \(\delta_{2}\).

To obtain the dark matter relic density, we implement the cxSM interactions in FeynRules [110] to produce the CalcHEP [111] model file, which is then fed to MicrOMEGAs [112] for the calculation. In this paper, a general scan over the free parameters is performed with \[0\leq v_{s}/\text{GeV}\leq 150.0,\] \[|\sin\theta|\leq 0.35,\] \[-1000.0^{3}\leq a_{1}/\text{GeV}^{3}\leq 1000.0^{3},\] \[60.0\leq m_{A}/\text{GeV}\leq 1000.0,\] \[300.0\leq m_{h_{2}}/\text{GeV}\leq 1000.0,\] (V.3) and the distribution of the DM relic density is shown in Fig. 3. The type of EWPT and its strength, as discussed in Sec. IV, are used to classify the points: the blue points represent parameter points that induce a first-order phase transition with \(v_{C}/T_{C}>1\), while the orange points contain all other cases.
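For orientation, a minimal Python sketch of the invisible-decay constraint of Eqs. (V.1)-(V.2) is given below; the comparison uses the SM total width of 4.1 MeV quoted above, and the sample inputs are assumptions.

```python
import numpy as np

def br_invisible(m_h1, m_A, a1, vs, sin_theta, gamma_sm=4.1e-3):
    """BR(h1 -> AA) from Eqs. (V.1)-(V.2); all masses and widths in GeV."""
    if 2 * m_A >= m_h1:
        return 0.0                      # channel kinematically closed
    g = (np.sqrt(2) * a1 + m_h1**2 * vs) / (2 * vs**2) * sin_theta  # Eq. (V.1)
    width = g**2 / (8 * np.pi * m_h1) * np.sqrt(1 - 4 * m_A**2 / m_h1**2)
    return width / (width + (1 - sin_theta**2) * gamma_sm)

# ATLAS bound quoted in the text: BR(h -> invisible) < 13%
print(br_invisible(125.0, 60.0, a1=-100.0**3, vs=100.0, sin_theta=-0.1) < 0.13)
```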
The current measurement of the cold DM relic density, \(\Omega_{DM}h^{2}=0.1186\pm 0.0020\) [113], is shown as the black line in Fig. 3. Most of the points in our general scan lie below this line and thus satisfy the DM relic density constraint. There is a minimum at \(m_{A}\simeq 62.5\) GeV, as expected, where the DM annihilation process mediated by \(h_{1}\) is highly enhanced, and there are valleys between \(m_{A}\simeq 150\) GeV and \(m_{A}\simeq 500\) GeV due to the enhancement of the annihilation process mediated by \(h_{2}\), since the scan region of \(m_{h_{2}}\) is chosen to be from 300 GeV to 1 TeV.

Figure 3: A general scan of the DM relic density with the DM mass varying from 60 GeV to 1 TeV. The blue points satisfy the conditions that induce a SFOEWPT; the orange points induce a second-order electroweak phase transition or a first-order phase transition of low strength. The black solid line shows the cold DM relic density, \(\Omega_{DM}h^{2}=0.1186\) [113]. The valleys at 62.5 GeV and between 150 GeV and 500 GeV arise from the DM annihilation processes mediated by \(h_{1}\) and \(h_{2}\), respectively.

Figure 4: DM direct detection: the DM-proton interaction mediated by an SU(2)-neutral Higgs via a t-channel process. Both \(h_{1}\) and \(h_{2}\) interact with the DM candidate through the nonzero couplings \(g_{h_{1}AA}\) and \(g_{h_{2}AA}\).

Fig. 4 shows the Feynman diagram of the interaction between the dark matter particle and the proton via exchange of the SM Higgs. Since the SM Higgs is an admixture of the mass eigenstates, Eq. (II.10), the spin-independent cross section of the DM-proton process can be written as \[\sigma_{S}^{[p]} =\frac{m_{p}^{4}}{2\pi v^{2}(m_{p}+M_{A})^{2}}\left(\frac{g_{h_{1}AA}\cos\theta}{m_{h_{1}}^{2}}-\frac{g_{h_{2}AA}\sin\theta}{m_{h_{2}}^{2}}\right)^{2}\] \[\times\left(f_{u}^{[p]}+f_{d}^{[p]}+f_{s}^{[p]}+\frac{2}{9}f_{G}^{[p]}\right)^{2},\] (V.4) where \(f_{u}^{[p]},~{}f_{d}^{[p]},~{}f_{s}^{[p]}\) and \(f_{G}^{[p]}\) are proton form factors [112] and the minus sign in the first bracket derives from the minus sign in Eq. (II.10), with the couplings \[g_{h_{2}AA} =\frac{\sqrt{2}a_{1}+m_{h_{2}}^{2}v_{s}}{2v_{s}^{2}}\cos\theta,\] (V.5) \[g_{h_{1}AA} =\frac{\sqrt{2}a_{1}+m_{h_{1}}^{2}v_{s}}{2v_{s}^{2}}\sin\theta.\] (V.6) In this work, MicrOMEGAs [112] is also used to calculate the spin-independent cross section. If the DM abundance is less than the observed DM abundance, the rescaled spin-independent cross section \(\sigma_{SI}\)(rescaled) is obtained according to \[\sigma_{SI}(\text{rescaled})=\sigma_{SI}\frac{\Omega_{cxSM}h^{2}}{\Omega_{DM}h^{2}}.\] (V.7) The general scan of Eq. (V.3) is also performed for \(\sigma_{SI}\)(rescaled), as shown in Fig. 5a; the color coding is the same as in the relic density figure. The constraint from the direct dark matter search experiment XENON1T [114] is shown as the solid line, and the expected sensitivities of the future experiments XENONnT [115] and PandaX-4T [116] are shown by the dashed lines. Currently, we can exclude dark matter masses between 65 GeV and 120 GeV under the premise of a SFOEWPT; most of the SFOEWPT points in our scanned space can be covered by XENONnT. Fig. 5b shows the rescaled cross section vs. the singlet-like Higgs mass, with \(m_{A}\) fixed to 62.5 GeV and \(m_{h_{2}}\) varying from 70 GeV to 1 TeV. A minimum occurs at \(m_{h_{2}}=m_{h_{1}}\), as indicated by Eq. (V.4). From Fig. 5b, we see that very few SFOEWPT parameter points survive the direct dark matter search. On the contrary, in Fig. 5a most parameter regions with \(m_{A}>62.5\) GeV that realise a SFOEWPT survive the current direct DM search and can be tested by XENONnT.
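A minimal Python sketch of the rescaled direct-detection cross section, Eqs. (V.4)-(V.7), follows. The proton form-factor values are typical defaults of the kind used in MicrOMEGAs and are assumptions here, as is the unit conversion constant.

```python
import numpy as np

GEV2_TO_CM2 = 3.894e-28                 # 1 GeV^-2 expressed in cm^2
fu, fd, fs = 0.0153, 0.0191, 0.0447     # assumed proton form factors
fG = 1.0 - (fu + fd + fs)

def sigma_si_rescaled(m_A, m_h1, m_h2, theta, a1, vs, omega_ratio,
                      v0=246.0, m_p=0.938):
    """Eq. (V.4) rescaled by Omega_cxSM/Omega_DM as in Eq. (V.7); cm^2 units."""
    g1AA = (np.sqrt(2) * a1 + m_h1**2 * vs) / (2 * vs**2) * np.sin(theta)  # Eq. (V.6)
    g2AA = (np.sqrt(2) * a1 + m_h2**2 * vs) / (2 * vs**2) * np.cos(theta)  # Eq. (V.5)
    amp = g1AA * np.cos(theta) / m_h1**2 - g2AA * np.sin(theta) / m_h2**2
    sigma = (m_p**4 / (2 * np.pi * v0**2 * (m_p + m_A)**2)
             * amp**2 * (fu + fd + fs + 2.0 / 9.0 * fG)**2)   # GeV^-2
    return sigma * GEV2_TO_CM2 * min(omega_ratio, 1.0)
```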
Therefore, the \(m_{A}\) region beyond 62.5 GeV is the more valuable one for DM direct detection. A similar study of the DM relic density is presented in Ref. [68], which, like this paper, suggests that most of the parameter region satisfying the DM relic density and SFOEWPT conditions survives the XENON1T search and can be probed by XENONnT and PandaX-4T. Compared with Ref. [68], this paper finds some parameter space that survives even the XENONnT search. We further study the cases \(m_{A}\simeq 62.5\) GeV and \(m_{A}>62.5\) GeV, and identify SFOEWPT parameter regions beyond the detection capability of XENONnT.

Figure 5: The rescaled spin-independent DM-proton cross section for cxSM parameter points whose DM relic density is below the current measurement. Panel (a) shows the distribution over the general parameter space of Eq. (V.3). In panel (b), \(m_{A}\) is fixed to 62.5 GeV and \(m_{h_{2}}\) varies from 70 GeV to 1 TeV. The blue points satisfy the conditions that induce a SFOEWPT; the orange points induce an EWPT other than strong first order. The solid line corresponds to the 95% C.L. exclusion from XENON1T, and the dashed lines are the expected sensitivities of XENONnT (red) and PandaX-4T (black).

## VI Heavy scalar resonance search bounds at the LHC

The cxSM predicts that the singlet-like scalar boson \(h_{2}\) can be produced at the LHC and decay to various Standard Model particles. Thus, \(h_{2}\) behaves as a heavy spin-0 resonance at colliders when \(m_{h_{2}}>m_{h_{1}}\). In this section, we investigate the constraints on the cxSM parameter space from direct heavy resonance searches at the LHC. The production cross section times branching fraction for \(h_{2}\to WW\) [117], \(ZZ\) [117], \(hh\) [118], \(\tau\tau\) [119] and \(bb\) [120] is scanned over the parameter space of Eq. (V.3). These calculations rely on the mixing angle \(\theta\) and on the widths of the additional decay channels. Following Eq. (II.10), the production cross section of \(h_{2}\) can be expressed as \(\sigma_{pp\to h_{2}}=\sin^{2}\theta\ \sigma_{pp\to h}\) for each production mode. The decay widths of the SM channels are likewise obtained by multiplying the Standard Model widths by \(\sin^{2}\theta\), \(\Gamma_{h_{2}\to XY}=\sin^{2}\theta\Gamma_{h\to XY}^{SM}\). Among the additional decay channels, the \(h_{2}\to AA\) decay is considered because of the \(\delta_{2}\) term in the Lagrangian, in analogy to the discussion of \(h_{1}\to AA\) in Sec. V. The \(h_{2}h_{1}h_{1}\) vertex also exists, with coupling \[g_{h_{2}h_{1}h_{1}}= \sin\theta\cos\theta\times\] \[\left[\frac{3a_{1}}{\sqrt{2}v_{s}}\frac{\sin\theta}{v_{s}}+(m_{h_{1}}^{2}+\frac{m_{h_{2}}^{2}}{2})(\frac{\sin\theta}{v_{s}}-\frac{\cos\theta}{v_{0}})\right],\] (VI.1) due to the mixing; thus we must also include the \(h_{2}\to h_{1}h_{1}\) channel. In addition, the three-body decay channel \(h_{2}\to h_{1}AA\) is also taken into consideration
because of the non-zero coupling \(g_{h_{2}h_{1}AA}\), with \[\begin{split} g_{h_{2}h_{1}AA}&=\frac{1}{2v_{0}v_{s}^{3}}(\sqrt{2}a_{1}v_{0}\sin\theta\cos\theta+m_{h_{2}}^{2}v_{s}^{2}\cos^{2}\theta\sin^{2}\theta\\ &-m_{h_{1}}^{2}v_{s}^{2}\cos^{2}\theta\sin^{2}\theta+m_{h_{1}}^{2}v_{s}v_{0}\cos\theta\sin^{3}\theta\\ &+m_{h_{2}}^{2}v_{0}v_{s}\cos^{3}\theta\sin\theta).\end{split}\] (VI.2) Apart from the direct \(h_{2}\to h_{1}AA\) decay, there is an interesting class of processes in which one or both of the Higgs bosons from the di-Higgs decay channel are off shell, leading to one or more pairs of heavy particles (\(WW\), \(t\bar{t}\), _etc._, or a pair of heavy dark matter particles) in the final state, e.g. \(h_{2}\to h_{1}h_{1}^{*}\to h_{1}AA\). One nominally expects these contributions to be suppressed due to the off-shell \(h_{1}\) propagator and the additional phase-space suppression. We find, however, that the contribution from the \(h_{2}\to h_{1}h_{1}^{*}\to h_{1}AA\) channel can provide significant discovery potential. The differential rate of this mediated three-body decay is calculated as described in Appendix B; integrating it yields the partial width. With these additional decays, the branching ratio for an \(h_{2}\) decay to Standard Model particles can be written as \[BR(h_{2}\to XX)=\frac{\sin^{2}\theta\ \Gamma_{h_{2}\to XX}}{\sin^{2}\theta\ \Gamma_{h}^{SM}+\Gamma_{h_{2}}^{BSM}},\] (VI.3) where \[\Gamma_{h_{2}}^{BSM}=\Gamma_{h_{2}\to h_{1}h_{1}}+\Gamma_{h_{2}\to AA}+\Gamma_{h_{2}\to h_{1}AA}+\Gamma_{h_{2}\to h_{1}t\bar{t}}.\] (VI.4) Finally, the overall cross section in the cxSM relevant for heavy resonance searches can simply be written as \(\sigma_{pp\to h_{2}}\times BR(h_{2}\to XX)\). Figs. 6a and 6b show the experimental constraints from the \(h_{2}\to WW\) and \(ZZ\) channels for the parameter points satisfying a SFOEWPT, where both vector-boson fusion (VBF) and gluon-gluon fusion (ggF) production modes are considered. Fig. 6c shows the constraint for the same points from ggF+VBF heavy Higgs searches combining the results of the \(b\bar{b}b\bar{b}\), \(b\bar{b}\gamma\gamma\) and \(b\bar{b}\tau\bar{\tau}\) final states. The black curves in the figures are the experimental upper limits on the overall cross section, above which the parameter points are excluded. The other channels, including \(h_{2}\to\tau\tau\) and \(h_{2}\to bb\), are found to have hardly any exclusion power in the scanned parameter space and are thus not shown in the figures. Points with a heavy \(h_{2}\) that survive the diboson searches, \(h_{2}\to VV\), are likely to have a lower \(A\) mass: the BSM branching ratio \(h_{2}\to AA\) (\(h_{2}\to h_{1}AA\)) becomes nonzero for \(m_{h_{2}}\geq 2m_{A}\) (\(m_{h_{2}}\geq 2m_{A}+m_{h_{1}}\)) and thus reduces the branching ratio of \(h_{2}\to VV\), making it difficult to exclude this space via diboson resonance searches.

Figure 6: Cross sections of SFOEWPT parameter points for (a) VBF \(h_{2}\to VV\), (b) ggF \(h_{2}\to VV\) and (c) VBF\(+\)ggF \(h_{2}\to h_{1}h_{1}\) as functions of \(m_{h_{2}}\). The color bar represents the mass of the pseudoscalar boson \(A\). The black curves show the 95% C.L. upper limits from the ATLAS heavy resonance searches [117; 118], above which the parameter points are excluded.
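A minimal sketch of the bookkeeping in Eqs. (VI.3)-(VI.4) is given below; the partial widths are assumed to be precomputed elsewhere (the SM-like ones at \(m_{h_{2}}\), the BSM ones e.g. via Appendix B), and the function name is ours.

```python
def br_h2_to_sm(gamma_sm_xx, gamma_sm_tot, sin2_theta,
                gamma_h1h1, gamma_AA, gamma_h1AA, gamma_h1tt=0.0):
    """Branching ratio of h2 into an SM final state XX, Eq. (VI.3).
    All widths in GeV; gamma_sm_* are SM Higgs widths evaluated at m_h2."""
    gamma_bsm = gamma_h1h1 + gamma_AA + gamma_h1AA + gamma_h1tt  # Eq. (VI.4)
    return sin2_theta * gamma_sm_xx / (sin2_theta * gamma_sm_tot + gamma_bsm)

# The search observable is then
# sigma(pp -> h2 -> XX) = sin^2(theta) * sigma_SM(pp -> h) * BR(h2 -> XX).
```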
## VII Prospects for heavy scalar searches in b-jets\(+\)MET channels

When considering the cxSM \(b\bar{b}+\text{MET}\) signal, we consider a comprehensive set of processes (CSPs) that contribute to this channel. Our search strategy is inspired by strategies used for mono-Higgs plus MET searches, and is then optimized to account for other important sub-processes, such as those in which an off-shell \(h_{1}\) mediates \(b\bar{b}\) pair production. To carry out detailed simulations for the HL-LHC, we select a set of benchmark parameter points after applying all the constraints and requirements discussed in the previous sections. In subsection VII.1, we explore aspects of the underlying sub-processes and the allowed parameter space, as they bear on the LHC signal. The selection criteria and the signal signature are presented in subsection VII.2. Finally, we find that the discovery potential of the \(b\bar{b}+\text{MET}\) channel at the HL-LHC is significant: most parameter points can be reached with a significance of \(\geq 1.96\sigma\).

### The complete set of cxSM processes for b-jets plus MET

In the cxSM, multiple processes contribute to the \(b\bar{b}+\text{MET}\) final state, including the di-Higgs channels, the direct decay channels of the heavy Higgs boson and mono-Higgs plus b-jets. The DM candidate can be produced from the direct four-particle vertex of the heavy Higgs boson \(h_{2}\) or from the subsequent decay of an on-shell or off-shell \(h_{1,2}\) boson. We consider all processes with coupling orders satisfying \(\text{QCD}\leq 2\) and \(\text{QED}\leq 4\) in MadGraph [121]. The CSPs comprise more than one hundred diagrams; a brief overview of the main types is illustrated in Fig. 7. The cross section is dominated by diagrams (a) and (b); in particular, diagram (b) with the mediator replaced by an off-shell \(h_{1}\) is found to be significant. Previous studies of collider searches in the cxSM include:

* The \(h_{1}\to A\ A\) case with \(m_{A}=62.5\) GeV [64], which satisfies the Higgs invisible decay constraint and retains a relatively large parameter space.
* The degenerate-scalar scenario with \(|m_{h_{2}}-m_{h_{1}}|\lesssim\mathcal{O}(1)\) GeV [70; 122]. Collider signatures in this scenario are SM-like, and therefore current experimental data cannot distinguish them from the SM predictions.

However, the on-shell \(h_{1}\to A\ A\) decay with \(m_{A}=62.5\) GeV is not expected to significantly enhance the sensitivity of the \(b\bar{b}+\text{MET}\) search, because its branching ratio is already tightly bounded by the Higgs invisible decay constraint. Moreover, with \(m_{A}=62.5\) GeV, we find that the parameter space is tightly constrained by the current experimental requirements. Therefore, in this study we investigate the most general case, \(m_{A}\geq 62.5\) GeV. However, since all points with \(m_{A}\) in the range of [62.5 GeV, 120 GeV] are excluded by XENON1T, as mentioned in Sec. V, we further restrict our analysis to \(m_{A}\geq 120\) GeV. To choose benchmark mass points for the analysis, we impose the requirement \(m_{h_{2}}>m_{h_{1}}+2\times m_{A}\). This condition ensures that the \(h_{2}\) mediator in diagrams (a) and (b) can be on shell, and thus enhances the cross section of the CSP signal. The analysis is therefore conducted on the ten mass points listed in Table 2. Taking into account all the current constraints and requirements discussed in the previous sections, it is not possible to find a shared benchmark point for the remaining parameters (\(a_{1}\), \(v_{s}\), \(\sin\theta\)) that works for all mass points. For instance, the SFOEWPT tends to prefer a larger \(-a_{1}\) as \(m_{h_{2}}\) becomes heavier.
\begin{table} \begin{tabular}{|l||c c c c c c c c c c|} \hline \(m_{A}\)/GeV & 130 & 130 & 130 & 130 & 230 & 230 & 230 & 330 & 330 & 430 \\ \hline \(m_{h_{2}}\)/GeV & 400 & 600 & 800 & 1000 & 600 & 800 & 1000 & 800 & 1000 & 1000 \\ \hline \end{tabular} \end{table} Table 2: Mass points used in the analysis.

Figure 7: Representative Feynman diagrams for the generation of signal events with b-jets plus MET final states at the LHC.

The relationship between \(m_{h_{2}}\) and \(a_{1}\) is depicted in Fig. 8(a), from which it is evident that there is no single choice of \(a_{1}\) that can be used for the whole mass range between \(m_{h_{2}}=400\) GeV and \(m_{h_{2}}=1000\) GeV. This \(a_{1}\)-\(m_{h_{2}}\) correlation leads to an increase in the cross section of certain processes in the CSPs. Specifically, the process \(pp\to h_{1}^{*}\to h_{1}AA\) of diagram 7(b) is found to be reinforced, and it even becomes the dominant process for heavy \(h_{2}\) masses. Its cross section is proportional to \(g_{h_{1}h_{1}AA}\), which can be expressed as \[g_{h_{1}h_{1}AA} =\frac{1}{2v_{0}v_{s}^{2}}(m_{h_{1}}^{2}v_{s}^{2}\cos^{5}\theta+m_{h_{1}}^{2}v_{s}^{2}\cos^{3}\theta\sin^{2}\theta\] \[+\sqrt{2}a_{1}v_{0}\sin^{3}\theta+m_{h_{1}}^{2}v_{0}v_{s}\cos^{2}\theta\sin^{2}\theta\] \[+m_{h_{1}}^{2}v_{0}v_{s}\sin^{5}\theta).\] (VII.1) From this formula, it can be seen that the coupling becomes larger with increasing values of \(-a_{1}\), since \(\sin\theta\) is negative due to the heavy-scalar requirement discussed in Sec. IV. The resulting correlation between \(m_{h_{2}}\) and \(g_{h_{1}h_{1}AA}\) is shown in Fig. 9. To justify the other parameter ranges, specifically that of \(v_{s}\), we must consider the collider constraints in the lower \(m_{h_{2}}\) mass region. For large \(v_{s}\gg 100\) GeV, \(pp\to h_{2}\to h_{1}h_{1}\) becomes dominant in the low \(m_{h_{2}}\) region; this process then attains a large cross section, \(\sim\mathcal{O}(10^{-1})-\mathcal{O}(1)\) pb around 400 GeV depending on the mixing angle, and is thus excluded by the current LHC bound, see Fig. 6c. The range of the mixing angle, \(\sin\theta\), is generally constrained by the EWPO given in Fig. 1. The correlations between \(m_{h_{2}}\) and the other parameters, namely \(v_{s}\), \(\sin\theta\) and \(m_{A}\), are also depicted in Fig. 8. Among these correlations, the decrease in \(v_{s}\) can be attributed to the enhancement of the \(h_{2}\to h_{1}AA\) process. Moreover, as the mass of \(h_{2}\) increases, the mixing angle \(\sin\theta\) is expected to approach 0, but points too close to 0 are rejected by the SFOEWPT requirement. The detailed discussion of the correlation with \(m_{A}\) can be found in Sec. VI.
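For reference, Eq. (VII.1) as printed can be evaluated directly; the following Python helper is an illustrative transcription (the function name is ours), useful e.g. for reproducing the trend of Fig. 9.

```python
import numpy as np

def g_h1h1AA(m_h1, theta, a1, vs, v0=246.0):
    """Transcription of Eq. (VII.1); GeV units for masses, a1 in GeV^3."""
    c, s = np.cos(theta), np.sin(theta)
    return (m_h1**2 * vs**2 * c**5
            + m_h1**2 * vs**2 * c**3 * s**2
            + np.sqrt(2) * a1 * v0 * s**3
            + m_h1**2 * v0 * vs * c**2 * s**2
            + m_h1**2 * v0 * vs * s**5) / (2 * v0 * vs**2)

# Larger -a1 with sin(theta) < 0 enhances the coupling, as stated above:
print(g_h1h1AA(125.0, -0.1, a1=-300.0**3, vs=50.0))
```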
### Analysis and results

In this subsection, we describe the simulation procedure used to select b-jets plus MET signals at the HL-LHC. Monte Carlo samples are generated for both the CSP signal and the background events at a \(pp\) collider with a center-of-mass energy of 14 TeV and normalized to the HL-LHC integrated luminosity of 3000 fb\(^{-1}\). We performed a detailed simulation for the mass points listed in Table 2, with the remaining three parameters for each mass point chosen randomly within the allowed parameter space; these parameters affect the relative contributions of the different diagrams in the CSPs and thereby the selection efficiency. It is important to note that the exclusion reach in the \(m_{h_{2}}-m_{A}\) plane obtained from our search is intended to be general. Hence, variables that could potentially provide discrimination power between the most dominant diagrams, such as the angular separation of the \(b\bar{b}\) system and the missing transverse momentum, were not considered in this analysis.

Figure 9: Distribution of \(g_{h_{1}h_{1}AA}\) after the requirements of a SFOEWPT, the DM constraints and the heavy Higgs searches at the LHC. The magnitude of \(g_{h_{1}h_{1}AA}\) increases as \(m_{h_{2}}\) increases.

The signal Monte Carlo (MC) samples are generated at leading order using MadGraph5_aMC@NLO [121], with the UFO model and the parameter relationships implemented via FeynRules [110]. The events are then processed through Pythia8 [123] for parton showering and hadronization. Finally, the simulated events are passed through Delphes3 [124] to account for the detector response. Associated background processes from top quark pair production (ttbar), single top quark production (single-top), Vh production, diboson production, and processes involving a vector boson in association with jets (V+jets) are generated using Pythia8 [123]. The aim is to simulate backgrounds that have visible final states similar to our target signal and can leak into the signal region. Therefore, all background events are required to have at most one lepton and at least one bottom quark; additionally, they must have at least one neutrino to satisfy the requirement of large missing transverse energy. Table 3 provides a summary of the background generation. The showering and simulation of the background events follow the same procedure as for the signal. The generated Monte Carlo samples are analyzed using MadAnalysis5 [125]. During object reconstruction, basic requirements on the transverse momentum and pseudorapidity are applied: jets are required to have \(p_{T}>25\) GeV and \(|\eta|<2.5\), while electrons and muons are required to have \(p_{T}>10\) GeV and \(|\eta|<2.4\). These requirements ensure the quality and reliability of the reconstructed objects in the analysis. Two general cuts are initially applied to distinguish the signal from the background events for all mass points:

* Cut-1: \(n_{\rm lepton}=0\).
* Cut-2: \(n_{b-{\rm jets}}=2\).

After applying these cuts, we present the distribution of the invariant mass of the \(b\bar{b}\) system in Fig. 10; in this figure, the signal events have been rescaled to match the remaining background events. To identify the bottom-quark pair from the SM-like Higgs boson decay, we implement a related cut:

* Cut-3: 100 GeV \(<m_{b\bar{b}}<\) 140 GeV.

Furthermore, we take into account the missing transverse energy to further distinguish signal events from the background. As depicted in Fig. 11, this variable is expected to be significantly large in our signal samples. To ensure that the statistical uncertainty of the generated background does not have a substantial impact, we apply a relatively loose cut:

* Cut-4: MET \(>350\) GeV.

The purpose of this cut is to ensure that the signal events can be effectively separated from the background. After applying the selection criteria, the signal cross section that can be probed at the 95% confidence level is close to \(10^{-2}\) pb; the exact exclusion cross sections are listed in Table 4.
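The event selection can be summarized programmatically. The sketch below mimics Cuts 1-4 on a hypothetical per-event record; the `event` dictionary layout is an assumption of ours, not the actual Delphes/MadAnalysis5 data structures.

```python
import math

def m_inv(j1, j2):
    # Massless-jet approximation: m^2 = 2 pT1 pT2 (cosh(d_eta) - cos(d_phi)).
    m2 = 2 * j1["pt"] * j2["pt"] * (math.cosh(j1["eta"] - j2["eta"])
                                    - math.cos(j1["phi"] - j2["phi"]))
    return math.sqrt(max(m2, 0.0))

def passes_selection(event):
    leptons = [l for l in event["leptons"]
               if l["pt"] > 10.0 and abs(l["eta"]) < 2.4]
    bjets = [j for j in event["jets"]
             if j["btag"] and j["pt"] > 25.0 and abs(j["eta"]) < 2.5]
    if len(leptons) != 0:                     # Cut-1: lepton veto
        return False
    if len(bjets) != 2:                       # Cut-2: exactly two b-jets
        return False
    if not 100.0 < m_inv(*bjets) < 140.0:     # Cut-3: Higgs-mass window
        return False
    return event["met"] > 350.0               # Cut-4: large MET
```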
Figure 10: Distributions of the invariant mass of the bottom-quark pair system after the first two cuts.

Figure 11: Distributions of the missing transverse energy after the first three cuts.

\begin{table} \begin{tabular}{|c|c|c|c|} \hline & Process & \(\sigma\) (pb) & Generator \\ \hline ttbar & \(t\bar{t}\) & 493 & Pythia8 \\ \hline single-top & \(tq\) & 172 & Pythia8 \\ \hline \multirow{2}{*}{Vh} & \(Wh\) & 0.227 & Pythia8 \\ \cline{2-4} & \(Zh\) & 0.0768 & Pythia8 \\ \hline \multirow{2}{*}{diboson} & \(WZ\) & 4.94 & Pythia8 \\ \cline{2-4} & \(ZZ\) & 1.25 & Pythia8 \\ \hline \multirow{2}{*}{V+jets} & \(W+jets\) & 55.8 & MG5\_aMC \\ \cline{2-4} & \(Z+jets\) & 218 & MG5\_aMC \\ \hline \end{tabular} \end{table} Table 3: Information on the background MC samples. The cross sections (\(\sigma\)) are calculated with the requirement that there be at most one lepton, at least one neutrino and at least one bottom quark in the final state.

From the table, we observe that the selection efficiency depends primarily on the mass of the dark matter candidate \(A\), as one can expect from the MET cut. To cover the entire 300 GeV \(<m_{h_{2}}<\) 1000 GeV range for each \(m_{A}\) point, we employ linear interpolation and extrapolation of the limits obtained from our analysis; in particular, for the case of \(m_{A}=430\) GeV, we assume that the limit remains constant throughout the entire range. Note that the expected discovery ability is enhanced at \(m_{h_{2}}=400\) GeV for the case of \(m_{A}=130\) GeV; therefore, our approach to obtaining the upper limit in the low \(m_{h_{2}}\) region is conservative, and the actual exclusion limit in that region might be even stronger than what is indicated by our study. Subsequently, we employ a bivariate spline approximation on this rectangular mesh to obtain upper limits across the \(m_{h_{2}}-m_{A}\) plane. We then scatter the parameter points from our general scanning space (Eq. V.3) on this two-dimensional plane, taking into account all the current experimental constraints. The resulting plot is shown in Fig. 12, where the color encodes the comparison of the cross section with the upper limits obtained from our study: points colored orange can be probed at \(1.96\sigma\), while red points can be probed at \(5\sigma\), allowing for the discovery or exclusion of those specific parameter points. Several observations are in order. Firstly, in Fig. 12 there is a distinct line with a positive slope at the upper boundary of the populated mass-point region, particularly noticeable for heavier \(h_{2}\) values; this slope corresponds to the relationship \(m_{h_{2}}=2m_{A}\). The region located above this slope is largely excluded by the results of the heavy scalar resonance searches discussed in Section VI. Secondly, the density of red points is more pronounced in the region of heavier \(h_{2}\) masses, suggesting a more promising discovery potential in the higher \(h_{2}\) mass range. At first glance, this result may seem counterintuitive; however, it can be attributed to the increasing cross section of the \(pp\to h_{1}^{*}\to h_{1}AA\) process, as discussed in Subsection VII.1. Finally, a significant portion of the parameter space with heavier \(h_{2}\) masses can be effectively probed by the \(b\bar{b}\)+MET search at the HL-LHC.
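The interpolation step can be illustrated with the mass points and limits of Table 4. The paper uses a bivariate spline on this mesh; the sketch below uses simple linear interpolation on the scattered grid as a stand-in.

```python
import numpy as np
from scipy.interpolate import griddata

# (m_h2, m_A) mass points and 95% C.L. exclusion limits in pb, from Table 4.
points = np.array([(400, 130), (600, 130), (800, 130), (1000, 130),
                   (600, 230), (800, 230), (1000, 230),
                   (800, 330), (1000, 330), (1000, 430)], dtype=float)
limits = np.array([0.0086, 0.011, 0.0117, 0.0113,
                   0.0006, 0.0063, 0.0063,
                   0.0045, 0.0043, 0.0035])

def limit_pb(m_h2, m_A):
    """Interpolated exclusion cross section; NaN outside the sampled region."""
    return float(griddata(points, limits, (m_h2, m_A), method="linear"))

print(limit_pb(700.0, 180.0))
```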
This indicates that there is already notable discovery potential in some regions of the parameter space if this analysis were migrated to the current LHC.

## VIII Conclusion

Through the spontaneous and soft breaking of a global U(1) symmetry, the cxSM introduces two additional degrees of freedom, one catalyzing a possible SFOEWPT and the other providing a viable DM candidate. Previous studies have demonstrated the viability of the cxSM for both DM and a SFOEWPT and have elucidated the correlation between the singlet scalar-SM Higgs coupling and the occurrence of a SFOEWPT in the cxSM parameter space. In addition, there exists a coupling \(h_{1}AA\) between the SM-like Higgs and the pseudoscalar (DM) pair, which induces the Higgs invisible decay for sufficiently small pseudoscalar masses. To avoid an experimentally excluded excess of the Higgs invisible decay, one way is to restrict \(m_{A}\) to a narrow window around \(m_{h_{1}}/2\) or to implement a delicate fine-tuned cancellation between \(a_{1}\) and \(v_{s}\). Alternatively, one may take \(m_{A}>m_{h_{1}}/2\), so that the Higgs invisible decay is kinematically forbidden. In both cases, a distinctive signal in \(pp\) collisions is a \(b\bar{b}\) pair plus MET, with various contributions being mediated by on- and/or off-shell \(h_{1,2}\) bosons. Searches for such signal processes have never been performed for the cxSM. There is therefore strong motivation to study the HL-LHC reach into the EWPT-DM viable cxSM parameter space, and in this work we have performed a detailed analysis of this reach.

\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline & \(m_{A}=130\) GeV & \(m_{A}=230\) GeV & \(m_{A}=330\) GeV & \(m_{A}=430\) GeV \\ \hline \(m_{h_{2}}=400\) GeV & 0.0086 pb & & & \\ \hline \(m_{h_{2}}=600\) GeV & 0.011 pb & 0.0006 pb & & \\ \hline \(m_{h_{2}}=800\) GeV & 0.0117 pb & 0.0063 pb & 0.0045 pb & \\ \hline \(m_{h_{2}}=1000\) GeV & 0.0113 pb & 0.0063 pb & 0.0043 pb & 0.0035 pb \\ \hline \end{tabular} \end{table} Table 4: The 95% C.L. exclusion cross sections for each mass point in the analysis.

Figure 12: Exclusion plot in the \(m_{h_{2}}-m_{A}\) plane, with green stars showing the benchmark mass points used in the analysis. These points survived all constraints discussed in the previous sections. The red parameter points can be detected or excluded at \(5\sigma\) at the HL-LHC via our analysis, the orange points can only be reached at \(1.96\sigma\), and the blue points remain beyond reach.

Compared with the most relevant heavy resonance searches previously considered at the LHC, which include the di-Higgs channel (see Fig. 6c) and the \(WW+ZZ\) channels (see Figs. 6a and 6b) and which have the capacity to probe heavy resonance production up to about \(\mathcal{O}(10)\) pb, the present analysis via the \(b\bar{b}+\text{MET}\) channel improves the sensitivity significantly, see Tab. 4. We find that a significant portion of the viable parameter space can be discovered or excluded by the \(b\bar{b}+\text{MET}\) search. While we considered a complete set of processes with \(b\bar{b}+\text{MET}\) final states, we designed the detection method based on the characteristics of the heavy scalar resonance signal events. We find that one of the dominant processes, \(pp\to h_{1}^{*}\to h_{1}AA\), induced by the coupling \(g_{h_{1}h_{1}AA}\), is reinforced significantly by the increasing \(-a_{1}\) in the heavy \(m_{h_{2}}\) region; the selection is thus more likely to detect a heavier \(h_{2}\) with larger \(|a_{1}|\).
We further find that one can probe the EWPT-DM viable cxSM for a heavy scalar mass up to \(\sim 1\) TeV.

## IX Acknowledgement

M.J. Ramsey-Musolf and W. Zhang were supported in part by the National Natural Science Foundation of China under grant no. 11975150 and by the Ministry of Science and Technology of China under grant no. WQ20183100522. M. J. Ramsey-Musolf also gratefully acknowledges support under the Double First Class Plan of the Shanghai Jiao Tong University and sponsorship from the Shanghai Tang Junyuan Education Foundation. Y. Cai received financial support from the China Scholarships Council program. L. Zhang's work was supported by the National Science Fund of China for Excellent Young Scholars under grant number 12122507.

## Appendix A Oblique Parameters

Following the notation of Peskin and Takeuchi [102; 103], the contributions to S, T and U from the new scalar can be expressed as \[\Delta S =\frac{1}{\pi}|\sin\theta|^{2}\{B_{0}(0,m_{h_{2}},M_{Z})-B_{0}(M_{Z},m_{h_{2}},M_{Z})\] \[+\frac{1}{M_{Z}^{2}}\left[B_{22}(M_{Z},m_{h_{2}},M_{Z})-B_{22}(0,m_{h_{2}},M_{Z})\right]\},\] \[\Delta T =\frac{1}{4\pi g_{w}^{2}}|\sin\theta|^{2}\{-B_{0}(0,m_{h_{2}},M_{W})+\frac{1}{c_{w}^{2}}B_{0}(0,m_{h_{2}},M_{Z})\] \[+\frac{1}{M_{W}^{2}}\left[B_{22}(0,m_{h_{2}},M_{W})-B_{22}(0,m_{h_{2}},M_{Z})\right]\},\] \[\Delta(U+S) =\frac{1}{\pi}|\sin\theta|^{2}\{B_{0}(0,m_{h_{2}},M_{W})-B_{0}(M_{W},m_{h_{2}},M_{W})\] \[+\frac{1}{M_{W}^{2}}\left[-B_{22}(0,m_{h_{2}},M_{W})+B_{22}(M_{W},m_{h_{2}},M_{W})\right]\},\] where \(B_{0}\) and \(B_{22}\) are Passarino-Veltman functions.

## Appendix B Three-body decay phase space

We consider the three-body decay process, for which the differential decay rate is \[d\Gamma=\frac{(2\pi)^{4}}{2m_{h_{2}}}|\mathcal{M}|^{2}d\Phi_{3}, \tag{19}\] where \(d\Phi_{n}\) denotes the \(n\)-body phase space. Since the standard form of the phase-space volume element with \(n\) final-state particles can be decomposed into a product of two-body phase spaces, \(d\Phi_{3}\) can be written as \[d\Phi_{3} =d\Phi_{2}(m_{AA},m_{A},m_{A})d\Phi_{2}(m_{h_{2}},m_{AA},m_{h_{1}})(2\pi)^{2}dm_{AA}^{2} \tag{20}\] \[=d\Omega^{*}\frac{|p^{*}|}{(2\pi)^{6}4m_{AA}}d\Omega_{3}\frac{|p_{3}|}{(2\pi)^{6}4m_{h_{2}}}(2\pi)^{2}dm_{AA}^{2},\] where \(\Omega^{*}\) and \(\Omega_{3}\) are the solid angles of the off-shell SM-like Higgs and the heavy resonance, respectively. The integration variable, \(m_{AA}\), is the invariant mass of the two-DM system, with integration range \([2m_{A},m_{h_{2}}-m_{h_{1}}]\); \(p^{*}\) (\(p_{3}\)) is the momentum of the off-shell (on-shell) SM-like Higgs. Thus the differential decay rate can be expressed as \[d\Gamma =\frac{(2\pi)^{5}}{16m_{h_{2}}^{2}}|\mathcal{M}|^{2}dm_{AA}d\Omega^{*}d\Omega_{3}\] \[=\frac{\lambda^{\frac{1}{2}}(m_{AA},m_{A},m_{A})\lambda^{\frac{1}{2}}(m_{h_{2}},m_{AA},m_{h_{1}})}{32\pi^{3}m_{h_{2}}^{2}}\] \[\quad\times|\frac{g_{211}g_{1AA}}{m_{AA}^{2}}|^{2}\ dm_{AA}. \tag{21}\] where we have used \(|\mathcal{M}|^{2}=|\frac{g_{211}g_{1AA}}{m_{AA}^{2}}|^{2}\), with \(g_{211}\equiv g_{h_{2}h_{1}h_{1}}\) and \(g_{1AA}\equiv g_{h_{1}AA}\), and \[\lambda^{\frac{1}{2}}(m_{12},m_{1},m_{2})=\frac{\sqrt{\left[m_{12}^{2}-(m_{1}^{2}+m_{2}^{2})\right]^{2}-4m_{1}^{2}m_{2}^{2}}}{2m_{12}}.
\tag{22}\] Based on these relations, we calculate both the two-body and three-body branching ratios and scan over the general parameter space via \[BR(h_{2}\to h_{1}AA)=\frac{\Gamma_{h_{2}\to h_{1}AA}}{\sin^{2}\theta\ \Gamma_{h}^{SM}+\Gamma_{h_{2}\to AA}+\Gamma_{h_{2}\to h_{1}AA}} \tag{23}\] for the three-body case; the two-body case has a similar form.

## Appendix C Additional content for the HL-LHC search

The cross sections of the parameter points surviving all of the current experimental constraints are shown in Fig. 13 for \(m_{A}\) around 130 GeV, 230 GeV, 330 GeV and 430 GeV, respectively. Notice that the cross section increases as \(m_{h_{2}}\) increases. The reason is that the coupling \(g_{h_{1}h_{1}AA}\) grows with \(m_{h_{2}}\), hence the cross section of the dominant process \(pp\to h_{1}^{*}\to h_{1}AA\) increases. The dashed line and the solid line in each sub-figure represent the \(5\sigma\) and \(1.96\sigma\) discovery significance at the HL-LHC via our analysis for the corresponding \(m_{A}\) values of 130 GeV, 230 GeV, 330 GeV, and 430 GeV.
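The integration in Eqs. (21)-(22) is one-dimensional and straightforward to carry out numerically; a minimal Python sketch is given below, with the trilinear couplings \(g_{211}\) and \(g_{1AA}\) supplied as inputs.

```python
import numpy as np
from scipy.integrate import quad

def lam_half(m12, m1, m2):
    """Momentum factor of Eq. (22)."""
    arg = (m12**2 - (m1**2 + m2**2))**2 - 4 * m1**2 * m2**2
    return np.sqrt(max(arg, 0.0)) / (2 * m12)

def gamma_h2_h1AA(m_h2, m_h1, m_A, g211, g1AA):
    """Integrate the differential rate of Eq. (21) over m_AA (GeV units)."""
    def integrand(mAA):
        return (lam_half(mAA, m_A, m_A) * lam_half(m_h2, mAA, m_h1)
                / (32 * np.pi**3 * m_h2**2) * (g211 * g1AA / mAA**2) ** 2)
    lo, hi = 2 * m_A, m_h2 - m_h1
    return quad(integrand, lo, hi)[0] if hi > lo else 0.0
```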
2301.12281
Kohn-Sham approximation scheme for an interacting Bose-condensed gas
The grand canonical density functional theory for inhomogeneous systems of interacting bosons is developed in the effective action approach. The Legendre transform of the generating functional for Green's functions is used to define the effective action as a functional of both the particle density and the order parameter. Expanding the thermal effective action in powers of the Planck constant we obtain a systematic approximation scheme, which practically implements the Kohn-Sham idea: the problem of interacting bosons is reduced to a single-particle system in a fictitious external potential. The Kohn-Sham potential, the density and the order parameter have to be determined self-consistently in a given order approximation.
Anna Okopińska
2023-01-28T19:32:44Z
http://arxiv.org/abs/2301.12281v1
# Kohn-Sham approximation scheme for an interacting Bose-condensed gas

###### Abstract

The grand canonical density functional theory for inhomogeneous systems of interacting bosons is developed in the effective action approach. The Legendre transform of the generating functional for Green's functions is used to define the effective action as a functional of both the particle density and the order parameter. Expanding the thermal effective action in powers of the Planck constant we obtain a systematic approximation scheme, which practically implements the Kohn-Sham idea: the problem of interacting bosons is reduced to a single-particle system in a fictitious external potential. The Kohn-Sham potential, the density and the order parameter have to be determined self-consistently in a given order approximation.

## I Introduction

Density Functional Theory (DFT) has nowadays become a method of choice in quantum chemistry and solid state physics [1; 2]. The theory is based on the exact theorem of Hohenberg and Kohn [3] that a functional exists, by minimization of which the density and other ground-state properties of the many-body system are completely determined. Practical applications are successfully developed using the idea of Kohn and Sham [4] to replace the interacting many-electron problem by an exactly equivalent problem of non-interacting particles moving in an appropriately chosen external potential. Solving the single-particle problem numerically is a standard task, all the difficulties being transferred to the construction of the Kohn-Sham potential. The rigorous definition of the density functional (DF) was given by Levy [6] and Lieb [7] in the constrained search approach. An extension to finite temperatures has been discussed by Mermin in the grand canonical ensemble [5]. The constrained search approach does not provide, however, an explicit method to construct the DF; therefore various physically motivated approximate forms have been guessed and discussed in many works, both for solids and for molecules [1; 2]. Several years ago, Fukuda et al. [8] provided a new formulation of DFT, using generating functionals with an external source \(J({\bf r},t)\) linearly coupled to the local composite operator \(\widehat{\Phi}^{+}({\bf r})\widehat{\Phi}({\bf r})\). The effective action, obtained as the Legendre transform of the generating functional for connected Green's functions, has been used to define the time-dependent DF in a way different from that developed by Runge and Gross [9]. Using the path integral representation, Fukuda et al. [8] were able to express the DF as a series in powers of the interaction strength, formulating diagrammatic rules for the coefficients [10; 11]. Extending the formalism to finite temperature, Valiev and Fernando [12] demonstrated that the imaginary-time effective action coincides with the Mermin grand canonical DF. Moreover, they have shown that the approximation scheme generated by the expansion in powers of the interaction strength can be regarded as an implementation of the idea of Kohn and Sham [4]. The leading-order approximation describes the thermal equilibrium of non-interacting particles in an unknown potential, which is determined by higher-order corrections to the effective action. An interesting modification of the approximation scheme for the DF in an effective field theory has been proposed in nuclear physics, with the effective coupling as the expansion parameter [13].
The application of this scheme to a dilute Fermi system in a harmonic trap demonstrated the convergence of densities and energies with increasing order of the calculation [13; 14]. An alternative scheme of gradually including interactions, motivated by the renormalization group [15], has been successfully developed [16]. In the case of bosons, the Hohenberg-Kohn theorem remains valid, but as argued by Griffin [17], the problem is complicated by the phenomenon of Bose-Einstein condensation, which takes place below the critical temperature. In the field-theoretic approach, this is attributed [18] to spontaneously broken symmetry, with the macroscopic wave function \(\Phi({\bf r})=<\widehat{\Phi}({\bf r})>\) playing the role of the order parameter, which determines the condensate density \(\rho_{c}({\bf r})=|\Phi({\bf r})|^{2}\). A proper extension of the Hohenberg-Kohn theorem to bosonic fields makes it necessary to consider a functional of both the particle density \(\rho({\bf r})\) and the order parameter \(\Phi({\bf r})\) [17]. However, because of difficulties in defining the Kohn-Sham reference system, the dependence on the order parameter has never been included in practical applications of DFT to bosons, and only approximate functionals depending on the total particle density have been discussed [19]. In this work, we show that the density functional for bosonic systems can be conveniently defined as the effective action of quantum field theory. We consider the connected generating functional \(W[j,J]\), which depends on two kinds of sources: \(J({\bf r},t)\), coupled to the composite density operator \(\widehat{\Phi}^{+}({\bf r})\widehat{\Phi}({\bf r})\), and \(j({\bf r},t)\), coupled to the elementary field \(\widehat{\Phi}({\bf r})\). The Legendre transform of \(W[j,J]\) with respect to both sources defines the effective action as a functional of the particle density \(\rho(t,{\bf r})\) and the order parameter \(\Phi(t,{\bf r})\). We formulate a systematic approximation scheme by expanding the effective action in powers of the Planck constant. In the non-condensed phase, the expansion reduces to one in powers of the interaction strength, where the leading-order Schrodinger equation describes the non-interacting gas subjected to an unknown potential. In the Bose-condensed phase, the leading-order approximation is also of the single-particle type, but it is given by the non-linear Gross-Pitaevskii equation with an additional external potential. In both cases the external potential is defined by higher-order contributions to the effective action and can be determined self-consistently in a given-order approximation. In this way a scheme implementing the Kohn-Sham idea emerges naturally in this approach, and the many-body effects are taken into account in a systematic manner. The scheme is formulated for spatially inhomogeneous systems, which is important for describing the properties of Bose-condensed gases in magnetic traps.
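Although the remainder of this paper is formal, the type of single-particle problem that arises at leading order in the condensed phase is easy to exhibit numerically. The following Python sketch finds the ground state of a one-dimensional Gross-Pitaevskii equation by imaginary-time split-step propagation, with the Kohn-Sham correction potential set to zero as a placeholder; all numerical values are illustrative assumptions in dimensionless units (\(\hbar=m=1\)).

```python
import numpy as np

n, L, g, dt = 512, 20.0, 1.0, 1e-3
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
v_trap = 0.5 * x**2                  # harmonic trap V_ext
v_ks = np.zeros_like(x)              # placeholder for the Kohn-Sham correction

phi = np.exp(-x**2)                  # initial guess for the order parameter
for _ in range(5000):
    # Strang splitting in imaginary time: half potential, full kinetic, half potential.
    phi = phi * np.exp(-0.5 * dt * (v_trap + v_ks + g * np.abs(phi) ** 2))
    phi = np.fft.ifft(np.exp(-dt * k**2 / 2) * np.fft.fft(phi))
    phi = phi * np.exp(-0.5 * dt * (v_trap + v_ks + g * np.abs(phi) ** 2))
    phi /= np.sqrt(np.trapz(np.abs(phi) ** 2, x))   # restore normalization

print("condensate density at the origin:", np.abs(phi[n // 2]) ** 2)
```

In a given-order Kohn-Sham calculation, `v_ks` would be recomputed from the higher-order contributions to the effective action and the loop repeated until self-consistency.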
Section V discusses the case of thermal equilibrium, and the Kohn-Sham approximation scheme for interacting bosons is described in Section VI. Our conclusions are summarized in Section VII. ## II Density functional in the constrained search approach The quantum many-body system is usually described by the second quantized Hamiltonian \[\widehat{H} = \widehat{T}+\widehat{U}+\widehat{V}_{ext}=-\int\!d^{3}r\frac{\hbar^{2}}{2m}\widehat{\Phi}^{\dagger}({\bf r})\nabla^{2}\widehat{\Phi}({\bf r}) \tag{1}\] \[+ \int\!d^{3}rV_{ext}({\bf r})\widehat{\Phi}^{\dagger}({\bf r})\widehat{\Phi}({\bf r})+\frac{1}{2}\!\int\!d^{3}r\int\!d^{3}r^{\prime}\widehat{\Phi}^{\dagger}({\bf r})\widehat{\Phi}^{\dagger}({\bf r^{\prime}})U({\bf r},{\bf r^{\prime}})\widehat{\Phi}({\bf r^{\prime}})\widehat{\Phi}({\bf r}),\] with the inter-particle interaction \(U({\bf r}_{i},{\bf r}_{j})\), and the external potential \(V_{ext}({\bf r}_{i})\) characterizing the considered system (the potential of nuclei for a molecule or a solid, the potential of magnetic forces for a trapped atomic gas, etc.). This form of the Hamiltonian applies both to fermions and bosons; the different particle statistics is encoded in the appropriate commutation relations for the field operator \(\widehat{\Phi}({\bf r})\). The rigorous definition of the DF, provided by the constrained search approach [6; 7], can be extended to systems at fixed temperature \(T=\frac{1}{\beta}\) [1]. In the grand canonical ensemble, the states of the system are represented by Fock space density operators: \[\widehat{\Gamma}\!=\!\sum_{N}\sum_{i=1}^{\infty}p_{N}^{i}|\Phi_{Ni}\rangle\langle\Phi_{Ni}| \tag{2}\] with \(p_{N}^{i}\) being the probability of finding the system in the \(N-\)particle state \(|\Phi_{Ni}\rangle\). Since the Hamiltonian (1) commutes with the number operator \(\widehat{N}=\int\!d^{3}r\widehat{\rho}({\bf r})=\int\!d^{3}r\widehat{\Phi}^{\dagger}({\bf r})\widehat{\Phi}({\bf r})\), and the total number of particles \(N\) is conserved, one introduces a chemical potential \(\mu\) with a value adjusted such that the average number of particles equals \(N\). The grand canonical functional of the state, defined as \[\omega^{state}[\widehat{\Gamma}]\!=\!Tr\left\{\widehat{\Gamma}(\widehat{H}-\mu\widehat{N}+\frac{1}{\beta}ln\widehat{\Gamma})\right\}, \tag{3}\] reaches a minimum for the equilibrium state, \(\widehat{\Gamma}=\widehat{\Gamma}_{eq}\), and its value determines the grand canonical potential of the system \[\omega(\beta,\mu)=\omega^{state}[\widehat{\Gamma}_{eq}]=\inf_{\widehat{\Gamma}}Tr\left\{\widehat{\Gamma}(\widehat{H}-\mu\widehat{N}+\frac{1}{\beta}ln\widehat{\Gamma})\right\}. \tag{4}\] The search for the minimum can be split into two steps. First, the constrained search is performed over all states \(\widehat{\Gamma}[\rho]\) with the expectation value of the density operator equal to the prescribed function \(\rho({\bf r})\), as defined by \[Tr\left[\widehat{\Gamma}\widehat{\rho}\right]=\sum_{N}\sum_{i=1}^{\infty}p_{N}^{i}\langle\Phi_{Ni}|\widehat{\rho}|\Phi_{Ni}\rangle=\rho({\bf r}), \tag{5}\] and later the obtained functional is minimized over all possible \(\rho(\mathbf{r})\). This allows Eq. 4 to be represented as \[\omega(\beta,\mu)\!=\!\inf_{\rho(\mathbf{r})}\!\inf_{\widehat{\Gamma}\to\rho}Tr\left\{\widehat{\Gamma}[\rho](\widehat{T}+\widehat{U}+\widehat{V}_{ext}\!-\!\mu\widehat{N}+\frac{1}{\beta}ln\widehat{\Gamma}[\rho])\right\}\!=\!\inf_{\rho(\mathbf{r})}\left[F[\rho]\!+\!\int d^{3}r\rho(\mathbf{r})V_{ext}(\mathbf{r})\right], \tag{6}\]
where the universal functional \[F[\rho]\!=\!\inf_{\widehat{\Gamma}\to\rho}Tr\left\{\widehat{\Gamma}[\rho](\widehat{T}+\widehat{U}-\mu\widehat{N}+\frac{1}{\beta}ln\widehat{\Gamma}[\rho])\right\} \tag{7}\] does not depend on the external potential. The functional \[\Omega[\rho]=F[\rho]+\int d^{3}rV_{ext}(\mathbf{r})\rho(\mathbf{r}) \tag{8}\] provides a rigorous construction of the grand canonical DF, introduced by Mermin [5]. Eq. 6 clearly shows that \(\Omega[\rho]\) determines the equilibrium density and the grand canonical potential of the interacting system by the minimum principle. The infimum is searched over the class of functions which may be obtained from a Fock space density matrix by (5). All functions fulfilling the conditions \[\rho(\mathbf{r})\geq 0,\ \ \mbox{and}\ \int d^{3}r\left|\nabla\rho^{1/2}(\mathbf{r})\right|^{2}<\infty, \tag{9}\] belong to this class, since any function of this type, normalized to \(N\), can be obtained from an \(N-\)particle density matrix [2]. It was observed by Lieb [7] that, regarding the grand potential (3) as a functional of an arbitrary one-particle potential \(V(\mathbf{r})\), \[\omega[V]\!=\!\inf_{\widehat{\Gamma}}Tr\left\{\widehat{\Gamma}(\widehat{T}+\widehat{U}+\widehat{V}\!-\!\mu\widehat{N}+\frac{1}{\beta}ln\widehat{\Gamma})\right\}, \tag{10}\] the universal DF (7) can be represented by the Legendre transform \[F[\rho]=\sup_{V(\mathbf{r})}\left[\omega[V]-\int d^{3}rV(\mathbf{r})\rho(\mathbf{r})\right], \tag{11}\] where the supremum is taken over all reasonable functions \(V(\mathbf{r})\) at fixed \(\rho(\mathbf{r})\). Knowing \(F[\rho]\), the Mermin functional \(\Omega[\rho]\) can be easily obtained via Eq. 8. For our purposes, it is more convenient to represent the thermal DF directly in terms of the Legendre transformation \[\Omega[\rho]=\sup_{J(\mathbf{r})}\left[W[J]-\int d^{3}rJ(\mathbf{r})\rho(\mathbf{r})\right], \tag{12}\] where \[W[J]=\omega[V]\!=\omega[V_{ext}+J] \tag{13}\] is the grand potential (3) regarded as a functional of a new functional variable \(J(\mathbf{r})=V(\mathbf{r})-V_{ext}(\mathbf{r})\). Observe that \(J(\mathbf{r})\) plays the role of a fictitious external potential added to the potential \(V_{ext}(\mathbf{r})\) that actually acts in the considered system. The formula (12) explains the key idea of DFT through the Legendre concept of switching between different independent variables: the dependence on the fictitious potential \(J(\mathbf{r})\) is replaced by the dependence on the density distribution \(\rho(\mathbf{r})\) [7; 20]. The local Legendre transform (12) is a functional generalization of the transformation from the chemical potential \(\mu\) to the number of particles \(N\). Unfortunately, neither (7) nor (12) presents a useful way to calculate \(\Omega[\rho]\) in practice. A more suitable form can be obtained if the functional \(W[J]\) has appropriate differentiability properties and the supremum in Eq. 12 occurs at \[\rho(\mathbf{r})=\frac{\delta W}{\delta J(\mathbf{r})}. \tag{14}\] With the solution of the above equation expressed as a functional of the density, \(J[\rho]\), the Legendre transform is obtained just by substitution: \[\Omega[\rho]=\left[W[J]-\int d^{3}rJ(\mathbf{r})\rho(\mathbf{r})\right]_{J=J[\rho]}. \tag{15}\]
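To make this last analogy explicit (the worked example is ours, added for illustration): the ordinary thermodynamic pair \((\mu,N)\) transforms in exactly the same pattern, \[F(\beta,N)=\sup_{\mu}\left[\omega(\beta,\mu)+\mu N\right],\qquad N=-\frac{\partial\omega}{\partial\mu},\qquad\mu=\frac{\partial F}{\partial N},\] where the concavity of \(\omega\) in \(\mu\) guarantees that the supremum picks out the physical particle number; the transform (12) repeats this exchange of variables pointwise, trading \(J(\mathbf{r})\) for \(\rho(\mathbf{r})\).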
Although not shown explicitly, it has to be borne in mind that the Mermin DF depends on the chemical potential. The value of \(\mu\) is determined by the relation \[N=-\frac{\delta\Omega}{\delta\mu}, \tag{16}\] which ensures the average number of particles to be equal to \(N\). ## III Generating functionals for Green's functions Full information on the quantum system requires the knowledge of all Green's functions, and can be conveniently encoded in generating functionals, which describe the system probed by external classical sources. Here we consider the generating functional in the form \[Z[j,J] = \left\langle Te^{-\frac{i}{\hbar}\int dt\left(\widehat{H}-\mu\int d^{3}r\,\widehat{\Phi}^{\dagger}(\mathbf{r})\widehat{\Phi}(\mathbf{r})+\int d^{3}r\,j^{*}(t,\mathbf{r})\widehat{\Phi}(\mathbf{r})+\int d^{3}r\,j(t,\mathbf{r})\widehat{\Phi}^{\dagger}(\mathbf{r})+\int d^{3}r\,\widehat{\Phi}^{\dagger}(\mathbf{r})J(t,\mathbf{r})\widehat{\Phi}(\mathbf{r})\right)}\right\rangle, \tag{17}\] where the expectation value is taken in the vacuum state, and \(T\) denotes the time-ordering operator. Besides the complex source \(j(t,\mathbf{r})\), linearly coupled to the elementary quantum field \(\widehat{\Phi}(\mathbf{r})\), a real source \(J(t,\mathbf{r})\), coupled to the density operator \(\widehat{\Phi}^{\dagger}(\mathbf{r})\widehat{\Phi}(\mathbf{r})\), has been introduced for more efficient probing of the system. The above functional can be conveniently represented [21] as a path integral \[Z[j,J]=\int\!D\Phi D\Phi^{*}\,e^{\frac{i}{\hbar}\int dtd^{3}r\left[L[\Phi]+\mu\Phi^{*}(t,\mathbf{r})\Phi(t,\mathbf{r})-j^{*}(t,\mathbf{r})\Phi(t,\mathbf{r})-j(t,\mathbf{r})\Phi^{*}(t,\mathbf{r})-\Phi^{*}(t,\mathbf{r})J(t,\mathbf{r})\Phi(t,\mathbf{r})\right]}, \tag{18}\] where the Lagrangian density, derived from the Hamiltonian (1), reads \[L[\Phi] = i\hbar\Phi^{*}(t,\mathbf{r})\frac{\partial\Phi(t,\mathbf{r})}{\partial t}+\frac{\hbar^{2}}{2m}\Phi^{*}(t,\mathbf{r})\nabla^{2}\Phi(t,\mathbf{r}) \tag{19}\] \[- V_{ext}(\mathbf{r})\Phi^{*}(t,\mathbf{r})\Phi(t,\mathbf{r})-\frac{1}{2}\!\int\!d^{3}r^{\prime}\Phi^{*}(t,\mathbf{r})\Phi^{*}(t,\mathbf{r}^{\prime})U(\mathbf{r},\mathbf{r}^{\prime})\Phi(t,\mathbf{r}^{\prime})\Phi(t,\mathbf{r}).\] In the general case of a time-dependent system, the path integral for the generating functional (18) is defined within the Schwinger-Keldysh formalism [22] on the three-branch contour in the complex-time plane \(\{(-\infty,+\infty),(+\infty,-\infty),(-\infty,-\infty+i\beta)\}\). The boundary conditions on the fields are periodic in imaginary time, with a period being the inverse temperature, \(\beta=\frac{1}{T}\). Although almost all attention in this work is given to equilibrium applications, we keep the formulation general as long as possible, having in mind possible studies of time-dependent issues. The generating functional for connected Green's functions, \(W[j,J]\), is defined by \[Z[j,J]=e^{\frac{i}{\hbar}W[j,J]}. \tag{20}\] The background field in the presence of external sources can be obtained as \[\Phi(x)=\frac{\delta W}{\delta j^{*}(x)}=<\widehat{\Phi}(x)>_{j,J}\ \ \ \mbox{and}\ \ \Phi^{*}(x)=\frac{\delta W}{\delta j(x)}=<\widehat{\Phi}^{\dagger}(x)>_{j,J}, \tag{21}\] and the total density \[n(x)=\frac{\delta W}{\delta J(x)}=<\widehat{\Phi}^{\dagger}(x)\widehat{\Phi}(x)>_{j,J}=<\widehat{\rho}(x)>_{j,J}=\hbar\rho(x)+|\Phi(x)|^{2} \tag{22}\] consists of the uncondensed particles density \(\rho(x)\) and the condensate density \(n_{cond}(x)=|\Phi(x)|^{2}\).
Here and in the following \(x\) stands for \((t,\mathbf{r})\). The effective action for the composite density operator is defined as the double Legendre transform \[\Gamma[\Phi,\rho]=W[j,J]-\int\!\Phi^{*}(x)j(x)\,dx-\int\!j^{*}(x)\Phi(x)\,dx-\int\!J(x)\left(\hbar\rho(x)+|\Phi(x)|^{2}\right)\,dx \tag{23}\] with the sources \(j(x)\) and \(J(x)\) eliminated in favor of \(\Phi(x)\) and \(\rho(x)\) with the aid of Eqs. 21 and 22. The above functional contains full information on the system in terms of \(\rho\) and \(\Phi\), corresponding to external sources \(j\) and \(J\). Due to the properties of the Legendre transform, the effective action fulfils \[\frac{\delta\Gamma}{\delta\Phi(x)}=-j^{*}(x),\ \ \frac{\delta\Gamma}{\delta\Phi^{*}(x)}=-j(x)\ \ \mbox{and}\ \ \frac{\delta\Gamma}{\delta\rho(x)}=-\hbar J(x). \tag{24}\] The original system is recovered by setting the sources to zero; its states can thus be determined by solving the stationarity conditions \[\frac{\delta\Gamma}{\delta\Phi(x)}=\frac{\delta\Gamma}{\delta\Phi^{*}(x)}=0 \tag{25}\] and \[\frac{\delta\Gamma}{\delta\rho(x)}=0. \tag{26}\] Since the interaction potential does not depend on time, a time-independent solution can be found, \(\Phi_{eq}(x)=\Phi_{eq}(\mathbf{r})\) and \(\rho_{eq}(x)=\rho_{eq}(\mathbf{r})\), which corresponds to the equilibrium state. Let us observe that the conventionally used effective action \[\Gamma[\Phi]=W[j,J=0]-\int\!\Phi^{*}(x)j(x)\,dx-\int\!j^{*}(x)\Phi(x)\,dx \tag{27}\] can be obtained as \(\Gamma[\Phi,\rho]\) at \(J(x)=0\), or equivalently as \(\Gamma[\Phi]=\Gamma[\Phi,\rho_{0}[\Phi]]\), where \(\rho_{0}\) is a solution of (26). Both the conventional effective action, \(\Gamma[\Phi]\), and the effective action for the composite density operator, \(\Gamma[\Phi,\rho]\), contain full information on the quantum field theory, but considering \(\Gamma[\Phi,\rho]\) as a functional of two independent variables provides easier access to some physical observables. Both effective actions can be used to generate proper vertices, which are the simplest one-particle irreducible Green's functions, directly related to the excitations of the system. Proper vertices of elementary fields, defined through differentiation of \(\Gamma[\Phi]\), can also be obtained as derivatives of \(\Gamma[\Phi,\rho]\) taken at the equilibrium values of the order parameter and density.
Especially useful is the second derivative \[\Gamma(x,y)=\left(\begin{array}{cc}\Gamma_{\Phi\Phi^{*}}(x,y)&\Gamma_{\Phi^{*}\Phi^{*}}(x,y)\\ \Gamma_{\Phi\Phi}(x,y)&\Gamma_{\Phi^{*}\Phi}(x,y)\end{array}\right)=\left(\begin{array}{cc}\frac{\delta^{2}\Gamma}{\delta\Phi(x)\delta\Phi^{*}(y)}\big{|}_{\Phi_{eq},\rho_{eq}}&\frac{\delta^{2}\Gamma}{\delta\Phi^{*}(x)\delta\Phi^{*}(y)}\big{|}_{\Phi_{eq},\rho_{eq}}\\ \frac{\delta^{2}\Gamma}{\delta\Phi(x)\delta\Phi(y)}\big{|}_{\Phi_{eq},\rho_{eq}}&\frac{\delta^{2}\Gamma}{\delta\Phi^{*}(x)\delta\Phi(y)}\big{|}_{\Phi_{eq},\rho_{eq}}\end{array}\right), \tag{28}\] which fulfils \[\int\Gamma(x,y)G(y,z)dy=-\delta(x,z), \tag{29}\] where the full propagator \(G(x,y)\) is given by the connected Green's function \[G(x,y)=\left(\begin{array}{cc}G_{\Phi\Phi^{*}}(x,y)&G_{\Phi^{*}\Phi^{*}}(x,y)\\ G_{\Phi\Phi}(x,y)&G_{\Phi^{*}\Phi}(x,y)\end{array}\right)=\left(\begin{array}{cc}\frac{\delta^{2}W}{\delta j(x)\delta j^{*}(y)}\big{|}_{j=J=0}&\frac{\delta^{2}W}{\delta j^{*}(x)\delta j^{*}(y)}\big{|}_{j=J=0}\\ \frac{\delta^{2}W}{\delta j(x)\delta j(y)}\big{|}_{j=J=0}&\frac{\delta^{2}W}{\delta j^{*}(x)\delta j(y)}\big{|}_{j=J=0}\end{array}\right). \tag{30}\] Zero modes of \(\Gamma(x,y)\), corresponding to the poles of the propagator \(G(x,y)\), thus describe the one-particle excitations. The functional \(\Gamma[\Phi,\rho]\) offers the additional possibility of taking functional derivatives with respect to the density, which can be useful in studying collective excitations. For instance, the density fluctuations are described by the two-point composite vertex, given by the second derivative \[\chi(x,y)=\left.\frac{\delta^{2}\Gamma}{\delta\rho(x)\delta\rho(y)}\right|_{\Phi_{eq},\rho_{eq}}. \tag{31}\] ## IV Expansion of effective action In the case of interacting particles, the effective action functionals cannot be calculated exactly, so one resorts to approximations. It is advantageous to formulate an approximation scheme for the effective action functional, which makes it possible to generate consistent sets of approximate Green's functions through functional differentiation. A natural approximation scheme emerges if \(\Gamma[\Phi,\rho]\) can be represented as a series in powers of a conveniently chosen parameter. Because of the implicit definition (23), the expansion of \(\Gamma[\Phi,\rho]\) must be obtained in three steps: expanding \(Z[j,J]\) in powers of the chosen parameter, deriving the expansion for \(W[j,J]=\ln Z[j,J]\), and performing the Legendre transform order by order in the chosen parameter. Expansions of effective actions were obtained in this way for the cases when only one source is present. Expanding the conventional effective action \(\Gamma[\Phi]\), being the Legendre transform of \(W[j,J=0]\), in powers of the Planck constant results in the well-known loop expansion, represented by Feynman diagrams with a \(\Phi\)-dependent propagator and vertices [21; 23]. The expansion of the density-dependent effective action \(\Gamma[\rho]\), being the Legendre transform of \(W[j=0,J]\), in powers of the interaction strength has been derived by Fukuda et al. [8]. The diagrammatic representation of \(\Gamma[\rho]\) has been established in terms of the propagator, which is related to \(\rho(x)\) via an implicit relation [10; 11].
Later, considering the effective action for composite operators \(\widehat{\Phi}^{2}({\bf r})\) and \(\widehat{\Phi}^{4}({\bf r})\), we have shown [24] that, by using the Planck constant as the parameter of expansion, Fukuda's approach can be extended to the case when the effective action depends on several functional variables. Now, we shall exploit this idea to derive a diagrammatic representation of \(\Gamma[\Phi,\rho]\), the double Legendre transform of \(W[j,J]\) for a non-relativistic system of interacting bosons. The double Legendre transform for the effective action (23) can be performed sequentially. First, we perform the Legendre transform with respect to the source \(j(x)\): \[\Gamma_{1PI}[\Phi,J]=W[j,J]-\int\!\Phi^{*}(x)j(x)\,dx-\int\!j^{*}(x)\Phi(x)dx, \tag{32}\] calculating the integral (18) by the steepest-descent method. As stressed in [21], the steepest-descent method does not strictly yield a semi-classical expansion in powers of \(\hbar\), since the Planck constant appears not only in the parameter \(\frac{1}{\hbar}\) multiplying the action in the exponent in (18) but also in the Lagrangian (19). Choosing the strategy routinely used in relativistic QFT, which consists in keeping the term \(\frac{1}{\hbar}\) in the exponent but setting \(\hbar=1\) in the Lagrangian, we obtain the steepest-descent expansion in the form \[Z[j,J]=\sum_{k=0}^{\infty}\hbar^{k}Z^{(k)}[j,J]. \tag{33}\] This yields the series representation of the connected generating functional \[W[j,J]=\ln Z[j,J]=\sum_{k=0}^{\infty}\hbar^{k}W^{(k)}[j,J], \tag{34}\] which can be differentiated to obtain \[\Phi[j,J]=\sum_{k=0}^{\infty}\hbar^{k}\Phi^{(k)}[j,J]=\sum_{k=0}^{\infty}\hbar^{k}\frac{\delta W^{(k)}[j,J]}{\delta j}. \tag{35}\] The series for the background field can be explicitly inverted order by order in \(\hbar\) to the form \[j[\Phi,J]=\sum_{k=0}^{\infty}\hbar^{k}j^{(k)}[\Phi,J], \tag{36}\] which enables \(j(x)\) to be eliminated in favor of \(\Phi(x)\) in the Legendre transform (32), leading to the loop expansion formula \[\Gamma_{1PI}[\Phi,J]=\sum_{k=0}^{\infty}\hbar^{k}\Gamma^{(k)}[\Phi,J]=S[\Phi]+\int J(x)|\Phi(x)|^{2}dx+\frac{\hbar}{2}TrLnG^{-1}+\ldots, \tag{37}\] where the omitted terms of order \(\hbar^{2}\) and higher were given by diagrams that are not rendered here, with the line denoting the propagator functional \(G_{J}(x,y)\), the inverse of which is defined by \[G_{J}^{-1}(x,y)=\begin{pmatrix}\Big{(}i\partial_{t}\!+\!\frac{\nabla^{2}}{2m}+V_{J}(x)\Big{)}\delta(x\!-\!y)\!+\!2\Phi^{*}(x)U(x,y)\Phi(y)&\Phi(x)U(x,y)\Phi(y)\\ \Phi^{*}(x)U(x,y)\Phi^{*}(y)&\Big{(}\!-i\partial_{t}\!+\!\frac{\nabla^{2}}{2m}+V_{J}(x)\Big{)}\delta(x\!-\!y)\!+\!2\Phi^{*}(x)U(x,y)\Phi(y)\end{pmatrix}, \tag{38}\] where the auxiliary potential \(V_{J}(x)=-\mu+V_{ext}(x)+J(x)\). Dots represent Hugenholtz vertices, which depend on the interaction potential \(U({\bf r}_{i},{\bf r}_{j})\) and the background field \(\Phi(x)\). The diagrams in the above expansion should be interpreted according to the rules of non-equilibrium theory on the three-branch contour in the complex-time plane. The next step consists in eliminating \(J(x)\) in favor of \(\rho(x)\), while performing the second Legendre transform \[\Gamma[\Phi,\rho]=\Gamma_{1PI}[\Phi,J]-\int(\hbar\rho(x)+|\Phi(x)|^{2})J(x)dx. \tag{39}\]
After substituting the loop expansion (37) into the relation \[|\Phi|^{2}+\hbar\rho\!=\!\frac{\delta\Gamma_{1PI}}{\delta J} \tag{40}\] one obtains the power series representation of the density, \[\hbar\rho[\Phi,J]=\sum_{k=0}^{\infty}\hbar^{k}\rho^{(k)}[\Phi,J]=\frac{\hbar}{2}\,trG_{J}(x,x)+O(\hbar^{2}), \tag{41}\] whose higher-order coefficients, together with the inverted series \(J[\Phi,\rho]=\sum_{k}\hbar^{k}J^{(k)}[\Phi,\rho]\), were given in the source by diagrams that are not rendered here. In contrast to the series (35) for the background field, the leading-order relation cannot be solved for \(J(x)\). The best thing one can do is to keep a definition of the functional \(J^{(0)}[\Phi,\rho]\) in an implicit form \[\rho(x)=\frac{1}{2}trG_{J^{(0)}}(x,x), \tag{44}\] and to determine the higher-order coefficients \(J^{(k)}\) as functionals of \(J^{(0)}\), which enables us to perform the Legendre transform order by order in \(\hbar\). For simplicity, from here on we take the interaction potential to be local, \[U(\mathbf{r}_{i},\mathbf{r}_{j})=g\delta(\mathbf{r}_{i}-\mathbf{r}_{j}), \tag{45}\] which is usually assumed in describing the Bose condensed gas at very low energies, with \(g=\frac{4\pi\hbar^{2}a}{m}\) being related to the scattering length \(a\) [25]. A toy numerical illustration of the implicit relation (44) is sketched below.
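As an illustration of what keeping \(J^{(0)}\) implicit means in practice, the following toy script (ours, not part of the original text) inverts a relation of the type (44) numerically in the simplest setting: a uniform, noninteracting gas above the condensation point, where the density for a constant Kohn-Sham potential \(v=V_{J^{(0)}}>0\) reduces to the textbook expression \(\rho=\lambda_{T}^{-3}g_{3/2}(e^{-\beta v})\) with \(\lambda_{T}\) the thermal wavelength.

```python
import numpy as np

# Toy illustration (ours) of "keeping J^(0) implicit", Eq. (44): for a
# uniform, noninteracting gas above the condensation point the relation
# reduces to rho = lam**-3 * g_{3/2}(exp(-beta * v)) for a constant
# potential v = V_{J^(0)} > 0.  Given a target density we invert the
# relation for v by bisection, exploiting its monotonicity in v.

def g32(z, terms=2000):
    """Polylogarithm Li_{3/2}(z) from its defining series (0 <= z < 1)."""
    l = np.arange(1, terms + 1)
    return float(np.sum(z**l / l**1.5))

def density(v, beta=1.0, lam=1.0):
    """Density of the uniform ideal Bose gas for a constant potential v."""
    return g32(np.exp(-beta * v)) / lam**3

def solve_v(rho_target, lo=1e-8, hi=50.0, tol=1e-10):
    """Bisection: density(v) decreases monotonically as v grows."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if density(mid) > rho_target:
            lo = mid        # density still too high: raise the potential
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

rho = 1.0                   # target density; must stay below g32(1) ~ 2.6
v = solve_v(rho)
print("v =", round(v, 6), " check density =", round(density(v), 6))
```

In the interacting, inhomogeneous case the same logic applies, except that the left-hand side of (44) is a functional of the full propagator and must be handled self-consistently, as discussed in Section VI.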
In this case, the Hugenholtz vertices are reduced to the local ones: the \(3-\)point vertices \(-2g\Phi(x)\) and \(-2g\Phi^{*}(x)\), and the \(4-\)point vertex \(-2g\), and the diagrammatic representation of the effective action is obtained in the form \[\Gamma[\Phi,\rho]=S[\Phi]+\hbar\int\rho(x)J^{(0)}(x)dx+\frac{\hbar}{2}TrLnG^{-1}+\frac{3\hbar^{2}}{2}g\int\rho^{2}(x)dx-3\hbar^{2}D_{1}-27\hbar^{3}D_{2}+81\hbar^{3}D_{3}-\frac{3\hbar^{3}}{4}D_{4}+O(\hbar^{4}), \tag{46}\] where \(D_{1},\ldots,D_{4}\) stand for the two- and three-loop diagrams of the original, which are not rendered here, and the inverse composite propagator (drawn as a thick line in those diagrams) is \[\chi(x,y)=-\frac{\partial J^{(0)}(x)}{\partial\rho(y)}.\] The line denotes the propagator \(G_{J}(x,y)\) (38) taken at \(J=J_{0}[\Phi,\rho]\), which is implicitly given by (44). In fact, the implicit form of this relation is the advantage of the method of composite operators, because more information is included in the lowest order. It can be observed that for vanishing background field, \(\Phi=0\), the loop expansion (46) would be reduced to an expansion of \(\Gamma[\rho]\) in powers of the interaction strength, similar to that obtained by Fukuda for fermions [8]. The lowest order of the time-dependent effective action is the classical action \[\Gamma^{(0)}[\Phi,\rho]=S[\Phi]=\int dt\,d^{3}r\left[\Phi^{*}(t,\mathbf{r})\left(i\frac{\partial}{\partial t}+\frac{\nabla^{2}}{2m}-V_{ext}(\mathbf{r})+\mu\right)\Phi(t,\mathbf{r})-\frac{g}{2}|\Phi(t,\mathbf{r})|^{4}\right], \tag{47}\] which does not depend on \(\rho\), and trivially fulfills (26). In this approximation, the stationarity equation (25) yields the time-dependent Gross-Pitaevskii equation \[\frac{\delta\Gamma^{(0)}}{\delta\Phi^{*}(t,\mathbf{r})}=\left(i\frac{\partial}{\partial t}+\frac{\nabla^{2}}{2m}-V_{ext}(\mathbf{r})+\mu-g|\Phi(t,\mathbf{r})|^{2}\right)\Phi(t,\mathbf{r})=0. \tag{48}\] In order to determine corrections to the above equation and other time-dependent characteristics, it would be necessary to calculate higher-order diagrams of \(\Gamma[\Phi,\rho]\) by means of the Schwinger-Keldysh rules. We will not develop this point here; in the following we restrict our discussion to the equilibrium case, discussing a systematic approximation scheme for the thermal DF. ## V Thermal density functional In the case of thermal equilibrium, the path integral formalism becomes greatly simplified, since only the branch along the imaginary axis of the Schwinger-Keldysh contour matters. Changing to imaginary time \(\tau=it\) reduces the grand canonical generating functional to the Matsubara integral, \[Z[j,J]=\int\!D\Phi D\Phi^{*}\,e^{-\frac{1}{\hbar}\int dx[L_{E}[\Phi,\Phi^{*}]-\mu\Phi^{*}(x)\Phi(x)+j^{*}(x)\Phi(x)+j(x)\Phi^{*}(x)+\Phi^{*}(x)J(x)\Phi(x)]}, \tag{49}\] where \(x\) stands for \((\tau,{\bf r})\), and the integral over \(\tau\) is taken on the interval \((0,\beta)\), as the functions are periodic in \(\tau\). The Wick-rotated Lagrangian density takes the form \[L_{E}[\Phi] = \Phi^{*}(\tau,{\bf r})\left(\hbar\frac{\partial}{\partial\tau}-\frac{\hbar^{2}}{2m}\nabla^{2}+V_{ext}({\bf r})\right)\Phi(\tau,{\bf r}) \tag{50}\] \[+ \frac{1}{2}\!\int\!d^{3}r^{\prime}\,\Phi^{*}(\tau,{\bf r})\Phi^{*}(\tau,{\bf r}^{\prime})U({\bf r},{\bf r}^{\prime})\Phi(\tau,{\bf r}^{\prime})\Phi(\tau,{\bf r}).\] For studying the equilibrium properties of the system, it is sufficient to consider time-independent generating functionals.
The functional \[w[j,J]=-\left.\frac{1}{\beta}W[j,J]\right|_{\begin{subarray}{c}j=j({\bf r})\\ J=J({\bf r})\end{subarray}} \tag{51}\] represents the grand canonical potential of the system being probed by the time-independent sources \(j({\bf r})\) and \(J({\bf r})\). In this case, the background field and density, given respectively by \[\Phi({\bf r})=\frac{\delta w}{\delta j^{*}({\bf r})},\ \ \Phi^{*}({\bf r})=\frac{\delta w}{\delta j({\bf r})},\ \ \mbox{and}\ \ \frac{\delta w}{\delta J({\bf r})}=\hbar\rho({\bf r})+|\Phi({\bf r})|^{2}, \tag{52}\] are also time-independent, and the effective action can be used to define the thermal density functional \[\Omega[\Phi,\rho]=-\left.\frac{1}{\beta}\Gamma[\Phi,\rho]\right|_{\begin{subarray}{c}\Phi=\Phi({\bf r})\\ \rho=\rho({\bf r})\end{subarray}}. \tag{53}\] Its expansion in powers of \(\hbar\) follows from (46) after the Wick rotation; the intermediate formulas (54)-(58), which express the loop term and the diagrammatic contributions through the thermal propagator \({\cal G}_{J}(\omega_{n},{\bf r},{\bf r}^{\prime})\), are given in the source by diagrams that are not rendered here, where \(\omega_{n}=\frac{2\pi n}{\beta}\) is the \(n-\)th Matsubara frequency, and the vertex labeled \((\omega_{n},\mathbf{r})\) implies the combined sum and integral \(\sum_{n=-\infty}^{\infty}\int d^{3}r.\) The auxiliary potential is given by \(V_{J^{(0)}}(\mathbf{r})=-\mu+V_{ext}(\mathbf{r})+J^{(0)}(\mathbf{r})\) with the functional \(J^{(0)}[\Phi,\rho]\) implicitly defined by \[\rho(\mathbf{r})=\frac{1}{2\beta}\sum_{n=-\infty}^{\infty}tr\mathcal{G}_{J^{(0)}}(\omega_{n},\mathbf{r},\mathbf{r}). \tag{59}\] ## VI Kohn-Sham approximation scheme Now, we construct a systematic approximation scheme for bosonic fields from the expansion of the thermal DF, proceeding in a way analogous to that of Valiev and Fernando [12]. They established the Kohn-Sham approximation scheme for fermions using the effective action \(\Gamma[\rho]\) with the coupling constant as the expansion parameter. In the case of bosons, the background fields do not necessarily vanish, so we have to consider the effective action \(\Gamma[\Phi,\rho]\) and its expansion in powers of \(\hbar\) (46). The \(K-\)th order approximation at the temperature \(\frac{1}{\beta}\) is obtained from the thermal DF series truncated at the \(K\)-th order \[\Omega^{(K)}[\Phi,\rho]=\sum_{k=0}^{K}\hbar^{k}\Omega^{(k)}[\Phi,\rho]. \tag{60}\] The approximate values of the order parameter and density are determined by the stationarity conditions \[\frac{\delta\Omega^{(K)}}{\delta\Phi(\mathbf{r})}=\frac{\delta\Omega^{(K)}}{\delta\Phi^{*}(\mathbf{r})}=0 \tag{61}\] and \[\frac{\delta\Omega^{(K)}}{\delta\rho(\mathbf{r})}=0, \tag{62}\] and the chemical potential is fixed by \[N=-\frac{\delta\Omega^{(K)}}{\delta\mu}. \tag{63}\]
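As a brief orientation (this aside is ours, not part of the original text): for a single mode of energy \(\varepsilon\), the bosonic Matsubara sums entering traces such as (59) can be performed in closed form, \[\frac{1}{\beta}\sum_{n=-\infty}^{\infty}\frac{e^{i\omega_{n}0^{+}}}{i\omega_{n}-\varepsilon}=-n_{B}(\varepsilon)=-\frac{1}{e^{\beta\varepsilon}-1},\qquad\omega_{n}=\frac{2\pi n}{\beta},\] so that, up to the sign and ordering conventions adopted for \({\cal G}\), frequency sums of this type resum into Bose occupation factors, and for a noninteracting system Eq. 59 reduces to populating the single-particle levels according to the Bose distribution.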
The zero-th order approximation to the thermal DF, \[\Omega^{(0)}[\Phi,\rho]=\int\!d^{3}r\left[\Phi^{*}(\mathbf{r})\left(-\frac{\nabla^{2}}{2m}+V_{ext}(\mathbf{r})-\mu\right)\Phi(\mathbf{r})+\frac{g}{2}|\Phi(\mathbf{r})|^{4}\right], \tag{64}\] yields the time-independent Gross-Pitaevskii equation \[\frac{\delta\Omega^{(0)}}{\delta\Phi^{*}(\mathbf{r})}=\left(-\frac{\nabla^{2}}{2m}+V_{ext}(\mathbf{r})-\mu+g|\Phi(\mathbf{r})|^{2}\right)\Phi(\mathbf{r})=0, \tag{65}\] and the constraint (63) takes the form \[N=-\frac{\delta\Omega^{(0)}}{\delta\mu}=\int\!d^{3}r\left|\Phi(\mathbf{r})\right|^{2}. \tag{66}\] This means that in this approximation the total density is equal to the condensate density, which is consistent with the absence of contributions to the particle density at zeroth order. One has to stress that \(\Omega^{(0)}[\Phi,\rho]\) does not include any temperature corrections, and can be regarded only as an approximation to the zero-temperature DF, which describes full condensation into the ground state. The first-order DF for bosons is given by \[\Omega^{(1)}[\Phi,\rho] = \int\!d^{3}r\left[\Phi^{*}(\mathbf{r})\left(-\frac{\nabla^{2}}{2m}+V_{ext}(\mathbf{r})-\mu\right)\Phi(\mathbf{r})+\frac{g}{2}|\Phi(\mathbf{r})|^{4}\right] \tag{67}\] \[- \hbar\int\!d^{3}r\rho(\mathbf{r})J^{(0)}(\mathbf{r})-\frac{\hbar}{2\beta}TrLn\mathcal{G}^{-1},\] with \[TrLn{\cal G}^{-1}=\sum_{i}\ln\lambda_{i}, \tag{68}\] where the eigenvalues \(\lambda_{i}\) of the operator \({\cal G}^{-1}\) are determined by the Bogoliubov-de Gennes equations \[\begin{pmatrix}-i\omega_{n}-\frac{\nabla^{2}}{2m}+2g|\Phi({\bf r})|^{2}+V_{J^{(0)}}({\bf r})&g\Phi^{2}({\bf r})\\ g\Phi^{*2}({\bf r})&i\omega_{n}-\frac{\nabla^{2}}{2m}+2g|\Phi({\bf r})|^{2}+V_{J^{(0)}}({\bf r})\end{pmatrix}\begin{pmatrix}u_{i}({\bf r})\\ v_{i}({\bf r})\end{pmatrix}=\lambda_{i}\begin{pmatrix}u_{i}({\bf r})\\ v_{i}({\bf r})\end{pmatrix}, \tag{69}\] while the stationarity condition (61) retains, in this order, the Gross-Pitaevskii form \[\frac{\delta\Omega^{(1)}}{\delta\Phi^{*}({\bf r})}=\left(-\frac{\nabla^{2}}{2m}+V_{ext}({\bf r})-\mu+g|\Phi({\bf r})|^{2}\right)\Phi({\bf r})=0. \tag{70}\] The above functional describes a system of independent particles subjected to an external potential \(V_{J^{(0)}}({\bf r})=-\mu+V_{ext}({\bf r})+J^{(0)}({\bf r})\), where the function \(J^{(0)}({\bf r})\) is unknown. Therefore, \(\Omega^{(1)}[\Phi,\rho]\) can be taken as the reference system in the Kohn-Sham approximation scheme. The \(K-\)th order density functional may be split as \[\Omega^{(K)}=\Omega^{(1)}+\Omega^{(K)}_{m-b}, \tag{71}\] where the many-body contribution, \(\Omega^{(K)}_{m-b}\), contains the terms of order \(\hbar^{2}\) and higher. This results in the splitting of the condition (62) into two equations \[\frac{\delta\Omega^{(1)}}{\delta\hbar\rho({\bf r})}=-J^{(0)}({\bf r})\quad\mbox{ and }\quad\frac{\delta\Omega^{(K)}_{m-b}}{\delta\hbar\rho({\bf r})}=J^{(0)}({\bf r}), \tag{72}\] where the first equality follows from the formula (59). The first of the above equations can be regarded as describing the single-particle Kohn-Sham reference system. The fictitious potential \(J^{(0)}({\bf r})\) is determined by the second equation, which includes many-body effects up to the order \(K\). The implicit character of the relation between the density and the Kohn-Sham potential (59) leads to a self-consistent scheme for calculating physical quantities. The equilibrium density \(\rho^{(K)}_{eq}({\bf r})\) and order parameter \(\Phi^{(K)}_{eq}({\bf r})\) have to be determined by solving Eqs. 61, 62 and 72 self-consistently.
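As a numerical illustration of the leading-order step of this scheme, the following minimal script (ours, not part of the original text) solves the time-independent Gross-Pitaevskii equation (65) for a one-dimensional harmonic trap by imaginary-time split-step propagation, re-imposing the normalization constraint (66) after every step; the units \(\hbar=m=1\) and the values of \(g\), \(N\) and the trap frequency are arbitrary choices.

```python
import numpy as np

# Minimal sketch (ours): imaginary-time split-step solution of the
# time-independent Gross-Pitaevskii equation (65) in a 1D harmonic trap,
# with the normalization constraint (66) re-imposed after every step.
# Units hbar = m = 1; the values of g, N and omega are arbitrary choices.
L, M = 20.0, 256                         # box length and number of grid points
x = np.linspace(-L / 2, L / 2, M, endpoint=False)
dx = x[1] - x[0]
k = 2.0 * np.pi * np.fft.fftfreq(M, d=dx)
V_ext = 0.5 * x**2                       # harmonic trap with omega = 1
g, N, dtau = 0.5, 1.0, 1e-3              # coupling, particle number, step

phi = np.exp(-x**2 / 2.0).astype(complex)          # Gaussian initial guess
phi /= np.sqrt(np.sum(np.abs(phi)**2) * dx / N)    # enforce (66)

for _ in range(20000):
    # Strang splitting: half potential step, full kinetic step, half potential step
    phi *= np.exp(-0.5 * dtau * (V_ext + g * np.abs(phi)**2))
    phi = np.fft.ifft(np.exp(-dtau * 0.5 * k**2) * np.fft.fft(phi))
    phi *= np.exp(-0.5 * dtau * (V_ext + g * np.abs(phi)**2))
    phi /= np.sqrt(np.sum(np.abs(phi)**2) * dx / N)  # re-impose (66)

# Chemical potential of the converged profile, from the GP operator in (65)
lap = np.fft.ifft(-(k**2) * np.fft.fft(phi))
mu = np.real(np.sum(np.conj(phi) * (-0.5 * lap
     + (V_ext + g * np.abs(phi)**2) * phi)) * dx) / N
print("chemical potential mu =", round(float(mu), 4))
```

At higher orders, the same iteration would additionally update the Kohn-Sham potential \(J^{(0)}\) from Eq. 72 between propagation sweeps, which is the self-consistency loop described in the text.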
The \(K-\)th order approximation to the grand canonical potential can be obtained as \(\omega^{(K)}(\mu,\beta)=\Omega^{(K)}[\Phi^{(K)}_{eq},\rho^{(K)}_{eq}]\). The Legendre construction guarantees that the density and the order parameter determined by the exact functional \(\Omega[\Phi,\rho]\) are equal to those of the true system at the same temperature and chemical potential. The approximation series provides a systematic way of approaching \(\rho_{eq}({\bf r})\), \(\Phi_{eq}({\bf r})\) and the grand canonical potential \(\omega(\mu,\beta)\). Approximations to other physical quantities have to be derived first from the approximate functional \(\Gamma^{(K)}[\Phi,\rho]\), and then evaluated at \(\Phi=\Phi^{(K)}_{eq}({\bf r})\) and \(\rho=\rho^{(K)}_{eq}({\bf r})\). For example, approximations to one-particle excitation energies may be obtained from zero modes of the inverse one-particle propagator \[\Gamma^{(K)}(x,y)=\left.\frac{\delta^{2}\Gamma^{(K)}[\Phi,\rho]}{\delta\Phi(x)\delta\Phi^{*}(y)}\right|_{\Phi^{(K)}_{eq},\rho^{(K)}_{eq}} \tag{73}\] and those to density fluctuations from zero modes of the inverse composite propagator \[\chi^{(K)}(x,y)=\left.\frac{\delta^{2}\Gamma^{(K)}[\Phi,\rho]}{\delta\rho(x)\delta\rho(y)}\right|_{\Phi^{(K)}_{eq},\rho^{(K)}_{eq}}. \tag{74}\] ## VII Conclusions The Lagrangian formalism of QFT provides a rigorous formulation of DFT for fermions and bosons. The functional \(\Gamma[\Phi,\rho]\) is defined as the effective action for both the elementary field and the density operator. The formalism is universal and can be used to study time-dependent systems as well as equilibrium phenomena. It allows for extensions to other functional theories (spin-density, current-density, ...) by introducing sources coupled to the corresponding operators. The path integral formulation provides a method for representing the effective action functional as a series in powers of the Planck constant. The expansion allows us to formulate a systematic approximation scheme, which is a generalization of the Kohn-Sham approach to bosonic fields. From an approximation to the effective action, a consistent set of approximations to physical quantities may be obtained. A single approximation to \(\Gamma[\Phi,\rho]\) describes the equilibrium state, and provides a way to determine approximations to other quantities, such as one-particle excitations or density fluctuations.
2307.09249
UniTabE: A Universal Pretraining Protocol for Tabular Foundation Model in Data Science
Recent advancements in NLP have witnessed the groundbreaking impact of pretrained models, yielding impressive outcomes across various tasks. This study seeks to extend the power of pretraining methodologies to facilitating the prediction over tables in data science, a domain traditionally overlooked, yet inherently challenging due to the plethora of table schemas intrinsic to different tasks. The primary research questions underpinning this work revolve around the establishment of a universal pretraining protocol for tables with varied structures, the generalizability and transferability of learned knowledge across tasks, the adaptation to diverse downstream applications, and the incorporation of incremental columns over time. In response to these challenges, we introduce UniTabE, a straightforward yet effective method designed to process tables in a uniform manner, devoid of constraints imposed by specific table structures. UniTabE's core concept relies on representing each basic table element with a module, termed TabUnit. This is subsequently followed by a Transformer encoder to refine the representation. Moreover, our model is designed to facilitate pretraining and finetuning through the utilization of free-form prompts. In order to implement the pretraining phase, we curated an expansive tabular dataset comprising approximately 13B samples, meticulously gathered from the Kaggle platform. This research primarily centers on classification and regression tasks involving tabular data, and conducts rigorous experimental testing and analyses to validate the effectiveness of our methodology. The experimental results demonstrate UniTabE's superior performance against several baselines across massive benchmarks. This, therefore, underscores UniTabE's potential to significantly enhance the semantic representation of tabular data, thereby marking a significant stride for tabular data analysis.
Yazheng Yang, Yuqi Wang, Guang Liu, Ledell Wu, Qi Liu
2023-07-18T13:28:31Z
http://arxiv.org/abs/2307.09249v2
# UniTabE: Pretraining a Unified Tabular Encoder for Heterogeneous Tabular Data ###### Abstract Recent advancements in Natural Language Processing (NLP) have witnessed the groundbreaking impact of pretrained models, yielding impressive outcomes across various tasks. This study seeks to extend the power of pretraining methodologies to tabular data, a domain traditionally overlooked, yet inherently challenging due to the plethora of table schemas intrinsic to different tasks. The primary research questions underpinning this work revolve around the adaptation to heterogeneous table structures, the establishment of a universal pretraining protocol for tabular data, the generalizability and transferability of learned knowledge across tasks, the adaptation to diverse downstream applications, and the incorporation of incremental columns over time. In response to these challenges, we introduce UniTabE, a pioneering method designed to process tables in a uniform manner, devoid of constraints imposed by specific table structures. UniTabE's core concept relies on representing each basic table element with a module, termed TabUnit. This is subsequently followed by a Transformer encoder to refine the representation. Moreover, our model is designed to facilitate pretraining and finetuning through the utilization of free-form prompts. In order to implement the pretraining phase, we curated an expansive tabular dataset comprising approximately 13 billion samples, meticulously gathered from the Kaggle platform. Rigorous experimental testing and analyses were performed under a myriad of scenarios to validate the effectiveness of our methodology. The experimental results demonstrate UniTabE's superior performance against several baseline models across a multitude of benchmark datasets. This, therefore, underscores UniTabE's potential to significantly enhance the semantic representation of tabular data, thereby marking a significant stride in the field of tabular data analysis.1 Footnote 1: We will release our pretrained models. ## 1 Introduction Tabular data is extensively utilized to present and organize information in diverse contexts, including webpages, spreadsheets, and database systems. Consequently, it has garnered significant attention from the research community due to its numerous practical applications, such as table lookup (Wang et al., 2021; Ye et al., 2022), table question answering (Table QA) (Herzig et al., 2020; Yin et al., 2020; Katsis et al., 2022; Cheng et al., 2022), and formula prediction (Cheng et al., 2021). In this work, we pretrain a tabular encoder that serves various downstream tasks. Pretraining over voluminous tables necessitates a model that is adaptable to diverse tabular structures. Consequently, we need to reconsider what constitutes the basic element of a table. In practical applications, a row or record in a table typically represents an instance's information, with each column in a row seen as a feature or attribute of the instance. A cell, being the intersection of a row and a column in a table, is the most granular unit. Generally, columns are independent of each other in terms of their order. This allows each instance in the table to be converted into an unordered sequence of cells, contributing to the model's flexibility in accepting tables of arbitrary structure as input and facilitating the addition of extra columns. Moreover, each cell in the table provides the finest level of granularity we can manipulate for different data types separately.
Thus, in the context of this work, each cell is treated as the basic element of the table. In alignment with previous studies (Yin et al., 2020; Liu et al., 2022; Wang and Sun, 2022), we concentrate on the most common data types: numerical, textual, and categorical values. Specifically, we consider each digit of a numerical value as a token. For categorical values, we map the index to its meaning; for instance, we convert "1" to "True" and "0" to "False" for a column like "is_foreign_worker" (a small sketch of this cell-level serialization is given at the end of this subsection). Tokens of each cell are processed by the embedding layer, which contains a positional embedding to preserve the data structure, such as the order of tokens in text and the digit structure of numerical values. Motivated by the recent advancements in pretraining techniques for natural language processing, our objective is to develop a unified tabular encoder that can serve as a versatile module for obtaining semantic representations in various tabular tasks. However, the transferability of knowledge across tables remains uncertain. Existing approaches typically rely on consistent table structures in both training and testing data (Huang et al., 2020; Wang and Sun, 2022; Jiang et al., 2022). However, table structures often differ across tasks, rendering direct learning across tables impractical. One naive approach to address this issue involves merging distinct tables into a larger one, whose columns are the union of the original columns. Nevertheless, this approach introduces data sparsity and poses challenges when additional columns need to be appended, which is a common occurrence in practical scenarios. For example, in clinical trials, incremental columns are collected across different phases. In addition to adapting to different structures, the conversion of tabular data into the model input is crucial for neural network performance. Existing methods often propose strategies to transform a table into text format and subsequently apply pretraining similar to Masked Language Model (MLM) techniques in natural language processing (Devlin et al., 2018; Salazar et al., 2019; Lewis et al., 2020). In this self-supervised training, the model is trained to predict masked spans within the text. However, tables contain various data formats, including natural language text, numerical values, categorical data, etc. Since numerical values make up a significant portion of tabular data, treating them as text without distinguishing their nature undermines the inherent structure and numerical meaning (Eisenschlos et al., 2021; Herzig et al., 2020). Furthermore, in practice, it is more common for entire cell values to be missing rather than just a portion of the value, yet the dynamic masking employed in MLM struggles to ensure complete cell coverage. Traditionally, previous approaches also failed to inform the model about the correspondence between column names and their values when transforming tabular data into sequences of tokens (Herzig et al., 2020; Arik and Pfister, 2021; Liu et al., 2022). The literature mostly emphasizes enhancing powerful backbones, while the design of embedding modules tailored to different data types remains comparatively simple and unsophisticated. This skewed focus often limits the model's capacity to learn effectively from such data, underscoring a gap in current methodology.
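As a concrete illustration of the cell serialization described above, the following sketch (ours, for illustration only; `CATEGORY_MAPS` and the whitespace split are illustrative placeholders, not the paper's actual vocabulary or tokenizer) shows how a cell value could be turned into tokens:

```python
# Minimal sketch (ours) of the cell-level serialization described above.
# CATEGORY_MAPS and the whitespace split are illustrative placeholders.

CATEGORY_MAPS = {"is_foreign_worker": {"1": "True", "0": "False"}}

def serialize_cell(column, value, dtype):
    if dtype == "numerical":
        return list(str(value))            # 3.14 -> ["3", ".", "1", "4"]
    if dtype == "categorical":             # map category index to its meaning
        return [CATEGORY_MAPS[column].get(str(value), str(value))]
    return str(value).split()              # crude stand-in for subword tokenization

print(serialize_cell("salary", 52000, "numerical"))           # digits as tokens
print(serialize_cell("is_foreign_worker", 1, "categorical"))  # ['True']
```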
To address these identified challenges, we introduce _UniTabE_, an efficient and straightforward framework designed to process tables uniformly, eliminating the need for fixed table structures. Within this module, we adopt a data type embedding to aid the model in dealing with different data types, taking into account the divergent meanings associated with different data formats while forming the sequence of value tokens. For example, values derived from natural texts reflect grammatical and syntactic structures, while numerical values follow a distinct recording structure. The data type embedding thus conveys the differences between various data types. In order to constrain the relationship between a column name and its corresponding value, we design a _linking layer_ that integrates a portion of the column name information into its value. Furthermore, we present a versatile model architecture that is tailored to accommodate a wide range of tabular tasks, facilitated by the use of prompts. Within this framework, we employ a shallow decoder to ensure that the majority of the learned knowledge is stored in the semantic representation generated by the encoder. This vanilla decoder performs reasoning based on the high-level semantic representation provided by the encoder, serving as an adaptation module that can adjust to different tasks by simply employing task-specific prompts. For the training of our UniTabE, we have constructed an expansive tabular dataset amassed from Kaggle.2 This pretraining dataset comprises a considerable 13 billion examples sourced from approximately 300 distinct domains, with an average of 36.7 columns per table. To comprehensively evaluate the effectiveness of our method, we conduct extensive experiments including its application to predominant downstream tasks, tasks involving filling in missing values, zero-shot prediction, adaptation to incremental tables, and integration of the neural semantic representation with eXtreme Gradient Boosting (XGBoost). This work makes three significant contributions: Footnote 2: [https://www.kaggle.com](https://www.kaggle.com) * We delve into the realm of large-scale pretraining over tables and propose a novel framework, UniTabE, to address the inherent challenges. Our model is capable of accepting tables with heterogeneous structures as input and learning the semantic representation. Moreover, by integrating prompts within our framework, we enhance its scalability, enabling it to accommodate an extensive array of pretraining and finetuning tasks for downstream applications. * We have crawled 7TB of tables from Kaggle to construct a large-scale pretraining dataset that spans numerous domains. This dataset substantiates the feasibility of conducting large-scale pretraining on tables. * We have carried out comprehensive experiments to evaluate the performance of our pretrained UniTabE on a variety of benchmark datasets. We have also employed UniTabE in several Kaggle tasks, applied our model to the scenario of incremental columns, and explored equipping the learned semantic representation with XGBoost. The comparative results highlight the superiority of our approach. ## 2 Related Work ### Table Data Featurization Tables encompass various data types, and to process this tabular data with neural models, conventional methods typically convert each data format into a continuous space using individual strategies.
For instance, texts are processed with tokenization and word embedding, while images are handled using image patch embedding. However, prior research tends to simplify this process by treating numerical values as text and then applying the same embedding strategy, which invariably disrupts the original recording structure and numerical meanings (Eisenschlos et al., 2021; Herzig et al., 2020). Given the prevalence of numerical values in tables, this area has garnered increasing attention (Gorishniy et al., 2022). To enhance the representation of numerical values, MATE (Eisenschlos et al., 2021) and TaPas (Herzig et al., 2020) introduced a ranking embedding based on numeric rank, which relies on comparisons. Further, TUTA (Wang et al., 2021) applied additional numerical features, such as magnitude, precision, the first digit, and the last digit, to distinguish numbers from text. Gorishniy et al. (2022) attempted to train embedding vectors for numbers. Wang and Sun (2022) suggested categorizing tabular data into three distinct types: textual, categorical, and numerical. Their model, TransTab, concatenates columns of the same type into a text sequence, with column names, column values, and different columns separated by a space. After the concatenation, these three text sequences are fed into the embedding layer individually. ### Pretraining Table Models In recent years, the pretraining of language models (LMs) over vast text corpora has led to noteworthy enhancements in performance for a variety of downstream tasks. This success has stimulated a growing body of work that focuses on pretraining and adapting LMs specifically for tabular data. The prevailing method employed by these studies is to finetune LMs that have been pretrained on NLP datasets, such as BERT (Devlin et al., 2018) and BART (Lewis et al., 2019). Typically, this training utilizes the Masked Language Model (MLM) objective, as evidenced by models like TabTransformer (Huang et al., 2020), TaBERT (Yin et al., 2020), TabNet (Arik and Pfister, 2021), SAINT (Somepalli et al., 2021), and so on. Liu et al. (2022) proposed a modality transformation that converts tabular data into textual data using a basic lexicon and ordered syntax before feeding it into pretrained LMs. They then finetuned their model using the same MLM objective. However, LMs pretrained on natural texts do not perform optimally, as textualized tabular data differs fundamentally from natural language text. The efficacy of finetuned LMs without architectural modifications has so far largely been confined to text data, and making modifications might lead to undesired outcomes such as catastrophic forgetting of knowledge learned from natural language corpora (Chen et al., 2020; Bavarian et al., 2022). In this work, our focus lies on pretraining a large model from scratch on tabular data. In line with previous work, we build our model upon the well-known Transformer encoder, utilizing the self-supervised objective of MLM, which masks parts of the model input and then predicts the masked content. Contrary to previous work, we abstain from textualizing the tabular data using a simplistic strategy. Instead, we introduce a TabUnit module designed to process the basic element of a table independently, leading to improved modeling of tabular data. ## 3 UniTabE Architecture In this section, we present the architecture of our model, which is composed of three primary components: the _TabUnit_, the _Encoding Layer_, and a _Shallow Decoder_.
Our model employs the foundational TabUnit module to handle data of varying types and then utilizes the Transformer's encoder for further encoding. By leveraging the setting of prompts and integrating a decoder, our model becomes adaptable to a wide range of tasks. **TabUnit Module** As depicted on the left side of Figure 1, we propose the use of a unified module, named TabUnit, for modeling the basic element of tabular data. To mitigate the influence of table structure, we treat each cell in a table as a key-value pair representing the column name and the column value, respectively. The tokens of the column name are passed into the embedding module: \[\mathbf{v}_{cn}=f_{fl}(Emb_{DT}(t_{d}),Avg(Emb(X_{cn}))) \tag{1}\] where \(Emb\) represents the embeddings consisting of word embedding and positional embedding. \(\mathbf{x}_{cn}=Avg(Emb(X_{cn}))\) is the vector after mean pooling across the dimension of the token sequence. Here, we adopt the data type embedding \(Emb_{DT}\), specifically designed to help the model adeptly handle diverse data formats, particularly in instances where columns share a name but contain values in different formats. For example, the values in the "salary" column of a table in a downstream task might be numerical, while those in the corresponding column of another table could be textual (e.g., high income, medium income, and low income). Our model integrates this information into column name embeddings to get the column name vector via a fuse layer: \[\begin{split}g_{dt}=\mathrm{Sigmoid}(\mathbf{v}_{fl}\mathrm{ReLU}(\mathbf{w}_{fl}^{\top}\mathbf{x}_{dt}+\mathbf{b}_{fl}))\\ \mathbf{v}_{cn}=(1-g_{dt})\mathbf{x}_{cn}+g_{dt}*\mathbf{x}_{dt}\end{split} \tag{2}\] where \(\mathbf{w}_{fl}\), \(\mathbf{b}_{fl}\) and \(\mathbf{v}_{fl}\) are trainable parameters, and \(\mathbf{x}_{dt}\) is the data type embedding. Theoretically, it is reasonable to integrate an equal amount of data type information into the column name representation across columns of the same data format, as opposed to computing the fusing ratio \(g_{dt}\) based on both \(\mathbf{x}_{cn}\) and \(\mathbf{x}_{dt}\). Consequently, we compute \(g_{dt}\) solely based on the type embedding. Figure 1: The left part shows the procedure of processing each cell, the basic element of tabular data in this work. The right part illustrates the overall architecture of our UniTabE. n denotes the number of cells in each example, Q denotes the length of the prompt, while T represents the length of the target. UniTabE takes the concatenation of the [CLS] embedding and the embeddings of cells as input. A shallow decoder is applied to guarantee that most of the learned knowledge is stored in the encoder, as well as the scalability to adapt to different downstream tasks. Tokens in each cell are also passed into the embedding module: \[\{\mathbf{x}_{cv}^{0},\mathbf{x}_{cv}^{1},\mathbf{x}_{cv}^{2},...,\mathbf{x}_{cv}^{q-1}\}=Emb(\{x_{cv}^{0},x_{cv}^{1},x_{cv}^{2},...,x_{cv}^{q-1}\}) \tag{3}\] where the embedding module \(Emb(.)\) is shared between column names and column values, and \(q\) denotes the length of the cell value. Given the orderless nature of self-attention within the Transformer encoder, it becomes challenging for the model to learn the connection between a column name and its value when all cells are concatenated into a sequence. As a result, we introduce a _Linking Layer_ to establish the relationship within each name-value pair.
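To make the data flow of the TabUnit concrete, here is a minimal PyTorch sketch (ours, for illustration only): the hidden sizes, the toy vocabulary, and the single-hidden-layer gate shapes are our assumptions, and the linking layer anticipates Eq. 4, which is formalized in the next paragraph.

```python
import torch
import torch.nn as nn

class TabUnit(nn.Module):
    """Minimal sketch of the TabUnit (Eqs. 1-5): a fuse layer mixing the
    data-type embedding into the column-name vector, plus a linking layer
    that injects column-name information into every value token."""

    def __init__(self, vocab_size=30000, n_types=3, d=256, max_len=64):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, d)   # shared word embedding
        self.pos_emb = nn.Embedding(max_len, d)      # positional embedding
        self.type_emb = nn.Embedding(n_types, d)     # data-type embedding
        # fuse layer (Eq. 2): scalar gate computed from the type embedding only
        self.fuse_gate = nn.Sequential(nn.Linear(d, d), nn.ReLU(),
                                       nn.Linear(d, 1), nn.Sigmoid())
        # linking layer (Eq. 4): scalar gate computed from the column-name vector
        self.link_gate = nn.Sequential(nn.Linear(d, d), nn.ReLU(),
                                       nn.Linear(d, 1), nn.Sigmoid())

    def embed(self, tokens):                          # tokens: (L,)
        pos = torch.arange(tokens.size(0))
        return self.tok_emb(tokens) + self.pos_emb(pos)

    def forward(self, dtype_id, name_tokens, value_tokens):
        x_cn = self.embed(name_tokens).mean(dim=0)    # mean-pooled name vector
        x_dt = self.type_emb(dtype_id)
        g = self.fuse_gate(x_dt)                      # Eq. 2: g from type only
        v_cn = (1 - g) * x_cn + g * x_dt              # fused column-name vector
        x_cv = self.embed(value_tokens)               # (q, d) value token vectors
        alpha = self.link_gate(v_cn)                  # Eq. 4: linking ratio
        v_cv = x_cv + alpha * v_cn                    # inject name info into values
        return torch.cat([v_cn.unsqueeze(0), v_cv], dim=0)  # (1+q, d) cell repr.

# Toy usage: one categorical cell with a 3-token name and a 2-token value.
unit = TabUnit()
out = unit(torch.tensor(0), torch.tensor([11, 42, 7]), torch.tensor([93, 5]))
print(out.shape)   # torch.Size([3, 256])
```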
We employ a gated function to weave the information from the column name into its corresponding value:

\[\begin{split}\alpha=\mathrm{Sigmoid}(\mathbf{v}_{lk}\mathrm{ReLU}(\mathbf{w}_{lk}^{\top}\mathbf{v}_{cn}+\mathbf{b}_{lk}))\\ \mathbf{v}_{cv}^{i}=\mathbf{x}_{cv}^{i}+\alpha*\mathbf{v}_{cn}\end{split} \tag{4}\]

where \(\mathbf{w}_{lk}\), \(\mathbf{b}_{lk}\) and \(\mathbf{v}_{lk}\) are learnable parameters. We employ \(\alpha\) to ensure that an equal amount of column name information is integrated into the value vectors. This allows the model to recognize which parts of the vectors are values corresponding to a specific column. To avoid having the value information overshadowed by the column name information, we only apply the multiplication operation to \(\alpha\) and \(\mathbf{v}_{cn}\). Overall, the TabUnit can be briefly formulated as:

\[\mathbf{X}_{TU}=\{\mathbf{v}_{cn},\mathbf{v}_{cv}^{0},\mathbf{v}_{cv}^{1},...,\mathbf{v}_{cv}^{q-1}\}=f_{TabUnit}(t_{d},X_{cn},X_{cv}) \tag{5}\]

where \(t_{d}\), \(X_{cn}\) and \(X_{cv}\) denote the data type indicator, the tokens of the column name, and the tokens of the column value, respectively. The concatenation of the column name vector and the value vectors is treated as the inner representation of a tabular cell. In our implementation, all cells are processed in parallel.

**Encoding Layer** We concatenate the representations of all cells, and attach a trainable [CLS] vector to the head of this sequence. We leverage the Transformer encoder as the encoding layer:

\[\{\mathbf{h}_{cls},\mathbf{h}^{0},\mathbf{h}^{1},...,\mathbf{h}^{N-1}\}=f_{Enc}(\mathbf{v}_{cls},\mathbf{X}_{TU}^{0},\mathbf{X}_{TU}^{1},...,\mathbf{X}_{TU}^{n-1}) \tag{6}\]

where \(n\) is the number of columns, and \(N\) is the length after concatenating all cells' representations.

**Shallow Decoder** During pretraining, we want to encourage the encoder to store most of the learned knowledge. Hence, we adopt a Long Short-Term Memory network (LSTM) (Hochreiter and Schmidhuber, 1997) as the weak decoder. Specifically, the hidden state of the [CLS] token and the prompt are used to compute the initial state of the decoder:

\[\{\mathbf{y}_{0}^{P},\mathbf{y}_{1}^{P},...,\mathbf{y}_{Q-1}^{P}\}=f_{attn}(\mathbf{W}_{1}^{\top}Emb(\{y_{0}^{P},y_{1}^{P},...,y_{Q-1}^{P}\}),\mathbf{W}_{2}^{\top}\{\mathbf{X}_{TU}^{0},\mathbf{X}_{TU}^{1},...,\mathbf{X}_{TU}^{n-1}\}) \tag{7}\]

\[\mathbf{v}_{p}=\sum_{i}\frac{\exp(\mathbf{v}_{1}^{\top}\mathbf{y}_{i}^{P})}{\sum_{j}\exp(\mathbf{v}_{1}^{\top}\mathbf{y}_{j}^{P})}\mathbf{y}_{i}^{P} \tag{8}\]

\[\mathbf{v}_{state}=\mathbf{W}_{s}^{\top}\{\mathbf{v}_{p},\mathbf{h}_{cls}\}+\mathbf{b}_{s} \tag{9}\]

where \(f_{attn}\) denotes dot-attention. \(\mathbf{W}_{1}\), \(\mathbf{W}_{2}\), \(\mathbf{v}_{1}\), \(\mathbf{W}_{s}\) and \(\mathbf{b}_{s}\) are trainable parameters. Here \(\mathbf{v}_{p}\) represents the weighted average of the attention states of the prompt. The embedding layer of the decoder also shares its parameters with the one in the TabUnit. The target sequence of tokens is generated by the decoder step by step, conditioned on the initial state and the previously produced tokens.
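As a rough illustration of Eqs. (7)–(9), the following PyTorch sketch computes the decoder's initial state; module and tensor names are ours, we assume batched inputs of hidden size `dim`, and we read Eq. (7) as the prompt attending over the cell representations.

```python
import torch
import torch.nn as nn

class DecoderInit(nn.Module):
    """Sketch of Eqs. (7)-(9): prompt embeddings attend over cell
    representations, are pooled into a single vector, and fused with the
    [CLS] state to form the LSTM decoder's initial state."""
    def __init__(self, dim: int):
        super().__init__()
        self.w1 = nn.Linear(dim, dim, bias=False)  # W_1
        self.w2 = nn.Linear(dim, dim, bias=False)  # W_2
        self.v1 = nn.Linear(dim, 1, bias=False)    # v_1
        self.ws = nn.Linear(2 * dim, dim)          # W_s, b_s

    def forward(self, prompt_emb, cell_reps, h_cls):
        # Dot-attention: prompts (B, Q, d) as queries, cells (B, N, d) as keys.
        scores = self.w1(prompt_emb) @ self.w2(cell_reps).transpose(1, 2)
        y = torch.softmax(scores, dim=-1) @ cell_reps   # (B, Q, d), Eq. (7)
        w = torch.softmax(self.v1(y), dim=1)            # pooling weights, Eq. (8)
        v_p = (w * y).sum(dim=1)                        # (B, d)
        return self.ws(torch.cat([v_p, h_cls], dim=-1)) # v_state, Eq. (9)
```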
## 4 Pretraining & Finetuning

### Pretraining Objective

Previous research in NLP pretraining utilized self-supervised tasks within datasets to provide supervised training signals, such as predicting the next token, generating masked spans of text, and determining the subsequent sentence. These studies have shown that unlabeled data can aid in the learning of semantic representations. In this work, we also adopt the mask-then-predict approach to facilitate self-supervised training. We treat each cell as the basic masked unit, as opposed to the token level in NLP pretraining. In practical applications, filling in the entire content of a cell is more useful than merely filling in part of the cell content. As such, we randomly replace the content of the masked cells with a special token, [MASK]. We also use [MASK] as the default content for cells whose values are missing. For each example, we randomly mask columns and train the model to predict the masked content. Since there may often be missing values in downstream applications, we also train the model under conditions where several values are missing, to familiarize the model with such situations. The number of masked cells varies and is sampled via a standard normal distribution, with single-cell masking being the most likely outcome. The prompt template for pretraining is set to "fill in missing value, <column name> :", which specifies the precise masked column to predict. The details regarding the number of randomly masked cells and their corresponding probabilities are elaborated in § 6.1. Our model is trained with the optimization objective of maximum log-likelihood estimation.

### Finetuning Formulation

Filling in Missing Value as Prediction. When finetuning our model for downstream tasks, we can consider the target as an additional column of the table. The model is then tasked with predicting the masked values of this target column, using the same prompt as during pretraining. Thanks to the decoder, UniTabE is capable of generating both textual and numerical targets. As a result, the trained model is suited to classification and regression tasks, as well as predicting missing values in tables. In our implementation, we also support constrained generation. This feature is particularly beneficial for classification tasks, as the model only needs to predict from a small subset of the vocabulary.

Finetuning with Task-specific Prompt. Apart from tasks where we treat the target as a masked column of the table, there are tasks that require the model to perform reasoning over the table and other inputs. For instance, in table question answering (TableQA) tasks, the model needs to produce an answer conditioned on the provided table and question. The prompt used in these tasks may include the task description and the question, appropriately formatted.

## 5 Tabular Dataset

As there are no large-scale, high-quality tabular datasets available for pretraining, we have collected our own dataset by crawling from Kaggle. We downloaded CSV tables, omitting empty columns. Specifically, we constructed initial keywords for each domain and extended the set of keywords using WordNet under the same topic. We then searched and downloaded tables using these keywords. For tables originating from the same Kaggle dataset, we attempted to join tables using primary and foreign keys. This process resulted in a 7TB dataset containing 13 billion tabular examples. Statistics about the pretraining data, including the Top-5 domains, are presented in Table 1. Figure 2 shows the distribution of domains and the proportion of cells of different data types for each split of our dataset. We aimed to split our dataset in such a way as to maintain a similar distribution of data types and domains.
## 6 Experiments and Analyses

### Implementation Details

We train UniTabE on 32 A100 GPUs in a distributed fashion. Our model is implemented with PyTorch v1.12. The Transformer encoder used as the backbone of our model is borrowed from the Hugging Face "transformers" module. During training, the learning rate is set to 1e-5 and the batch size is 64. We use Adam (Kingma and Ba, 2015) as the optimizer with \(\beta_{1}\)=0.9, \(\beta_{2}\)=0.999 and \(\epsilon\)=\(10^{-8}\).

\begin{table} \begin{tabular}{l c c c c c c} \hline \hline **Domains** & **\# Domains** & **\# Tables** & **\# Examples** & **Avg\# NC** & **Avg\# CC** & **Avg\# TC** \\ \hline ALL & 303 & 283K & 13B & 28.7 & 0.4 & 7.7 \\ \hline Investing & 1 & 71K & 1B & 29.33 & 0.02 & 1.58 \\ Time Series & 1 & 65K & 1B & 6.47 & 0.02 & 2.27 \\ Finance & 1 & 52K & 773M & 37.57 & 0.04 & 1.46 \\ Economics & 1 & 47K & 488M & 40.34 & 0.01 & 1.27 \\ Games & 1 & 32K & 430M & 23.37 & 0.66 & 3.66 \\ \hline \hline \end{tabular} \end{table}
Table 1: Overall statistics of our pretraining dataset. “Avg# NC”, “Avg# CC” and “Avg# TC” denote the average number of numerical, categorical, and textual columns in each table, respectively. The bottom part lists the Top-5 domains in our dataset.

We train three variants of our model: \(\mathrm{UniTabE}_{base}\), \(\mathrm{UniTabE}_{large}\) and \(\mathrm{UniTabE}_{xlarge}\). The hidden size and embedding size for \(\mathrm{UniTabE}_{base}\) are both set to 768. Its encoder is a stack of 12 self-attention layers with 12 heads. \(\mathrm{UniTabE}_{large}\) consists of a 24-layer encoder with 16 attention heads; its hidden size and embedding size are both 1024. \(\mathrm{UniTabE}_{xlarge}\) is the 48-layer version of \(\mathrm{UniTabE}_{large}\). We randomly sample a number \(p\) from a standard normal distribution to determine the number of randomly masked cells: (1 cell, abs(\(p\)) \(\leq\) 1.0 or abs(\(p\)) \(>\) 2.5), (2 cells, 1.0 \(<\) abs(\(p\)) \(\leq\) 1.25), (3 cells, 1.25 \(<\) abs(\(p\)) \(\leq\) 1.5), (4 cells, 1.5 \(<\) abs(\(p\)) \(\leq\) 1.75), (5 cells, 1.75 \(<\) abs(\(p\)) \(\leq\) 2.0), (7 cells, 2.0 \(<\) abs(\(p\)) \(\leq\) 2.5). Masking a single cell is used as the fallback option in some cases, e.g., when a table does not have enough columns to mask.
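For reference, this schedule can be written out as a small Python helper; this is a minimal sketch under our own naming, and the exact fallback condition (requiring at least one unmasked column) is our own reading of the text above.

```python
import random

def sample_num_masked_cells(num_columns: int) -> int:
    """Draw p ~ N(0, 1) and map |p| to a number of masked cells, so that
    masking a single cell is the most likely outcome (see the schedule
    above)."""
    p = abs(random.gauss(0.0, 1.0))
    if p <= 1.0 or p > 2.5:
        k = 1
    elif p <= 1.25:
        k = 2
    elif p <= 1.5:
        k = 3
    elif p <= 1.75:
        k = 4
    elif p <= 2.0:
        k = 5
    else:  # 2.0 < p <= 2.5
        k = 7
    # Fallback: mask a single cell when the table has too few columns
    # (assumption: at least one column should remain unmasked).
    return k if k < num_columns else 1
```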
### Benchmarks & Baselines

Many practical tabular tasks fall into the categories of classification or regression. As such, we evaluate the effectiveness of our pretrained model on these types of tasks against several baselines. **XGBoost**: despite the large number of neural network architectures proposed for tabular data, the performance gap between them and "shallow" ensembles of decision trees, like XGBoost, often remains significant in industry; we use it as a baseline for classification tasks, implemented based on the XGBoost package.3 **TransTab** textualizes columns of the same data type into a text sequence, separating columns, column names, and column values with spaces. **TransTab-LSTM** is equipped with the same shallow decoder as our UniTabE. **Linear Regression** is used to compare performance on regression tasks; we adopt the Scikit-learn implementation in our experiments.

Footnote 3: xgboost.sklearn

### Results & Analyses

**Overall Results.** We selected a diverse set of tasks comprising 6 representative classification assignments and 6 regression tasks from Kaggle. To ensure a fair and unbiased evaluation, we deliberately excluded these specific tabular datasets from our pretraining corpus. This precaution was taken to prevent the pretrained model from gaining undue familiarity with the target columns, thereby avoiding any potential bias in the results. The classification tasks and their abbreviated names in this work are Pima Indians Diabetes (**PID**, url), Red Wine Quality (**RWQ**, url), Heart Failure Prediction (**HFP**, url), Health Insurance Cross Sell (**HIC**, url), Eligibility Prediction for Loan (**EPL**, url), and Loan Default Prediction (**LDP**, url). The regression tasks are Medical Insurance Payout (**MIP**, url), Gold Price Prediction (**GPP**, url), Reliance Stock Price (**RSP**, url), Credit Card Limit Prediction (**CCL**, url), Miami Housing Prediction (**MHP**, url), and House Prices - Advanced (**HPA**, url), respectively.

Figure 3: BLEU score of generating textual values for different model sizes on different dataset sizes.

Figure 2: Distribution visualization. The left part (a) demonstrates the distribution of domains and the number of tables in each domain. The right part shows the proportion (cell level) of different data types in train/dev/test splits.

The comparison of results among methods is presented in Table 2. As a whole, our UniTabE outperforms the other contenders, particularly excelling in regression tasks. The marked superiority of UniTabE over TransTab-LSTM offers clear evidence that fine-grained processing of tabular data is more beneficial than straightforward textualization. This is indicative of the efficacy of our model's approach to handling and understanding the nuanced structure of tabular data.

**Model Size Analysis.** We want to see the impact of the size of UniTabE on performance across different dataset sizes. 50 tables are reserved and not included in the pretraining dataset. The results presented in Figure 3 indicate that for larger finetuning datasets, larger models (like \(\mathrm{UniTabE}_{xlarge}\)) tend to perform better. However, for smaller datasets, the performance of larger models tends to decrease. We notice that \(\mathrm{UniTabE}_{large}\) strikes a good balance across dataset sizes. Hence, we use \(\mathrm{UniTabE}_{large}\) when comparing against baselines in subsequent experiments.

**Out-of-Domain Benchmark Datasets.** Since the 12 Kaggle tasks above might share domains with our pretraining data, we also use other public tabular datasets to further evaluate the efficacy of our method. These datasets are all binary classification tasks. We again simplify the dataset names and attach a hyperlink for each dataset: credit-g (**CG**, url), credit-approval (**CA**, url), dress-sales (**DS**, url), adult (**AD**, url), cylinder-bands (**CB**, url), blastchar (**BL**, url), insurance-co (**IO**, url). Table 3 presents experimental results on these datasets. Similar to the results on the 12 Kaggle tasks, our method achieves impressive results, indicating its strength in learning the intrinsic semantics of tabular data.

**Ablation Analysis.** As seen in Table 2, when compared to the model without pretraining, "UniTabE scratch", our finetuned UniTabE demonstrates a substantial improvement, achieving a total gain of +0.43 and +0.78 for classification and regression tasks respectively. This trend is also mirrored in Table 3.
These results underscore the significant advantages brought about by the pretraining phase for handling a diverse range of tabular tasks.

**Filling in Missing Values.** We evaluate the model's capability to fill in missing values, utilizing the Mean Absolute Error (MAE) metric for numerical columns and the BLEU score for textual predictions. For comparative analysis, we also fine-tune and test the GPT-3 model. The results, outlined in Table 4, indicate a significant improvement over the other baseline models. This substantial gain in performance further corroborates the efficacy of our pretrained model in handling missing values, demonstrating its potential for applications in data completion and recovery tasks.

**Zero-shot Prediction.** Table 5 presents the results of zero-shot prediction. The "Random Initial" approach, which does not load the pretrained parameters, exhibits inferior performance. In contrast, our pretrained model demonstrates strong performance, even without fine-tuning on the target datasets, indicating its promising generalizability. These results also suggest that UniTabE acquires a certain degree of high-level reasoning capability through extensive large-scale pretraining, which contributes to its robust performance in zero-shot settings.

\begin{table} \begin{tabular}{l c c c c c c|c c c c c c} \hline \hline \multirow{2}{*}{**Method/Dataset**} & \multicolumn{6}{c|}{**Classification (AUC \(\uparrow\))**} & \multicolumn{6}{c}{**Regression (R2 \(\uparrow\))**} \\ \cline{2-13} & **PID** & **RWQ** & **HFP** & **HIC** & **EPL** & **LDP** & **MIP** & **GPP** & **RSP** & **CCL** & **MHP** & **HPA** \\ \hline XGBoost & 0.79 & **0.67** & **0.89** & 0.85 & 0.73 & 0.50 & - & - & - & - & - & - \\ TransTab-LSTM & 0.81 & 0.56 & 0.74 & 0.85 & 0.73 & 0.50 & 0.50 & 0.55 & 0.64 & 0.65 & 0.51 & 0.47 \\ UniTabE scratch & 0.76 & 0.57 & 0.61 & 0.85 & 0.73 & 0.52 & 0.53 & 0.76 & **0.99** & 0.72 & 0.82 & 0.54 \\ UniTabE finetune & **0.83** & 0.66 & 0.81 & **0.86** & **0.78** & **0.53** & **0.75** & **0.99** & **0.99** & **0.96** & **0.87** & **0.58** \\ \hline \hline \end{tabular} \end{table}
Table 2: Performance comparison on 12 Kaggle tasks from different domains. Dataset names are abbreviated as in § 6.3. “UniTabE scratch” means the model is trained from scratch without loading pretrained parameters. Bold results are the best.

\begin{table} \begin{tabular}{l c c c c c c c} \hline \hline **Method/Dataset** & **CG** & **CA** & **DS** & **AD** & **CB** & **BL** & **IO** \\ \hline XGBoost & 0.78 & 0.93 & 0.54 & **0.91** & 0.87 & 0.82 & 0.71 \\ TransTab & 0.73 & 0.86 & 0.52 & 0.90 & 0.80 & 0.71 & 0.73 \\ TransTab-LSTM & 0.70 & 0.85 & 0.56 & 0.90 & 0.72 & 0.83 & 0.73 \\ UniTabE scratch & 0.76 & 0.93 & 0.62 & **0.91** & 0.85 & **0.84** & 0.74 \\ UniTabE + XGBoost & **0.79** & **0.93** & 0.60 & **0.91** & **0.88** & 0.83 & 0.74 \\ UniTabE finetune & **0.79** & **0.94** & **0.66** & **0.91** & **0.88** & **0.84** & **0.76** \\ \hline \hline \end{tabular} \end{table}
Table 3: Evaluation results (AUC) on public tabular datasets. Bold results are the best.

**Adaptation to Incremental Columns.** We want to investigate the scalability of various models when confronted with an increasing number of columns. To simulate this scenario, we remove k columns from the original training set and train models using the modified data. Subsequently, we perform inference using the unaltered test set. The results of this experiment are displayed in Table 6.
Thanks to the flexibility afforded by the TabUnit component of UniTabE, our model adapts to the introduction of new columns with relatively minor performance deterioration.

**Learned Feature + XGBoost.** We are interested in examining the synergistic effects of combining the semantic representation derived from UniTabE with traditional machine learning algorithms, such as XGBoost. We concatenate the original features with the generated representation vector, and use this composite data as input to XGBoost (see the sketch following Table 6). The outcomes are displayed in Table 3. These results indicate that the neural feature acts as a beneficial supplement, leading to a slight performance enhancement in most instances.

## 7 Conclusions

In this research, we investigate the challenge of pretraining large-scale models specifically on tabular data. To address the inherent difficulties in pretraining over tabular data, we introduce UniTabE, a flexible approach capable of modeling arbitrary tables using a single neural network architecture. We have also collected a substantial dataset, comprising 13B examples spanning various domains, for pretraining purposes. To ascertain the effectiveness of our methodology, we carry out extensive experiments across 19 downstream tasks, with the outcomes confirming the superiority of our approach. Additionally, we explore issues related to the practical application of our method, including zero-shot prediction, adaptation to incrementally added columns, and the combination of learned representations with XGBoost.

## 8 Limitations

In this study, our primary objective is to learn the semantic representation of tabular data via a pretraining process. To maximize the knowledge retention within the encoder, we integrate our model with a comparatively weak decoder. In theory, the shallow decoder may constrain performance in downstream applications, as it serves as the high-level reasoning module that adjusts to various tasks. Nevertheless, the experimental results presented earlier illustrate that our model yields substantial performance improvements even with this weak decoder. This outcome, in turn, substantiates the success of our method. It should be noted that our focus is solely on certain data types common to most scenarios, such as textual, numerical, and categorical values, without considering other modalities like images, videos, or audio.

\begin{table} \begin{tabular}{l c c} \hline \hline **Method/Drop k** & **1** & **2** \\ \hline TransTab-LSTM & 7.5 & 11.7 \\ UniTabE scratch & 5.3 & 8.4 \\ UniTabE finetune & **3.5** & **5.8** \\ \hline \hline \end{tabular} \end{table}
Table 6: Percentage (%) of AUC drop when facing incremental columns.
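Returning to the "Learned Feature + XGBoost" setup above, the combination can be sketched as follows. The `encode` method on the pretrained model is hypothetical (standing in for extracting the [CLS] representation of each row); XGBoost is used through its scikit-learn-style interface.

```python
import numpy as np
from xgboost import XGBClassifier

def fit_unitabe_plus_xgboost(unitabe, X_raw: np.ndarray, y: np.ndarray):
    """Sketch of 'UniTabE + XGBoost': concatenate the original features
    with the pretrained encoder's row representations and fit XGBoost on
    the composite input. `unitabe.encode` is a hypothetical helper."""
    z = unitabe.encode(X_raw)                    # (n_samples, hidden_size)
    X_comb = np.concatenate([X_raw, z], axis=1)  # original features + embedding
    clf = XGBClassifier()
    clf.fit(X_comb, y)
    return clf
```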
\begin{table} \begin{tabular}{l c c c c c c|c c c c c c} \hline \hline \multirow{2}{*}{**Method/Dataset**} & \multicolumn{6}{c|}{**Regression (MAE \(\downarrow\))**} & \multicolumn{6}{c}{**Text Generation (BLEU \(\uparrow\))**} \\ \cline{2-13} & **CG** & **CA** & **DS** & **AD** & **CB** & **BL** & **CG** & **CA** & **DS** & **AD** & **CB** & **BL** \\ \hline mean/mode & 0.81 & 0.61 & 0.84 & 0.78 & 0.73 & 0.84 & 48 & 62 & 46 & 62 & 65 & 43 \\ Linear Regression & 0.64 & 0.58 & 0.91 & 0.45 & 0.60 & 0.51 & - & - & - & - & - & - \\ GPT-3 & 0.58 & 0.57 & 0.69 & 0.64 & 0.61 & 0.46 & 48 & 66 & 39 & 73 & 71 & 82 \\ UniTabE scratch & 0.49 & 0.56 & 0.83 & 0.69 & 0.72 & 0.38 & 35 & 63 & 28 & 67 & 18 & 75 \\ UniTabE finetune & **0.40** & **0.51** & **0.61** & **0.43** & **0.51** & **0.22** & **59** & **76** & **51** & **80** & **85** & **92** \\ \hline \hline \end{tabular} \end{table}
Table 4: Performance comparison for filling in missing values. “mean/mode” uses the column average as the prediction for numerical columns, and the most common text as the prediction for textual columns.

\begin{table} \begin{tabular}{l c c c c c c c} \hline \hline **Method/Dataset** & **CG** & **CA** & **DS** & **AD** & **CB** & **BL** & **IO** \\ \hline UniTabE finetune & 0.75 & 0.89 & 0.62 & 0.86 & 0.78 & 0.80 & 0.94 \\ \hline Random Initial & 0.30 & 0.41 & 0.50 & 0.54 & 0.41 & 0.26 & 0.06 \\ Zero-Shot & **0.70** & **0.56** & **0.58** & **0.76** & **0.57** & **0.73** & **0.94** \\ \hline \hline \end{tabular} \end{table}
Table 5: Accuracy of different methods for zero-shot classification.
2306.06253
Decision Stacks: Flexible Reinforcement Learning via Modular Generative Models
Reinforcement learning presents an attractive paradigm to reason about several distinct aspects of sequential decision making, such as specifying complex goals, planning future observations and actions, and critiquing their utilities. However, the combined integration of these capabilities poses competing algorithmic challenges in retaining maximal expressivity while allowing for flexibility in modeling choices for efficient learning and inference. We present Decision Stacks, a generative framework that decomposes goal-conditioned policy agents into 3 generative modules. These modules simulate the temporal evolution of observations, rewards, and actions via independent generative models that can be learned in parallel via teacher forcing. Our framework guarantees both expressivity and flexibility in designing individual modules to account for key factors such as architectural bias, optimization objective and dynamics, transferability across domains, and inference speed. Our empirical results demonstrate the effectiveness of Decision Stacks for offline policy optimization for several MDP and POMDP environments, outperforming existing methods and enabling flexible generative decision making.
Siyan Zhao, Aditya Grover
2023-06-09T20:52:16Z
http://arxiv.org/abs/2306.06253v2
# Decision Stacks: Flexible Reinforcement Learning via Modular Generative Models

###### Abstract

Reinforcement learning presents an attractive paradigm to reason about several distinct aspects of sequential decision making, such as specifying complex goals, planning future observations and actions, and critiquing their utilities. However, the combined integration of these capabilities poses competing algorithmic challenges in retaining maximal expressivity while allowing for flexibility in modeling choices for efficient learning and inference. We present Decision Stacks, a generative framework that decomposes goal-conditioned policy agents into 3 generative modules. These modules simulate the temporal evolution of observations, rewards, and actions via independent generative models that can be learned in parallel via teacher forcing. Our framework guarantees both expressivity and flexibility in designing individual modules to account for key factors such as architectural bias, optimization objective and dynamics, transferability across domains, and inference speed. Our empirical results demonstrate the effectiveness of Decision Stacks for offline policy optimization for several MDP and POMDP environments, outperforming existing methods and enabling flexible generative decision making.1

Footnote 1: The project website and code can be found here: [https://siyan-zhao.github.io/decision-stacks/](https://siyan-zhao.github.io/decision-stacks/)

## 1 Introduction

Modularity is a critical design principle for both software systems and artificial intelligence (AI). It allows for the creation of flexible and maintainable systems by breaking them down into smaller, independent components that can be easily composed and adapted to different contexts. For modern deep learning systems, modules are often defined with respect to their input and output modalities and their task functionalities. For example, Visual ChatGPT (Wu et al., 2023) defines a family of 22+ vision and language foundation models, such as ChatGPT (OpenAI, 2022) (language generation), SAM (Kirillov et al., 2023) (image segmentation), and StableDiffusion (Rombach et al., 2022) (text-to-image generation) for holistic reasoning over text and images. In addition to enabling new compositional applications, modularity offers the promise of interpretability, reusability, and debugging for complex workflows, each of which poses a major challenge for real-world AI deployments.

This paper presents progress towards scalable and flexible reinforcement learning (RL) through the introduction of a new modular probabilistic framework based on deep generative models. Prior work in modular RL focuses on spatiotemporal abstractions that simplify complex goals via hierarchical RL, e.g., (McGovern and Barto, 2001; Andreas et al., 2017; Simpkins and Isbell, 2019; Ahn et al., 2022; Kulkarni et al., 2016). Distinct but complementary to these prior lines of work, our motivating notion of modularity is based on enforcing token-level hierarchies in generative models of trajectories. In the context of RL, trajectories typically consist of a multitude of different tokens of information: goals, observations, rewards, and actions. As shown in many recent works (Chen et al., 2021; Janner et al., 2021; Janner et al., 2022; Ajay et al., 2022; Zheng et al., 2022; Reed et al., 2022), we can effectively reduce RL to probabilistic inference (Levine, 2018) via learning deep generative models over token sequences.
However, these frameworks lack any modular hierarchies over the different tokens, leading to ad hoc choices of generative architectures and objectives, as well as conditional independence assumptions that can be suboptimal for modeling long trajectory sequences. We introduce Decision Stacks, a family of generative algorithms for goal-conditioned RL featuring a novel modular design. In Decision Stacks, we parameterize a distinct generative model-based module for future observation prediction, reward estimation, and action generation, and chain the outputs of each module autoregressively. See Figure 1 for an illustration. While our factorization breaks the canonical time-induced causal ordering of tokens, we emphasize that the relative differences between token types are significant enough to necessitate token-level modularity for learning effective policies and planners. Besides semantic differences, the different token types also show structural differences with respect to dimensionalities, domain types (discrete or continuous), modalities (e.g., visual observations, numeric rewards), and information density (e.g., rewards can be sparse, state sequences show relatively high continuity). Instead of modeling the token sequence temporally, parameterizing a distinct module for each token type can better respect these structural differences. In practice, we can train each module in Decision Stacks independently using teacher forcing (Williams and Zipser, 1989).

Decision Stacks shares similarities with many recent works (Janner et al., 2021; Ajay et al., 2022; Janner et al., 2022) that aim to reduce planning to sampling from a generative model. However, our modular design offers additional flexibility and expressivity. Each generative module itself is not restricted to being an autoregressive model, and we experiment with modules based on transformers, diffusion models, and novel hybrids. Each generative modeling family makes tradeoffs in architecture and sampling efficiency, and can show varied efficacy for different data modalities. A modular design that easily allows for the use of arbitrary generative models, along with an autoregressive chaining across the modules, permits both flexibility and expressivity.

Empirically, we evaluate Decision Stacks on a range of domains in goal-conditioned planning and offline RL benchmarks for both MDPs and POMDPs. We find that the joint effect of modular expressivity and flexible parameterization in our models provides significant improvements over existing offline RL methods. This holds especially in partially-observable settings, where Decision Stacks achieves a \(15.7\%\) performance improvement over the closest baseline, averaged over 9 offline RL setups. We also demonstrate the flexibility of our framework through extensive ablation studies over the choice of generative architectures and inputs for each module.

Figure 1: Illustration of the Decision Stacks framework for learning reinforcement learning agents using probabilistic inference. In contrast to a time-induced ordering, we propose a modular design that segregates the modeling of observation, reward, and action sequences. Each module can be flexibly parameterized via any generative model, and the modules are chained via an autoregressive dependency graph to provide high overall expressivity.
## 2 Preliminaries

### Goal-conditioned POMDPs

We operate in the formalism of goal-conditioned Partially Observable Markov Decision Processes (POMDPs) defined by the tuple \(\mathcal{M}:=(\mathcal{O},\mathcal{S},\mathcal{A},\mathcal{G},\mathcal{P},\mathcal{R},\mathcal{E},\gamma,p_{0}(s),T)\). Respectively, \(\mathcal{O}\) and \(\mathcal{S}\) denote the observation space and the underlying state space, which are fully observable in the case of MDPs. \(\mathcal{A}\), the action space, is consistent with that of MDPs. In the goal-conditioned setting, \(\mathcal{G}\) specifies the task goal distribution, which could be, e.g., a language instruction or a visual destination state for multi-task policies, or a desired cumulative return for single-task policies. The transition probability function, \(\mathcal{P}:\mathcal{S}\times\mathcal{A}\times\mathcal{S}\rightarrow[0,1]\), describes the transition dynamics. Meanwhile, \(\mathcal{R}:\mathcal{G}\times\mathcal{S}\times\mathcal{A}\mapsto\mathbb{R}\) defines the reward that the decision-maker receives after performing action \(a\) in state \(s\). The observation emission model \(\mathcal{E}=P(o|s)\) determines the probability of observing \(o\) in state \(s\). Finally, \(\gamma\), \(p_{0}\left(s_{0}\right)\), and \(T\) denote the discount factor (Puterman, 2014), the initial latent state distribution, and the horizon of an episode. In a POMDP, the observations generated from the underlying state are intrinsically non-Markovian. The goal-conditioned RL objective is to find the optimal policy \(\pi^{*}\) that maximizes the expected cumulative discounted reward over the episode horizon: \(\eta_{\mathcal{M}}:=\mathbb{E}_{G\sim\mathcal{G},a_{t}\sim\pi(\cdot|s_{t},G),s_{t+1}\sim\mathcal{P}(\cdot|s_{t},a_{t})}\left[\sum_{t=0}^{T}\gamma^{t}r\left(s_{t},a_{t},G\right)\right].\)

### Offline reinforcement learning

Offline reinforcement learning (RL) is a paradigm for policy optimization where the agent is only given access to a fixed dataset of trajectories and cannot interact with the environment to gather additional samples. Offline RL can be useful in domains where collecting data online is challenging or infeasible, such as healthcare (Murphy et al., 2001) and autonomous driving. A major obstacle in offline RL is dealing with distributional shift. If we naively use the Bellman backup for learning the Q-function of a given policy, the update, which relies on actions sampled from the policy \(\pi\), can learn erroneously high values for out-of-distribution actions, leading to instability in the bootstrapping process and causing value overestimation (Kumar et al., 2020).

### Generative models: Autoregressive transformers and Diffusion models

In this work, we are interested in learning the distribution \(p_{\text{data}}(\mathbf{x}|\mathbf{c})\) using a dataset \(D\) consisting of trajectory samples \(\mathbf{x}\) and conditioning \(\mathbf{c}\). We consider two conditional generative models for parameterizing our agent policies:

**Transformer** is a powerful neural net architecture for modeling sequences (Vaswani et al., 2017). It consists of multiple identical blocks of multi-head self-attention modules and position-wise fully-connected networks. The vanilla transformer can be modified with a causal self-attention mask to parameterize an autoregressive generative model as in GPT (Radford et al., 2018).
Autoregressive generative models, such as transformers, factorize the joint distribution \(p(x_{1},\dots,x_{n})\) as a product of conditionals: \(p(\mathbf{x})=\prod_{i=1}^{n}p(x_{i}|\mathbf{x}_{<i})\), so that the probability of each variable \(x_{i}\) depends on the previous variables \(x_{1},\dots,x_{i-1}\). One advantage of this factorization is that each conditional probability can be trained independently in parallel via teacher forcing (Williams and Zipser, 1989). In autoregressive generation, sampling is done sequentially, where each variable \(x_{i}\) is sampled based on its preceding variables.

**Diffusion Models** (Sohl-Dickstein et al., 2015; Ho et al., 2020) are latent variable models that consist of a predefined forward noising process \(q(\mathbf{x}_{k+1}|\mathbf{x}_{k}):=\mathcal{N}(\mathbf{x}_{k+1};\sqrt{\alpha_{k}}\mathbf{x}_{k},(1-\alpha_{k})I)\) that gradually corrupts the data distribution \(q(\mathbf{x}_{0})\) into \(\mathcal{N}(0,I)\) in \(K\) steps, and a learnable reverse denoising process \(p_{\theta}(\mathbf{x}_{k-1}|\mathbf{x}_{k}):=\mathcal{N}(\mathbf{x}_{k-1}|\mu_{\theta}(\mathbf{x}_{k},k),\Sigma_{k})\). For sampling, we first generate a latent sample from the Gaussian prior \(\mathcal{N}(0,I)\) and gradually denoise it using the learned model \(p_{\theta}(\mathbf{x}_{k-1}|\mathbf{x}_{k})\) for \(K\) steps to obtain the data sample \(\mathbf{x}_{0}\). Diffusion models can be extended for conditional generation using _classifier-free guidance_ (Ho and Salimans, 2022), where a conditional \(\epsilon_{\theta}\left(\mathbf{x}_{k},\mathbf{c},k\right)\) and an unconditional \(\epsilon_{\theta}\left(\mathbf{x}_{k},k\right)\) noise model are trained. Conditional data is sampled with the perturbed noise \(\epsilon_{\theta}\left(\mathbf{x}_{k},k\right)+\omega\left(\epsilon_{\theta}\left(\mathbf{x}_{k},\mathbf{c},k\right)-\epsilon_{\theta}\left(\mathbf{x}_{k},k\right)\right)\), where \(\omega\) is the guidance strength and \(\mathbf{c}\) is the conditioning information.
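For concreteness, the guidance rule can be written in a few lines of Python; `eps_model(x, k, cond)` is a stand-in of our own design for a trained noise network, with `cond=None` selecting the unconditional branch.

```python
def guided_noise(eps_model, x_k, k, c, w):
    """Classifier-free guidance: eps_uncond + w * (eps_cond - eps_uncond).

    `eps_model` is a hypothetical callable for the trained noise network;
    `w` is the guidance strength and `c` the conditioning information.
    """
    eps_uncond = eps_model(x_k, k, None)
    eps_cond = eps_model(x_k, k, c)
    return eps_uncond + w * (eps_cond - eps_uncond)
```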
## 3 Flexible and Modular RL via Decision Stacks

In Figure 2, we consider a directed graphical model for the data generation process in Partially Observable Markov Decision Processes (POMDPs). The environment encodes the underlying state transitions \(P(s_{t+1}|s_{t},a_{t})\), the goal-dependent reward function \(P(r_{t}|s_{t},a_{t},G)\), and the observation emission probability \(P(o_{t}|s_{t})\). Unlike the learning agent, which only has access to observations, the behavioral policy used for generating the offline trajectory dataset might also have access to the hidden state information. Such a situation is common in real-world applications, e.g., a human demonstrator may have access to more information about their internal state than what is recorded in video demonstration datasets available for offline learning. Finally, note that a Markov Decision Process (MDP) can be viewed as a special case of a POMDP, where the observation at each timestep, \(o_{t}\), matches the underlying state, \(s_{t}\). To avoid ambiguity, we overload the use of \(o_{t}\) to denote both the state and observation in an MDP at time \(t\). In the context of goal-conditioned decision-making, a finite-horizon trajectory in the offline dataset \(\mathcal{D}\) is composed of a goal \(G\) and a sequence of observation, action, and reward tokens:

\[\tau=(G,o_{0},a_{0},r_{0},\dots,o_{t},a_{t},r_{t},\dots,o_{T},a_{T},r_{T})\,. \tag{1}\]

Our primary objective lies in learning a goal-conditioned distribution \(P_{\text{data}}(a_{0:T},o_{1:T},r_{0:T}|o_{0},G)\) conditioned on an arbitrary goal \(G\in\mathcal{G}\) and an initial observation \(o_{0}\in\mathcal{O}\). Leveraging the chain rule of probability, we can factorize this joint distribution into a product of conditional probabilities. For example, Janner et al. (2021) use a time-induced autoregressive factorization:

\[P_{\text{data}}\left(a_{0:T},o_{1:T},r_{0:T}\mid o_{0},G\right)\approx\prod_{t=1}^{T}P_{\theta}(o_{t}|\tau_{<t})\prod_{t=0}^{T}P_{\theta}(r_{t}|o_{t},\tau_{<t})\prod_{t=0}^{T}P_{\theta}(a_{t}|o_{t},r_{t},\tau_{<t}) \tag{2}\]

where \(\tau_{<t}\) denotes all the tokens in the trajectory before time \(t\). Each conditional factor is parameterized via an autoregressive transformer with shared parameters \(\theta\). If the parameterization is sufficiently expressive, any choice of ordering for the variables suffices. However, in practice, we are limited by the size of our offline dataset and the choice of factorization can play a critical role. In Decision Stacks, we propose to use a modular factorization given as:

\[P_{\text{data}}\left(a_{0:T},o_{1:T},r_{0:T}\mid o_{0},G\right)\approx\underbrace{P_{\theta_{1}}\left(o_{1:T}\mid o_{0},G\right)}_{\text{observation module}}\cdot\underbrace{P_{\theta_{2}}\left(r_{0:T}\mid o_{0:T},G\right)}_{\text{reward module}}\cdot\underbrace{P_{\theta_{3}}\left(a_{0:T}\mid o_{0:T},r_{0:T},G\right)}_{\text{action module}}. \tag{3}\]

Each of the 3 modules (observations, rewards, or actions) focuses on predicting a distinct component of the POMDP and has its own set of parameters (\(\theta_{1},\theta_{2},\theta_{3}\)). Our motivation stems from the fact that in real-world domains, each component is sufficiently distinct from the others in its semantics and representation. Such variances span a multitude of factors including dimensionalities, domain types (discrete or continuous), modalities (e.g., visual observations, numeric rewards), and information density (e.g., rewards can be sparse, state sequences show relatively high continuity).

Modular Expressivity. In Eq. 3, each module is chained autoregressively with the subsequent modules. This is evident as the output variables of one module are part of the input variables for all the subsequent modules. Under idealized conditions where we can match each module to the data conditional, this autoregressive structure guarantees maximal expressivity. Further, our explicit decision to avoid any parameter sharing across modules also permits trivial hardware parallelization and transfer to new environments with shared structure.

Figure 2: Graphical model for the data generation process in a POMDP. Here, we show the case where the behavioral policy can potentially act based on hidden state information. The dashed circles imply that this state information is not stored in the offline dataset. \(G\) represents the task conditioning, e.g., a target return (for single-task agents) or a navigation goal (for multi-task agents).

Flexible Generative Parameterization. Since each module predicts a sequence of objects, we can use any deep generative model for expressive parameterization of each module. Our experiments primarily focus on autoregressive transformers and diffusion models.
We also consider hybrid combinations, as they are easy to execute within our framework and can avoid scenarios where individual model families suffer, e.g., diffusion models lag behind transformers for discrete data, whereas transformers are generally poor at modeling continuous signals such as image observations. In real-world environments, many of these challenges can occur simultaneously, such as agents executing discrete actions given continuous observations. Finally, each module is conditioned on a goal. For training a multi-task agent, the goal can be specified flexibly as spatial coordinates, a visual image, a language instruction, etc. For single-task agents, we specify the goal as the trajectory return during training and a desired expert-valued return during testing, following prior works in return-conditioned offline RL (Chen et al., 2021; Emmons et al., 2021).

Learning and Inference. Given an offline dataset, each module can be trained in parallel using teacher forcing (Williams and Zipser, 1989). At test time, our framework naturally induces a planner, as in order to predict an action at time \(t\), we also need to predict the future observations and rewards. We can execute either an open-loop or a closed-loop plan. Open-loop plans are computationally efficient as they predict all future observations and rewards at once, and execute the entire sequence of actions. In contrast, a closed-loop plan is likely to be more accurate as it updates the inputs to the modules based on the environment outputs at each time step. Using a closed-loop plan, we can sample the action at time \(t\) as follows:

\[\hat{o}_{t+1:T} \sim P_{\theta_{1}}\left(o_{t+1:T}\mid o_{0:t},G\right) \tag{4}\]

\[\hat{r}_{t+1:T} \sim P_{\theta_{2}}\left(r_{t+1:T}\mid r_{0:t},o_{0:t},\hat{o}_{t+1:T},G\right) \tag{5}\]

\[\hat{a}_{t} \sim P_{\theta_{3}}\left(a_{t}\mid a_{0:t-1},o_{0:t},\hat{o}_{t+1:T},r_{0:t},\hat{r}_{t+1:T},G\right) \tag{6}\]

The hat symbol (\(\hat{\cdot}\)) indicates predicted observations, rewards, and actions, while its absence denotes observations, rewards, and actions recorded from the environment and the agent in previous timesteps. For closed-loop planning, Eqs. 4, 5, 6 require us to condition the joint observation, reward and action distributions on the past trajectory tokens. For a module that is parameterized autoregressively, this is trivial as we can simply choose a time-induced ordering and multiply the conditionals for the current and future timesteps. For example, if the observation module is an autoregressive transformer, then we can obtain the sampling distribution in Eq. 4 as: \(P_{\theta_{1}}\left(o_{t+1:T}\mid o_{0:t},G\right)=\prod_{i=t+1}^{T}P_{\theta_{1}}\left(o_{i}\mid o_{<i},G\right).\) For a diffusion model, this task is equivalent to inpainting and can be done by fixing the environment observations until time \(t\) at each step of the denoising process (Janner et al., 2022).
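A minimal sketch of one closed-loop planning step, assuming the three trained modules expose `.sample` interfaces of our own design and that histories are lists of tokens observed so far:

```python
def closed_loop_step(obs_model, reward_model, action_model,
                     o_hist, r_hist, a_hist, goal):
    """One closed-loop planning step following Eqs. (4)-(6)."""
    # Eq. (4): sample future observations conditioned on the observed prefix.
    o_future = obs_model.sample(past=o_hist, goal=goal)
    # Eq. (5): sample future rewards given observed and predicted observations.
    r_future = reward_model.sample(past=r_hist, obs=o_hist + o_future, goal=goal)
    # Eq. (6): sample the next action conditioned on everything above.
    a_t = action_model.sample(past=a_hist, obs=o_hist + o_future,
                              rew=r_hist + r_future, goal=goal)
    return a_t
```

At each environment step, the returned action is executed, the newly observed \(o_{t+1}\) and \(r_{t}\) are appended to the histories, and the procedure repeats.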
Distinction with Key Prior Works. We include a more detailed discussion of broad prior works in §5, but discuss and contrast some key baselines here. While the use of generative models for goal-conditioned offline RL is not new, there are key differences between Decision Stacks and recent prior works. First, we choose a planning approach, unlike model-free works such as Decision Transformer (Chen et al., 2021) and diffusion-based extensions (Wang et al., 2022). Second, there exist model-based approaches, but they make different design choices: Trajectory Transformer (Janner et al., 2021) uses a time-induced causal factorization parameterized by a single autoregressive transformer, Diffuser (Janner et al., 2022) uses diffusion models over stacked state and action pairs, and Decision Diffuser (Ajay et al., 2022) uses diffusion models for future state prediction with an MLP-based inverse dynamics model to extract actions. Unlike these works, we propose a modular structure that is maximally expressive, as it additionally models reward information and does not make any conditional independence assumptions for the state, reward, and action modules. As our experiments demonstrate, the modular expressivity and architectural flexibility in Decision Stacks are especially critical for goal-conditioned planning and dealing with partial observability.

## 4 Experiments

Our experiments aim to answer the following questions:

- §4.1 How does Decision Stacks perform for long-horizon multi-task planning problems?
- §4.2 How does Decision Stacks compare with other offline RL methods in MDP environments?
- §4.3 How does Decision Stacks compare with other offline RL methods in POMDP environments?
- §4.4 How does the architectural flexibility for each module affect downstream performance? How important is the role of reward modeling for Decision Stacks?

For §4.1, §4.2, and §4.3, we experiment with D4RL environments and parameterize Decision Stacks with a diffusion-based observation model, an autoregressive transformer-based reward model, and an autoregressive transformer-based action model. Finally, in §4.4, we ablate the full spectrum of architecture design choices for each module.

### Long-Horizon Goal-Conditioned Environments

We first test the planning capabilities of Decision Stacks on the Maze2D task from the D4RL (Fu et al., 2020) benchmark. This is a challenging environment requiring an agent to generate a plan from a start location to a goal location. The demonstrations contain a sparse reward signal of +1 only when the agent reaches close to the goal. Following Janner et al. (2022), we consider 2 settings. In the Single Goal setting, the goal coordinates are fixed, and in the Multi Goal setting, the goals are randomized at test time. We compare against classic trajectory optimization techniques that have knowledge of the environment dynamics (MPPI (Williams et al., 2015)), extensions of model-free RL baselines (CQL (Kumar et al., 2020) and IQL (Kostrikov et al., 2021)), and the two most closely related works in generative planning based on diffusion models: Diffuser (Janner et al., 2022) and Decision Diffuser (DD) (Ajay et al., 2022). We show our results in Table 1 for different goal types and maze grids. While Janner et al. (2022) previously demonstrated remarkable ability in generating long-horizon plans using Diffuser, their trajectory plans were executed by a handcoded controller. However, we experimentally found that Diffuser's and DD's own generated actions fail to perfectly align with their generated plans, as shown in the example rollouts in Figure 3. We hypothesize this could stem from the lack of modularity in Diffuser affecting the generation fidelity, or the lack of expressivity in using an MLP-based inverse dynamics model in DD, which limits the context length required for long-horizon planning. In contrast, we find that DS generates robust trajectory plans and matching action sequences with significant improvements over baselines.
\begin{table} \begin{tabular}{c c c c c c c c|c c} \hline \hline **Task** & **Environment** & **MPPI** & **CQL** & **IQL** & **Diffuser** & **DD** & **DS** & **Diffuser w/ controller** & **DS/DD w/ controller** \\ \hline \multirow{4}{*}{Single Goal} & umaze & 33.2 & 5.7 & 47.4 & 86.9 \(\pm\)30.4 & **113.8 \(\pm\)13.1** & **111.3 \(\pm\)12.2** & 113.9 \(\pm\)1.3 & 119.5 \(\pm\)6.6 \\ & medium & 10.2 & 5.0 & 34.9 & 105.5 \(\pm\)11.4 & 103.7 \(\pm\)9.22 & 111.7 \(\pm\)2.4 & 121.5 \(\pm\)5.7 & 112.9 \(\pm\)11.8 \\ & large & 5.1 & 12.5 & 58.6 & 45.4 \(\pm\)14.5 & **111.8 \(\pm\)5.44** & **117.6 \(\pm\)13.4** & 123.0 \(\pm\)6.4 & 132.8 \(\pm\)21.0 \\ & Average & 16.2 & - & 47.0 & 80.2 & 100.9 & **113.5** & 119.5 & 121.7 \\ \hline \multirow{4}{*}{Multi Goal} & umaze & 41.2 & - & 24.5 & 114.4 \(\pm\)14.3 & 105.0 \(\pm\)14.5 & 121.3 \(\pm\)12.2 & 120.9 \(\pm\)1.5 & 136.1 \(\pm\)5.2 \\ & medium & 15.4 & - & 12.1 & 54.6 \(\pm\)14.5 & **122.4 \(\pm\)3.7** & 127.2 \(\pm\)5.4 & 124.6 \(\pm\)13.3 \\ & large & 8.0 & - & 13.9 & 41.0 \(\pm\)20.1 & 116.0 \(\pm\)33.1 & 126.7 \(\pm\)21.8 & 132.1 \(\pm\)5.8 & 134.8 \(\pm\)12.3 \\ & Average & 21.5 & - & 16.9 & 70.0 & 111.6 & 123.4 & 129.4 & 131.8 \\ \hline \hline \end{tabular} \end{table}
Table 1: Performance on Maze2D tasks. DS significantly outperforms other baselines without the need for a handcoded controller. Note that DD and DS share the same diffusion-based observation model architecture, and hence with a handcoded controller their performance is the same. We average the results over 15 random seeds and emphasize in bold scores within 5 percent of the maximum per task (\(\geq 0.95\cdot\max\)).

### Offline Reinforcement Learning Performance in MDPs

Next, we examine the performance of Decision Stacks in offline RL tasks across various high-dimensional locomotion environments from the D4RL offline benchmark suite (Fu et al., 2020) in Table 2. We compare Decision Stacks (DS) with other offline RL algorithms, including imitation learning via Behavior Cloning (BC), value-based approaches like IQL (Kostrikov et al., 2021) and CQL (Kumar et al., 2020), the model-based algorithm MOReL (Kidambi et al., 2020), transformer-based generative models such as Decision Transformer (DT) (Chen et al., 2021) and Trajectory Transformer (TT) (Janner et al., 2021), and diffusion-based generative models Diffuser (Janner et al., 2022) and Decision Diffuser (DD) (Ajay et al., 2022). In our evaluation, we also included our reproduced scores for DD. DD uses the same architecture for observation prediction as Decision Stacks and is hence the closest baseline. However, we found its performance to be sensitive to return conditioning, and in spite of an extensive search for hyperparameters and communication with the authors, our reproduced
Decision Stacks outperforms or is competitive with the other baselines on 6/9 environments and is among the highest in terms of aggregate scores. These results suggest that even in environments where we can make appropriate conditional independence assumptions using the MDP framework, the expressivity in the various modules of Decision Stacks is helpful for test-time generalization. \begin{table} \begin{tabular}{l l c c c c c c c c c c} \hline \hline \multicolumn{1}{c}{\multirow{2}{*}{**Dataset**}} & \multicolumn{1}{c}{**Environment**} & \multicolumn{1}{c}{**BC**} & \multicolumn{1}{c}{**IQL**} & \multicolumn{1}{c}{**COL**} & \multicolumn{1}{c}{**DT**} & \multicolumn{1}{c}{**TT**} & \multicolumn{1}{c}{**MOREL**} & \multicolumn{1}{c}{**DD**} & \multicolumn{1}{c}{**(**reproduced**)} & \multicolumn{1}{c}{**Diffuser**} & \multicolumn{1}{c}{**DS (ours)**} \\ \hline Medium-Expert & HalfCheath & \(55.2\) & \(86.7\) & \(\mathbf{91.6}\) & \(86.8\) & \(\mathbf{95.0}\) & \(53.3\) & \(90.6\) & \(\mathbf{95.9}\pm 2.5\) & \(79.8\) & \(\mathbf{95.7}\pm 0.3\) \\ Medium-Expert & Hopper & \(52.5\) & \(91.5\) & \(105.4\) & \(\mathbf{107.6}\) & \(\mathbf{110.0}\) & \(\mathbf{108.7}\) & \(\mathbf{111.8}\) & \(\mathbf{111.6}\pm 2.8\) & \(\mathbf{107.2}\) & \(\mathbf{107.8}\pm 2.2\) \\ Medium-Expert & Walker2d & \(\mathbf{107.5}\) & \(\mathbf{109.6}\) & \(\mathbf{108.8}\) & \(\mathbf{108.1}\) & \(101.9\) & \(95.6\) & \(\mathbf{108.8}\) & \(\mathbf{105.2}\pm 2.3\) & \(\mathbf{108.4}\) & \(\mathbf{108.0}\pm 0.1\) \\ \hline Medium & HalfCheath & \(42.6\) & \(\mathbf{47.4}\) & \(44.0\) & \(42.6\) & \(\mathbf{46.9}\) & \(42.1\) & \(\mathbf{49.1}\) & \(46.4\pm 5.1\) & \(44.2\) & \(\mathbf{47.8}\pm 0.4\) \\ Medium & Hopper & \(52.9\) & \(66.3\) & \(58.5\) & \(67.6\) & \(61.1\) & \(\mathbf{95.4}\) & \(79.3\) & \(81.2\pm 7.2\) & \(\mathbf{56.5}\) & \(76.6\pm 2.4\) \\ Medium & Walker2d & \(75.3\) & \(78.3\) & \(72.5\) & \(74.0\) & \(79.0\) & \(77.8\) & \(\mathbf{82.5}\) & \(\mathbf{79.9}\pm 5.3\) & \(\mathbf{79.7}\) & \(\mathbf{83.6}\pm 0.3\) \\ \hline Medium-Replay & HalfCheath & \(36.6\) & \(\mathbf{44.2}\) & \(\mathbf{45.5}\) & \(36.6\) & \(41.9\) & \(40.2\) & \(39.3\) & \(39.4\pm 1.5\) & \(42.2\) & \(41.1\pm 0.1\) \\ Medium-Replay & Hopper & \(18.1\) & \(94.7\) & \(\mathbf{95.0}\) & \(82.7\) & \(91.5\) & \(93.6\) & \(\mathbf{100}\) & \(\mathbf{95.3}\pm 3.7\) & \(96.8\) & \(89.5\pm 4.2\) \\ Medium-Replay & Walker2d & \(26.0\) & \(73.9\) & \(77.2\) & \(66.6\) & \(\mathbf{82.6}\) & \(49.8\) & \(75\) & \(72.3\pm 3.1\) & \(61.2\) & \(\mathbf{80.7}\pm 5.5\) \\ \hline \multicolumn{1}{c}{**Average**} & \multicolumn{1}{c}{\(51.9\)} & \(77.0\) & \(77.6\) & \(74.7\) & \(\mathbf{78.9}\) & \(72.9\) & \(\mathbf{82.2}\) & \(\mathbf{80.3}\) & \(75.3\) & \(\mathbf{81.1}\) \\ \hline \hline \end{tabular} \end{table} Table 2: **Offline Reinforcement Learning Performance in MDP**. Our results are averaged over 15 random seeds. Following Kostrikov et al. (2021), we bold all scores within 5 percent of the maximum per task (\(\geq 0.95\cdot\max\)). Figure 3: Example rollouts on the Maze2D-medium-v1 environment. The goal is located at the bottom right corner of the maze. The trajectory waypoints are color-coded, transitioning from blue to red as time advances. The bottom two rows demonstrates that Diffuser, DD, and DS are all capable of generating good plans that can be executed well with a handcoded controller. However, the respective action models result in differing executions. 
Compared to DD and Diffuser, DS generates smoother trajectories that are more closely aligned with the future waypoints planned by the observation model.

### Offline Reinforcement Learning Performance in POMDPs

Next, we consider the POMDP setting where the logged observations are incomplete representations of the underlying states. To generate the POMDP datasets, we exclude two dimensions of the state for each environment from the D4RL locomotion datasets. We report our results in Table 3 and compare against other generative baselines. Decision Stacks (DS) consistently achieves competitive or superior results compared to the other algorithms, including BC, DT, TT, and DD. Notably, DS outperforms other methods in most environments and attains the highest average score of \(74.3\), which reflects a \(15.7\%\) performance improvement over the next best-performing approach, Diffuser. This highlights the effectiveness of our approach in handling POMDP tasks by more expressively modeling the dependencies among observations, actions, and rewards.

### Architectural Flexibility

Decision Stacks distinctly separates the prediction of observations, rewards, and actions, employing three distinct models that can be trained independently using teacher forcing. In this section, we explore the additional flexibility offered by different architecture choices for each module. For observation, reward, and action prediction, we consider diffusion models and Transformer-based autoregressive models.

\begin{table} \begin{tabular}{c c c c|c c c} \hline \hline \multirow{2}{*}{**Reward models**} & \multicolumn{6}{c}{**Action models**} \\ \cline{2-7} & Transformer & Diffusion & MLP & Transformer & Diffusion & MLP \\ \hline Transformer & 57.7 \(\pm\)3.9 & **58.2** \(\pm\)4.3 & 45.6 \(\pm\)4.1 & 53.0 \(\pm\)3.7 & 54.3 \(\pm\)3.3 & 36.7 \(\pm\)4.2 \\ Diffusion & 51.7 \(\pm\)1.7 & 56.9 \(\pm\)2.2 & 36.3 \(\pm\)3.1 & **58.0** \(\pm\)4.4 & 46.9 \(\pm\)3.7 & 34.9 \(\pm\)3.5 \\ MLP & 56.0 \(\pm\)3.5 & 52.6 \(\pm\)2.5 & 33.3 \(\pm\)3.0 & 55.0 \(\pm\)3.9 & 52.1 \(\pm\)2.7 & 42.5 \(\pm\)4.1 \\ \cline{2-7} & \multicolumn{3}{c|}{Diffusion-based **observation model**} & \multicolumn{3}{c}{Transformer-based **observation model**} \\ \hline \hline \end{tabular} \end{table}
Table 4: Performance on Hopper-medium-v2 POMDP using various reward and action models, with **diffusion-based** or **transformer-based** observation models. For each choice of observation model, the algorithm with the highest performance is highlighted.
\begin{table}
\begin{tabular}{l l c c c c c c} \hline \hline
**Dataset** & **Environment** & **BC** & **DT** & **TT** & **DD** & **Diffuser** & **DS (ours)** \\ \hline
Medium-Expert & HalfCheetah & \(42.1\) & \(80.8\) & **94.9** & \(19.07\) & \(82.2\) & \(\textbf{92.7}\pm 0.8\) \\
Medium-Expert & Hopper & \(51.1\) & \(105.2\) & \(61.6\) & \(32.7\) & \(70.7\) & \(\textbf{110.9}\pm 0.4\) \\
Medium-Expert & Walker2d & \(51.3\) & **106.0** & \(51.7\) & \(74.8\) & \(82.4\) & \(94.1\pm 8.5\) \\ \hline
Medium & HalfCheetah & \(43.3\) & \(42.7\) & **46.7** & \(40.3\) & **45.4** & \(\textbf{47.1}\pm 0.3\) \\
Medium & Hopper & \(36.4\) & **63.1** & \(55.7\) & \(38.1\) & **62.2** & \(57.7\pm 3.9\) \\
Medium & Walker2d & \(39.4\) & \(64.2\) & \(28.5\) & \(53.2\) & \(55.7\) & \(\textbf{74.3}\pm 4.2\) \\ \hline
Medium-Replay & HalfCheetah & \(2.1\) & \(35.5\) & **43.8** & \(39.8\) & \(39.3\) & \(40.3\pm 1.2\) \\
Medium-Replay & Hopper & \(24.3\) & \(78.3\) & **84.4** & \(22.1\) & \(80.9\) & \(\textbf{86.9}\pm 2.6\) \\
Medium-Replay & Walker2d & \(23.8\) & \(45.3\) & \(10.2\) & \(58.4\) & \(58.7\) & \(\textbf{66.8}\pm 1.8\) \\ \hline
\multicolumn{2}{l}{**Average**} & \(34.9\) & \(47.9\) & \(53.0\) & \(42.0\) & \(64.2\) & **74.3** \\ \hline \hline
\end{tabular}
\end{table} Table 3: **Offline Reinforcement Learning Performance in POMDP**. Our results are averaged over 15 random seeds. Following Kostrikov et al. (2021), we bold all scores within 5 percent of the maximum per task (\(\geq 0.95\cdot\max\)).

\begin{table}
\begin{tabular}{c|c c c|c c c} \hline \hline
 & \multicolumn{6}{c}{Action models} \\ \cline{2-7}
**Observation model** & \multicolumn{3}{c|}{**without** reward modelling} & \multicolumn{3}{c}{**with** reward modelling} \\ \cline{2-7}
 & Transformer & Diffusion & MLP & Transformer & Diffusion & MLP \\ \hline
Diffusion-based & 43.6 \(\pm\)1.3 & 43.7 \(\pm\)3.4 & 38.1 \(\pm\)2.1 & 57.7 \(\pm\)3.9 & 58.2 \(\pm\)4.3 & 45.6 \(\pm\)4.1 \\ \hline
Transformer-based & 45.1 \(\pm\)5.2 & 39.4 \(\pm\)3.2 & 39.6 \(\pm\)3.7 & 58.0 \(\pm\)4.4 & 54.3 \(\pm\)3.3 & 42.5 \(\pm\)4.1 \\ \hline \hline
\end{tabular}
\end{table} Table 5: Ablation results comparing the performance of action models with and without reward information as input, across different architectures, in the dense-reward POMDP task of Hopper-medium-v2. The results suggest a clear advantage when incorporating reward modeling.

autoregressive models. For reward and action models, we additionally consider MLPs that are restricted in their window and only look at the immediate state information to make a decision. The results shown in Table 4 cover the \(2\times 3\times 3\) combinations of policy agents for the Hopper-medium-v2 POMDP environment. Since we adopt a modular structure, we can compose the different modules efficiently; hence, we only needed to train 2 (state) + 3 (reward) + 3 (action) models. In Table 4, we find that pure transformer- or diffusion-based Decision Stacks give reasonable performance (transformers: 53.0, diffusion: 56.9), but these pure combinations can be slightly outperformed by hybrids; e.g., the best-achieving entry (58.2) in Table 4 uses a diffusion-based observation model, a transformer-based reward model, and a diffusion-based action model. MLPs are generally outperformed by generative architectures, especially when used for modeling actions. Furthermore, we compare the best reward-modeling architectures with alternatives that do not consider rewards, as sketched below.
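To make the modular composition concrete, the following is a minimal sketch with toy stand-in modules and hypothetical interfaces (not the actual trained models), showing how 2 + 3 + 3 independently trained modules compose into \(2\times 3\times 3=18\) evaluation agents, and how one composed stack rolls out:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)

# Toy stand-ins for the three independently trained modules; interfaces are
# hypothetical (in practice each slot is a diffusion model, an autoregressive
# transformer, or an MLP).
def diffusion_obs(obs, goal, h=16):
    # proposes a future observation trajectory (noisy line toward the goal)
    return np.linspace(obs, goal, h) + 0.01 * rng.standard_normal((h, obs.size))

def transformer_obs(obs, goal, h=16):
    return np.linspace(obs, goal, h)          # deterministic toy variant

def reward_module(plan, goal):
    # annotates the planned trajectory with (toy) rewards
    return -np.linalg.norm(plan - goal, axis=-1)

def action_module(obs, plan, rewards):
    # decodes the next action from the (plan, rewards) context
    return plan[1] - obs

obs_models = {"diffusion": diffusion_obs, "transformer": transformer_obs}
reward_models = {k: reward_module for k in ("transformer", "diffusion", "mlp")}
action_models = {k: action_module for k in ("transformer", "diffusion", "mlp")}

# 2 + 3 + 3 trained modules compose into 2 x 3 x 3 = 18 evaluation agents
agents = list(product(obs_models, reward_models, action_models))
print(len(agents), "agents, e.g.", agents[0])

# one composed stack rolled out on a toy 2-D reach task (cf. Figure 3)
o_m, r_m, a_m = obs_models["diffusion"], reward_models["transformer"], action_models["diffusion"]
obs, goal = np.zeros(2), np.ones(2)
for _ in range(20):
    plan = o_m(obs, goal)
    obs = obs + a_m(obs, plan, r_m(plan, goal))
print("final distance to goal:", np.linalg.norm(obs - goal))
```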
Omitting rewards in this way is standard practice for Diffuser (Janner et al., 2022) and Decision Diffuser (DD) (Ajay et al., 2022). For example, DD predicts the action at time \(t\) based only on the current and next observations, \(P(a_{t}|o_{t},o_{t+1})\), parameterized via an MLP. As delineated in Table 5, the inclusion of reward models significantly boosts performance in the dense-reward POMDP environment. We include additional analysis and discussion in the Appendix.

## 5 Related works

**Offline Reinforcement Learning** is a paradigm for learning RL policies directly from previously logged interactions of a behavioral policy. The key challenge is that surrogate models trained on an offline dataset do not generalize well outside the dataset. Various strategies have been proposed to mitigate the challenges due to distribution shifts by constraining the learned policy to be conservative and closely aligned with the behavior policy. These include learning a value function that strictly serves as a lower bound for the true value function in CQL (Kumar et al., 2020), techniques focusing on uncertainty estimation (Kumar et al., 2019), and policy regularization methods (Wu et al., 2019; Ghasemipour et al., 2021; Kumar et al., 2019; Fujimoto and Gu, 2021; Fujimoto et al., 2019). Model-based methods like MOReL (Kidambi et al., 2020) and ReBeL (Lee et al., 2021) add pessimism into the dynamics models. In the context of partially observed settings, Rafailov et al. (2021) extend model-based offline RL algorithms by incorporating a latent-state dynamics model for high-dimensional visual observation spaces, effectively representing uncertainty in the latent space, and Zheng et al. (2023) derive algorithms for offline RL in settings where trajectories might have missing actions. Our work takes the RL-as-inference perspective (Levine, 2018) and employs the tools of probabilistic reasoning and neural networks for training RL agents.

**Generative models for offline RL.** Over the past few years, the RL community has seen a growing interest in employing generative models for context-conditioned sequence generation by framing the decision-making problem as a generative sequence prediction problem. Here, we expand on our discussion from §3 with additional context and references. Decision Transformer (Chen et al., 2021) and Trajectory Transformer (Janner et al., 2021) concurrently proposed the use of autoregressive transformer-based models (Radford et al., 2018) for offline RL in model-free and model-based setups, respectively. Online Decision Transformer (Zheng et al., 2022) further finetunes the offline pretrained policies in online environments through a sequence-level exploration strategy. GATO (Reed et al., 2022; Lee et al., 2022) and PEDA (Zhu et al., 2023) scale these models to multi-task and multi-objective settings. MaskDP (Liu et al., 2022) shows that other self-supervised objectives such as masking can also enable efficient offline RL, especially in goal-conditioned settings. These advancements have also been applied to other paradigms in sequential decision making, such as black-box optimization and experimental design (Nguyen and Grover, 2022; Krishnamoorthy et al., 2023a,b). Recent works have shown that changing the generative model from a transformer to a diffusion model with guidance can improve performance in certain environments and also permit planning for model-based extensions (Janner et al., 2021; Ajay et al., 2022; Wang et al., 2022; Chen et al., 2022). Dai et al.
(2023) considers an extension where the state model is a pretrained text2image model and actions are extracted from consecutive image frames. As discussed in §3, these works make specific design choices that do not guarantee the modularity, flexibility, and expressivity ensured by our framework.

## 6 Conclusion

We proposed Decision Stacks, a modular approach for learning goal-conditioned policies using offline datasets. Decision Stacks comprises three modules tasked with the prediction of observations, rewards, and actions, respectively. In doing so, we strive for the twin benefits of expressivity through autoregressive conditioning across the modules and flexibility in generative design within any individual module. We showed its empirical utility across a range of offline RL evaluations for both MDP and POMDP environments, as well as long-horizon planning problems. In all these settings, Decision Stacks matches or significantly outperforms competing approaches while also offering flexibility in the choice of generative architectures and training algorithms.

**Limitations and Future Work.** Our experiments are limited to state-based environments, and extending Decision Stacks to image-based environments is a promising direction for future work, especially in light of the gains we observed for POMDP environments. We are also interested in exploring the benefits of a modular design for pretraining and transfer of modules across similar environments and testing their generalization abilities. Finally, online finetuning of Decision Stacks using techniques similar to Zheng et al. (2022) is also an exciting direction for future work.
2304.10248
Hotelling Deflation on Large Symmetric Spiked Tensors
This paper studies the deflation algorithm when applied to estimate a low-rank symmetric spike contained in a large tensor corrupted by additive Gaussian noise. Specifically, we provide a precise characterization of the large-dimensional performance of deflation in terms of the alignments of the vectors obtained by successive rank-1 approximation and of their estimated weights, assuming non-trivial (fixed) correlations among spike components. Our analysis allows an understanding of the deflation mechanism in the presence of noise and can be exploited for designing more efficient signal estimation methods.
Mohamed El Amine Seddik, José Henrique de Morais Goulart, Maxime Guillaud
2023-04-20T12:16:05Z
http://arxiv.org/abs/2304.10248v1
# Hotelling Deflation on Large Symmetric Spiked Tensors

###### Abstract

This paper studies the deflation algorithm when applied to estimate a low-rank symmetric spike contained in a large tensor corrupted by additive Gaussian noise. Specifically, we provide a precise characterization of the large-dimensional performance of deflation in terms of the alignments of the vectors obtained by successive rank-1 approximation and of their estimated weights, assuming non-trivial (fixed) correlations among spike components. Our analysis allows an understanding of the deflation mechanism in the presence of noise and can be exploited for designing more efficient signal estimation methods.

Footnote †: J. H. de M. Goulart's work was supported by the ANR LabEx CIMI (ANR-11-LABX-0040) within the French Programme "Investissements d'Avenir".

Specifically, we consider the regime where the correlations \(|\langle\mathbf{x}_{i},\mathbf{x}_{j}\rangle|\neq 0\) are fixed and not \(o(1)\). Such an analysis is a first step towards devising more sophisticated algorithms such as orthogonalized deflation, as has been recently done in the asymmetric case [15]. To this end, we build upon a recently developed approach [8, 14] which allows studying random tensor models by deploying tools from random matrix theory. The core idea of this approach is to study partial contractions, which give rise to large random matrices. More concretely, we study the (random) alignments between the vectors obtained by deflation and the components \(\mathbf{x}_{i}\) of our CPD model, in the regime of asymptotically large tensor dimensions. Under the assumption that these alignments (and the estimates of the weights \(\beta_{i}\)) concentrate and some additional technical conditions, we derive a system of equations that are satisfied by the limiting values of these quantities. Once the deflation procedure is applied, one can plug its output into the equations and numerically solve for the other unknown quantities, including the weights \(\beta_{i}\) and the alignments \(\langle\mathbf{x}_{i},\mathbf{u}_{j}\rangle\), where the vectors \(\mathbf{u}_{j}\) are the estimated components obtained by deflation. Our numerical results for finite dimensions show that the obtained values closely match the predictions given by the derived equations.

## 2 Spiked tensor and deflation

We consider the following rank-\(r\) and order-\(d\) symmetric spiked random tensor \[\mathbf{\mathsf{S}}\equiv\sum_{i=1}^{r}\beta_{i}\mathbf{x}_{i}^{\otimes d}+\frac{1}{\sqrt{n}}\mathbf{\mathsf{W}}, \tag{2}\] with \(\mathbf{x}_{i}\) on the unit sphere \(\mathbb{S}^{n-1}\) and \(\mathbf{\mathsf{W}}\) a \(d\)th-order symmetric Gaussian tensor (see [8] for a formal definition). The signal part is modeled by the rank-\(r\) component with weights \(\beta_{i}>0\), which collectively determine the signal-to-noise ratio of the model. We further assume that the rank-one components are non-orthogonal and we denote

Footnote 1: Here we assume for simplicity that \(\beta_{i}>0\) for all \(i\in[r]\), which implies no loss of generality for odd \(d\). The case with arbitrary signs can be treated similarly, at the expense of more cumbersome derivations.

\[\alpha_{ij}\equiv\langle\mathbf{x}_{i},\mathbf{x}_{j}\rangle\neq 0\quad\text{for all $i\neq j$}. \tag{3}\]

In the following, we will study a deflation approach aimed at approximately recovering the low-rank signal tensor, which consists in performing successive rank-one approximations and subtracting the result at each iteration. Specifically, at iteration \(i\in[r]\), we compute the best rank-one approximation of \(\mathbf{\mathsf{S}}_{i}\), denoted \(\hat{\lambda}_{i}\mathbf{u}_{i}^{\otimes d}\), and subtract it from \(\mathbf{\mathsf{S}}_{i}\).
Starting with \(\mathbf{\mathsf{S}}_{0}=\mathbf{\mathsf{S}}\), this yields the sequence of tensors \[\mathbf{\mathsf{S}}_{i}=\mathbf{\mathsf{S}}_{i-1}-\hat{\lambda}_{i-1}\mathbf{u}_{i-1}^{\otimes d}, \tag{4}\] where \(\hat{\lambda}_{0}=0\) by convention, and \[\mathbf{u}_{i}\equiv\operatorname*{arg\,max}_{\|\mathbf{u}\|=1}\mathbf{\mathsf{S}}_{i}\cdot\mathbf{u}^{d}, \tag{5}\] \[\hat{\lambda}_{i}\equiv\mathbf{\mathsf{S}}_{i}\cdot\mathbf{u}_{i}^{d}, \tag{6}\] where \(\mathbf{\mathsf{S}}\cdot\mathbf{u}^{m}\) denotes the \(m\)-fold contraction of the tensor \(\mathbf{\mathsf{S}}\) with the vector \(\mathbf{u}\). It follows that each \(\mathbf{\mathsf{S}}_{i}\) is also a low-rank spiked random tensor given by \[\mathbf{\mathsf{S}}_{i}=\sum_{j=1}^{r}\beta_{j}\mathbf{x}_{j}^{\otimes d}-\sum_{j=1}^{i-1}\hat{\lambda}_{j}\mathbf{u}_{j}^{\otimes d}+\tfrac{1}{\sqrt{n}}\mathbf{\mathsf{W}}. \tag{7}\]

Note that the solution to the best rank-one tensor approximation problem (5) is in general _not_ a component of the CPD of \(\mathbf{\mathsf{S}}\). This is due to the fact that the Eckart-Young theorem is not applicable in the non-orthogonally decomposable setting [7]. Thus, there is admittedly a mismatch between the objective of estimating the components of the CPD and the strategy of computing successive rank-one approximations. Nonetheless, the deflation approach is algorithmically simple and easier to analyze than joint optimization schemes (as it relies on rank-1 approximation), and can also provide acceptable approximate solutions when cross-component correlations are small. In the sequel, we give analytical tools to characterize and improve the accuracy achieved by Hotelling-type tensor deflation.

To understand the performance of this procedure in the large-dimensional regime, our main task consists in estimating the following quantities, which we refer to as _summary statistics_ as introduced by [4], when \(n\to\infty\), as functions of the parameters \(\alpha_{ij}\) and \(\beta_{i}\): \[\hat{\lambda}_{i},\quad\hat{\rho}_{ij}\equiv\langle\mathbf{u}_{i},\mathbf{x}_{j}\rangle,\quad\hat{\eta}_{ij}\equiv\langle\mathbf{u}_{i},\mathbf{u}_{j}\rangle\quad\text{for $i,j\in[r]$}. \tag{8}\] We will see in the sequel how this problem can be addressed through the analysis of certain random matrices, built from contractions of the tensors \(\mathbf{\mathsf{S}}_{i}\).

## 3 Main results

### Associated random matrices

For \(r=1\), the problem (5) is tantamount to the maximum likelihood estimation (MLE) of \(\mathbf{x}_{1}\). In this setting, [8] introduced an approach for studying the performance of MLE by borrowing tools from random matrix theory. This approach is based upon two crucial observations: (i) critical points \(\mathbf{u}\) of (5) are eigenvectors of \(\mathbf{\mathsf{S}}_{i}\) satisfying [13] \[\mathbf{\mathsf{S}}_{i}\cdot\mathbf{u}^{d-1}=\lambda\mathbf{u}, \tag{9}\] with the eigenvalue \(\lambda\) given by \(\mathbf{\mathsf{S}}_{i}\cdot\mathbf{u}^{d}\); (ii) every eigenpair \((\lambda,\mathbf{u})\) of the tensor \(\mathbf{\mathsf{S}}_{i}\) is also an eigenpair of the matrix resulting from the contraction \(\mathbf{\mathsf{S}}_{i}\cdot\mathbf{u}^{d-2}\), since by (9) \[\big(\mathbf{\mathsf{S}}_{i}\cdot\mathbf{u}^{d-2}\big)\,\mathbf{u}=\mathbf{\mathsf{S}}_{i}\cdot\mathbf{u}^{d-1}=\lambda\mathbf{u}. \tag{10}\] Hence the solution \(\mathbf{u}_{i}\) of (5) is an eigenvector (in fact, the dominant eigenvector [8]) of the matrix \(\mathbf{\mathsf{S}}_{i}\cdot\mathbf{u}_{i}^{d-2}\).
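The deflation procedure (4)-(6) is straightforward to simulate numerically. The following minimal sketch, for \(d=3\) and \(r=2\), uses tensor power iteration as a heuristic solver for the rank-one problem (5) (a common choice; random restarts are omitted and the noise normalization of [8] is simplified) and prints the summary statistics of (8):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, betas, alpha = 100, 3, (8.0, 5.0), 0.4

# two unit-norm spike components with <x1, x2> = alpha (cf. (3))
x1 = rng.standard_normal(n); x1 /= np.linalg.norm(x1)
v = rng.standard_normal(n); v -= (v @ x1) * x1; v /= np.linalg.norm(v)
x2 = alpha * x1 + np.sqrt(1 - alpha**2) * v

# symmetric Gaussian noise (one possible normalization convention)
W = rng.standard_normal((n, n, n))
W = sum(W.transpose(p) for p in
        [(0, 1, 2), (1, 2, 0), (2, 0, 1), (0, 2, 1), (1, 0, 2), (2, 1, 0)]) / 6

# spiked tensor S of (2)
S = sum(b * np.einsum('i,j,k->ijk', x, x, x) for b, x in zip(betas, (x1, x2)))
S = S + W / np.sqrt(n)

def best_rank1(T, iters=300):
    """Tensor power iteration: a standard heuristic for problem (5)."""
    u = rng.standard_normal(T.shape[0]); u /= np.linalg.norm(u)
    for _ in range(iters):
        u = np.einsum('ijk,j,k->i', T, u, u)
        u /= np.linalg.norm(u)
    return np.einsum('ijk,i,j,k->', T, u, u, u), u

# two Hotelling deflation steps, printing the summary statistics of (8)
for step in (1, 2):
    lam, u = best_rank1(S)
    print(f"step {step}: lambda_hat={lam:.2f}, "
          f"rho_hat=({u @ x1:+.3f}, {u @ x2:+.3f})")
    S = S - lam * np.einsum('i,j,k->ijk', u, u, u)
```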
The matrix eigenproblem (10) does not provide a constructive way to solve for \(\mathbf{u}\) since \(\mathbf{\mathsf{S}}_{i}\cdot\mathbf{u}^{d-2}\) itself depends on \(\mathbf{u}\); however, its analysis through random matrix theory makes it possible to characterize the properties of the solutions. Specifically, by analyzing contractions of this form, combined with the tensor eigenvalue equation (9), [8] derived an asymptotic expression for the performance of MLE in terms of the alignment of \(\mathbf{u}_{1}\) and \(\mathbf{x}_{1}\) in the regime where estimation is possible (that is, beyond the phase transition characterized by [11]). Here, assuming now that \(r\) is a fixed integer such that \(r>1\), we carry out a similar study of the random tensor models \(\mathbf{\mathsf{S}}_{i}\) through the analysis of the contractions \(\mathbf{\mathsf{S}}_{i}\cdot\mathbf{u}_{i}^{d-2}\).

### Limiting spectrum

Our first result characterizes the limiting spectral measure of the contractions \(\mathbf{\mathsf{S}}_{i}\cdot\mathbf{u}_{i}^{d-2}\), and is instrumental in proving our main result, which is an asymptotic characterization of the summary statistics in (8).

**Theorem 3.1**.: _The empirical spectral measures of \(\mathbf{\mathsf{S}}_{i}\cdot\boldsymbol{u}_{i}^{d-2}\) and of \(\frac{1}{\sqrt{n}}\mathbf{\mathsf{W}}\cdot\boldsymbol{u}_{i}^{d-2}\) converge weakly almost surely to the semi-circle distribution \(\mu\) whose Stieltjes transform is given by_ \[g(z)\equiv\frac{2}{\gamma_{d}^{2}}\left(-z+\sqrt{z^{2}-\gamma_{d}^{2}}\right),\] _and whose density reads \(\mu(dx)=\frac{2}{\pi\gamma_{d}^{2}}\sqrt{\gamma_{d}^{2}-x^{2}}\,dx\) and is supported on \([-\gamma_{d},\gamma_{d}]\)._

_Proof sketch:_ The proof starts by noticing that the random matrix \(\mathbf{\mathsf{S}}_{i}\cdot\boldsymbol{u}_{i}^{d-2}\) can be written as \(\boldsymbol{L}+\frac{1}{\sqrt{n}}\mathbf{\mathsf{W}}\cdot\boldsymbol{u}_{i}^{d-2}\) where \(\boldsymbol{L}\) is a low-rank matrix. Therefore, invoking classical random matrix arguments, the matrices \(\mathbf{\mathsf{S}}_{i}\cdot\boldsymbol{u}_{i}^{d-2}\) and \(\frac{1}{\sqrt{n}}\mathbf{\mathsf{W}}\cdot\boldsymbol{u}_{i}^{d-2}\) share the same limiting spectrum, and the former is characterized similarly to the rank-one case from [8].

### Limiting summary statistics

We now look into the asymptotic values of the summary statistics introduced in (8). To derive them, we start from the tensor eigenvalue equations relating the pairs \((\hat{\lambda}_{i},\boldsymbol{u}_{i})\) and the tensors \(\mathbf{\mathsf{S}}_{i}\), that is, \[\hat{\lambda}_{i}\,\boldsymbol{u}_{i}=\mathbf{\mathsf{S}}_{i}\cdot\boldsymbol{u}_{i}^{d-1}=\sum_{j=1}^{r}\beta_{j}\langle\boldsymbol{u}_{i},\boldsymbol{x}_{j}\rangle^{d-1}\boldsymbol{x}_{j}-\sum_{j=1}^{i-1}\hat{\lambda}_{j}\langle\boldsymbol{u}_{i},\boldsymbol{u}_{j}\rangle^{d-1}\boldsymbol{u}_{j}+\frac{1}{\sqrt{n}}\mathbf{\mathsf{W}}\cdot\boldsymbol{u}_{i}^{d-1}, \tag{11}\] where we used (7). We can then obtain \(\hat{\lambda}_{i}\) by taking the scalar product of both sides with \(\boldsymbol{u}_{i}\), since this vector has unit norm. Similarly, \(\hat{\rho}_{ij}\) and \(\hat{\eta}_{ij}\) are obtained by taking scalar products with \(\boldsymbol{x}_{j}\) and \(\boldsymbol{u}_{j}\), respectively. Next, one can compute the expectations of these quantities by invoking Stein's lemma (a.k.a. Gaussian integration by parts) to handle the dependence between \(\mathbf{\mathsf{W}}\) and each \(\boldsymbol{u}_{i}\), and take the limit \(n\to\infty\).
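Theorem 3.1 itself is easy to check numerically. A minimal sketch, assuming the symmetrization convention \(\mathbb{E}[W_{ijk}^{2}]=1/6\) for pairwise-distinct indices (the formal convention is given in [8]):

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 200, 3
gamma = 2 / np.sqrt(d * (d - 1))               # gamma_3 = 2/sqrt(6) ~ 0.816

# symmetric Gaussian tensor under an assumed normalization convention
W = rng.standard_normal((n, n, n))
W = sum(W.transpose(p) for p in
        [(0, 1, 2), (1, 2, 0), (2, 0, 1), (0, 2, 1), (1, 0, 2), (2, 1, 0)]) / 6

u = rng.standard_normal(n); u /= np.linalg.norm(u)
M = np.einsum('ijk,k->ij', W, u) / np.sqrt(n)  # contraction (1/sqrt(n)) W . u^{d-2}
eigs = np.linalg.eigvalsh(M)

# compare the empirical edges and second moment with the semicircle prediction
print("support   :", eigs.min(), eigs.max(), "  expected +/-", gamma)
print("2nd moment:", (eigs**2).mean(), "  expected", gamma**2 / 4)
```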
Finally, similarly to [8, 14, 15], we assume that these random quantities concentrate around their expectations, and impose some technical conditions on their limiting values, as follows.

**Assumption 3.2** (Almost sure convergence).: We suppose that for each tensor \(\mathbf{\mathsf{S}}_{i}\) involved in the deflation there exists a sequence of eigenpairs \(\{(\hat{\lambda}_{i},\boldsymbol{u}_{i})\}_{n\in\mathbb{N}}\) of \(\mathbf{\mathsf{S}}_{i}\) such that \[\hat{\lambda}_{i}\xrightarrow[n\to\infty]{\text{a.s.}}\lambda_{i},\quad\hat{\rho}_{ij}\xrightarrow[n\to\infty]{\text{a.s.}}\rho_{ij},\quad\hat{\eta}_{ij}\xrightarrow[n\to\infty]{\text{a.s.}}\eta_{ij},\] with \(\lambda_{i}>\gamma_{d}(d-1)\), \(\rho_{ij}\neq 0\) and \(\eta_{ij}\neq 0\), where \(\gamma_{d}=2/\sqrt{d(d-1)}\).

Figure 1: Empirical (dots, Monte-Carlo simulations) versus asymptotic (lines, as per Corollary 3.4) summary statistics of the symmetric Hotelling deflation, for parameters \(r=2\), \(d=3\), \(n=100\) and \(\alpha=0.4\), for a range of \(\beta_{1}\) and a fixed \(\beta_{2}=5\). (a) First deflation step: alignments of \(\boldsymbol{u}_{1}\) with \(\boldsymbol{x}_{1}\) and \(\boldsymbol{x}_{2}\). (b) Second deflation step: alignments of \(\boldsymbol{u}_{2}\) with \(\boldsymbol{x}_{1}\) and \(\boldsymbol{x}_{2}\). (c) Eigenvalues \(\hat{\lambda}_{i}\) and their limits \(\lambda_{i}\), respectively. (d) Alignment \(\eta_{12}\) between the eigenvectors estimated at the first and second deflation steps. The asymptotic curves are obtained by numerically solving \(\Psi(\cdot,\boldsymbol{\beta},\cdot)=\mathbf{0}\) in Corollary 3.4, initialized with the simulated summary statistics for one realization of the noise tensor \(\mathbf{\mathsf{W}}\).

Under these assumptions, we can derive a system of equations characterizing the summary statistics in the limit \(n\to\infty\). As our numerical results will show, the solutions to these equations match the empirical observations for \(n\) large enough.

**Theorem 3.3**.: _Suppose that Assumption 3.2 holds; then the limiting summary statistics \(\lambda_{i},\rho_{ij}\) and \(\eta_{ij}\) satisfy the system of equations (S) with \(i,j\in[r]\), where \(h(z)\equiv z+g\left(z/(d-1)\right)/d\) and \(q(z)\equiv g(z/(d-1))/(d(d-1))\)._

Note that Theorem 3.3 only states that if the summary statistics converge to their respective limits, then the latter are solutions to the system of equations in (S); the converse is not necessarily true. In fact, studying the existence and uniqueness of the solutions of (S) is still an open question. As we will see in the next section, when the system (S) is solved numerically with proper initialization (e.g., with the empirical summary statistics), the obtained solutions describe well the asymptotic behavior of the _maximizers_ of (5), despite the fact that the tensor eigenvalue equations characterize _all critical points_ of these problems. We refer the reader to [8, 14] for a discussion on similar phenomena observed in the rank-one case, whose rigorous explanation remains open.

**Corollary 3.4**.: _Suppose \(r=2\), \(d=3\), and denote \(\boldsymbol{\lambda}\equiv(\lambda_{1},\lambda_{2},\eta_{12})\), \(\boldsymbol{\beta}\equiv(\beta_{1},\beta_{2},\alpha_{12})\) and \(\boldsymbol{\rho}\equiv(\rho_{ij})_{i,j\in[2]}\)._
_Then, the limiting summary statistics \(\boldsymbol{\lambda}\) and \(\boldsymbol{\rho}\) satisfy \(\Psi(\boldsymbol{\lambda},\boldsymbol{\beta},\boldsymbol{\rho})=\boldsymbol{0}\), where the mapping \(\Psi:\mathbb{R}^{3}\times\mathbb{R}^{3}\times\mathbb{R}^{4}\to\mathbb{R}^{7}\) is defined by_ \[\Psi\begin{pmatrix}\boldsymbol{\lambda}\\ \boldsymbol{\beta}\\ \boldsymbol{\rho}\end{pmatrix}\equiv\left(\begin{array}{c}\sum_{i=1}^{2}\beta_{i}\rho_{i1}^{3}-f(\lambda_{1})\\ \sum_{i=1}^{2}\beta_{i}\alpha_{i1}\rho_{i1}^{2}-h(\lambda_{1})\rho_{11}\\ \sum_{i=1}^{2}\beta_{i}\alpha_{i2}\rho_{i1}^{2}-h(\lambda_{1})\rho_{12}\\ \sum_{i=1}^{2}\beta_{i}\rho_{i2}^{3}-f(\lambda_{2})-\lambda_{1}\eta_{12}^{3}\\ \sum_{i=1}^{2}\beta_{i}\alpha_{i1}\rho_{i2}^{2}-h(\lambda_{2})\rho_{21}-\lambda_{1}\rho_{11}\eta_{12}^{2}\\ \sum_{i=1}^{2}\beta_{i}\alpha_{i2}\rho_{i2}^{2}-h(\lambda_{2})\rho_{22}-\lambda_{1}\rho_{12}\eta_{12}^{2}\\ \sum_{i=1}^{2}\beta_{i}\rho_{i1}\rho_{i2}^{2}-h(\lambda_{2})\eta_{12}-[\lambda_{1}+q(\lambda_{1})]\eta_{12}^{2}\end{array}\right).\]

## 4 Discussion

Fig. 1 illustrates the accuracy of using the deflation approach of (4)-(6) to estimate the spike components \(\boldsymbol{x}_{i}\) and weights \(\beta_{i}\) in the correlated case \(\alpha=0.4\). As the result depends critically on the relative values of \(\beta_{1}\) and \(\beta_{2}\), we let \(\beta_{1}\) vary for a fixed \(\beta_{2}=5\). In Figs. 1(a-b), as expected from the deflation procedure, when \(\beta_{1}<\beta_{2}\), \(\boldsymbol{u}_{1}\) tends to correlate with \(\boldsymbol{x}_{2}\), the strongest component, hence \(\rho_{12}\) is high; conversely, for \(\beta_{1}>\beta_{2}\), \(\boldsymbol{u}_{1}\) tends to correlate with \(\boldsymbol{x}_{1}\) and \(\rho_{11}\) is high. Naturally, \(\rho_{21}\) and \(\rho_{22}\) behave symmetrically. Interestingly, in the regime \(\beta_{1}\approx\beta_{2}\), \(\boldsymbol{u}_{1}\) aligns fully neither with \(\boldsymbol{x}_{1}\) nor with \(\boldsymbol{x}_{2}\). This indicates a significant weakness of the deflation approach with non-orthogonally decomposable tensors when several components have comparable power, since improperly estimating and subtracting the first component has the detrimental effect of _increasing_ the rank of the non-noise component in \(\boldsymbol{\mathsf{S}}_{1}\) with respect to \(\boldsymbol{\mathsf{S}}_{0}\) (see eq. (7)). We also note that, during the second deflation step, the estimator fails to achieve positive correlation of \(\boldsymbol{u}_{2}\) with either \(\boldsymbol{x}_{1}\) or \(\boldsymbol{x}_{2}\) for very low values of \(\beta_{1}\). Fig. 1(c) shows that \(\lambda_{1}\) fairly accurately tracks the power of the strongest component (equal to \(\max(\beta_{1},5)\)), while \(\lambda_{2}\) is affected by a noise floor at the low range of \(\beta_{1}\) and constitutes a poor estimator of the power of the weakest component (equal to \(\min(\beta_{1},5)\)). As is common with random matrix theory, the asymptotic results from Theorem 3.3 hold approximately, with remarkable accuracy, for finite-dimensional problems thanks to the concentration of measure phenomenon [5].
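The auxiliary functions \(g\), \(h\) and \(q\) entering Theorem 3.3 and Corollary 3.4 are elementary to implement; in practice the system \(\Psi=\mathbf{0}\) is then handed to a numerical root finder initialized at the empirical summary statistics, as done for Fig. 1. A small sketch that also verifies \(g\) against the stated semicircle density:

```python
import numpy as np
from scipy.integrate import quad

d = 3
gamma = 2 / np.sqrt(d * (d - 1))

def g(z):   # Stieltjes transform from Theorem 3.1 (real z > gamma)
    return (2 / gamma**2) * (-z + np.sqrt(z**2 - gamma**2))

def mu(x):  # semicircle density supported on [-gamma, gamma]
    return (2 / (np.pi * gamma**2)) * np.sqrt(gamma**2 - x**2)

# consistency check: g(z) = integral of mu(x)/(x - z) dx
z = 1.3
val, _ = quad(lambda x: mu(x) / (x - z), -gamma, gamma)
print(val, "vs", g(z))

# auxiliary functions of Theorem 3.3; well defined for z > gamma_d (d - 1),
# which is guaranteed by Assumption 3.2
h = lambda z: z + g(z / (d - 1)) / d
q = lambda z: g(z / (d - 1)) / (d * (d - 1))
print(h(2.0), q(2.0))
```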
2308.01090
Exploring CP Violation beyond the Standard Model and the PQ Quality with Electric Dipole Moments
In some models of physics beyond the Standard Model (SM), one of the leading low energy consequences of the model appears in the form of the chromo-electric dipole moments (CEDMs) of the gluons and light quarks. We examine whether these CEDMs can be distinguished from the QCD $\theta$-term through the experimentally measurable nuclear and atomic electric dipole moments (EDMs), in both cases with and without the Peccei-Quinn (PQ) mechanism solving the strong CP problem. We find that the nucleon EDMs show a distinctive pattern when the EDMs are dominantly induced by the light quark CEDMs without the PQ mechanism. In the presence of the PQ mechanism, the QCD $\theta$-parameter corresponds to the vacuum value of the axion field, which might be induced either by CEDMs or by UV-originated PQ breaking other than the QCD anomaly, for instance PQ breaking by quantum gravity effects. We find that in the case with the PQ mechanism the nucleon EDMs have a similar pattern regardless of which of the CEDMs and the $\theta$-term is the dominant source of EDMs, unless there is a significant cancellation between the contributions from different sources. In contrast, some nuclear or atomic EDMs can have characteristic patterns that depend significantly on the dominant source of EDMs, which may allow one to identify the dominant source among the CEDMs and the $\theta$-term. Yet, discriminating the gluon CEDM from the QCD $\theta$-parameter necessitates additional knowledge of the low energy parameters induced by the gluon CEDM, which is not available at the moment. Our results imply that EDMs can reveal an unambiguous sign of CEDMs while identifying the origin of the axion vacuum value; however, this requires further knowledge of the low energy parameters induced by the gluon CEDM.
Kiwoon Choi, Sang Hui Im, Krzysztof Jodłowski
2023-08-02T11:49:39Z
http://arxiv.org/abs/2308.01090v3
# Exploring CP Violation beyond the Standard Model and the PQ Quality with Electric Dipole Moments

###### Abstract

In some models of physics beyond the Standard Model (SM), one of the leading low energy consequences of the model appears in the form of the chromoelectric dipole moments (CEDMs) of the gluons and light quarks. We examine if these CEDMs can be distinguished from the QCD \(\theta\)-term through the experimentally measurable nucleon and atomic electric dipole moments (EDMs) in both cases with and without the Peccei-Quinn (PQ) mechanism solving the strong CP problem. We find that the nucleon EDMs can show a distinctive pattern when the EDMs are dominantly induced by light quark CEDMs without the PQ mechanism. In the presence of the PQ mechanism, the nucleon EDMs due to the gluon or light quark CEDMs have a similar pattern as those due to the QCD \(\theta\)-parameter, regardless of the origin of the axion vacuum value which determines the \(\theta\)-parameter. In contrast, diamagnetic atomic EDMs due to the gluon or light quark CEDMs have characteristic patterns distinguishable from the pattern due to the \(\theta\)-parameter which is induced dominantly by UV-originated PQ breaking other than the QCD anomaly, for instance by quantum gravity effects. Our results suggest that EDMs may provide information not only on CP violation beyond the SM, but also on the existence of the PQ mechanism and the quality of the PQ symmetry characterized by the strength of UV-originated PQ breaking other than the QCD anomaly.

CTPU-PTC-23-34

## 1 Introduction

Permanent electric dipole moments (EDMs) of particles are known to provide a sensitive tool to probe CP violation beyond the Standard Model (SM) of particle physics. Furthermore, the sensitivity of the experimental search for EDMs is expected to be significantly improved within the foreseeable future (see e.g. [1]). As is well known, CP violation in the SM can be described by the two angle parameters, the Kobayashi-Maskawa phase \(\delta_{\rm KM}\) inducing CP violation in the weak interactions and the QCD angle \(\bar{\theta}\) for CP violation in the strong interactions. These two angle parameters are determined by the SM parameters as [2; 3] \[\delta_{\rm KM}=\arg\det([Y_{u}Y_{u}^{\dagger},Y_{d}Y_{d}^{\dagger}]),\quad\bar{\theta}=\theta_{0}+\arg\det(Y_{u}Y_{d}), \tag{1}\] where \(Y_{u}\) and \(Y_{d}\) denote the complex Yukawa couplings of the three generations of the up-type and down-type quarks, and \(\theta_{0}\) is the bare QCD angle. CP violating phenomena induced by \(\delta_{\rm KM}\) have been experimentally well tested, implying \(\delta_{\rm KM}={\cal O}(1)\) [4]. On the other hand, CP violation by \(\bar{\theta}\) in the strong interactions has not been observed yet, which results in the stringent upper bound [5; 6; 7; 8; 9] \[|\bar{\theta}|\,\lesssim\,10^{-10}. \tag{2}\] Although \(\delta_{\rm KM}\) is of order unity, EDMs induced by \(\delta_{\rm KM}\) are highly suppressed by the involved quark masses and mixing parameters [10]. As a result they all have a value well below the current experimental bounds. On the other hand, \(\bar{\theta}\) can generate hadronic EDMs near the current bound, if \(\bar{\theta}\) has a value near \(10^{-10}\). Generically there can also be CP-violating interactions beyond the SM (BSM), which may result in EDMs again near the current experimental bounds.
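As a toy numerical illustration of Eq. (1), both reparameterization-invariant phases can be evaluated for arbitrary (here random, unphysical) Yukawa matrices; since the commutator of two Hermitian matrices is anti-Hermitian, its \(3\times 3\) determinant is purely imaginary and \(\delta_{\rm KM}=\pm\pi/2\) whenever CP is violated:

```python
import numpy as np

rng = np.random.default_rng(2)
Yu = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
Yd = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

A, B = Yu @ Yu.conj().T, Yd @ Yd.conj().T      # Hermitian combinations
delta_KM = np.angle(np.linalg.det(A @ B - B @ A))

theta_0 = 0.0                                   # illustrative bare angle
theta_bar = theta_0 + np.angle(np.linalg.det(Yu @ Yd))

# chiral rotations shift theta_0 and arg det(Yu Yd) oppositely,
# so theta_bar is the physical (invariant) combination
print(delta_KM, theta_bar)
```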
Once a nonzero hadronic EDM is detected experimentally, one of the key questions will therefore be whether it originates from \(\bar{\theta}\) or from BSM CP violation. To answer this question, one needs to measure multiple EDMs on the experimental side, and examine on the theory side whether the observed pattern of EDMs can be explained by \(\bar{\theta}\) or requires an alternative source of CP violation. More generically, one may consider an effective theory defined at a scale around the QCD scale, involving the CP-odd interactions

Footnote 1: For recent studies in this direction, see for instance [11; 12].

\[\Delta{\cal L}=\frac{g_{s}^{2}}{32\pi^{2}}\bar{\theta}G^{a\mu\nu}\tilde{G}_{\mu\nu}^{a}+\sum_{i}\lambda_{i}{\cal O}_{i}, \tag{3}\] and examine which region of the parameter space of \(\{\bar{\theta},\lambda_{i}\}\) can explain the observed pattern of EDMs, where \({\cal O}_{i}\) denote the non-renormalizable local CP-odd operators involving the gluons and/or light quarks, e.g. the chromoelectric dipole moments (CEDMs) of the gluons and light quarks, EDMs of light quarks, and CP-odd four-fermion operators, which would describe the low energy consequence of generic BSM physics existing at higher energy scales, and \(\lambda_{i}\) are their Wilson coefficients.

In view of the reparameterization-invariant expression (1) for \(\delta_{\rm KM}\) and \(\bar{\theta}\), the smallness of \(\bar{\theta}\) requires a severe fine-tuning. An appealing solution to this problem is to introduce a global \(U(1)\) Peccei-Quinn (PQ) symmetry [13; 14; 15] (see e.g. [16; 17; 18] for reviews) which is non-linearly realized at least in low energy limits, under which the associated Nambu-Goldstone boson, the axion \(a(x)\), transforms as \[U(1)_{\rm PQ}:\;\;a(x)\to a(x)+{\rm constant}. \tag{4}\] A key assumption involved in this solution is that \(U(1)_{\rm PQ}\) is broken _dominantly_ by the QCD anomaly, i.e. by the axion coupling to the gluons of the form \[\frac{g_{s}^{2}}{32\pi^{2}}\frac{a(x)}{f_{a}}G^{a\mu\nu}\tilde{G}_{\mu\nu}^{a}, \tag{5}\] to the extent that the resulting axion vacuum value is small enough to satisfy \[\bar{\theta}=\langle a(x)\rangle/f_{a}\lesssim 10^{-10}. \tag{6}\] Yet, the PQ mechanism solving the strong CP problem does not predict the value of \(\bar{\theta}\). Generically there can be a variety of model-dependent physics generating a nonzero axion vacuum value, which may give any value of \(\bar{\theta}\) below \(10^{-10}\). It includes for instance (i) BSM physics generating the CP-odd interactions \(\sum_{i}\lambda_{i}{\cal O}_{i}\) in (3), which would shift the axion vacuum value when combined with the \(U(1)_{\rm PQ}\)-breaking by the QCD anomaly, as well as (ii) additional, typically UV-originated, \(U(1)_{\rm PQ}\)-breaking _other than_ the QCD anomaly, e.g. quantum gravity effects, which would by itself generate an axion potential at the corresponding UV scale. EDMs then may provide a way to discriminate these two potentially dominant origins of the axion vacuum value from each other, since the origin (i) affects EDMs both directly and through the induced axion vacuum value, while the origin (ii) affects EDMs mostly through the induced axion vacuum value. This suggests that EDMs can provide information not only on BSM CP violation, but also on the existence of the PQ mechanism and the quality of the PQ symmetry characterized by the strength of UV-originated \(U(1)_{\rm PQ}\)-breaking other than the QCD anomaly.
In this paper, we examine whether a certain class of BSM CP violation can give rise to a pattern of nucleon and atomic EDMs distinguishable from the pattern due to \(\bar{\theta}\), in both cases with and without the PQ mechanism. We also examine if these EDMs can discriminate between the two origins (i) and (ii) of the axion vacuum value in the presence of the PQ mechanism. For simplicity, we focus on BSM CP violation mediated to the SM sector dominantly by the gluons or the Higgs boson, whose low energy consequences are described dominantly by the CEDMs of the gluons and light quarks. We find that the nucleon EDMs can show a distinctive pattern when the EDMs are dominantly induced by light quark CEDMs without the PQ mechanism. In the presence of the PQ mechanism, the nucleon EDMs due to the gluon or light quark CEDMs have a similar pattern as those due to the QCD \(\theta\)-parameter, regardless of the origin of the axion vacuum value which determines the \(\theta\)-parameter. In contrast, diamagnetic atomic EDMs due to the gluon or light quark CEDMs have characteristic patterns distinguishable from the pattern due to the \(\theta\)-parameter which is induced dominantly by UV-originated PQ breaking other than the QCD anomaly, for instance by quantum gravity effects. Our results imply that indeed EDMs can provide information on the existence of the PQ mechanism and the quality of the PQ symmetry, as well as on BSM CP violation.

The organization of this paper is as follows. In the next section, we briefly discuss the quality of the PQ symmetry, which concerns the axion vacuum value in the presence of both BSM CP violation and UV-originated \(U(1)_{\rm PQ}\) breaking other than the QCD anomaly. In section 3, we discuss BSM CP violation mediated mainly by the SM gauge bosons and/or the Higgs boson, as well as the resulting CEDMs of the gluons and light quarks at low energy scales. Section 4 is devoted to the analysis of the nucleon and diamagnetic atomic EDMs induced by \(\bar{\theta}\) and the gluon and quark CEDMs in both cases with and without the PQ mechanism. In section 5, we provide some examples of BSM models yielding low energy CP violations dominated by the gluon and light quark CEDMs. Section 6 is the conclusion.

## 2 PQ quality with BSM CP violation

In models with the QCD axion, the axion potential is generically given by \[V(a)=V_{\rm QCD}(a)+\delta V(a) \tag{1}\] where \[V_{\rm QCD}(a)\simeq-\frac{f_{\pi}^{2}m_{\pi}^{2}}{(m_{u}+m_{d})}\sqrt{m_{u}^{2}+m_{d}^{2}+2m_{u}m_{d}\cos(a/f_{a})} \tag{2}\] is the axion potential induced by the \(U(1)_{\rm PQ}\)-breaking by the QCD anomaly [18], i.e. the axion coupling (5), which has the global minimum at \(\langle a\rangle=0\), and \(\delta V\) denotes a model-dependent additional axion potential with a minimum at \(\langle a\rangle\neq 0\), which therefore generates a nonzero axion vacuum value. Here \(m_{u,d}\) are the light quark masses.

Footnote 2: Here the axion field is defined in such a way that \(\langle a\rangle/f_{a}\) is identified as the QCD angle \(\bar{\theta}\) violating CP in the strong interactions, which can always be done by an appropriate constant shift of the axion field.

Generically there can be two different sources of \(\delta V\). The first is the _combined_ effect of the PQ-breaking by the QCD anomaly and a CP violating effective interaction of gluons and/or light quarks around the QCD scale where the QCD anomaly becomes important.
This includes, first of all, the SM contribution [19] \[\delta V_{\rm SM}\sim 10^{-19}f_{\pi}^{2}m_{\pi}^{2}\sin\delta_{\rm KM}\sin(a/f_{a}), \tag{6}\] which results in \(\bar{\theta}_{\rm SM}=\langle a\rangle_{\rm SM}/f_{a}\sim 10^{-19}\sin\delta_{\rm KM}\), which is too small to be phenomenologically interesting in the near future. On the other hand, in the presence of BSM physics generating CP-odd effective interactions around the QCD scale, the resulting \(\bar{\theta}\) might be as large as \(10^{-10}\). For instance, for the effective interactions given by \[\mathcal{L}_{\rm eff}=\sum_{i}\lambda_{i}\mathcal{O}_{i}, \tag{7}\] where \(\mathcal{O}_{i}\) are non-renormalizable CP-odd effective interactions of the gluons and/or light quarks and \(\lambda_{i}\) are the associated Wilson coefficients, one finds \[\delta V_{\rm BSM}\sim\sum_{i}\lambda_{i}\int d^{4}x\left\langle\frac{g_{s}^{2}}{32\pi^{2}}G\tilde{G}(x)\mathcal{O}_{i}(0)\right\rangle\sin(a/f_{a}). \tag{8}\] The resulting shift of the axion vacuum value is given by \[\bar{\theta}_{\rm BSM}=\frac{\langle a\rangle_{\rm BSM}}{f_{a}}\sim\frac{\sum_{i}\lambda_{i}\int d^{4}x\left\langle\frac{g_{s}^{2}}{32\pi^{2}}G\tilde{G}(x)\mathcal{O}_{i}(0)\right\rangle}{f_{\pi}^{2}m_{\pi}^{2}} \tag{9}\] which can have any value below \(10^{-10}\).

The second potentially dominant source of \(\delta V\) is additional, typically UV-originated, PQ breaking other than the QCD anomaly. For instance, it has been argued that quantum gravity generically does not respect global symmetries, and so can generate a \(U(1)_{\rm PQ}\)-breaking axion potential around the scale of quantum gravity [20; 21; 22; 23]. Studies of axions in string theory and of axionic Euclidean wormholes imply that string/brane instantons or gravitational wormholes generate (for reviews, see for instance [24; 25]) \[\delta V_{\rm UV}=\Lambda_{\rm UV}^{4}e^{-S_{\rm ins}}\cos(a/f_{a}+\delta_{\rm UV}), \tag{10}\] where \(\Lambda_{\rm UV}\) is a model-dependent UV scale, \(S_{\rm ins}\) is the Euclidean action of the associated string/brane instanton or of the Euclidean wormhole, and \(\delta_{\rm UV}\) is a phase angle which is generically of order unity. This shifts the axion vacuum value as

Footnote 3: Often it is given by \(\Lambda_{\rm UV}^{4}\sim m_{3/2}M_{\rm Pl}^{3}\) or \(\sim m_{3/2}^{2}M_{\rm Pl}^{2}\) [24; 26; 27] for axions in string theory, where \(M_{\rm Pl}\simeq 2\times 10^{18}\) GeV is the reduced Planck scale and \(m_{3/2}\) is the gravitino mass.

\[\bar{\theta}_{\rm UV}\sim e^{-S_{\rm ins}}\Lambda_{\rm UV}^{4}\sin\delta_{\rm UV}/f_{\pi}^{2}m_{\pi}^{2} \tag{11}\] which again can have any value below \(10^{-10}\).

As noted in the previous section, the above two origins of a nonzero axion vacuum value may give distinguishable patterns of EDMs because the effective interactions (7) affect EDMs both directly and through the induced \(\bar{\theta}\), while the additional \(U(1)_{\rm PQ}\) breaking generating the axion potential (10) affects EDMs mostly through the induced \(\bar{\theta}\). As we will see, for the case that BSM CP violation around the QCD scale is dominated by the gluon and light quark CEDMs, the two origins give distinguishable patterns of EDMs of diamagnetic atoms.
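The PQ quality requirement implied by Eqs. (10) and (11) is easy to quantify. A minimal sketch, assuming \(\Lambda_{\rm UV}^{4}\sim m_{3/2}M_{\rm Pl}^{3}\) (footnote 3), \(\sin\delta_{\rm UV}\sim 1\), standard values of \(f_{\pi}\) and \(m_{\pi}\), and an illustrative gravitino mass of 100 TeV (an assumption, not a value taken from this work):

```python
import numpy as np

# Lower bound on the instanton action S_ins needed for theta_UV < 1e-10
f_pi, m_pi, M_Pl = 0.092, 0.135, 2e18          # GeV
m_32 = 1e5                                      # assumed m_{3/2} = 100 TeV
Lambda4 = m_32 * M_Pl**3                        # Lambda_UV^4 in GeV^4
S_min = np.log(Lambda4 / (1e-10 * f_pi**2 * m_pi**2))
print(f"theta_UV < 1e-10 requires S_ins >~ {S_min:.0f}")   # ~ 170
```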
## 3 BSM CP violation mediated by the SM gauge and Higgs bosons As for BSM CP violation, for simplicity, our analysis focuses on a class of BSM scenarios where the new physics sector communicates with the SM sector dominantly through the SM gauge interactions and/or couplings to the SM Higgs boson. The new physics sector can generally involve CP-violating (CPV) interactions. In such cases integrating out heavy fields of the new physics sector would give rise to CP-odd dimension-six operators composed of the SM gauge fields and the Higgs field as follows, \[\begin{split}\mathcal{L}_{\rm CPV}(\mu=\Lambda)=& c _{\widetilde{G}}f^{abc}G^{a\mu}_{\alpha}G^{b\delta}_{\mu}\widetilde{G}^{c \alpha}_{\delta}+c_{\widetilde{W}}\epsilon^{abc}W^{a\mu}_{\alpha}W^{b\delta}_ {\mu}\widetilde{W}^{c\alpha}_{\delta}\\ &+|H|^{2}\left(c_{H\widetilde{G}}G^{a}_{\mu\nu}\widetilde{G}^{a \mu\nu}+c_{H\widetilde{W}}W^{a}_{\mu\nu}\widetilde{W}^{a\mu\nu}+c_{H\widetilde {B}}B_{\mu\nu}\widetilde{B}^{\mu\nu}\right)\\ &+c_{H\widetilde{W}B}H^{\dagger}\tau^{a}H\widetilde{W}^{a}_{\mu \nu}B^{\mu\nu}\end{split} \tag{10}\] with the Wilson coefficients \(c_{i}\) defined at a certain scale \(\mu=\Lambda\) which is around the mass scale of the heavy fields in the new physics sector. At one-loop level the following operators are also induced by the renormalization group evolution (RGE) mixing from the operators in Eq. (10). \[\sum_{q=u,d}\sum_{X=G,W,B}i(c_{qX})_{ij}\bar{Q}_{Li}\sigma^{\mu\nu}X_{\mu\nu}q_ {Rj}H^{(*)}+\sum_{X=W,B}i(c_{eX})_{ij}\bar{L}_{i}\sigma^{\mu\nu}X_{\mu\nu}e_{ Rj}H^{(*)}+\text{h.c.}, \tag{11}\] where \(i,j\) are flavor indices, and \(H^{(*)}\equiv H\) or \(H^{*}\) in order to make the operators invariant under the SM gauge groups. The full one-loop RG equations of the involved operators are given in appendix A using the results of [28; 29; 30]. Here we only show the dominant RGE effect involving the QCD coupling and the flavor-diagonal part: \[\begin{split} 16\pi^{2}\frac{dc_{\widetilde{G}}}{d\ln\mu}& =(N_{c}+2n_{F})g_{s}^{2}c_{\widetilde{G}}\,,\\ 16\pi^{2}\frac{d(c_{qG})_{ii}}{d\ln\mu}&=-\left( \frac{8}{3}N_{c}+\frac{5}{N_{c}}-\frac{2}{3}n_{F}\right)g_{s}^{2}(c_{qG})_{ii} +(Y_{q})_{ii}\left(-4g_{s}c_{H\widetilde{G}}+3N_{c}g_{s}^{2}c_{\widetilde{G}} \right),\\ 16\pi^{2}\frac{dc_{H\widetilde{G}}}{d\ln\mu}&=- \frac{2}{3}(11N_{c}-2n_{F})g_{s}^{2}c_{H\widetilde{G}}+(2ig_{s}\text{Tr}[Y_{u} c_{uG}+Y_{d}c_{dG}]+\text{h.c.})\,,\end{split} \tag{12}\] where \(N_{c}=3\) is the number of the QCD color, \(n_{F}=6\) is the number of the Dirac quarks, and \((Y_{q})_{ii}\) is the flavor-diagonal quark Yukawa coupling. Below the electroweak scale the Higgs field and \(W/Z\)-field are integrated out. Consequently, the leading effective CPV interactions from the operators in Eq. (10) and Eq. (11) are given by \[\begin{split}\mathcal{L}_{\rm CPV}(\mu=m_{W})&=\frac{ 1}{3}wf^{abc}G_{\alpha}^{a\mu}G_{\mu}^{bb}\widetilde{G}_{\delta}^{c\alpha}- \frac{i}{2}\sum_{q}\tilde{d}_{q}g_{s}\bar{q}\sigma^{\mu\nu}G_{\mu\nu}\gamma_{5 }q-\frac{i}{2}\sum_{f=q,\ell}d_{f}e\bar{f}\sigma^{\mu\nu}F_{\mu\nu}\gamma_{5}f \\ &+\frac{g_{s}^{2}}{32\pi^{2}}\theta(m_{W})G_{\mu\nu}^{a} \widetilde{G}^{a\mu\nu},\end{split} \tag{12}\] where \(\theta(m_{W})\) includes the threshold correction from the \(c_{H\widetilde{G}}\)-term in Eq. 
(10), and the Wilson coefficients are determined by the following matching conditions at \(\mu=m_{W}\) \[\begin{split}\frac{1}{3}w&=c_{\tilde{G}}\,,\\ g_{s}\tilde{d}_{q_{i}}&=\sqrt{2}v(c_{qG})_{ii}\,, \\ e\,d_{f_{i}}&=\sqrt{2}v(s_{w}c_{fW}+c_{w}c_{fB})_{ii }\,.\end{split} \tag{13}\] Here \(v=246\) GeV, \(s_{w}=\sin\theta_{w}\), \(c_{w}=\cos\theta_{w}\) with the weak mixing angle \(\theta_{w}\), and \(q\) and \(\ell\) stand for active light Dirac quarks and leptons, respectively. Thus, the low energy CPV effect mediated by gauge and Higgs interactions is characterized by the Weinberg three-gluon operator (or the gluon chromo-electric dipole moment (CEDM)), quark CEDMs, and quark and lepton electric dipole moments (EDMs). On the other hand, the new physics contributions to the QCD \(\theta\)-parameter would be indistinguishable from the SM bare value. Since the lepton EDMs from the SM are predicted to be far below the current experimental bounds, we may be able to distinguish BSM CPV from the SM one by the lepton EDMs if the lepton EDMs from new physics are sizable. On the other hand, in this work, we will examine whether one can discriminate BSM CPV by means of the hadronic EDMs. Furthermore, we will focus on the case that the low energy CPV effect is dominated by the QCD interactions characterized by the gluon and quark CEDMs, while the quark EDMs are subdominant. We will discuss in section 5 that it is typically the case if the lightest new physics sector communicating with the SM through gauge and Higgs interactions carries the QCD color. The CP violation through the gluon and quark CEDMs will give rise to electric dipole moments of nucleons and atoms below the QCD scale. In order to estimate the nucleon and atomic EDMs, we need to bring the Wilson coefficients down to the QCD scale (\(\sim 1\) GeV) through the RGE. This running effect is important because the QCD gauge coupling becomes large (\(g_{s}^{2}\sim 4\pi\)) near the QCD scale. The RGE equations at leading order are given by [31; 32; 33; 34; 35] \[\frac{d\mathbf{C}}{d\ln\mu} = \frac{g_{s}^{2}}{16\pi^{2}}\gamma\,\mathbf{C}, \tag{14}\] where the redefined coefficients \(\mathbf{C}\equiv(C_{1}\ C_{2}\ C_{3})^{T}\) are \[C_{1}(\mu)=\frac{d_{q}(\mu)}{m_{q}Q_{q}},\quad C_{2}(\mu)=\frac{ \tilde{d}_{q}(\mu)}{m_{q}},\quad C_{3}(\mu)=\frac{w(\mu)}{g_{s}}, \tag{15}\] and the anomalous dimension matrix \(\gamma\) is \[\gamma\equiv\left(\begin{array}{ccc}\gamma_{e}&\gamma_{eq}&0\\ 0&\gamma_{q}&\gamma_{Gq}\\ 0&0&\gamma_{G}\end{array}\right)=\left(\begin{array}{ccc}8C_{F}&8C_{F}&0\\ 0&16C_{F}-4N_{c}&-2N_{c}\\ 0&0&N_{c}+2n_{f}+\beta_{0}\end{array}\right). \tag{10}\] Here \(C_{F}=(N_{c}^{2}-1)/2N_{c}=4/3\) is the quadratic Casimir, \(N_{c}=3\) is the number of color, \(n_{f}\) is the number of active light Dirac quarks, and \(\beta_{0}\equiv(33-2n_{f})/3\). The color fine structure constant \(\alpha_{s}=g_{s}^{2}/4\pi\) and the quark mass run according to \[\frac{d\alpha_{s}}{d\ln\mu}=-\beta_{0}\frac{\alpha_{s}^{2}}{2\pi},\quad\frac{dm _{q}}{d\ln\mu}=-4\frac{\alpha_{s}}{2\pi}m_{q}\,. \tag{11}\] Using Eq. 
(11), the analytic solution to the RGE equations is obtained as [34] \[C_{1}(\mu) = \eta^{\kappa_{e}}C_{1}(\Lambda)+\frac{\gamma_{qe}}{\gamma_{e}- \gamma_{q}}(\eta^{\kappa_{e}}-\eta^{\kappa_{q}})C_{2}(\Lambda)\] \[+ \left[\frac{\gamma_{Gq}\gamma_{qe}\eta^{\kappa_{e}}}{(\gamma_{q} -\gamma_{e})(\gamma_{G}-\gamma_{e})}+\frac{\gamma_{Gq}\gamma_{qe}\eta^{\kappa _{q}}}{(\gamma_{e}-\gamma_{q})(\gamma_{G}-\gamma_{q})}+\frac{\gamma_{Gq}\gamma _{qe}\eta^{\kappa_{G}}}{(\gamma_{e}-\gamma_{G})(\gamma_{q}-\gamma_{G})}\right] C_{3}(\Lambda),\] \[C_{2}(\mu) = \eta^{\kappa_{q}}C_{2}(\Lambda)+\frac{\gamma_{Gq}}{\gamma_{q}- \gamma_{G}}\left[\eta^{\kappa_{q}}-\eta^{\kappa_{G}}\right]C_{3}(\Lambda),\] \[C_{3}(\mu) = \eta^{\kappa_{G}}C_{3}(\Lambda), \tag{12}\] where \(\eta=\alpha_{s}(\Lambda)/\alpha_{s}(\mu)\) and \(\kappa_{x}=\gamma_{x}/(2\beta_{0})\). For the renormalization scale \(\mu<m_{c}\) and the BSM scale \(\Lambda\geq 1\) TeV, we derive useful analytic relations from Eq. (10) and Eq. (12): \[w(\mu)=\left(\frac{g_{s}(m_{c})}{g_{s}(\mu)}\right)^{\frac{21}{29}}\left(\frac {g_{s}(m_{b})}{g_{s}(m_{c})}\right)^{\frac{33}{25}}\left(\frac{g_{s}(m_{t})}{ g_{s}(m_{b})}\right)^{\frac{39}{23}}\left(\frac{g_{s}(\Lambda)}{g_{s}(m_{t})} \right)^{\frac{15}{7}}w(\Lambda), \tag{13}\] \[\Delta\tilde{d}_{q}(\mu) =\frac{m_{q}(\mu)}{g_{s}(\Lambda)}\Bigg{[}\frac{9}{11}\left\{ \left(\frac{g_{s}(m_{c})}{g_{s}(\mu)}\right)^{\frac{28}{59}}-\left(\frac{g_{s} (m_{c})}{g_{s}(\mu)}\right)^{\frac{50}{29}}\right\}\left(\frac{g_{s}(m_{b})}{ g_{s}(m_{c})}\right)^{\frac{33}{25}}\left(\frac{g_{s}(m_{t})}{g_{s}(m_{b})} \right)^{\frac{39}{23}}\left(\frac{g_{s}(\Lambda)}{g_{s}(m_{t})}\right)^{ \frac{15}{7}}\] \[+\frac{9}{19}\left(\frac{g_{s}(m_{c})}{g_{s}(\mu)}\right)^{\frac{ 28}{29}}\left(\frac{g_{s}(m_{b})}{g_{s}(m_{c})}\right)^{\frac{28}{25}}\left( \frac{g_{s}(m_{t})}{g_{s}(m_{b})}\right)^{\frac{28}{23}}\left\{\left(\frac{g_{s }(\Lambda)}{g_{s}(m_{t})}\right)^{\frac{4}{3}}-\left(\frac{g_{s}(\Lambda)}{g_{s }(m_{t})}\right)^{\frac{27}{7}}\right\}\Bigg{]}w(\Lambda), \tag{14}\] where \(\Delta\tilde{d}_{q}(\mu)\) is the RG-induced contribution to the quark CEDM from the gluon CEDM. Numerically the above equations are \[w(1\ \text{GeV})\simeq 0.33\left(\frac{g_{s}(\Lambda)}{g_{s}(1\ \text{TeV})} \right)^{\frac{15}{7}}w(\Lambda), \tag{15}\] \[\frac{\Delta\tilde{d}_{q}}{m_{q}}(1\ \text{GeV}) \simeq 0.13\left(\frac{g_{s}(\Lambda)}{g_{s}(1\ \text{TeV})}\right)^{\frac{8}{7}}w(\Lambda)\] \[+0.022\left[\left(0.97\left(\frac{g_{s}(\Lambda)}{g_{s}(1\ \text{TeV})}\right)^{\frac{1}{3}}-0.81\left(\frac{g_{s}(\Lambda)}{g_{s}(1\ \text{TeV})}\right)^{\frac{15}{7}}\right)/0.16\right]\,w(\Lambda). \tag{16}\] For instance, for the BSM scale \(\Lambda=1\) TeV or 10 TeV, we obtain the following numerical relations which will be useful later \[\frac{\Delta\tilde{d}_{q}}{m_{q}}(1\ \text{GeV})\simeq\begin{cases}0.41\,w(1\ \text{GeV}),&\Lambda=1\ \text{TeV}\\ 0.53\,w(1\ \text{GeV}),&\Lambda=10\ \text{TeV}\end{cases} \tag{21}\] These relations take into account the renormalization of the Weinberg operator to the scale 1 GeV. The Wilson coefficients around the QCD scale obtained from the above procedure can be matched to hadronic CPV observables such as nucleon EDMs and CP-odd pion-nucleon interactions by a variety of methods. In the next section, we will discuss the resultant nuclear and atomic EDMs from the gluon and quark CEDMs. 
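The running factor in Eq. (15) can be reproduced from the closed-form solution \(C_{3}(\mu)=\eta^{\kappa_{G}}C_{3}(\Lambda)\) of Eq. (12), applied piecewise between quark thresholds. A minimal numerical sketch (one-loop \(\alpha_{s}\); the threshold masses and \(\alpha_{s}(1\ \text{TeV})\) are assumed inputs), which approximately recovers \(w(1\ \text{GeV})\approx 0.33\,w(1\ \text{TeV})\):

```python
import numpy as np

# One-loop running of the Weinberg operator from 1 TeV down to 1 GeV
def alpha_run(a_hi, mu_hi, mu_lo, nf):
    b0 = (33 - 2 * nf) / 3
    return a_hi / (1 + a_hi * b0 / (2 * np.pi) * np.log(mu_lo / mu_hi))

def w_factor(a_hi, a_lo, nf):
    b0 = (33 - 2 * nf) / 3
    kG = (3 + 2 * nf + b0) / (2 * b0)      # kappa_G with gamma_G = Nc + 2nf + b0
    return np.sqrt(a_lo / a_hi) * (a_hi / a_lo)**kG   # since w = g_s * C3

mt, mb, mc = 173.0, 4.18, 1.27             # GeV (assumed thresholds)
a, w = 0.088, 1.0                          # assumed alpha_s(1 TeV); w(1 TeV) = 1
for mu_hi, mu_lo, nf in [(1e3, mt, 6), (mt, mb, 5), (mb, mc, 4), (mc, 1.0, 3)]:
    a_lo = alpha_run(a, mu_hi, mu_lo, nf)
    w *= w_factor(a, a_lo, nf)
    a = a_lo
print("w(1 GeV)/w(1 TeV) ~", round(w, 2))  # compare the 0.33 of Eq. (15)
```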
## 4 Nuclear and Atomic EDMs We have discussed that the BSM CP violation mediated by gauge and Higgs interactions can appear as the gluon and quark CEDMs below the weak scale. The QCD \(\bar{\theta}\)-parameter may be another dominant source of hadronic CP violation. In this section we estimate the nuclear and atomic EDMs from those CPV operators by matching conditions around the QCD scale \(\sim 1\) GeV. We will then examine whether the resultant EDM profiles can tell us the origins of CP violation and the quality of the PQ symmetry. ### Nucleon EDMs The nucleon EDMs from the QCD \(\bar{\theta}\)-parameter and quark (C)EDMs were computed in [36; 37; 38] with QCD sum rules. In this approach the nucleon EDMs are associated with the QCD \(\bar{\theta}\)-parameter and quark (C)EDMs at the renormalization scale \(\mu=1\) GeV as \[d_{N}(\bar{\theta},\tilde{d}_{q},d_{q})=-c_{0}\frac{m_{N}^{3}\langle\bar{q}q \rangle}{\lambda_{N}^{2}}\Theta_{N}(\bar{\theta},\tilde{d}_{q},d_{q}),\quad(N =p,n) \tag{22}\] where \(c_{0}=1.8\times 10^{-2}\), \(m_{N}\) is the nucleon mass, \(\langle\bar{q}q\rangle=-(0.225\,\text{GeV})^{3}\) is the quark condensate, \(\lambda_{N}=-0.0436(131)\,\text{GeV}^{3}\) is the coupling between the physical nucleon state and the corresponding interpolating field in the QCD sum rules approach, and \[\begin{split}\Theta_{p}(\bar{\theta},\tilde{d}_{q},d_{q})=& \,\chi m_{*}\left[(4e_{u}-e_{d})\left(\bar{\theta}-\frac{m_{0}^{2}}{2} \frac{\tilde{d}_{s}}{m_{s}}\right)+\frac{m_{0}^{2}}{2}(\tilde{d}_{u}-\tilde{d }_{d})\left(\frac{4e_{u}}{m_{d}}+\frac{e_{d}}{m_{u}}\right)\right]\\ &+\frac{1}{8}(2\kappa+\xi)(4e_{u}\tilde{d}_{u}-e_{d}\tilde{d}_{d} )+(4d_{u}-d_{d}),\\ \Theta_{n}(\bar{\theta},\tilde{d}_{q},d_{q})=&\, \chi m_{*}\left[(4e_{d}-e_{u})\left(\bar{\theta}-\frac{m_{0}^{2}}{2}\frac{ \tilde{d}_{s}}{m_{s}}\right)+\frac{m_{0}^{2}}{2}(\tilde{d}_{d}-\tilde{d}_{u}) \left(\frac{4e_{d}}{m_{u}}+\frac{e_{u}}{m_{d}}\right)\right]\\ &+\frac{1}{8}(2\kappa+\xi)(4e_{d}\tilde{d}_{d}-e_{u}\tilde{d}_{u} )+(4d_{d}-d_{u}).\end{split} \tag{23}\] Here \(m_{*}\equiv(\sum_{q=u,d,s}m_{q}^{-1})^{-1}\), and \(e_{q}\) denotes the electromagnetic (EM) charge of the quark \(q\). We have also the various susceptibilities of quark condensates defined as [36]: \[\begin{split}\langle\bar{q}\sigma_{\mu\nu}q\rangle& =e_{q}\chi F_{\mu\nu}\langle\bar{q}q\rangle,\quad g_{s}\langle \bar{q}G_{\mu\nu}q\rangle=e_{q}\kappa F_{\mu\nu}\langle\bar{q}q\rangle,\\ g_{s}\langle\bar{q}G^{\mu\nu}\sigma_{\mu\nu}q\rangle& =m_{0}^{2}\langle\bar{q}q\rangle,\quad 2g_{s}\langle\bar{q}\gamma_{5} \tilde{G}_{\mu\nu}q\rangle=ie_{q}\xi F_{\mu\nu}\langle\bar{q}q\rangle,\end{split} \tag{24}\] whose values are given as \(\chi=-5.7(6)\,\text{GeV}^{-2},\,m_{0}^{2}=0.8(1)\,\text{GeV}^{2}\), \(\kappa=-0.34(10),\,\xi=-0.74(20)\). On the other hand, the gluon CEDM (Weinberg operator) contribution to the nucleon EDMs was first evaluated in [39] by Naive Dimensional Analysis (NDA) as \(d_{N}(w)\approx\mathcal{O}(ef_{\pi}w)\) at the matching scale \(\mu_{*}\simeq 225\) MeV chosen by the condition \(\alpha_{s}(\mu_{*})=4\pi/6\). Later QCD sum rules were used to compute the one-particle reducible contribution in [40; 41] which is obtained by chiral rotation of the CP-odd nucleon mass. Thus, they found about a factor of two smaller neutron EDM than the NDA estimation while obtaining the opposite sign for the proton EDM [41]. 
Their result is \[d_{N}(w)=-\mu_{N}\frac{3g_{s}m_{0}^{2}}{32\pi^{2}}w\ln\frac{M^{2}}{\mu_{\text{IR}}^{2}},\quad(N=p,n) \tag{10}\] where the matching scale is at \(\mu=1\) GeV, \(\mu_{N}\) is the anomalous magnetic moment of the nucleon, \(g_{s}=2.1\), \(M\) is the Borel mass in the sum-rule calculation, and \(\mu_{\text{IR}}\) is the IR cutoff. The theoretical uncertainty is about 50%, dominated by the ratio \(M/\mu_{\text{IR}}\), which is taken to be \(\sqrt{2}\leq M/\mu_{\text{IR}}\leq 2\sqrt{2}\) in [41]. The anomalous magnetic moments of the nucleons are experimentally [4] \[\mu_{p}=2.79\frac{e}{2m_{p}}=-1.46\mu_{n} \tag{11}\] Recently, the one-particle irreducible contribution was also calculated in [42] using the non-relativistic quark model; it is about 5 times smaller than the one-particle reducible contribution and has the opposite sign. Using this latest result in [42], numerically \[d_{p}(w)=-18w\,e\,\text{MeV}, \tag{12}\] \[d_{n}(w)=20w\,e\,\text{MeV}\] at the matching scale \(\mu=1\) GeV with 60% theoretical uncertainty. Using the central values for the involved parameters, we obtain from Eq. (10) and Eq. (12) \[d_{p}(\bar{\theta},\tilde{d}_{q},d_{q},w)= -0.46\times 10^{-16}\bar{\theta}\,e\,\text{cm}+e\left(-0.17\tilde{d}_{u}+0.12\tilde{d}_{d}+0.0098\tilde{d}_{s}\right) \tag{13}\] \[+0.36d_{u}-0.09d_{d}-18w\,e\,\text{MeV},\] \[d_{n}(\bar{\theta},\tilde{d}_{q},d_{q},w)= 0.31\times 10^{-16}\bar{\theta}\,e\,\text{cm}+e\left(-0.13\tilde{d}_{u}+0.16\tilde{d}_{d}-0.0066\tilde{d}_{s}\right)\] \[-0.09d_{u}+0.36d_{d}+20w\,e\,\text{MeV}.\]

If the strong CP problem is resolved by the PQ mechanism, \(\bar{\theta}\) is no longer a constant parameter, but the vacuum expectation value (VEV) of the QCD axion, which is not independent of hadronic CPV operators. As outlined in Section 2, there can be two potentially competing contributions to the axion VEV: \[\bar{\theta}_{\rm PQ}\equiv\frac{\langle a\rangle}{f_{a}}=\bar{\theta}_{\rm UV}+\bar{\theta}_{\rm BSM}, \tag{14}\]
(4.2) with \(\bar{\theta}_{\rm PQ}\), we obtain \[\begin{split}\Theta_{p}^{\rm PQ}(\bar{\theta}_{\rm UV},\tilde{d}_{q},d_{q})=&\,\chi m_{*}(4e_{u}-e_{d})\bar{\theta}_{\rm UV}-\left(\frac{1}{8}(2\kappa+\xi)+\frac{1}{2}\chi m_{0}^{2}\right)(4e_{u}\tilde{d}_{u}-e_{d}\tilde{d}_{d})+(4d_{u}-d_{d}),\\ \Theta_{n}^{\rm PQ}(\bar{\theta}_{\rm UV},\tilde{d}_{q},d_{q})=&\,\chi m_{*}(4e_{d}-e_{u})\bar{\theta}_{\rm UV}+\left(\frac{1}{8}(2\kappa+\xi)+\frac{1}{2}\chi m_{0}^{2}\right)(4e_{d}\tilde{d}_{d}-e_{u}\tilde{d}_{u})+(4d_{d}-d_{u}),\end{split} \tag{4.10}\] where notably the strange quark CEDM \(\tilde{d}_{s}\) contribution is cancelled [36], and the gluon CEDM contribution via \(\bar{\theta}_{\rm PQ}\) is ignored, since it is negligible compared to the direct contribution in Eq. (4.6) due to the chiral suppression (\(\sim m_{*}/4\pi f_{\pi}\)). Numerically we then find \[\begin{split} d_{p}^{\rm PQ}(\bar{\theta}_{\rm UV},\tilde{d}_{q},d_{q},w)=&\,-0.46\times 10^{-16}\bar{\theta}_{\rm UV}\,e\,\mathrm{cm}-e\left(0.58\tilde{d}_{u}+0.073\tilde{d}_{d}\right)\\ &\,+0.36d_{u}-0.089d_{d}-18w\,e\,\mathrm{MeV},\\ d_{n}^{\rm PQ}(\bar{\theta}_{\rm UV},\tilde{d}_{q},d_{q},w)=&\,0.31\times 10^{-16}\bar{\theta}_{\rm UV}\,e\,\mathrm{cm}+e\left(0.15\tilde{d}_{u}+0.29\tilde{d}_{d}\right)\\ &\,-0.089d_{u}+0.36d_{d}+20w\,e\,\mathrm{MeV},\end{split} \tag{4.11}\] for the nucleon EDMs in the presence of the PQ mechanism.

As mentioned in the previous section, in this work we focus on gauge and/or Higgs-mediated CPV from a new physics sector which is charged under the QCD gauge group. In this case, the quark EDMs \(d_{q}\) are typically subdominant compared with the quark CEDMs \(\tilde{d}_{q}\). Thus, we will neglect the contribution from the quark EDMs in what follows. In this class of models, moreover, the quark chirality violation in the quark CEDM operators is from the SM Yukawa couplings. It implies that \[\tilde{d}_{q}(\mu)=m_{q}C_{2}(\mu) \tag{4.12}\] with the flavor-independent running coefficient \(C_{2}(\mu)\) as defined in Section 3. By this relation Eq. (4.7) becomes \[\begin{split} d_{p}(\bar{\theta},C_{2},w)=&\,-0.46\times 10^{-16}\bar{\theta}\,e\,\mathrm{cm}+1.1C_{2}\,e\,\mathrm{MeV}-18w\,e\,\mathrm{MeV},\\ d_{n}(\bar{\theta},C_{2},w)=&\,0.31\times 10^{-16}\bar{\theta}\,e\,\mathrm{cm}-0.15C_{2}\,e\,\mathrm{MeV}+20w\,e\,\mathrm{MeV}.\end{split} \tag{4.13}\] On the other hand, if the PQ mechanism is working for resolving the strong CP problem, Eq. (4.11) yields \[\begin{split} d_{p}^{\rm PQ}(\bar{\theta}_{\rm UV},C_{2},w)&=-0.46\times 10^{-16}\bar{\theta}_{\rm UV}\,e\,{\rm cm}-1.7C_{2}\,e\,{\rm MeV}-18w\,e\,{\rm MeV},\\ d_{n}^{\rm PQ}(\bar{\theta}_{\rm UV},C_{2},w)&=0.31\times 10^{-16}\bar{\theta}_{\rm UV}\,e\,{\rm cm}+1.7C_{2}\,e\,{\rm MeV}+20w\,e\,{\rm MeV}.\end{split} \tag{4.14}\] From the numerical values in Eq. (4.13) and Eq. (4.14), one can observe that \[d_{p}(\bar{\theta},w) \approx -d_{n}(\bar{\theta},w), \tag{4.15}\] \[d_{p}^{\rm PQ}(\bar{\theta}_{\rm UV},C_{2},w) \approx -d_{n}^{\rm PQ}(\bar{\theta}_{\rm UV},C_{2},w), \tag{4.16}\] while \[d_{p}(C_{2})\approx-7d_{n}(C_{2}). \tag{4.17}\] These approximate relations can be confirmed more precisely by the analytic results Eq. (4.2), Eq. (4.4), and Eq. (4.10) from QCD sum rules. Imposing the relation Eq. (4.12) in Eq. (4.2) and Eq.
(4.10), we find \[\frac{d_{p}(\bar{\theta})}{d_{n}(\bar{\theta})} = \frac{d_{p}^{\rm PQ}(\bar{\theta}_{\rm UV})}{d_{n}^{\rm PQ}(\bar{\theta}_{\rm UV})}\simeq\frac{4e_{u}-e_{d}}{4e_{d}-e_{u}}=-\frac{3}{2}, \tag{4.18}\] \[\frac{d_{p}(C_{2})}{d_{n}(C_{2})} \simeq -4\left(1-\frac{5(2\kappa+\xi)}{4m_{0}^{2}\chi}\right)^{-1}\simeq -(6.5\pm 1.1), \tag{4.19}\] \[\frac{d_{p}^{\rm PQ}(C_{2})}{d_{n}^{\rm PQ}(C_{2})} \simeq \frac{4e_{u}m_{u}-e_{d}m_{d}}{4e_{d}m_{d}-e_{u}m_{u}}\simeq-1, \tag{4.20}\] where we have used \(m_{d}/m_{u}\simeq 2\). On the other hand, Eq. (4.4), Eq. (4.6) and Eq. (4.11) tell us \[\frac{d_{p}(w)}{d_{n}(w)}\simeq\frac{d_{p}^{\rm PQ}(w)}{d_{n}^{\rm PQ}(w)}\simeq-1.2(3). \tag{4.21}\] Given other estimates of the nucleon EDMs based on current algebra [7], chiral perturbation theory [43; 44], lattice-QCD calculations [45; 46; 47; 48; 49], and their discrepancy with the sum rule results, one may trust the ratios derived above from QCD sum rules up to about 50% uncertainty. Thus, Eqs. (4.18)-(4.21) confirm the approximate relations in Eqs. (4.15)-(4.17) within the theoretical uncertainty of the sum rule approach. Note that the RGE effect may modify these relations: using Eq. (3.15), we find that the RG-induced quark CEDMs from the gluon CEDM give rise to only about a 10% change to \(d_{p}^{(\rm PQ)}(w)/d_{n}^{(\rm PQ)}(w)\).

As depicted in Fig. 1, the approximate relations Eqs. (4.15)-(4.17) tell us that the nucleon EDMs can discern only the quark CEDM-dominated CP violation without a QCD axion from the other scenarios. If the QCD axion exists, \(d_{p}\approx-d_{n}\) regardless of the dominant source of CP violation. Interestingly, this suggests that future nucleon EDM measurements may give us important information on the existence of the QCD axion: if the relation \(d_{p}\approx-d_{n}\) is experimentally observed, it at least supports the existence of the QCD axion. However, the nucleon EDMs cannot tell us about the origin of a non-zero axion vacuum value, i.e. whether it comes from UV-originated quantum gravity effects or from BSM CP-violation sources. Thus, we need to look for other CPV observables beyond the nucleon EDMs in order to get information on the quality of the PQ symmetry. In the next subsection, we will discuss the use of atomic EDMs for this purpose.

### Atomic EDMs

In the previous subsection, we have discussed that the nucleon EDMs cannot discriminate among different sources of explicit PQ breaking provided that there is a QCD axion. In this subsection, we examine whether atomic EDMs are capable of distinguishing them by means of other hadronic CPV observables. Diamagnetic atomic EDMs are sensitive to CP-odd nuclear forces like CPV pion-nucleon couplings. For instance, the EDMs of diamagnetic light nuclei such as Deuteron and Helium [50], Radium [12], and Xenon [51] are generated by nucleon EDMs and CPV pion-nucleon interactions as \[d_{D} = 0.94(1)(d_{n}+d_{p})+0.18(2)\bar{g}_{1}\,e\,\mathrm{fm}, \tag{4.22}\] \[d_{He} = 0.9d_{n}-0.05d_{p}+[0.10(3)\bar{g}_{0}+0.14(3)\bar{g}_{1}]\;e\,\mathrm{fm}, \tag{4.23}\] \[d_{Ra} = 7.7\times 10^{-4}\left[(2.5\pm 7.5)\bar{g}_{0}-(65\pm 40)\bar{g}_{1}\right]\,e\,\mathrm{fm}, \tag{4.24}\] \[d_{Xe} = 1.3\times 10^{-5}d_{n}-10^{-5}\left[1.6\bar{g}_{0}+1.7\bar{g}_{1}\right]\,e\,\mathrm{fm}. \tag{4.25}\]

Figure 1: The predicted ratios of the proton EDM to the neutron EDM depending on the different origins of CP violation.
The shaded regions denote the cases where the EDMs originate dominantly from i) the QCD \(\bar{\theta}\)-parameter (gray) regardless of the PQ mechanism, ii) the gluon CEDM (i.e. the Weinberg operator) (green) again regardless of the PQ mechanism, iii) the quark CEDMs (red/blue) without/with the PQ mechanism. Here we assume that the CEDMs are generated at the BSM scale \(\Lambda=1\) TeV and subsequently follow the RGE down to the low energy scales; however, our results are not sensitive to the value of \(\Lambda\).

In Eqs. (4.22)-(4.25), \(\bar{g}_{0}\) is the isospin-conserving CPV pion-nucleon coupling, while \(\bar{g}_{1}\) is the isospin-breaking one, defined by \[\bar{g}_{0}\bar{N}\frac{\vec{\sigma}}{2}\cdot\vec{\pi}N+\bar{g}_{1}\pi_{3}\bar{N}N. \tag{4.26}\] Here \(N=(p\ n)^{T}\) is the isospin-doublet nucleon field, and \(\vec{\pi}\) is the isospin-triplet pion field. The CPV pion-nucleon couplings have been computed by various methods including QCD sum rules and chiral perturbation theory. The contributions from the QCD \(\bar{\theta}\)-parameter were estimated in [52; 53] using the chiral symmetry relation between the CPV pion-nucleon couplings and the quark mass corrections to baryon masses: \[\bar{g}_{0}(\bar{\theta}) = \frac{\delta m_{N}}{2f_{\pi}}\frac{1-\epsilon^{2}}{2\epsilon}\bar{\theta}=(15.7\pm 1.7)\times 10^{-3}\bar{\theta}, \tag{4.27}\] \[\bar{g}_{1}(\bar{\theta}) = \left(8c_{1}m_{N}\frac{\epsilon(1-\epsilon^{2})}{16f_{\pi}m_{N}}\frac{m_{\pi}^{4}}{m_{K}^{2}-m_{\pi}^{2}}+\mathcal{O}\left(\epsilon\frac{m_{\pi}^{4}}{m_{N}^{3}f_{\pi}}\right)\right)\bar{\theta}=-(3.4\pm 2.4)\times 10^{-3}\bar{\theta}, \tag{4.28}\] where \(\delta m_{N}=m_{n}-m_{p}=2.49(17)\) MeV, \(\epsilon=(m_{d}-m_{u})/2\bar{m}=0.37(3)\), \(\bar{m}=(m_{u}+m_{d})/2=3.37(8)\) MeV, \(c_{1}=1.0(3)\,\text{GeV}^{-1}\) is related to the nucleon sigma term [54], \(f_{\pi}=92.2\) MeV, \(m_{K}=494.98\) MeV, and \(m_{\pi}=135\) MeV. Here \(\bar{g}_{1}\) is subject to a large theoretical uncertainty, since the Next-to-Leading Order (NLO) correction is as large as the Leading Order (LO) contribution, and it is unclear how quickly the chiral perturbation theory estimate converges.

The contributions from the quark CEDMs to the CPV pion-nucleon couplings were estimated with QCD sum rules in [55], and recently the estimation was improved in [12] using the chiral symmetry relation and neglecting contributions related to the matrix elements of the quark chromomagnetic dipole moments, based on the argument of [56]. Using the result of [12], we get \[\bar{g}_{0}(\tilde{d}_{q},\bar{\theta}) \simeq \delta_{g_{0}}\frac{1}{4f_{\pi}}(\tilde{d}_{u}+\tilde{d}_{d})\frac{m_{0}^{2}}{2}\frac{d\delta m_{N}}{d\bar{m}\epsilon}+\frac{\delta m_{N}}{2f_{\pi}}\frac{1-\epsilon^{2}}{2\epsilon}(\bar{\theta}-\bar{\theta}_{\rm PQ}+\bar{\theta}_{\rm UV}) \simeq (2.2\pm 0.7){\rm GeV}\,(\tilde{d}_{u}+\tilde{d}_{d})+(15.7\pm 1.7)\times 10^{-3}(\bar{\theta}-\bar{\theta}_{\rm PQ}+\bar{\theta}_{\rm UV}), \tag{4.29}\] \[\bar{g}_{1}(\tilde{d}_{q}) \simeq \delta_{g_{1}}\frac{1}{2f_{\pi}}(\tilde{d}_{u}-\tilde{d}_{d})\frac{m_{0}^{2}}{2}\frac{\sigma_{\pi N}}{\bar{m}}=(38\pm 13){\rm GeV}\,(\tilde{d}_{u}-\tilde{d}_{d}), \tag{4.30}\] at the matching scale \(\mu=1\) GeV. Here \(d\delta m_{N}/d\bar{m}\epsilon\simeq\delta m_{N}/\bar{m}\epsilon=2.49(17)\,{\rm MeV}/\bar{m}\epsilon\), \(\sigma_{\pi N}=59.1(35)\) MeV, \(\bar{\theta}_{\rm PQ}\) and \(\bar{\theta}_{\rm UV}\) are defined in Eq. (4.8), and \(\delta_{g_{0,1}}=(1.0\pm 0.3)\) account for the theoretical uncertainty.
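The \(\bar{\theta}\) coefficients quoted in Eq. (4.7) and Eq. (4.27) follow from straightforward arithmetic on the inputs listed above. As a minimal numerical cross-check (central values only), the Python sketch below reproduces them; the strange quark mass, which is not quoted in this section, is set to an assumed \(m_{s}\approx 0.1\) GeV, to which the result is insensitive since \(m_{*}\) is dominated by the light quarks.

```python
# Cross-check of the theta-bar coefficients in Eq. (4.7) and Eq. (4.27),
# using only the central values of the sum-rule inputs quoted above.
import numpy as np

c0, mN, lamN = 1.8e-2, 0.939, 0.0436      # GeV units; lamN enters squared
qbarq = -(0.225)**3                        # quark condensate, GeV^3
chi = -5.7                                 # GeV^-2
mbar, eps = 3.37e-3, 0.37
mu_q, md_q = mbar*(1 - eps), mbar*(1 + eps)
ms_q = 0.1                                 # GeV, assumed; m_* barely depends on it
mstar = 1.0/(1/mu_q + 1/md_q + 1/ms_q)
eu, ed = 2/3, -1/3                         # quark EM charges in units of e

prefac = -c0*mN**3*qbarq/lamN**2           # overall factor in Eq. (4.1)
GEVINV_TO_CM = 1.9733e-14                  # (1 GeV)^-1 in cm

d_p = prefac*chi*mstar*(4*eu - ed)*GEVINV_TO_CM   # e cm per unit theta-bar
d_n = prefac*chi*mstar*(4*ed - eu)*GEVINV_TO_CM
print(f"d_p ~ {d_p:.2e}, d_n ~ {d_n:.2e} e cm")   # ~ -0.4e-16 and 0.3e-16,
                                                  # close to Eq. (4.7) up to rounding

f_pi, dmN = 0.0922, 2.49e-3                # GeV
g0 = dmN/(2*f_pi)*(1 - eps**2)/(2*eps)     # Eq. (4.27)
print(f"g0 ~ {g0:.4f} x theta-bar")        # ~ 0.0157, i.e. 15.7e-3
```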
Finally, the gluon CEDM (Weinberg operator) contribution to \(\bar{g}_{1}\) was computed with QCD sum rules and chiral perturbation theory in [51] as \[\bar{g}_{1}(w)\simeq\langle 0|\mathcal{L}_{w}|\pi^{0}\rangle\left(\frac{\sigma_{\pi N}}{f_{\pi}^{2}m_{\pi}^{2}}+\frac{5g_{A}^{2}m_{\pi}}{64\pi f_{\pi}^{4}}\right)\simeq\pm(2.6\pm 1.5)\times 10^{-3}w\,{\rm GeV}^{2}, \tag{4.31}\] at the matching scale \(\mu=1\) GeV, where \(\mathcal{L}_{w}=\frac{1}{3}wf^{abc}G_{\alpha}^{a\mu}G_{\mu}^{b\delta}\widetilde{G}_{\delta}^{c\alpha}\) is the Weinberg operator, and \(g_{A}=1.27\). Here the sign ambiguity is from the matrix element of the Weinberg operator estimated by QCD sum rules. On the other hand, to our knowledge, there has been no dedicated study of the estimation of \(\bar{g}_{0}\) from the gluon CEDM so far. However, the direct contribution to \(\bar{g}_{0}\) and \(\bar{g}_{1}\) from the gluon CEDM at \(\mu=1\) GeV is expected to be negligible compared with the RG-induced quark CEDM contribution from the gluon CEDM of high scales above a TeV by the RG mixing Eq. (3.10). This can be explicitly seen for \(\bar{g}_{1}\) by applying Eq. (3.15) to Eq. (4.30) and Eq. (4.31), which gives \(\bar{g}_{1}(\Delta\tilde{d}_{q})\sim\mathcal{O}(10)\bar{g}_{1}(w)\). For \(\bar{g}_{0}\), if we use the NDA estimation, \[\bar{g}_{0}(w)\sim(m_{u}+m_{d})\mathcal{O}(4\pi f_{\pi}w), \tag{4.32}\] which is comparable to \(\bar{g}_{0}(\Delta\tilde{d}_{q})\) as can be seen from Eq. (3.15) and Eq. (4.29).

As the diamagnetic atomic EDMs in Eqs. (4.22)-(4.25) have better or comparable sensitivity to \(\bar{g}_{1}\) compared to \(\bar{g}_{0}\), and \(\bar{g}_{1}(\tilde{d}_{q})\) is also predicted to be an order of magnitude larger than \(\bar{g}_{0}(\tilde{d}_{q})\) in Eq. (4.29) and Eq. (4.30), we can ignore \(\bar{g}_{0}(w)\) and \(\bar{g}_{1}(w)\) when estimating such diamagnetic atomic EDMs from the gluon CEDM generated above the TeV scale, unless \(\bar{g}_{0}(w)\) is unreasonably larger than the NDA estimation Eq. (4.32). Therefore, Eq. (4.29) and Eq. (4.30) may in fact be enough to estimate the contributions to \(\bar{g}_{0}\) and \(\bar{g}_{1}\) from the gluon CEDM operator generated above the TeV scale, because of the large RG-induced quark CEDM contributions.

Since \(\bar{g}_{1}(\tilde{d}_{q})\) is more important than \(\bar{g}_{0}(\tilde{d}_{q})\) for the diamagnetic atomic EDMs in Eqs. (4.22)-(4.25), let us consider the ratio \(\bar{g}_{1}/(m_{n}d_{n})\), which may have characteristic values depending on the CPV origin. Assuming the relation Eq.
(4.12) for the gauge and Higgs mediated CPV, we find \[\frac{e\bar{g}_{1}(\bar{\theta})}{m_{n}d_{n}(\bar{\theta})} = \frac{e\bar{g}_{1}^{\rm PQ}(\bar{\theta}_{\rm UV})}{m_{n}d_{n}^{\rm PQ}(\bar{\theta}_{\rm UV})}\approx-(2.3\pm 2.1), \tag{4.33}\] \[\frac{e\bar{g}_{1}(C_{2})}{m_{n}d_{n}(C_{2})} \approx (6.6\pm 4.8)\times 10^{2}, \tag{4.34}\] \[\frac{e\bar{g}_{1}^{\rm PQ}(C_{2})}{m_{n}d_{n}^{\rm PQ}(C_{2})} \approx -(72\pm 50), \tag{4.35}\] \[\frac{e\bar{g}_{1}(\Delta C_{2},w)}{m_{n}d_{n}(\Delta C_{2},w)} \simeq \frac{e\bar{g}_{1}^{\rm PQ}(\Delta C_{2},w)}{m_{n}d_{n}^{\rm PQ}(\Delta C_{2},w)}\approx-(5.0\pm 3.5)\,r(\Lambda), \tag{4.36}\] where \(C_{2}\equiv(\tilde{d}_{q}/m_{q})_{1\;{\rm GeV}}\), \(\Delta C_{2}(\Lambda)\equiv(\Delta\tilde{d}_{q}/m_{q})_{1\;{\rm GeV}}\) is the RG-induced quark CEDM coefficient renormalized at \(1\) GeV from the gluon CEDM generated at some high scale \(\Lambda\), and \(r(\Lambda)\equiv(\Delta C_{2}(\Lambda)/w)_{1\;{\rm GeV}}=0.41\) (for \(\Lambda=1\) TeV), \(0.53\) (for \(\Lambda=10\) TeV), and so on, as given in Eq. (3.15). We see that the quark CEDM-dominated CPV scenarios predict clearly different ratios from the \(\bar{\theta}\)-dominant case regardless of the PQ mechanism. Moreover, the predicted central values are quite different depending on whether there is a QCD axion or not, although they are subject to large uncertainties. On the other hand, the gluon CEDM-dominated CPV scenarios at high scales predict similar values for the ratio as the \(\bar{\theta}\)-dominant case. Thus, it would still be challenging to discriminate the gluon CEDM-dominant scenarios from the \(\bar{\theta}\)-dominant case even via hadronic CPV observables sensitive to the coupling \(\bar{g}_{1}\). Yet if we look at some diamagnetic atoms such as He (Eq. (4.23)) and Xe (Eq. (4.25)), which are equally sensitive to \(\bar{g}_{0}\) as well as \(\bar{g}_{1}\), the \(\bar{\theta}\)-dominant scenario would be distinguishable from the gluon CEDM-dominant cases by the relatively large \(\bar{g}_{0}(\bar{\theta})\) compared to \(\bar{g}_{1}(\bar{\theta})\).

In Fig. 2, we depict the ratios of various diamagnetic atomic EDMs to the neutron EDM for the BSM CPV scenarios that we are concerned with. As anticipated, the quark CEDM-dominant scenarios show clearly different patterns from the other scenarios, while the difference between the gluon CEDM-dominance and the \(\bar{\theta}\)-dominance is less clear. Yet we find that the EDMs of He and Xe are able to distinguish between the gluon CEDM and the \(\bar{\theta}\)-parameter via their sensitivity to the coupling \(\bar{g}_{0}\). In the \(\bar{\theta}\)-dominant scenario with the PQ mechanism, the axion VEV is by definition induced dominantly by PQ-breaking other than the QCD anomaly, such as quantum gravity effects, i.e. \(\bar{\theta}_{\rm PQ}\simeq\bar{\theta}_{\rm UV}\), while \(\bar{\theta}_{\rm PQ}\simeq\bar{\theta}_{\rm BSM}\) in the gluon or quark CEDM-dominance scenarios. As the EDMs of He and Xe discriminate the \(\bar{\theta}\)-dominance from the gluon or quark CEDM-dominance, regardless of the presence of the PQ mechanism, future EDM measurements of such diamagnetic atoms might provide information not only on the BSM CP violation, but also on the origin of the axion VEV, and hence on the quality of the PQ symmetry.

Figure 2: The EDMs of light diamagnetic nuclei which are sensitive to the CPV pion-nucleon couplings compared to the neutron EDM from the same CPV source. The shaded regions denote the cases where the EDMs originate dominantly from i) the QCD \(\bar{\theta}\)-parameter (gray) regardless of the PQ mechanism, ii) the gluon CEDM (i.e. the Weinberg operator) (green/orange) without/with the PQ mechanism, iii) the quark CEDMs (red/blue) without/with the PQ mechanism. Here, we assume that the CEDMs are generated at \(\Lambda=1\) TeV, but again the results are not sensitive to the value of \(\Lambda\).
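To make the EDM patterns of Fig. 2 easy to reproduce, the short Python helper below evaluates the diamagnetic atomic EDMs of Eqs. (4.22)-(4.25) for a given set of hadronic CPV parameters. Only the central values of the coefficients are used, and the example input (a \(\bar{\theta}\)-dominated scenario with \(\bar{\theta}=10^{-10}\)) is purely illustrative.

```python
# Diamagnetic atomic EDMs from Eqs. (4.22)-(4.25), central coefficients only.
def atomic_edms(d_n, d_p, g0, g1):
    """d_n, d_p in e fm; g0, g1 dimensionless. Returns EDMs in e fm."""
    return {
        "D":  0.94*(d_n + d_p) + 0.18*g1,                   # Eq. (4.22)
        "He": 0.9*d_n - 0.05*d_p + 0.10*g0 + 0.14*g1,       # Eq. (4.23)
        "Ra": 7.7e-4*(2.5*g0 - 65.0*g1),                    # Eq. (4.24)
        "Xe": 1.3e-5*d_n - 1e-5*(1.6*g0 + 1.7*g1),          # Eq. (4.25)
    }

# Example: theta-bar dominance with theta-bar = 1e-10, taking the nucleon
# EDMs from Eq. (4.7) (converted from e cm to e fm) and the CPV pion-nucleon
# couplings from Eqs. (4.27)-(4.28).
theta = 1e-10
print(atomic_edms(d_n=0.31e-16*theta*1e13, d_p=-0.46e-16*theta*1e13,
                  g0=15.7e-3*theta, g1=-3.4e-3*theta))
```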
## 5 BSM examples

Here we discuss specific BSM examples which communicate with the SM sector mainly through gauge and Higgs interactions. As we have discussed in section 3, their CP violation will therefore be manifested dominantly via the gluon and quark CEDMs.

### Vector-like Quarks

Vector-Like Quarks (VLQs) may be among the simplest new physics scenarios which transmit CPV to the SM by gauge and Higgs interactions. For CP violation, we consider a general renormalizable lagrangian for a VLQ \(\psi+\psi^{c}\) with a real singlet scalar [57] \[\mathcal{L}\supset-\left(m_{\psi}\psi\psi^{c}+y_{\psi}S\psi\psi^{c}+\text{h.c.}\right)-\frac{1}{2}m_{S}^{2}S^{2}-A_{SH}S|H|^{2}+\cdots, \tag{5.1}\] where the vector-like quark mass \(m_{\psi}\) and the Yukawa coupling \(y_{\psi}\) are complex parameters, and \(H\) is the SM doublet Higgs field. Here we will discuss this model in some detail, because it has not been comprehensively studied before concerning its EDM signatures beyond the scope of [57]. One can remove the phase of the fermion mass by a chiral rotation so that a complex CP phase appears in the Yukawa coupling only. Then we may write the lagrangian without loss of generality as \[\mathcal{L}\supset-\left(m_{\psi}\bar{\Psi}\Psi+y_{\psi}\cos\alpha\,S\bar{\Psi}\Psi+y_{\psi}\sin\alpha\,S\bar{\Psi}i\gamma_{5}\Psi\right)-\frac{1}{2}m_{S}^{2}S^{2}-A_{SH}S|H|^{2}+\cdots, \tag{5.2}\] where the parameters \(m_{\psi}\) and \(y_{\psi}\) are now real, and \(\alpha\) denotes the CP phase. Here \(\Psi\equiv(\psi\ \psi^{c*})^{T}\) is the Dirac field of the VLQ. If \(\cos\alpha\,\sin\alpha\neq 0\) (i.e. \(\alpha\neq 0,\pi/2\)), CP is broken, because \(S\) couples to both the CP-even fermion bilinear and the CP-odd fermion bilinear.

Assuming the VLQ and the singlet scalar are heavier than the electroweak scale, one can integrate them out. The effective lagrangian below the mass scales of the VLQ and the singlet scalar is then given by some of the operators in Eq. (3.1), generated from the first two diagrams in Fig. 3.
\[\begin{split}\mathcal{L}_{\text{CPV}}(\mu=\Lambda)=& c_{\widetilde{G}}f^{abc}G_{\alpha}^{a\mu}G_{\mu}^{b\delta}\widetilde{G}_{\delta}^{c\alpha}+c_{\widetilde{W}}\epsilon^{abc}W_{\alpha}^{a\mu}W_{\mu}^{b\delta}\widetilde{W}_{\delta}^{c\alpha}\\ &+|H|^{2}\left(c_{H\widetilde{G}}G_{\mu\nu}^{a}\widetilde{G}^{a\mu\nu}+c_{H\widetilde{W}}W_{\mu\nu}^{a}\widetilde{W}^{a\mu\nu}+c_{H\widetilde{B}}B_{\mu\nu}\widetilde{B}^{\mu\nu}\right)\end{split} \tag{5.3}\] with [58; 39] \[c_{\widetilde{X}} = -\frac{1}{12}\frac{g_{X}^{3}}{(4\pi)^{4}}\frac{y_{\psi}^{2}}{m_{\psi}^{2}}c_{\alpha}s_{\alpha}\,2\text{Tr}(T_{X}^{2}(\Psi))\,h(\tau), \tag{5.4}\] \[c_{H\widetilde{X}} = -\frac{g_{X}^{2}}{32\pi^{2}}\frac{y_{\psi}}{m_{\Psi}}s_{\alpha}\frac{A_{SH}}{m_{S}^{2}}\,2\text{Tr}(T_{X}^{2}(\Psi))\,f(\tau), \tag{5.5}\] where \(X=G,W,\) or \(B\), \(T_{X}(\Psi)\) is the representation of \(\Psi\) in the gauge group associated with the gauge field \(X\), \(\tau\equiv m_{\psi}^{2}/m_{S}^{2}\), and the loop functions \(h(\tau)\) and \(f(\tau)\) are given by \[h(\tau) = 4\tau^{2}\int_{0}^{1}dx\int_{0}^{1}dy\frac{x^{3}y^{3}(1-x)}{[\tau x(1-xy)+(1-x)(1-y)]^{2}}, \tag{5.6}\] \[f(\tau) = -2\tau\int_{0}^{1}dx\frac{1}{x}\ln[1-x(1-x)/\tau] = \begin{cases}-\tau\left[\ln\left(\frac{1+\sqrt{1-4\tau}}{1-\sqrt{1-4\tau}}\right)-i\pi\right]^{2},&\tau<1/4\\ 4\tau\arcsin^{2}(1/2\sqrt{\tau}),&\tau\geq 1/4\end{cases}. \tag{5.7}\] We note the asymptotic behavior of the loop functions: \[h(\tau)\simeq\begin{cases}-4\tau\ln\tau&(\tau\ll 1)\\ 1&(\tau\gg 1)\end{cases},\qquad f(\tau)\simeq\begin{cases}\tau^{2}&(\tau\ll 1)\\ 1&(\tau\gg 1)\end{cases} \tag{5.8}\]

The RG equations (Appendix A) tell us that further operators are induced at low energies by RG mixing through the third diagram in Fig. 3, and consequently around the weak scale the following operators are generated4 \[\mathcal{L}_{\rm CPV}(\mu=m_{W})=\frac{1}{3!}wf^{abc}\epsilon^{\alpha\beta\gamma\delta}G^{a}_{\mu\alpha}G^{b}_{\beta\gamma}G^{c\mu}_{\delta}-\frac{i}{2}\sum_{q}\left(\tilde{d}_{q}g_{s}\bar{q}\sigma^{\mu\nu}G_{\mu\nu}\gamma_{5}q+d_{q}e\bar{q}\sigma^{\mu\nu}F_{\mu\nu}\gamma_{5}q\right). \tag{5.9}\]

Footnote 4: If the VLQ \(\Psi\) is charged under the electromagnetism \(U(1)_{\rm em}\), the electron EDM is also generated, which we are not concerned with here.

The sizes of the Wilson coefficients are roughly \[w\sim\frac{g_{s}^{3}}{(4\pi)^{4}}\frac{y_{\psi}^{2}}{\Lambda^{2}}s_{2\alpha},\quad\tilde{d}_{q}\sim\frac{g_{s}^{2}}{(4\pi)^{4}}\frac{y_{\psi}}{\Lambda}\frac{m_{q}}{v}s_{\alpha}s_{\xi},\quad d_{q}\sim\frac{e^{2}}{(4\pi)^{4}}\frac{y_{\psi}}{\Lambda}\frac{m_{q}}{v}s_{\alpha}s_{\xi}, \tag{5.10}\] where \(\Lambda\sim m_{\psi}\sim m_{S}\) and \(\xi\) is the Higgs-singlet scalar mixing angle, \(s_{\xi}\sim A_{SH}v/m_{S}^{2}\). Therefore, the quark EDMs are relatively small compared with the quark CEDMs by the factor \(\alpha/\alpha_{s}\), and the quark EDMs' contribution to the nuclear and atomic EDMs can be neglected.

In Fig. 4, we estimate the neutron EDM from the CPV VLQs in terms of the VLQ mass \(m_{Q}\) and the singlet scalar mass \(m_{S}\), assuming the CP angle \(\alpha=1\), no \(S\)-\(H\) mixing (\(\xi=0\)), and the Yukawa coupling \(y_{\psi}=1\).

Figure 3: The diagrams for the dimension-six CPV operators from a VLQ and a singlet scalar. The blob in the third diagram is from the second diagram. If the VLQ is charged under the electroweak gauge groups, the gluons can be replaced by the electroweak gauge bosons.
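For readers who wish to evaluate the matching coefficients in Eqs. (5.4)-(5.5) numerically, the Python sketch below implements the loop functions of Eqs. (5.6)-(5.7) (real branch, \(\tau>1/4\), for \(f\)) and checks them against the asymptotics of Eq. (5.8). It is a minimal illustration, not code from Ref. [57].

```python
# Loop functions of Eqs. (5.6)-(5.7) and a check of the tau >> 1 limits.
import numpy as np
from scipy import integrate

def h(tau):
    # Eq. (5.6): two-dimensional Feynman-parameter integral
    integrand = lambda y, x: (x**3*y**3*(1 - x)
                              / (tau*x*(1 - x*y) + (1 - x)*(1 - y))**2)
    val, _ = integrate.dblquad(integrand, 0.0, 1.0, 0.0, 1.0)
    return 4*tau**2*val

def f_closed(tau):
    # Eq. (5.7), closed form for tau >= 1/4
    return 4*tau*np.arcsin(1.0/(2.0*np.sqrt(tau)))**2

def f_integral(tau):
    # Eq. (5.7), integral representation (real for tau > 1/4)
    val, _ = integrate.quad(lambda x: np.log(1 - x*(1 - x)/tau)/x, 0.0, 1.0)
    return -2*tau*val

print(f_closed(1.0), f_integral(1.0))  # the two forms should agree
print(h(50.0), f_closed(50.0))         # both approach 1 for tau >> 1, Eq. (5.8)
```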
Even without \(S\)-\(H\) mixing, non-zero quark CEDMs are induced by the RGE from the Weinberg operator, as can be seen from Eq. (3.15). However, the figure shows that the neutron EDM is dominantly given by the Weinberg operator, with about a 10% correction from the RGE-induced quark CEDMs. In Fig. 5, on the other hand, we consider a non-vanishing \(S\)-\(H\) mixing \(\sin\xi\simeq v/m_{S}\), for which sizable quark CEDMs are generated at the UV scale \(\Lambda=\min(m_{\psi},m_{S})\). For this case, the corrections from the RGE are not important for the neutron EDM, and the neutron EDM is mostly determined by the quark CEDMs in the viable parameter space with \(d_{n}<10^{-26}\,e\,\mathrm{cm}\). The contribution from the Weinberg operator is rather small, below 5%.

### Supersymmetry

In supersymmetric (SUSY) extensions of the SM, the dominant CP violating operator is determined by details of the mass spectrum of SUSY particles. Even in the simplest phenomenologically viable scenarios, such as the MSSM, there are multiple new sources of CPV, which can have a significant impact on the phenomenology of the model. In the case that sfermions are as light as the gauginos and Higgsinos, the leading CPV operator is typically the quark CEDM [59], generated by one-loop diagrams such as the one shown on the right side of Fig. 6. CPV is generated by the complex nature of the SUSY breaking parameters, as typically many of them contain a non-zero phase that remains in the gaugino and Higgsino masses even after field redefinitions. Other complex parameters of the MSSM include, e.g., squark or slepton mass matrices and bilinear or trilinear couplings;5 for an extensive discussion of these terms, we refer to [60; 59; 61].

Figure 4: The neutron EDM from a CPV VLQ by the Weinberg operator (left) and the RGE induced quark CEDMs (right) with vanishing singlet scalar-Higgs mixing (\(\xi=0\)). For the plot, we choose the Yukawa coupling \(y_{\psi}=1\) and the CP angle \(\alpha=1\). The Weinberg operator gives a dominant contribution to the neutron EDM for the vanishing mixing angle.

In fact, these one-loop diagrams involving CPV complex parameters are enhanced by a potentially large \(\tan\beta\). This can easily lead, in a generic SUSY scenario, to an electron or neutron EDM that is much larger than experiments allow. The discrepancy between such theoretical expectations and experimental results is called the SUSY CP problem, and several explanations for it have been investigated in the literature; an overview of some of them can be found in [59]. An apparent solution to evade these constraints is to assume that some SUSY particles are very heavy or that the CPV phases are aligned or canceled by other effects. Another, more complete, possibility is to consider specific scenarios of SUSY breaking that achieve this by some well-motivated mechanism, such as split SUSY [62; 63; 64] or natural SUSY [65; 66; 67]. The former scenario assumes that the scalar superpartners are much heavier than the fermionic ones, such as gauginos and higgsinos. This can suppress the EDMs from one-loop diagrams involving scalars, but it also enhances the EDMs from two-loop diagrams involving gauginos. For example, the gluino can induce a large EDM for the quarks through its interaction with the gluon. In fact, split (or high-scale) SUSY is an excellent example in which the SUSY CPV is dominantly mediated by gauge and Higgs interactions with the SM sector [68; 38].
In particular, the gluon CEDM shown on the left of Fig. 6 can be the dominant CPV operator if the gluino has a mass comparable to that of the charginos and neutralinos [38].

Figure 5: The neutron EDM from a CPV VLQ by the quark CEDMs (left) and the ratio of the neutron EDM from the Weinberg operator to the one from the quark CEDMs (right) with non-vanishing singlet scalar-Higgs mixing \(\sin\xi\simeq v/m_{S}\). For the plot, we choose the Yukawa coupling \(y_{\psi}=1\) and the CP angle \(\alpha=1\). The quark CEDMs give a dominant contribution to the neutron EDM for the non-vanishing mixing angle.

On the other hand, natural SUSY is a scenario where only the superpartners that are relevant for electroweak symmetry breaking, such as stops and higgsinos, are light. Such a spectrum typically avoids problems associated with fine-tuning, while at the same time it introduces new sources of CPV from the Higgs sector. For example, a new tree-level interaction between the Higgs and a singlet field (introduced, e.g., to solve the so-called \(\mu\) problem) can generate a large EDM for the electron or quarks through two-loop Barr-Zee type diagrams [61]. The extended Higgs sector of the MSSM, which is required to cancel the chiral anomalies, is another source of SUSY contributions to EDMs. It consists of two Higgs doublets, which result in five physical Higgs bosons: two CP-even scalars \(h\), \(H\), one CP-odd pseudoscalar \(A\), and two charged scalars \(H^{\pm}\). The exchange of these Higgs bosons at the one-loop level can induce EDMs for quarks and leptons through their Yukawa couplings and their CKM matrix elements. In fact, this type of Higgs sector is a special case of the more general class of models known as type II Two-Higgs-Doublet Models (2HDMs) that predict such an extended scalar sector; we discuss them in the next section. The EDMs from the extended Higgs sector of the MSSM depend on the masses and couplings of the Higgs bosons, as well as the CPV phase in the Higgs potential.

Another possibility for SUSY contributions to EDMs is the R-parity violating (RPV) MSSM, which allows for lepton and baryon number violating interactions among the superpartners. The RPV MSSM does not introduce new one-loop diagrams contributing to the EDMs [69], and the leading contribution takes place at the two-loop level, mainly through the Barr-Zee type diagrams, which involve a loop of charged particles and a loop of neutral particles. However, the discussion of the RPV MSSM is beyond the scope of this work; an extensive discussion can be found in [70].

Figure 6: The diagrams illustrating the dimension-six CPV operators generated in supersymmetric extensions of the SM. The blob in the first diagram denotes the gluino CEDM originating from the CP phase of the gluino mass.

### 2HDMs

2HDMs are a class of models that can mediate CP violation through heavy beyond the Standard Model (BSM) Higgs bosons (three neutral and two charged ones), with a \(Z_{2}\) symmetry imposed to suppress the flavor-changing neutral currents; see [71; 72] for an extended discussion of their EDM phenomenology. CPV phases can enter through both Yukawa interactions, parameterized in general by arbitrary complex matrices,6 and by the CPV terms in the potential of neutral scalars.

Footnote 6: The special case of phases described by a scalar matrix corresponds to the so-called Aligned 2HDM.
Compared to the Higgs sector of the MSSM, the 2HDM can potentially exhibit more significant CPV effects, due to the possible presence of physical CP-violating phases in the Higgs sector. These CPV phases can exist even if all the input parameters are real and, in contrast to the MSSM, cannot be rotated away by field redefinitions, owing to the absence of R-symmetry. Thus, even if the input parameters are chosen to be real, spontaneous symmetry breaking in the 2HDM can give rise to CPV, which does not hold for the MSSM at the tree level. On the other hand, in the MSSM, CP violation can arise from the complex phases of the soft SUSY-breaking parameters or from loop-level effects, as discussed in the previous section, even if the Higgs sector parameters are chosen to be real.

2HDMs are characterized by a rich EDM phenomenology, which depends largely on how the Higgs doublets couple to the SM fermions, and therefore fall into several types; see, e.g., [73] for an overview. In these models, the quark CEDMs are the dominant CPV operators, and they can be generated by top quark loops, as illustrated in Fig. 7, which also involve the exchange of neutral and charged Higgs bosons. Another significant source of CPV emerges from the CEDM of the gluon [71]. In contrast, the CPV four-fermion operators, which arise from the exchange of two heavy Higgs bosons, are typically negligible. This is because they are suppressed by the product of two small Yukawa couplings and the absence of the potentially large factor \(\tan^{3}\beta\); the parameter \(\tan\beta\) is the ratio of the vacuum expectation values of the two Higgs doublets, which determines the strength of the Yukawa couplings. Therefore, the EDMs in 2HDMs with a \(Z_{2}\) symmetry are mainly sensitive to the quark and gluon CEDMs.

Figure 7: The diagrams depict the dimension-six CPV operators originating from 2HDMs. Here, \(\phi_{i}^{0}=h,H^{0},A^{0}\) denotes the neutral Higgs bosons. The left panel illustrates the generation of the Weinberg operator, while the right one presents the generation of the quark CEDM.

## 6 Conclusions

Since the SM predictions of the nuclear and atomic EDMs from the Kobayashi-Maskawa phase are well below the current and near-future experimental bounds, the observation of a non-vanishing EDM in the near future would indicate that the underlying CP violation is due to the QCD \(\bar{\theta}\)-parameter or a BSM source. In this work, we have examined whether future EDM measurements of nucleons and diamagnetic atoms can give us information on the origin of CP violation and the PQ mechanism for the dynamical relaxation of the QCD \(\bar{\theta}\)-parameter. In the presence of the PQ mechanism, BSM CP violation affects EDMs both directly and by shifting the axion vacuum value when combined with the PQ breaking by the QCD anomaly. On the other hand, PQ breaking other than the QCD anomaly, e.g. quantum gravity effects, which typically takes place at UV scales and characterizes the quality of the PQ symmetry, affects the EDMs _mostly_ by shifting the axion vacuum value. For this reason, the patterns of nucleon and atomic EDMs can be sensitive to the existence of the QCD axion and the quality of the associated PQ symmetry, in addition to providing information on the effective operators describing the BSM CP violation at low energy scales. To be concrete and for simplicity, we focus on a class of BSM scenarios where BSM CP violation is dominantly mediated to the SM sector by the SM gauge and Higgs interactions.
In this class of BSM scenarios, CP violation around the QCD scale is dominantly given by the gluon CEDM (i.e. the Weinberg operator) and/or the light quark CEDMs. Motivated examples include vector-like quarks and certain parameter spaces of the MSSM and the two-Higgs-doublet models. We find that the nucleon EDMs can show a distinctive pattern when the EDMs are dominantly induced by light quark CEDMs _without_ having a QCD axion. In cases with a QCD axion, the nucleon EDMs due to the gluon or light quark CEDMs have a similar pattern to those due to the QCD \(\bar{\theta}\)-parameter, regardless of the origin of the axion vacuum value which determines the \(\bar{\theta}\)-parameter. In contrast, diamagnetic atomic EDMs due to the gluon or light quark CEDMs have characteristic patterns distinguishable from the pattern due to the \(\bar{\theta}\)-parameter which is induced dominantly by UV-originated PQ breaking other than the QCD anomaly, for instance by quantum gravity effects. Therefore, future measurements of nuclear and atomic EDMs may tell us quite a lot about the origin of CP violation, the existence of the QCD axion, and the quality of the PQ symmetry. More extensive studies on this matter with other BSM CPV sources and hadronic/leptonic CPV observables are left for future work.

This work was supported by IBS under the project code IBS-R018-D1. We thank Nodoka Yamanaka for helpful discussions.

## Appendix A RGE of the CPV dimension-six operators

In the gauge and Higgs mediated CP violation, the CPV effect above the electroweak scale appears through the following dimension-six operators of the SM gauge fields and the Higgs field, as given in Eq. (3.1) and (3.2): \[\begin{split}\mathcal{L}_{\rm CPV}=& c_{\widetilde{G}}f^{abc}G_{\alpha}^{a\mu}G_{\mu}^{b\delta}\widetilde{G}_{\delta}^{c\alpha}+c_{\widetilde{W}}\epsilon^{abc}W_{\alpha}^{a\mu}W_{\mu}^{b\delta}\widetilde{W}_{\delta}^{c\alpha}\\ &+|H|^{2}\left(c_{H\widetilde{G}}G_{\mu\nu}^{a}\widetilde{G}^{a\mu\nu}+c_{H\widetilde{W}}W_{\mu\nu}^{a}\widetilde{W}^{a\mu\nu}+c_{H\widetilde{B}}B_{\mu\nu}\widetilde{B}^{\mu\nu}\right)+c_{H\widetilde{W}B}H^{\dagger}\tau^{a}H\widetilde{W}_{\mu\nu}^{a}B^{\mu\nu}\\ &+\left(\sum_{q=u,d}\sum_{X=G,W,B}i(c_{qX})_{ij}\bar{Q}_{Li}\sigma^{\mu\nu}X_{\mu\nu}q_{Rj}H^{(*)}+\sum_{X=W,B}i(c_{eX})_{ij}\bar{L}_{i}\sigma^{\mu\nu}X_{\mu\nu}e_{Rj}H^{(*)}+{\rm h.c.}\right)\end{split}\tag{A.1}\] where the Wilson coefficients \(c_{\alpha}\left(\alpha=\widetilde{G},\widetilde{W},...\right)\) are all real-valued, \(i,j\) denote flavor indices, and \(H^{(*)}\equiv H\) or \(H^{*}\) in order to make the operators invariant under the SM gauge groups. The RG equations of the above dimension-six operators at one-loop are given in [28; 29; 30]. Here we use the Yukawa couplings defined as \[\mathcal{L}_{\rm Yukawa}=-\left[(Y_{u})_{ij}\bar{u}_{Ri}Q_{Lj}H+(Y_{d})_{ij}\bar{d}_{Ri}Q_{Lj}H^{*}+(Y_{e})_{ij}\bar{e}_{Ri}L_{j}H^{*}+{\rm h.c.}\right]\tag{A.2}\] with the flavor indices \(i,j\). The other parameters appearing in the following RG equations are \(c_{A,3}=N_{c}\), \(c_{A,2}=2\), \(c_{F,3}=(N_{c}^{2}-1)/2N_{c}\), \(c_{F,2}=3/4\), \(b_{0,3}=11N_{c}/3-2n_{F}/3\), \(b_{0,2}=22/3-1/6-(N_{c}+1)\), \(b_{0,1}=-1/6-(11N_{c}/9+3)\) with \(N_{c}=3\), and \(q_{\psi}\) denotes the \(U(1)_{Y}\) hypercharge of the field \(\psi\). The RG equations for the operators in Eq.
(A.1) at one-loop are then given by \[16\pi^{2}\frac{dc_{\widetilde{G}}}{d\ln\mu} = (12c_{A,3}-3b_{0,3})g_{3}^{2}c_{\widetilde{G}}\,, \tag{A.3}\] \[16\pi^{2}\frac{dc_{\widetilde{W}}}{d\ln\mu} = (12c_{A,2}-3b_{0,2})g_{2}^{2}c_{\widetilde{W}}\,, \tag{A.4}\] \[16\pi^{2}\frac{dc_{H\widetilde{G}}}{d\ln\mu} = \left(-6q_{H}^{2}g_{1}^{2}-\frac{9}{2}g_{2}^{2}-2b_{0,3}g_{3}^{2}\right)c_{H\widetilde{G}}+(2ig_{3}{\rm Tr}[Y_{u}c_{uG}+Y_{d}c_{dG}]+{\rm h.c.})\,, \tag{A.5}\] \[16\pi^{2}\frac{dc_{H\widetilde{W}}}{d\ln\mu} = -15g_{2}^{3}c_{\widetilde{W}}+\left(-6q_{H}^{2}g_{1}^{2}-\frac{5}{2}g_{2}^{2}-2b_{0,2}g_{2}^{2}\right)c_{H\widetilde{W}}+2g_{1}g_{2}q_{H}c_{H\widetilde{W}B}\,, \tag{A.6}\] \[16\pi^{2}\frac{dc_{H\widetilde{B}}}{d\ln\mu} = \left(2q_{H}^{2}g_{1}^{2}-\frac{9}{2}g_{2}^{2}-2b_{0,1}g_{1}^{2}\right)c_{H\widetilde{B}}+6g_{1}g_{2}q_{H}c_{H\widetilde{W}B}\,, \tag{A.7}\] \[\begin{split}16\pi^{2}\frac{dc_{H\widetilde{W}B}}{d\ln\mu} =&\ 6g_{1}g_{2}^{2}q_{H}c_{\widetilde{W}}+\left(-2q_{H}^{2}g_{1}^{2}+\frac{9}{2}g_{2}^{2}-b_{0,1}g_{1}^{2}-b_{0,2}g_{2}^{2}\right)c_{H\widetilde{W}B}\\ &+4g_{1}g_{2}q_{H}c_{H\widetilde{B}}+4g_{1}g_{2}q_{H}c_{H\widetilde{W}}\,,\end{split} \tag{A.8}\] \[\begin{split}16\pi^{2}\frac{d(c_{uG})_{ij}}{d\ln\mu} =&\left[(10c_{F,3}-4c_{A,3}-b_{0,3})\,g_{3}^{2}-3c_{F,2}g_{2}^{2}+\left(-3q_{u}^{2}+8q_{u}q_{Q}-3q_{Q}^{2}\right)g_{1}^{2}\right](c_{uG})_{ij}\\ &+8c_{F,2}g_{2}g_{3}(c_{uW})_{ij}+4g_{1}g_{3}(q_{u}+q_{Q})(c_{uB})_{ij}\\ &+{\rm Im}\left[-4(Y_{u}^{\dagger})_{ij}g_{3}(c_{HG}+ic_{H\widetilde{G}})+3g_{3}^{2}c_{A,3}(Y_{u}^{\dagger})_{ij}\left(c_{G}+ic_{\widetilde{G}}\right)\right],\end{split} \tag{A.9}\] \[\begin{split}16\pi^{2}\frac{d(c_{uW})_{ij}}{d\ln\mu} =&\left[2c_{F,3}g_{3}^{2}+\left(3c_{F,2}-b_{0,2}\right)g_{2}^{2}+\left(-3q_{u}^{2}+8q_{u}q_{Q}-3q_{Q}^{2}\right)g_{1}^{2}\right](c_{uW})_{ij}\\ &+2c_{F,3}g_{2}g_{3}(c_{uG})_{ij}+g_{1}g_{2}(3q_{Q}-q_{u})(c_{uB})_{ij}\\ &-\mathrm{Im}\left((Y_{u}^{\dagger})_{ij}\left[g_{2}(c_{HW}+ic_{H\widetilde{W}})-g_{1}(q_{Q}+q_{u})(c_{HWB}+ic_{H\widetilde{W}B})\right]\right),\end{split} \tag{A.10}\] \[\begin{split}16\pi^{2}\frac{d(c_{uB})_{ij}}{d\ln\mu} =&\left[2c_{F,3}g_{3}^{2}-3c_{F,2}g_{2}^{2}+\left(3q_{u}^{2}+4q_{u}q_{Q}+3q_{Q}^{2}-b_{0,1}\right)g_{1}^{2}\right](c_{uB})_{ij}\\ &+4c_{F,3}g_{1}g_{3}\left(q_{u}+q_{Q}\right)(c_{uG})_{ij}+4c_{F,2}g_{1}g_{2}(3q_{Q}-q_{u})(c_{uW})_{ij}\\ &-\mathrm{Im}\left((Y_{u}^{\dagger})_{ij}\left[2g_{1}(q_{Q}+q_{u})(c_{HB}+ic_{H\widetilde{B}})-\frac{3}{2}g_{2}(c_{HWB}+ic_{H\widetilde{W}B})\right]\right),\end{split} \tag{A.11}\] \[\begin{split}16\pi^{2}\frac{d(c_{dG})_{ij}}{d\ln\mu} =&\left[\left(10c_{F,3}-4c_{A,3}-b_{0,3}\right)g_{3}^{2}-3c_{F,2}g_{2}^{2}+\left(-3q_{d}^{2}+8q_{d}q_{Q}-3q_{Q}^{2}\right)g_{1}^{2}\right](c_{dG})_{ij}\\ &+8c_{F,2}g_{2}g_{3}(c_{dW})_{ij}+4g_{1}g_{3}(q_{d}+q_{Q})(c_{dB})_{ij}\\ &+\mathrm{Im}\left[-4(Y_{d}^{\dagger})_{ij}g_{3}(c_{HG}+ic_{H\widetilde{G}})+3g_{3}^{2}c_{A,3}(Y_{d}^{\dagger})_{ij}\left(c_{G}+ic_{\widetilde{G}}\right)\right],\end{split} \tag{A.12}\] \[\begin{split}16\pi^{2}\frac{d(c_{dW})_{ij}}{d\ln\mu} =&\left[2c_{F,3}g_{3}^{2}+\left(3c_{F,2}-b_{0,2}\right)g_{2}^{2}+\left(-3q_{d}^{2}+8q_{d}q_{Q}-3q_{Q}^{2}\right)g_{1}^{2}\right](c_{dW})_{ij}\\ &+2c_{F,3}g_{2}g_{3}(c_{dG})_{ij}+g_{1}g_{2}(3q_{Q}-q_{d})(c_{dB})_{ij}\\ &-\mathrm{Im}\left((Y_{d}^{\dagger})_{ij}\left[g_{2}(c_{HW}+ic_{H\widetilde{W}})+g_{1}(q_{Q}+q_{d})(c_{HWB}+ic_{H\widetilde{W}B})\right]\right),\end{split} \tag{A.13}\] \[\begin{split}16\pi^{2}\frac{d(c_{dB})_{ij}}{d\ln\mu} =&\left[2c_{F,3}g_{3}^{2}-3c_{F,2}g_{2}^{2}+\left(3q_{d}^{2}+4q_{d}q_{Q}+3q_{Q}^{2}-b_{0,1}\right)g_{1}^{2}\right](c_{dB})_{ij}\\ &+4c_{F,3}g_{1}g_{3}\left(q_{d}+q_{Q}\right)(c_{dG})_{ij}+4c_{F,2}g_{1}g_{2}(3q_{Q}-q_{d})(c_{dW})_{ij}\\ &-\mathrm{Im}\left((Y_{d}^{\dagger})_{ij}\left[2g_{1}(q_{Q}+q_{d})(c_{HB}+ic_{H\widetilde{B}})+\frac{3}{2}g_{2}(c_{HWB}+ic_{H\widetilde{W}B})\right]\right),\end{split} \tag{A.14}\] \[\begin{split}16\pi^{2}\frac{d(c_{eW})_{ij}}{d\ln\mu} =&\left[\left(3c_{F,2}-b_{0,2}\right)g_{2}^{2}+\left(-3q_{e}^{2}+8q_{e}q_{L}-3q_{L}^{2}\right)g_{1}^{2}\right](c_{eW})_{ij}+g_{1}g_{2}(3q_{L}-q_{e})(c_{eB})_{ij}\\ &-\mathrm{Im}\left((Y_{e}^{\dagger})_{ij}\left[g_{2}(c_{HW}+ic_{H\widetilde{W}})+g_{1}(q_{L}+q_{e})(c_{HWB}+ic_{H\widetilde{W}B})\right]\right),\end{split} \tag{A.15}\] \[\begin{split}16\pi^{2}\frac{d(c_{eB})_{ij}}{d\ln\mu} =&\left[-3c_{F,2}g_{2}^{2}+\left(3q_{e}^{2}+4q_{e}q_{L}+3q_{L}^{2}-b_{0,1}\right)g_{1}^{2}\right](c_{eB})_{ij}+4c_{F,2}g_{1}g_{2}(3q_{L}-q_{e})(c_{eW})_{ij}\\ &-\mathrm{Im}\left((Y_{e}^{\dagger})_{ij}\left[2g_{1}(q_{L}+q_{e})(c_{HB}+ic_{H\widetilde{B}})+\frac{3}{2}g_{2}(c_{HWB}+ic_{H\widetilde{W}B})\right]\right).\end{split} \tag{A.16}\]

The RG equations for \(c_{qX}\) and \(c_{eX}\) in Eqs. (A.9)-(A.16) involve the Wilson coefficients of the following CP-even operators through the complex phase of the Yukawa couplings: \[\begin{split}\mathcal{L}_{\text{CP-even}}&=c_{G}f^{abc}G^{a\mu}_{\alpha}G^{b\delta}_{\mu}G^{c\alpha}_{\delta}+|H|^{2}\left(c_{HG}G^{a}_{\mu\nu}G^{a\mu\nu}+c_{HW}W^{a}_{\mu\nu}W^{a\mu\nu}+c_{HB}B_{\mu\nu}B^{\mu\nu}\right)\\ &+c_{HWB}H^{\dagger}\tau^{a}HW^{a}_{\mu\nu}B^{\mu\nu}\,.\end{split}\tag{A.17}\]
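As a rough, single-operator illustration of these RG equations, the sketch below integrates Eq. (A.3) for the Weinberg coefficient \(c_{\widetilde{G}}\) together with the one-loop running of \(g_{3}\), from an assumed \(\Lambda=1\) TeV (with \(\alpha_{s}(1\,\text{TeV})\approx 0.089\)) down to 1 GeV at fixed \(n_{F}=5\), i.e. ignoring flavor thresholds and the mixings into the other operators above. It is only meant to show the size of the multiplicative running, not to replace the full matrix evolution.

```python
# One-loop running of c_Gtilde per Eq. (A.3), with c_{A,3} = N_c = 3 and
# b_{0,3} = 11 N_c/3 - 2 n_F/3; thresholds and operator mixing are ignored.
import numpy as np
from scipy.integrate import solve_ivp

Nc, nF = 3, 5                              # fixed n_F = 5 (simplifying assumption)
b03 = 11*Nc/3 - 2*nF/3
gamma = 12*Nc - 3*b03                      # anomalous-dimension factor in Eq. (A.3)

def rhs(lnmu, y):
    g3, c = y
    dg3 = -b03*g3**3/(16*np.pi**2)         # one-loop QCD beta function
    dc = gamma*g3**2*c/(16*np.pi**2)       # Eq. (A.3)
    return [dg3, dc]

g3_TeV = np.sqrt(4*np.pi*0.089)            # assumed alpha_s(1 TeV) ~ 0.089
sol = solve_ivp(rhs, [np.log(1000.0), 0.0], [g3_TeV, 1.0], rtol=1e-8)
print(f"c(1 GeV)/c(1 TeV) ~ {sol.y[1, -1]:.2f}")  # suppression toward the IR
```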
2307.11406
Observations of Mini Coronal Dimmings Caused by Small-scale Eruptions in the Quiet Sun
Small-scale eruptions could play an important role in coronal heating, generation of solar energetic particles (SEPs), and mass source of the solar wind. However, they are poorly observed, and their characteristics, distributions, and origins remain unclear. Here a mini coronal dimming was captured by the recently launched Solar Orbiter spacecraft. The observations indicate that a minifilament eruption results in the dimming and takes away approximately $(1.65\pm0.54)\times10^{13}$ g of mass, which also exhibits similar features as the sources of SEP events. The released magnetic free energy is of the order of $\sim10^{27}$ erg. Our results suggest that weak constraining force makes the flux rope associated with the minifilament easily enter a torus-unstable domain. We discuss that weak magnetic constraints from low-altitude background fields may be a general condition for the quiet-Sun eruptions, which provide a possible mechanism for the transport of coronal material and energy from the lower to the middle or even higher corona.
Rui Wang, Ying D. Liu, Xiaowei Zhao, Huidong Hu
2023-07-21T08:00:46Z
http://arxiv.org/abs/2307.11406v1
# Observations of Mini Coronal Dimmings Caused by Small-scale Eruptions in the Quiet Sun

###### Abstract

Small-scale eruptions could play an important role in coronal heating, generation of solar energetic particles (SEPs), and mass source of the solar wind. However, they are poorly observed, and their characteristics, distributions, and origins remain unclear. Here a mini coronal dimming was captured by the recently launched Solar Orbiter spacecraft. The observations indicate that a minifilament eruption results in the dimming and takes away approximately \((1.65\pm 0.54)\times 10^{13}\) g of mass, which also exhibits similar features as the sources of SEP events. The released magnetic free energy is of the order of \(\sim 10^{27}\) erg. Our results suggest that weak constraining force makes the flux rope associated with the minifilament easily enter a torus-unstable domain. We discuss that weak magnetic constraints from low-altitude background fields may be a general condition for the quiet-Sun eruptions, which provide a possible mechanism for the transport of coronal material and energy from the lower to the middle or even higher corona.

Quiet Sun -- Solar coronal transients -- Solar filaments -- Solar magnetic fields

## 1 Introduction

Solar eruptive activities, including flares, eruptive filaments, coronal mass ejections (CMEs), and coronal jets, cover a wide range of energy scales. The observed solar eruptions span a range of at least a factor of \(10^{8}\) in energy (Schrijver et al., 2012). Large, highly energetic "X-class" eruptions can release energy substantially exceeding \(\sim 10^{33}\) erg. With the advent of high-resolution observations, scaled-down versions of these eruptive activities are receiving increasing attention, e.g., nanoflares (Bahauddin et al., 2021), minifilaments (Wang et al., 2000; Innes et al., 2009, 2010; Sterling et al., 2015; Panesar et al., 2016, 2018; McGlasson et al., 2019), and the newly discovered "campfires" (Berghmans et al., 2021), which are at lower energies, probably below the current detection limit. Even-smaller-scale eruptive structures have been speculated to exist (Hermans & Martin, 1986; Sterling & Moore, 2016).

The latest observations from Parker Solar Probe near the Sun (McComas et al., 2019) indicate that small solar energetic particle (SEP) events, which cannot be captured by 1 au spacecraft but are only observable close to the Sun, may be much more common than previously thought. Where do these small SEP events come from? Is it possible that they are generated by these small-scale solar eruptions?

Jackson & Howard (1993) and Webb & Howard (1994) suggest that the ratio of the annualized CME to solar wind mass flux is no more than 16%, and Lamy et al. (2017) indicate that this ratio is only up to 6%. However, it should be noted that the mass of these counted CMEs is usually in the range of \(\sim 10^{14}-10^{15}\) g. Then what about small solar eruptions with less mass? Aschwanden et al. (2000) and Schrijver et al. (2012) indicate that events at the lowest energies of \(\sim 10^{24}\) erg can occur millions of times each day. That is, smaller eruptions generally occur more frequently. How do the small-scale eruptions, with their large numbers and high occurrence rate, contribute to the mass of the solar wind? Most of our knowledge of the energy-mass relation of eruptions is based on the observations of large-scale eruptions.
It remains unclear what the actual relationship is between the mass and energy of small-scale eruptions in the source region. Coronal dimmings are considered to be an important characteristic of solar eruptions and can provide information on the eruption mass (Rust and Hildner, 1976; Thompson et al., 1998). It is generally accepted that they are caused by plasma evacuation during the eruption of a CME (Harrison and Lyons, 2000). On the Sun, the dimming regions correspond to the footprint of the CME (Jin et al., 2022). Upflowing expanding plasma has been observed in the dimming region (Harra and Sterling, 2001; Tian et al., 2012). Harrison and Lyons (2000) find that the mass loss in the coronal dimmings is on the same order as the estimated mass of the associated CMEs. Therefore, the mass of small-scale eruptions can also be estimated by their associated coronal dimmings. However, small-scale coronal dimmings have been poorly studied. A small dimming associated with an eruption is not easy to identify, since the eruption itself and the associated characteristic structures, such as post-flare arcades, flare ribbons, and coronal waves, are difficult to recognize under previous observational limits. Therefore, it is difficult to determine whether the observed dimming is related to an eruption or not. In short, we do not know much about how small-scale coronal dimmings are produced, what their characteristics are, and how they are distributed.

In this Letter, we identify the characteristic structures associated with a small-scale eruption with greater certainty, owing to the 17.4 nm EUV High Resolution Imager (HRI\({}_{EUV}\)) of the Extreme Ultraviolet Imager (EUI; Rochus et al., 2020)1 on board Solar Orbiter (Muller et al., 2020). We estimate the evacuated mass and the released energy of the eruption. A survey of mini coronal dimmings from Solar Orbiter observations is also carried out for the first time to investigate their distributions and occurrence frequency. These results provide crucial information for understanding small-scale eruptions and their contributions to SEPs, the solar wind, and space weather.

Footnote 1: We used level 2 (L2) EUI data, which can be accessed via [http://sidc.be/EUI/data/releases/202112_release_4.0](http://sidc.be/EUI/data/releases/202112_release_4.0). Information about the data processing can be found in the release notes. DOI: [https://doi.org/10.24414/s5da-7e78](https://doi.org/10.24414/s5da-7e78).

## 2 Results

### High-resolution Observations by Solar Orbiter

On 2020 May 20 21:20 universal time (UT), a small-scale eruption with coronal dimming occurred near the central meridian, which was captured by the recently launched Solar Orbiter spacecraft. At this moment, Solar Orbiter was located at a distance of 0.612 au from the Sun during its perihelion pass, with a separation angle of \(\sim\) 17\({}^{\circ}\) from the Sun-Earth line (Figure 1a), and was almost in quadrature with STEREO-A. From this vantage point, the HRI\({}_{EUV}\) angular pixel size of 0\({}^{\prime\prime}\).492 corresponds to 217 km on the solar surface. Even at such high spatial resolution, the eruption itself is still too small (length scale \(\sim\)10 Mm) to be noticed. By contrast, the associated coronal dimming is relatively easier to observe (length scale \(\sim\)34 Mm; see Figure 1b, c, d, and the animation of Figure 1; the images used to make the video were aligned by a cross-correlation method to remove the effect of jitter in the data).
Solar Dynamics Observatory (SDO; Pesnell et al., 2012) allows us to carry out a joint observation of this small event. The HRI\({}_{EUV}\) of EUI provides a more distinct EUV image than the Atmospheric Imaging Assembly (AIA; Lemen et al., 2012) on board SDO. Figure 1e shows clearly discernible strip structures in an HRI\({}_{EUV}\) bandpass centered at 17.4 nm. The distinct bright strips traced by the red dotted curves are considered to be post-eruption arcades, which are generally associated with newly reconnected field lines caused by the rise of filaments during the eruption process (Shibata et al., 1995; Reeves & Golub, 2011). We remap SDO's AIA image at 1600 Å and the Helioseismic and Magnetic Imager (HMI; Scherrer et al., 2012; Schou et al., 2012) line-of-sight (LOS) magnetic fields from the SDO view to the Solar Orbiter view. It shows that the footpoints of the bright strips are also brightened in AIA 1600 Å (Figure 1f), which are considered to be the flare ribbons following the eruption. The ribbons are aligned with the relatively strong photospheric magnetic fields where the post-eruption arcades are rooted (Figure 1g).

Figure 1: Joint observations of Solar Orbiter and SDO on 2020 May 20. (a) Heliospheric positions of planets and spacecraft. (b) Location of the mini coronal dimming (red arrow) in HRI\({}_{EUV}\) 174 Å. (c) and (d) Enlarged views of the dimming at different scales. (e) Post-eruption arcades (traced by red dotted curves) in HRI\({}_{EUV}\) 174 Å, with location shown by the green arrow in (c). (f) Flare ribbons in SDO/AIA 1600 Å. (g) SDO/HMI LOS magnetic fields. An animation of this figure is available. It shows the EUI animation of the dimming from 21:20 to 22:16 UT without annotations. Its real-time duration is 5 s. It has an FOV slightly larger than (d) but smaller than (c).

### Temperature and Mass in the Mini Dimming Region

We use a differential emission measure (DEM) analysis to examine the temperature and density properties of the coronal dimming. In this work, we adopt the "simple_reg_dem.pro" routine (Plowman and Caspi, 2020) in SSW packages to compute the DEM with the AIA EUV images at six passbands (94, 131, 171, 193, 211, and 335 Å) in the temperature range of \(5.5\leq\log_{10}(T)\leq 7.5\). This algorithm is fast and relatively free from idiosyncrasies, making it suitable for long time-series DEM analysis over numerous pixels.

Figure 2 shows the evolution of EM for 0.7 MK plasma. We select a heart-shaped area (contoured in red), which contains almost all the dimming areas during the eruption while excluding the bright plasma (i.e., the core region of the eruption under the post-eruption arcades within the black contours). The cool-color region within the heart-shaped area (lower density) gradually expanded (Figures 2a-c). At 21:07 UT, the beginning of the eruption, there are some preexisting dim locations in the corona (within the heart-shaped area) that are not associated with the eruption. At 21:40 UT, we can observe a small dimming area appearing near the core region. Then, the dimming area rapidly expanded around 22:00 UT and filled the whole heart-shaped area by 22:26 UT. Figure 2e shows that the normalized intensities of the flare region exhibit an increasing trend before 21:00 UT. Figure 2f shows a significant decrease in the intensity curve of the heart-shaped area after 21:40 UT, which is attributed to the occurrence of coronal dimming in that area.
Prior to the flares associated with the dimming, the flare region exhibited continuous EUV brightenings, which may have been caused by magnetic reconnection resulting from the motion of the photospheric magnetic field. Figure 2g shows the averaged EM within the dimming region as a function of time, which is calculated using \[EM=\int DEM(T)dT. \tag{1}\] The EM decreases markedly after 21:40 UT and remains nearly constant after 22:00 UT, which indicates that coronal plasma probably escaped from the dimming region during the eruption. The EM-weighted median temperature (red curve) in Figure 2e is defined as \[\overline{T}=\frac{\int DEM(T)\times TdT}{\int DEM(T)dT}. \tag{2}\] It remains nearly constant below \(10^{6.5}\) K, a typical temperature in a dimming region (Cheng et al., 2012). This implies that the EUV dimming mainly resulted from plasma depletion during the eruption. By the DEM analysis, the mass loss within the heart-shaped area is estimated to be \((1.65\pm 0.54)\times 10^{13}\) g (see Appendix). This mass is 1 to 2 orders of magnitude lower than the typical CME mass of \(\sim 10^{14}-10^{15}\) g (Vourlidas et al., 2010).

We do not observe a filament structure in EUI/HRI\({}_{EUV}\) 174 Å. Fortunately, the 304 Å data from SDO/AIA can provide supplementary information for observing filament structures that may not be visible in EUI/HRI\({}_{EUV}\) 174 Å images. Although AIA has a relatively lower spatial resolution, we were able to track the erupting filament structure at 304 Å as it rose above the bright structures at the source region of the eruption (see the animation of Figure 3a). The S-shaped filament (contoured by the dotted line) is probably related to a rising, twisted flux-rope structure (Figure 3a). Figure 3b shows the time-distance profile of the rising filament along the slit (3 pixel width) in Figure 3a. The pattern in Figure 3b implies that the filament rises in a rolling manner, expanding to higher altitudes.

Figure 2: DEM analysis of the coronal dimming. (a-d) Time evolution of AIA 0.7 MK DEMs at four typical times. (e-f) EUV light curves of the normalized integrated intensities of the flare region (black contour) and the dimming region within the heart-shaped area (red contour). (g) Averaged EM (black) and EM-weighted median temperature (red) within the dimming region as a function of time. The green vertical dashed lines mark the corresponding times in (a-d).

The maximum of the local projected speed is around 12 km s\({}^{-1}\); the average projected speed is only around 3.6 km s\({}^{-1}\). Although it is rather slow compared with regular filament eruptions (Wang et al., 2016; Cheng et al., 2020), this speed is considered to be relatively typical for minifilament eruptions (Wang et al., 2000; Panesar et al., 2020; Sterling et al., 2022). Panesar et al. (2020) indicate that the speed of quiet-region jet-making minifilament eruptions is at least \(\sim\) 1 km s\({}^{-1}\) during the slow-rise phase. The average projected speed in our case is comparable to that. Sterling et al. (2022) suggest that the slow speed reported by Panesar et al. (2020) may be attributed to a projection effect caused by differences in viewing angles. On the other hand, the time-distance plot helps us roughly determine the start time of the eruption, which is around 21:15 UT.
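As a concrete illustration of how Eqs. (1) and (2) are evaluated on a discretized DEM solution, consider the short Python sketch below; the Gaussian test DEM and its parameters are stand-ins, not the measured values.

```python
# Evaluating Eq. (1) and Eq. (2) on a DEM sampled over a log10(T) grid.
import numpy as np

logT = np.linspace(5.5, 7.5, 81)                 # inversion temperature grid
T = 10.0**logT                                   # K
dem = 1e21*np.exp(-0.5*((logT - 5.85)/0.15)**2)  # cm^-5 K^-1, test DEM only

dT = np.gradient(T)                              # nonuniform bin widths, K
EM = np.sum(dem*dT)                              # Eq. (1), cm^-5
T_bar = np.sum(dem*T*dT)/EM                      # Eq. (2), EM-weighted temperature

print(f"EM = {EM:.2e} cm^-5, T_bar = {T_bar/1e6:.2f} MK")
```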
Figure 3c shows the distribution of the SDO/HMI LOS magnetic fields. The length scale of the major magnetic polarity is around 10 Mm. Filaments with lengths ranging from 10 to 20 Mm are generally classified as minifilaments (Wang et al., 2000; Sterling et al., 2015). Therefore, the filament in our case should fall into this category. The fast-moving negative magnetic polarities cancel the continually emerging positive polarities in the dotted box region of Figure 3c. We place a narrow slit (3 pixel width) along the converging direction of the moving negative polarities. The time-distance plot of Figure 3d shows that the negative magnetic polarities converge with the emerging positive ones around 19:30 UT. This is similar to the magnetic cancellation in a normal active region. Flux emergence and magnetic cancellation are coupled processes and are commonly observed to appear together and interact with each other (Wang et al., 2018, 2022; Chintzoglou et al., 2019). Due to the emergence of the positive flux, we can only calculate the change of the negative flux in the dotted box region (black curve in Figure 3d). The red curve in Figure 3d shows a significant drop in the flux in the hour before the eruption (the vertical dotted-dashed line). The time-distance plot patterns indicate a closer convergence of the positive and negative magnetic poles at the onset of the eruption. Moreover, the calculation region corresponds to the EUV brightenings at 304 Å observed after the onset of the filament eruption (see Figure 3a and 3c). These results suggest that magnetic cancellation occurred before the eruption and likely triggered it. Magnetic cancellation is generally related to the buildup of a magnetic flux rope and to the magnetic reconnection that leads to the final eruption (van Ballegooijen & Martens, 1989; Moore et al., 2001; Wang et al., 2018, 2022). In fact, eruptions caused by cancellation in the quiet Sun often manifest as minifilament eruptions with coronal jets (Sterling et al., 2015; Panesar et al., 2016, 2018; McGlasson et al., 2019; Muglach, 2021).

Figure 3: Small-scale filament eruption and the associated magnetic cancellation. (a) Rising minifilament in AIA 304 Å (contoured by the cyan dotted line). The black square corresponds to the boundary for extrapolation in Figure 4. (b) Time-distance profile of the rising filament along the slit (cyan) in (a). The blue vertical line marks the onset of the filament rise. (c) Positive (blue) and negative (red) magnetic fields scaled to \(\pm\)150 G corresponding to the black contours in (a). (d) HMI time-distance map along the slit in (c). The integrated negative magnetic flux as a function of time computed inside the dotted box in (c) is overplotted in black. A least-squares polynomial fit curve (red) indicates the trend of the flux. The vertical dotted-dashed and dashed lines correspond to the times of the flare and dimming, respectively. An animation of this figure is available. It shows the AIA 304 Å animation from 21:00 to 21:57 UT, and has an FOV larger than (a). Its real-time duration is 7 s. The green arrow in the first frame of the animation indicates the approximate position of the rising filament.

Figure 4a shows that the extrapolated magnetic fields exhibit a twisted morphology along the polarity inversion line (PIL), which is regarded as a flux rope. A magnetic dip of the flux rope is just above the cancellation region (see Figure 4a). The cool, dense filament material is probably located here via a magnetic tension force (Gilbert et al., 2000). Meanwhile, it is often related to the appearance of bald patches along the PIL.
The field lines threading the bald patch form a separatrix surface where reconnection preferentially occurs (Titov et al., 1993). Figure 4a shows a hook structure of the flux rope in the southeast corresponding to the EUV filament at 304 Å in Figure 4b, which should be part of the entire filament and is located at a higher altitude above the bright magnetic reconnection region. We adopt two different methods to reconstruct the flux-rope structure, i.e., a magnetohydrostatic (MHS) method (Zhu & Wiegelmann, 2018) and a widely used optimization approach (Wiegelmann, 2004). Both methods give similar twisted S-shaped flux-rope structures. Figure 4a only displays the reconstruction from the MHS method, owing to its better performance in the lower corona. SDO/HMI does not provide the direct boundary data for extrapolation in our case. We adopt the "bvec2cea.pro" routine in the SSW packages to convert the disambiguated full-disk vector magnetic field "hmi.B_720s" series from the native CCD coordinates to the cylindrical equal area heliographic coordinates (Hoeksema et al., 2014), which is appropriate for extrapolation. We center the remapped data on Carrington coordinates \((22^{\circ}W,12^{\circ}N)\), where the major magnetic polarities are located. The "bvec2cea.pro" routine uses a radial-acute method (Hoeksema et al., 2014) to resolve the \(180^{\circ}\) azimuthal uncertainty in the transverse field direction. Meanwhile, we cut out a relatively small region from the remapped data for extrapolation in order to keep the flux balance of the boundary condition (see Figure 3a). The magnetic free energy is also calculated by the two methods, which give relatively similar results, i.e., \(\Delta E_{free}^{MHS}=(4.87\pm 0.80)\times 10^{27}\) erg, and \(\Delta E_{free}^{NLF}=(3.10\pm 0.68)\times 10^{27}\) erg. An eruption of \(\Delta E_{free}\sim 10^{27}\) erg is classified as a microflare by the flare-energy power law summarized from early flare statistics (Aschwanden et al., 2000; Schrijver et al., 2012). Figure 4: Magnetic topological structure of the small-scale filament (prominence) and its ascending process in the STEREO-A/EUVI view. (a) Reconstructed flux-rope structure with the EUV background at 304 Å. The positive (white) in [20, 50] G and negative (black) fluxes in [-20, -50] G are overplotted. The rising part of the minifilament is contoured in cyan dashed lines in (b). (c) Side view from STEREO-A/EUVI of the ascending filament (red arrows) along a red radial line, which shows the slit for the time-distance map in (d). The red cross marks the approximate position of the eruption source. An animation of this figure is available. The EUVI animation starts at 2020 May 20 18:06 UT and ends at 2020 May 22 01:58 UT. Its real-time duration is 7 s. With the exception of the arrows, it has the same annotations as (c) but a smaller FOV. The red cross disappears as the source rotates to the far side of the Sun after 2020 May 21 08:00 UT. Our results indicate that the small-scale coronal dimming results from an eruption of a minifilament, which is likely to have eruption features (e.g., post-eruption arcades and flare ribbons), trigger mechanisms (associated with magnetic cancellation), and magnetic topology (e.g., flux-rope structures and magnetic dips) similar to those of large eruptions. We compare the mass and energy loss with previous statistics on the power-law relationship between the flare X-ray fluence and CME mass (Drake et al., 2013). 
Our results fit the energy-mass relation, i.e., an eruption of \(\Delta E_{free}\sim 10^{27}\) erg carries a mass of the order of \(\sim 10^{13}\) g. Sterling et al. (2015) and Sterling & Moore (2016) indicate that solar coronal jets probably result from eruptions of small-scale filaments. Panesar et al. (2016, 2018) suggest that continuous cancellation can destabilize minifilaments and result in eruptions, where internal magnetic reconnection occurs. The minifilament shown here has a similar size and a comparable energy of \(\sim 10^{27}\) erg to the small filaments that drive jets. Moreover, the EUI/HRI\({}_{EUV}\) observations present jet-like structures at the beginning of the eruption and post-flare arcades (see animation of Figure 1). Therefore, this eruption is likely to have formation and trigger mechanisms similar to those of coronal jets. McComas et al. (2019) indicate that small, longitudinally distributed SEP events are associated with jet-like coronal emissions, some of which are not observed at 1 au but only detected close to the Sun. Thus, mini eruptions such as the one shown here become potential candidates for these small SEP events. There have been quite a few reports of small-scale eruptions with coronal dimming (e.g., Innes et al., 2009, 2010; Yang et al., 2012; Innes & Teriaca, 2013; Zhang & Ji, 2014; Sterling et al., 2022), but due to limitations in observation conditions, it is difficult to discern the intricate features of the eruption sources associated with these events, such as the post-flare arcades or the temperature and density of the corona. These difficulties result in challenges in determining, for instance, whether the coronal dimming is a long-lasting effect caused by mass evacuation or just a wave-like dimming, and whether such events are associated with the aforementioned on-disk jets. The combined observations of Solar Orbiter and SDO in this work have helped to address some of these challenges. Small-scale eruptions are omnipresent in the quiet Sun, with an estimated 1400 events per day over the whole Sun (Innes et al., 2009; Madjarska et al., 2022). Meanwhile, the total number of mini dimmings (associated with eruptions or not) for a full disk in a day (e.g., Alipour et al., 2012) is much more than twice the number reported by Innes et al. (2009). We survey mini coronal dimmings from the EUI/HRI\({}_{EUV}\) high-resolution observations, as shown in Table 1, which gives the approximate time and location of the dimmings. These dimmings are all associated with small-scale eruptions. Although valid EUI/HRI\({}_{EUV}\) observations are only available on certain days in certain months, we still identify at least one dimming event within each valid 1-2 hr observation period. We only select the relatively obvious events in the field of view (FOV) of a local area. There should be more events on a global scale. On the other hand, we check the torus instability of the flux rope in the lower corona (below \(\sim\)70 Mm) by the decay index \(n=-\partial ln\|\mathbf{B}_{p}\|/\partial lnR\) (Kliem & Torok, 2006), where \(\|\mathbf{B}_{p}\|\) is the external poloidal field and \(R\) is the height of the flux-rope axis's apex above the photosphere. The calculation shows a low critical height of \(\sim\)8 Mm above the PIL. This height is favorable for the escape of a CME from its source region. Understandably, weaker quiet-Sun magnetic fluxes result in a weaker constraining force above the flux rope. 
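For reference, the torus-instability check described above can be transcribed directly: compute \(n(R)\) from a background-field profile and locate where it first exceeds the critical value, commonly taken as \(n_{c}\approx 1.5\) (Kliem & Torok, 2006). A schematic sketch with a hypothetical power-law field, not our extrapolated data:

```python
import numpy as np

# Hypothetical external poloidal field profile |B_p|(R) above the PIL;
# a quiet-Sun arcade falling roughly as a power law is assumed for illustration.
R = np.linspace(1.0, 70.0, 500)              # height above photosphere [Mm]
Bp = 40.0 * (1.0 + R / 10.0) ** -2.0         # toy field strength [G]

# Decay index n = -d ln|B_p| / d ln R (Kliem & Torok 2006)
n = -np.gradient(np.log(Bp), np.log(R))

n_crit = 1.5                                  # canonical torus-instability threshold
h_crit = R[np.argmax(n > n_crit)]             # first height where n exceeds n_crit
print(f"critical height ~ {h_crit:.1f} Mm")
```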
However, it does not mean that CMEs can successfully escape into interplanetary space. Figures 4c and 4d show that the minifilament propagated slowly along a radial direction in the lower corona with an average speed of only \(\sim\)5.4 km s\({}^{-1}\). In fact, the slow speed is not unexpected and can also be attributed to the weak quiet-Sun magnetic fluxes: they generate only a modest toroidal current inside the flux rope, which results in a relatively weak upward hoop force. On the other hand, quiet-Sun eruptions in the current solar cycle are mainly under streamer belts where closed background fields are generally dominant (see Figure 4c). Nonetheless, it remains challenging to determine whether quiet-Sun eruptions are confined or eruptive. In our case, we find many narrow blobs moving outwards in the STEREO-A/COR2 view. However, due to the lack of usable COR1 data, we cannot determine the exact origin of these blobs. Regardless, quiet-Sun eruptions are more likely to reach torus-unstable regions due to the relatively weak quiet-Sun magnetic fluxes. The mass and magnetic free energy can be transferred from the lower corona to the middle or higher corona, which could become a possible source of the solar wind. Chitta et al. (2023) gave direct observational evidence for a coronal separatrix web driving highly structured slow solar wind through complex middle-coronal dynamic processes. Perhaps the minifilament eruption we observed could transport mass and energy to the middle or higher corona, which then becomes the solar wind through the mechanism proposed in their work. We thank Xudong Sun for providing suggestions on HMI data processing and D. Berghmans for valuable suggestions on EUI data. The research was supported by National Key R&D Program of China No. 2021YFA0718600 and No. 2022YFF0503800, NSFC under grants 12073032, 42274201, 42004145 and 42150105, and the Specialized Research Fund for State Key Laboratories of China. We acknowledge the use of data from Solar Orbiter and SDO. Solar Orbiter is a space mission of international collaboration between ESA and NASA, operated by ESA. The EUI instrument was built by CSL, IAS, MPS, MSSL/UCL, PMOD/WRC, ROB, LCF/IO with funding from the Belgian Federal Science Policy Office (BELSPO/PRODEX PEA 4000134088); the Centre National d'Etudes Spatiales (CNES); the UK Space Agency (UKSA); the Bundesministerium fur Wirtschaft und Energie (BMWi) through the Deutsches Zentrum fur Luft- und Raumfahrt (DLR); and the Swiss Space Office (SSO). \begin{table} \begin{tabular}{l l c|l c c} \hline \hline Event & Date/Time\({}^{a}\) & Location\({}^{b}\) (arcsec) & Event & Date/Time & Location (arcsec) \\ \hline 1 & 2020-05-20/21:20 & -228, 288 & 10 & 2021-03-22/11:40 & 98, 556 \\ 2 & 2020-05-21/16:50 & -307, -203 & 11 & 2021-08-21/21:33 & -400, -5 \\ 3 & 2020-10-19/20:14 & 192, 147 & 12 & 2021-08-21/22:35 & 152, -68 \\ 4 & 2020-10-19/20:14 & 562, 71 & 13 & 2021-08-31/08:54 & 322, 172 \\ 5 & 2020-10-21/11:39 & 22, 209 & 14 & 2021-08-31/10:47 & 46, 204 \\ 6 & 2020-11-19/12:39 & -526, 375 & 15 & 2022-02-08/13:48 & -32, -115 \\ 7 & 2021-02-22/03:51 & -63, 117 & 16 & 2022-02-08/14:00 & -105, -85 \\ 8 & 2021-03-22/10:16 & -300, 313 & 17 & 2022-03-27/21:30 & -318, -123 \\ 9 & 2021-03-22/11:13 & -154, -119 & & & \\ \hline \end{tabular} \({}^{a}\)Approximate onset time of the eruptions. \({}^{b}\)Helioprojective coordinates from the Solar Orbiter perspective. 
\end{table} Table 1: List of the small-scale eruptions with coronal dimmings from EUI/HRI\({}_{EUV}\) observations at 174 Å. ## Appendix A Mass-Loss Estimate of the Eruption We use the DEM techniques to determine the CME masses from coronal dimmings. Considering the limitations posed by the signal-to-noise ratio of the AIA passbands, we only utilize the passbands centered at 171, 193, and 211 Å, which are sufficient to encompass the temperature range of the quiet Sun within the heart-shaped area. To determine the total mass \(M\) along the LOS within the dimming region, we use \[\begin{split} M&=\sum_{i=1}^{n}\mu m_{p}A_{s}\int_{0}^{L}N(z)dz\\ &=\sum_{i=1}^{n}\mu m_{p}A_{s}\lambda_{p}N_{0}I\left(\frac{\lambda_{p}}{R_{\odot}}\right),\end{split}\] (A1) where \(n\) is the total number of pixels within the heart-shaped area, \(\mu\) the mean molecular weight of the hydrogen ion (\(\mu=1.27\)), \(m_{p}\) the proton mass, \(A_{s}\) the area of each pixel, \(\int_{0}^{L}N(z)dz\) the column density of coronal plasma at each pixel of the dimming region along the LOS, and \(\lambda_{p}\) the pressure scale height defined as \[\lambda_{p}=\frac{2k_{B}T_{e}}{\mu m_{p}g_{\odot}},\] (A2) where \(k_{B}\) is the Boltzmann constant, \(T_{e}\) the electron temperature at each pixel calculated from the DEM analysis by Equation 2, and \(g_{\odot}\) the gravitational acceleration at the solar surface. We obtain the value of the basal electron density \(N_{0}\) with the help of the EM measurement (for a detailed derivation, please refer to the previous implementation of CME mass determination by Lopez et al., 2017), i.e., \[N_{0}=\sqrt{\frac{EM}{\frac{\lambda_{p}}{2}I\left(\frac{\lambda_{p}}{2R_{\odot}}\right)}},\] (A3) where \(R_{\odot}\) is the solar radius, and \(I\) the integral quantity defined as \[I(\alpha)=\int_{0}^{2L/\lambda_{p}}exp\left(-\frac{x}{\alpha x+1}\right)dx.\] (A4) We have set \(L=5\lambda_{p}\) in order to include as much mass as possible along the LOS. The total evacuated mass \(\Delta M\) is determined by the difference between the mass \(M_{before}\) before the eruption and the mass \(M_{after}\) after the eruption, i.e., \[\Delta M=M_{before}-M_{after}.\] (A5)
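A direct numerical transcription of Equations (A1)-(A4) is straightforward. The sketch below evaluates one epoch; the per-pixel temperature and EM maps are hypothetical placeholders, and Equation (A5) is then the difference of two such evaluations:

```python
import numpy as np
from scipy.integrate import quad

# cgs constants
K_B, M_P = 1.3807e-16, 1.6726e-24     # erg/K, g
G_SUN, R_SUN = 2.74e4, 6.957e10       # cm/s^2, cm
MU = 1.27                              # mean molecular weight used in Eq. (A1)

def I(alpha, L_over_lp=5.0):
    # Eq. (A4) with L = 5*lambda_p, so the upper limit 2L/lambda_p = 10
    val, _ = quad(lambda x: np.exp(-x / (alpha * x + 1.0)), 0.0, 2.0 * L_over_lp)
    return val

def dimming_mass(T_e, em, area_cm2):
    """Eq. (A1): total LOS mass [g] from per-pixel T_e [K] and EM [cm^-5] maps."""
    lam_p = 2.0 * K_B * T_e / (MU * M_P * G_SUN)               # Eq. (A2)
    total = 0.0
    for T, EM, lp in zip(T_e.ravel(), em.ravel(), lam_p.ravel()):
        N0 = np.sqrt(EM / (0.5 * lp * I(lp / (2.0 * R_SUN))))  # Eq. (A3)
        total += MU * M_P * area_cm2 * lp * N0 * I(lp / R_SUN)
    return total

# delta_M = dimming_mass(T_before, em_before, A) - dimming_mass(T_after, em_after, A)  # Eq. (A5)
```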
2304.08407
The Dirichlet problem of homogeneous complex k-Hessian equation in a (k-1)-pseudoconvex domain with isolated singularity
In this paper, we consider the homogeneous complex k-Hessian equation in $\Omega\backslash\{0\}$. We prove the existence and uniqueness of the $C^{1,\alpha}$ solution by constructing approximating solutions. The key point for us is to construct the subsolution for approximating problem and establish uniform gradient estimates and complex Hessian estimates which is independent of the approximation.
Zhenghuan Gao, Xi-Nan Ma, Dekai Zhang
2023-04-17T16:18:46Z
http://arxiv.org/abs/2304.08407v1
The Dirichlet problem of homogeneous complex \(k\)-Hessian equation in a \((k-1)\)-pseudoconvex domain with isolated singularity ###### Abstract. In this paper, we consider the homogeneous complex \(k\)-Hessian equation in \(\Omega\setminus\{0\}\). We prove the existence and uniqueness of the \(C^{1,\alpha}\) solution by constructing approximating solutions. The key point for us is to construct the subsolution for the approximating problem and to establish uniform gradient estimates and complex Hessian estimates which are independent of the approximation. ## 1. Introduction Let \(\Omega\) be a smooth bounded domain of \(\mathbb{C}^{n}\) and \(k\) be an integer such that \(1\leq k\leq n\). We consider the homogeneous complex \(k\)-Hessian equation \[(dd^{c}u)^{k}\wedge\omega^{n-k}=0\quad\text{in }\Omega\setminus\{0\}.\] Let \(u\) be a real \(C^{2}\) function in \(\mathbb{C}^{n}\) and \(\lambda=(\lambda_{1},\cdots,\lambda_{n})\) be the eigenvalues of the complex Hessian \(\big(\frac{\partial^{2}u}{\partial z_{i}\partial\bar{z}_{j}}\big)\); the complex \(k\)-Hessian operator is defined by \[H_{k}[u]:=\sum_{1\leq i_{1}<\cdots<i_{k}\leq n}\lambda_{i_{1}}\cdots\lambda_{i_{k}},\] where \(1\leq k\leq n\). Using the operators \(d=\partial+\overline{\partial}\) and \(d^{c}=\sqrt{-1}(\overline{\partial}-\partial)\), such that \(dd^{c}=2\sqrt{-1}\partial\overline{\partial}\), one gets \[(dd^{c}u)^{k}\wedge\omega^{n-k}=4^{n}k!(n-k)!H_{k}[u]d\lambda,\] where \(\omega=dd^{c}|z|^{2}\) is the fundamental Kahler form and \(d\lambda\) is the volume form. When \(k=1\), \(H_{1}[u]=\frac{1}{4}\Delta u\). When \(k=n\), \(H_{n}[u]=\det u_{i\bar{j}}\) is the complex Monge-Ampere operator. ### Some known results and motivations Let \(S_{k}(D^{2}u)\) be the \(k\)-Hessian of a real \(C^{2}\) function \(u\) in \(\mathbb{R}^{n}\). When \(k>1\), the Hessian equations \(S_{k}(D^{2}u)=f\) and \(H_{k}[u]=f\) are both nonlinear. When \(f>0\), the Hessian equation is nondegenerate. When \(f\) vanishes somewhere, the Hessian equation is degenerate. #### 1.1.1. Results on bounded domains For the Hessian equation on \(\mathbb{R}^{n}\), the Dirichlet problem with positive \(f\), \[\begin{cases}S_{k}(D^{2}u)=f&\text{in }\Omega,\\ u=\varphi&\text{on }\partial\Omega,\end{cases}\] was studied by Ivochkina [21] for \(k=1,2,3,n\) on convex domains with further assumptions on \(f\), and by Caffarelli-Nirenberg-Spruck [10] for general \(f>0\) and \(k=1,2,\cdots,n\) by assuming \(\Omega\) is \((k-1)\)-convex. B. Guan [16] showed that the geometric condition on \(\Omega\) could be removed under the assumption of the existence of a strict subsolution. In [35], Trudinger-Wang developed a Hessian measure theory for the Hessian operator. One can see the survey by Wang [38] for more related topics. For the complex \(k\)-Hessian equation in \(\mathbb{C}^{n}\), Li [30] solved its Dirichlet problem via the subsolution approach. For the Monge-Ampere equation in a bounded domain of \(\mathbb{R}^{n}\), when \(f=0\), Caffarelli-Nirenberg-Spruck [11] proved the \(C^{1,1}\) regularity in a bounded convex domain. For general \(f\geq 0\), Guan-Trudinger-Wang [20] proved the \(C^{1,1}\) regularity result when \(f^{\frac{1}{n-1}}\in C^{1,1}\). Due to the counterexample by Wang [37], \(C^{1,1}\) regularity is optimal. For the \(k\)-Hessian equation in \(\mathbb{R}^{n}\), the \(C^{1,1}\) regularity was obtained by Krylov [24, 25] and Ivochkina-Trudinger-Wang [22]. 
For the complex Monge-Ampere equation, Lempert [26, 27] proved that the Dirichlet problem admits a smooth solution with a logarithmic pole at the origin on a strictly convex punctured domain \(\Omega\setminus\{0\}\) when \(f=0\). As for strongly pseudoconvex domains, Guan [17] and Blocki [5] proved that the solution is \(C^{1,1}\). In [19], Guan obtained the \(C^{1,1}\) regularity for the solution on a ring domain. For general \(f\geq 0\), the optimal \(C^{1,1}\) regularity was obtained by Caffarelli-Kohn-Nirenberg-Spruck [8] and Krylov [24, 25] for strongly pseudoconvex domains. #### 1.1.2. Results on unbounded domains Viscosity solutions to nondegenerate \(k\)-Hessian equations on unbounded domains have been researched extensively. Caffarelli-Li [9] obtained the viscosity solution to the Monge-Ampere equation \(\det D^{2}u=1\) with prescribed asymptotic behavior at infinity. Bao-Li-Li [2] studied the \(k\)-Hessian equation case. For related results on other types of nondegenerate fully nonlinear equations, one can see [1, 28, 31]. In [29], Li-Wang considered \(\det D^{2}u=0\) on a strip region \(\Omega:=\mathbb{R}^{n-1}\times[0,1]\). By assuming the two boundary functions are both strictly convex \(C^{1,1}(\mathbb{R}^{n-1})\) functions, they obtained that the solution is \(C^{1,1}(\overline{\Omega})\). If the boundary functions are locally uniformly convex \(C^{k+2,\alpha}(\mathbb{R}^{n-1})\) functions, then \(u\) is the unique \(C^{k+2,\alpha}(\overline{\Omega})\) solution. Recently, Xiao [39] and Ma-Zhang [34] proved the \(C^{1,1}\) regularity of the Dirichlet problem for the homogeneous \(k\)-Hessian equation outside \(\Omega\subset\mathbb{R}^{n}\), by assuming \(\Omega\) is starshaped, \((k-1)\)-convex and \(1\leq k<\frac{n}{2}\), or \(\Omega\) is \((k-1)\)-convex and \(1\leq k\leq n\), respectively. For the homogeneous complex \(k\)-Hessian equation, Gao-Ma-Zhang [15] obtained the \(C^{1,1}\) regularity. #### 1.1.3. Motivations Our paper is motivated by the research on the regularity of extremal functions and Green functions. In [23], Klimek introduced the following extremal function \[g_{\Omega}(z,\xi)=\sup\{v\in\mathcal{PSH}(\Omega):v<0,\ v(z)\leq\log|z-\xi|+O(1)\}.\] \(g_{\Omega}(z,\xi)\) is also called the pluricomplex Green function on \(\Omega\) with a logarithmic pole at \(\xi\). If \(\Omega\) is hyperconvex, Demailly [13] showed that \(u(z)=g_{\Omega}(z,\xi)\) is continuous and is the unique solution to the homogeneous complex Monge-Ampere equation, \[\begin{cases}(dd^{c}u)^{n}=0&\text{in }\Omega\setminus\{\xi\},\\ u=0&\text{on }\partial\Omega,\\ u(z)=\log|z-\xi|+O(1)&\text{as }z\to\xi.\end{cases} \tag{1.1}\] If \(\Omega\) is a strictly convex domain in \(\mathbb{C}^{n}\) with smooth boundary, Lempert [26] proved that (1.1) admits a unique plurisubharmonic solution which is smooth. In the strongly pseudoconvex case, B. Guan [17] proved \(g_{\Omega}(z,\xi)\in C^{1,\alpha}(\overline{\Omega}\setminus\{\xi\})\) and later, Blocki improved it to \(C^{1,1}(\overline{\Omega}\setminus\{\xi\})\) in [5] and generalized it to several poles in [6]. Due to the counterexamples found by Bedford-Demailly [3], \(C^{1,1}\) regularity is optimal. P. Guan [19] established the \(C^{1,1}\) regularity of the extremal function associated with the intrinsic norms of Chen-Levine-Nirenberg [12] and Bedford-Taylor [4] by considering \[\begin{cases}(dd^{c}u)^{n}=0&\text{in }\Omega_{0}\setminus(\cup_{i=1}^{m}\Omega_{i}),\\ u=0&\text{on }\partial\Omega_{i},\ i=1,\cdots,m,\\ u=1&\text{on }\partial\Omega_{0}.\end{cases}\] Applying the techniques from [19], B. 
Guan proved the \(C^{1,1}\) regularity of the pluricomplex Green function for the union of a finite collection of strongly pseudoconvex domains in \(\mathbb{C}^{n}\). In [14], we considered the following homogeneous (real) \(k\)-Hessian equation in a punctured domain, \[\begin{cases}S_{k}(D^{2}u)=0&\text{in }\Omega\backslash\{0\},\\ u=c_{k}&\text{on }\partial\Omega,\\ u(x)=h_{k}(x)&\text{as }x\to 0,\end{cases} \tag{1.2}\] where \(c_{k}=1\) and \(h_{k}(x)=0\) if \(k>\frac{n}{2}\), \(c_{k}=-1\) and \(h_{k}(x)=-|x|^{2-\frac{n}{k}}+O(1)\) if \(k<\frac{n}{2}\), and \(c_{k}=0\) and \(h_{k}(x)=\log|x|+O(1)\) if \(k=\frac{n}{2}\). Assuming that \(\Omega\) is \((k-1)\)-convex, we proved the existence and uniqueness of the \(C^{1,1}\) solution to (1.2). Moreover, the solution can be controlled pointwise, up to second order, by the fundamental solutions of homogeneous \(k\)-Hessian equations. If \(\Omega\) is also starshaped with respect to the origin, we proved a positive lower bound for the gradient of the solution, and then showed a near-monotonicity formula along the level sets of the approximating solutions. ### Our result In this section, we consider the following problem for the complex \(k\)-Hessian equation \[\begin{cases}(dd^{c}u)^{k}\wedge\omega^{n-k}=0&\text{in $\Omega\setminus\{0\}$,}\\ u=-1&\text{on $\partial\Omega$,}\\ u(z)=-|z|^{2-\frac{2n}{k}}+O(1)&\text{as $z\to 0$.}\end{cases} \tag{1.3}\] **Theorem 1.1**.: _Assume \(1\leq k<n\). Let \(\Omega\) be a smooth \((k-1)\)-pseudoconvex domain containing the origin. Then there exists a unique \(k\)-subharmonic solution \(u\) of (1.3) in \(C^{1,\alpha}(\overline{\Omega}\setminus\{0\})\). Moreover, \(u\) satisfies the estimates_ \[-C\leq u+|z|^{2-\frac{2n}{k}}\leq 0, \tag{1.4}\] \[|Du|+|z||\Delta u|\leq C|z|^{1-\frac{2n}{k}}. \tag{1.5}\] Here \(k\)-subharmonic functions and \((k-1)\)-pseudoconvex domains are introduced in Section 2. We suppose \(\Omega\) contains the origin and we use the notation \(\Omega_{r}=\Omega\setminus\overline{B}_{r}(0)\). We use \(B_{r}\) instead of \(B_{r}(0)\) for short. To prove Theorem 1.1, we consider the approximating problem \[\begin{cases}H_{k}[u^{\varepsilon,r}]=\varepsilon&\text{in $\Omega_{r}$,}\\ u=\underline{u}&\text{on $\partial\Omega_{r}$,}\end{cases} \tag{1.6}\] where \(\underline{u}\) is a subsolution constructed in Section 3. The solution \(u\) to (1.3) will be obtained from the approximating solutions \(u^{\varepsilon,r}\) to (1.6). The existence of \(u^{\varepsilon,r}\) follows from the subsolution method in [30]. The rest of the paper is organized as follows. In Section 2, we first give the definitions and some notation. Then we recall some new gradient estimates and complex Hessian estimates in [15] motivated by B. Guan [18], which will be used in the proof of (1.5). In Section 3, we establish uniform gradient estimates and complex Hessian estimates. Theorem 1.1 will be proved in the last section. ## 2. Preliminaries ### Elementary symmetric functions For any \(k=1,\cdots,n\) and \(\lambda=(\lambda_{1},\cdots,\lambda_{n})\in\mathbb{R}^{n}\), the \(k\)-th elementary symmetric function on \(\lambda\) is defined by \[S_{k}(\lambda):=\sum_{1\leq i_{1}<\cdots<i_{k}\leq n}\lambda_{i_{1}}\cdots\lambda_{i_{k}}.\] Let \(S_{k}(\lambda|i)\) denote the \(k\)-th elementary symmetric function with \(\lambda_{i}\) set to \(0\). Let \(A=(a_{ij})\in\mathbb{R}^{n\times n}\) be an \(n\times n\) matrix. Let \(S_{k}(A)\) be the \(k\)-th elementary symmetric function on \(A\), which is the sum of the \(k\times k\) principal minors of \(A\). We use the convention that \(S_{0}(A)=1\). 
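As a quick sanity check on these definitions, one can verify numerically that the pole profile \(-|z|^{2-\frac{2n}{k}}\) appearing in Theorem 1.1 satisfies \(H_{k}=0\) away from the origin. The sketch below assumes only the standard fact that the complex Hessian of a radial function \(g(|z|^{2})\) has eigenvalues \(g^{\prime}\) (with multiplicity \(n-1\)) and \(g^{\prime}+g^{\prime\prime}|z|^{2}\); the values of \(n\), \(k\) and the test point are illustrative:

```python
import numpy as np

def elem_sym(lam, k):
    # e_k(lam) read off from the characteristic polynomial prod_i (x - lam_i):
    # np.poly returns [1, -e_1, +e_2, -e_3, ...]
    return (-1) ** k * np.poly(lam)[k]

n, k = 4, 2          # any 1 <= k < n
t = 1.7              # t = |z|^2 at an arbitrary test point z != 0
# u(z) = -|z|^{2 - 2n/k} = g(t) with g(t) = -t^{1 - n/k}
gp = -(1.0 - n / k) * t ** (-n / k)                # g'(t) > 0 since k < n
gpp = (n / k) * (1.0 - n / k) * t ** (-n / k - 1)  # g''(t)
lam = np.array([gp] * (n - 1) + [gp + gpp * t])    # complex Hessian eigenvalues

print(elem_sym(lam, 1))   # S_1 > 0: the pole profile is subharmonic
print(elem_sym(lam, k))   # S_k = 0 (up to rounding): it is k-harmonic
```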
It is clear that \(S_{k}(A)=S_{k}(\lambda(A))\), where \(\lambda(A)\) are the eigenvalues of \(A\). The elementary symmetric functions have the following simple properties from [32]: \[S_{k}(\lambda)=S_{k}(\lambda|i)+\lambda_{i}S_{k-1}(\lambda|i), \tag{2.1}\] and \[\sum_{i=1}^{n}S_{k}(\lambda|i)=(n-k)S_{k}(\lambda). \tag{2.2}\] Recall that the \(\Gamma_{k}\)-cone is defined by \[\Gamma_{k}:=\{\lambda\in\mathbb{R}^{n}\mid S_{i}(\lambda)>0,\ 1\leq i\leq k\}.\] For \(\lambda\in\Gamma_{k}\) and \(1\leq l\leq k\), the well-known MacLaurin inequality (see [32]) says \[\left(\frac{S_{k}(\lambda)}{C_{n}^{k}}\right)^{\frac{1}{k}}\leq\left(\frac{S_{l}(\lambda)}{C_{n}^{l}}\right)^{\frac{1}{l}}.\] One can find the concavity property of \(S_{k}^{\frac{1}{k}}\) in [10]. **Proposition 2.1**.: \(S_{k}^{\frac{1}{k}}\) _is a concave function in \(\Gamma_{k}\)._ ### \(k\)-subharmonic solutions In this subsection we give the definitions of \(k\)-subharmonic functions and of \((k-1)\)-pseudoconvex domains. One can see the lecture notes by Wang [38] for more properties of the \(k\)-Hessian operator, and see Blocki [7] for those of the complex \(k\)-Hessian operator. We follow Blocki [7] in giving the definition of \(k\)-subharmonic functions. **Definition 2.2**.: _Let \(\alpha\) be a real \((1,1)\)-form in \(U\), a domain of \(\mathbb{C}^{n}\). We say that \(\alpha\) is \(k\)-positive in \(U\) if the following inequalities hold:_ \[\alpha^{j}\wedge\omega^{n-j}\geq 0,\quad\forall\ j=1,\cdots,k.\] **Definition 2.3**.: _Let \(U\) be a domain in \(\mathbb{C}^{n}\)._ _(1). A function \(u:U\to\mathbb{R}\cup\{-\infty\}\) is called \(k\)-subharmonic if it is subharmonic and for all \(k\)-positive real \((1,1)\)-forms \(\alpha_{1},\cdots,\alpha_{k-1}\) in \(U\),_ \[dd^{c}u\wedge\alpha_{1}\wedge\cdots\wedge\alpha_{k-1}\wedge\omega^{n-k}\geq 0.\] _The class of all \(k\)-subharmonic functions in \(U\) will be denoted by \(\mathcal{SH}_{k}(U)\)._ _(2). A function \(u\in C^{2}(U)\) is called \(k\)-subharmonic (strictly \(k\)-subharmonic) if \(\lambda\big(\frac{\partial^{2}u}{\partial z_{i}\partial\bar{z}_{j}}\big)\in\overline{\Gamma}_{k}\) (\(\lambda\big(\frac{\partial^{2}u}{\partial z_{i}\partial\bar{z}_{j}}\big)\in\Gamma_{k}\))._ If \(u\in\mathcal{SH}_{k}(U)\cap C(U)\), then \((dd^{c}u)^{k}\wedge\omega^{n-k}\) is well defined in the sense of pluripotential theory by Blocki [7]. We need the following comparison principle by Blocki [7] to prove the uniqueness of the continuous solution of the problem (1.3). **Lemma 2.4**.: _Let \(U\) be a bounded domain in \(\mathbb{C}^{n}\), and let \(u,v\in\mathcal{SH}_{k}(U)\cap C(\overline{U})\) satisfy_ \[\begin{cases}(dd^{c}u)^{k}\wedge\omega^{n-k}\geq(dd^{c}v)^{k}\wedge\omega^{n-k}&\text{in }U,\\ u\leq v&\text{on }\partial U.\end{cases}\] _Then \(u\leq v\) in \(U\)._ ### Gradient estimates and complex Hessian estimates Motivated by [18], we proved the following new gradient estimates and complex Hessian estimates in [15]. **Theorem 2.5**.: _Let \(u\in C^{3}(U)\cap C^{1}(\overline{U})\cap\mathcal{SH}_{k}(U)\) be a negative solution to \(H_{k}[u]=f\) in \(U\), where \(f\in C^{1}(\overline{U})\) is positive. 
Denote by_ \[P=|Du|^{2}(-u)^{-\frac{2n-k}{n-k}}.\] _Then_ \[\max_{\overline{U}}P\leq\max\Big\{\max_{\partial U}P,\max_{\overline{U}}\Big(\frac{2(n-k)}{k(2n-k)}\Big)^{2}(-u)^{-\frac{k}{n-k}}|D\log f|^{2}\Big\}. \tag{2.3}\] **Theorem 2.6**.: _Let \(u\in C^{4}(U)\cap C^{2}(\overline{U})\cap\mathcal{SH}_{k}(U)\) be a negative solution to \(H_{k}[u]=f\) in \(U\), where \(f\in C^{2}(\overline{U})\) is positive. Assume that \(P=|Du|^{2}(-u)^{-\frac{2n-k}{n-k}}\), \((-u)^{-\frac{k}{n-k}}|D\log f|^{2}\) and \((-u)^{-\frac{k}{n-k}}|D^{2}\log f|\) are bounded. Denote by_ \[H=u_{\xi\bar{\xi}}(-u)^{-\frac{n}{n-k}}(M-P)^{-\sigma},\] _where \(M=2\max_{\overline{U}}P+1\), \(\sigma\leq\frac{n(n-k)}{8(2n-k)^{2}}\). Then we have_ \[\max_{\overline{U}}H\leq C+\max_{\partial U}H, \tag{2.4}\] _where \(C\) is a positive constant depending only on \(n\), \(k\), \(P\), \((-u)^{-\frac{k}{n-k}}|D\log f|^{2}\) and \((-u)^{-\frac{k}{n-k}}|D^{2}\log f|\)._ We need the following lemma by P. Guan [19] to construct the subsolution of the complex \(k\)-Hessian equation in a ring domain. **Lemma 2.7**.: _Suppose that \(U\) is a bounded smooth domain in \(\mathbb{C}^{n}\). For \(h,g\in C^{m}(U)\), \(m\geq 2\), and for all \(\delta>0\), there is an \(H\in C^{m}(U)\) such that_ 1. \(H\geq\max\{h,g\}\) _and_ \[H(z)=\left\{\begin{array}{ll}h(z),&\text{if }\ h(z)-g(z)>\delta,\\ g(z),&\text{if }\ g(z)-h(z)>\delta;\end{array}\right.\] 2. _There exists_ \(|t(z)|\leq 1\) _such that_ \[\left\{H_{i\bar{j}}(z)\right\}\geq\left\{\frac{1+t(z)}{2}g_{i\bar{j}}+\frac{1-t(z)}{2}h_{i\bar{j}}\right\},\text{ for all }z\in\left\{|g-h|<\delta\right\}.\] We can prove that \(H\) is \(k\)-subharmonic if \(h\) and \(g\) are both \(k\)-subharmonic, by the concavity of \(S_{k}^{\frac{1}{k}}\) in Proposition 2.1. At the end of this subsection, we recall the definition of a \((k-1)\)-pseudoconvex domain. **Definition 2.8**.: _A \(C^{2}\) domain \(U\) is called \((k-1)\)-pseudoconvex if there is \(C_{U}>0\), such that \(\lambda(-d_{i\bar{j}}+C_{U}(d^{2})_{i\bar{j}})\in\Gamma_{k}\) on \(\partial U\), where \(d(z)=\operatorname{dist}(z,\partial U)\) is the distance function from \(z\) to \(\partial U\)._ ## 3. Solving the approximating problem in \(\Omega\setminus B_{r}\) In this section, we will solve the approximating problem by a priori estimates and the subsolution method. Before this, we make an assumption on \(\Omega\). **Assumption 3.1**.: _Assume \(\Omega\) contains the origin and \(B_{r_{0}}\subset\subset\Omega\subset\subset B_{(1-\tau_{0})R_{0}}\) for some \(\tau_{0}\in(0,\frac{1}{2})\)._ Denote by \(\Omega^{\mu}=\{z\in\Omega:d(z)<\mu\}\). In this section, we use \(C\) and \(c\) with subscripts to denote positive constants which are independent of \(\varepsilon\) and \(r\). The following lemma about \((k-1)\)-pseudoconvex domains in \(\mathbb{C}^{n}\) is a parallel version of that for \((k-1)\)-convex domains in \(\mathbb{R}^{n}\), which can be found in [10, Section 3]. It plays an important role in constructing the subsolution. **Lemma 3.2**.: _Let \(\Omega\) be a smooth \((k-1)\)-pseudoconvex bounded domain. There exists \(\mu_{0}\in(0,\frac{1}{2C_{\Omega}})\) small enough such that \(B_{r_{0}}\subset\subset\{z\in\Omega:d(z)>2\mu_{0}\}\). 
Moreover \(\rho:=-d+C_{\Omega}d^{2}\) is smooth and strictly \(k\)-subharmonic and \(H_{k}[\rho]\geq\epsilon_{0}\) in \(\overline{\Omega^{2\mu_{0}}}\) for some \(\epsilon_{0}>0\)._ ### The approximating equation We will approximate the solution to the homogeneous complex \(k\)-Hessian equation in \(\Omega\backslash\{0\}\) by solutions to a sequence of nondegenerate equations in \(\Omega_{r}\). The existence of the approximating solutions can be obtained once we construct a smooth subsolution. In the following, we use the technique from P. Guan [19] to construct a subsolution. Denote \(w:=-|z|^{2-\frac{2n}{k}}+R_{0}^{2-\frac{2n}{k}}-1+a_{0}\frac{|z|^{2}}{R_{0}^{2}}\), where \(a_{0}=\frac{1}{2}\big((1-\tau_{0})^{2-\frac{2n}{k}}-1\big)R_{0}^{2-\frac{2n}{k}}\). Then by \(\Omega\subset B_{(1-\tau_{0})R_{0}}\), we have \[w\leq-\frac{1}{2}\big((1-\tau_{0})^{2-\frac{2n}{k}}-1\big)R_{0}^{2-\frac{2n}{k}}-1\quad\text{in}\ \overline{\Omega}.\] By Proposition 2.1, we have \[H_{k}^{\frac{1}{k}}[w]\geq H_{k}^{\frac{1}{k}}[-|z|^{2-\frac{2n}{k}}]+H_{k}^{\frac{1}{k}}\Big[a_{0}\frac{|z|^{2}}{R_{0}^{2}}\Big]=(C_{n}^{k})^{\frac{1}{k}}a_{0}R_{0}^{-2}\quad\text{in}\ \Omega.\] Then by Lemma 2.7, we can construct a smooth and strictly \(k\)-subharmonic function \(\underline{u}\) from \(w\) and \(\rho\). **Lemma 3.3**.: _There is a strictly \(k\)-subharmonic function \(\underline{u}\in C^{\infty}(\overline{\Omega}_{r})\) satisfying_ \[\underline{u}(z)=\begin{cases}K_{0}\rho(z)-1&\text{if}\quad d(z)\leq\frac{\mu_{0}}{M_{0}},\\ w(z)&\text{if}\quad d(z)>\mu_{0},\end{cases}\] \[\underline{u}(z)\geq\max\{K_{0}\rho(z)-1,w(z)\}\quad\text{if}\quad\frac{\mu_{0}}{M_{0}}\leq d(z)\leq\mu_{0},\] \[H_{k}[\underline{u}]\geq\epsilon_{1}:=\min\{C_{n}^{k}a_{0}^{k}R_{0}^{-2k},K_{0}^{k}\epsilon_{0}\}\quad\text{in}\quad\Omega,\] _where \(K_{0}\) and \(M_{0}\) are uniform constants._ Proof.: Since \(B_{r_{0}}\subset\{z\in\Omega:d(z)>2\mu_{0}\}\), by choosing \(K_{0}=\frac{r_{0}^{2-\frac{2n}{k}}}{\mu_{0}-C_{\Omega}\mu_{0}^{2}}\), we find that for all \(z\in\overline{\Omega^{2\mu_{0}}}\setminus\Omega^{\mu_{0}}\), there holds \[w-(K_{0}\rho-1)=-|z|^{2-\frac{2n}{k}}+R_{0}^{2-\frac{2n}{k}}+a_{0}\frac{|z|^{2}}{R_{0}^{2}}-K_{0}\rho\geq-r_{0}^{2-\frac{2n}{k}}+R_{0}^{2-\frac{2n}{k}}-K_{0}(-\mu_{0}+C_{\Omega}\mu_{0}^{2})\geq R_{0}^{2-\frac{2n}{k}}.\] For any \(z\in\overline{\Omega^{\frac{\mu_{0}}{M_{0}}}}:=\{z\in\overline{\Omega}:d(z)\leq\frac{\mu_{0}}{M_{0}}\}\), there also holds \[(K_{0}\rho-1)-w\geq\frac{1}{2}\big((1-\tau_{0})^{2-\frac{2n}{k}}-1\big)R_{0}^{2-\frac{2n}{k}}+K_{0}\Big(-\frac{\mu_{0}}{M_{0}}+C_{\Omega}\big(\tfrac{\mu_{0}}{M_{0}}\big)^{2}\Big)\geq\frac{1}{4}\big((1-\tau_{0})^{2-\frac{2n}{k}}-1\big)R_{0}^{2-\frac{2n}{k}},\] provided that \(M_{0}>1\) satisfies \[K_{0}\Big(-\frac{\mu_{0}}{M_{0}}+C_{\Omega}\big(\tfrac{\mu_{0}}{M_{0}}\big)^{2}\Big)\geq-\frac{1}{4}\big((1-\tau_{0})^{2-\frac{2n}{k}}-1\big)R_{0}^{2-\frac{2n}{k}}. \tag{3.1}\] In fact, (3.1) holds once \(M_{0}\) is chosen large enough. Take \(\delta:=\min\big\{\frac{1}{4}\big((1-\tau_{0})^{2-\frac{2n}{k}}-1\big)R_{0}^{2-\frac{2n}{k}},R_{0}^{2-\frac{2n}{k}}\big\}\) and apply Lemma 2.7 with \(g=K_{0}\rho-1\), \(h=w\) and this \(\delta\) on \(\Omega^{2\mu_{0}}\); we obtain a smooth and strictly \(k\)-subharmonic function \(\underline{u}\) in \(\Omega^{2\mu_{0}}\). 
Moreover \(\underline{u}=K_{0}\rho-1\) in \(\Omega^{\frac{\mu_{0}}{M_{0}}}\), and \(\underline{u}=w\) in \(\overline{\Omega^{2\mu_{0}}}\setminus\Omega^{\mu_{0}}\). Finally, we set \(\underline{u}=w\) in \(\Omega_{r}\setminus\Omega^{2\mu_{0}}\). By Lemma 2.7, we have \[H_{k}[\underline{u}]\geq\min\{H_{k}[w],H_{k}[K_{0}\rho]\}\geq\min\{C_{n}^{k}a_{0}^{k}R_{0}^{-2k},K_{0}^{k}\epsilon_{0}\}.\] We now consider the approximating equation \[\begin{cases}H_{k}[u]=\varepsilon&\text{in }\Omega_{r},\\ u=\underline{u}&\text{on }\partial\Omega_{r}.\end{cases} \tag{3.2}\] Then \(\underline{u}\) is a strictly \(k\)-subharmonic subsolution of the above equation for any \(\varepsilon<\epsilon_{1}\). By Li [30], (3.2) admits a strictly \(k\)-subharmonic solution \(u^{\varepsilon,r}\in C^{\infty}(\overline{\Omega}_{r})\). Let \(r_{1}=\min\{2^{\frac{2\varepsilon}{2k-n}}R_{0},(\frac{2a_{0}}{R_{0}^{2}})^{-\frac{k}{2n}}\}\). For all \(r\leq r_{1}\), since \(\underline{u}=-1\) on \(\partial\Omega\) and \(\underline{u}=-r^{2-\frac{2n}{k}}+R_{0}^{2-\frac{2n}{k}}-1+a_{0}\frac{r^{2}}{R_{0}^{2}}\) on \(\partial B_{r}\), we have \(\underline{u}\mid_{\partial\Omega_{r}}\leq-1\). By the maximum principle, we have \(u^{\varepsilon,r}\leq-1\) when \(r\leq r_{1}\) and \(\varepsilon\leq\epsilon_{1}\). In the following, we want to derive an \((\varepsilon,r)\)-independent uniform \(C^{2}\) estimate for \(u^{\varepsilon,r}\). We prove the following. **Theorem 3.4**.: _Suppose \(\Omega\) is a smooth \((k-1)\)-pseudoconvex bounded domain satisfying Assumption 3.1. For sufficiently small \(r>0\) and \(\varepsilon>0\), (3.2) admits a \(k\)-subharmonic solution \(u^{\varepsilon,r}\), where \(\underline{u}\) is constructed above. Moreover, \(u^{\varepsilon,r}\) satisfies the following estimates:_ \[-|z|^{2-\frac{2n}{k}}+R_{0}^{2-\frac{2n}{k}}-1+a_{0}\frac{|z|^{2}}{R_{0}^{2}}\leq u^{\varepsilon,r}\leq-|z|^{2-\frac{2n}{k}}+r_{0}^{2-\frac{2n}{k}}-1, \tag{3.3}\] \[|Du^{\varepsilon,r}|\leq C|z|^{1-\frac{2n}{k}}, \tag{3.4}\] \[|\partial\bar{\partial}u^{\varepsilon,r}|\leq C|z|^{-\frac{2n}{k}}, \tag{3.5}\] _where \(C\) is a uniform positive constant which is independent of \(\varepsilon\) and \(r\)._ _In addition, if \(\Omega\) is starshaped with respect to the origin, there is a uniform positive constant \(c_{0}\), independent of \(\varepsilon\) and \(r\), such that_ \[|Du^{\varepsilon,r}|\geq c_{0}|z|^{1-\frac{2n}{k}}. \tag{3.6}\] ### \(C^{0}\) estimate Since \(\underline{u}\) is a subsolution to (3.2), we obtain \[u^{\varepsilon,r}\geq\underline{u}\quad\text{in }\Omega_{r}. \tag{3.7}\] Let \[\overline{u}=-|z|^{2-\frac{2n}{k}}+r_{0}^{2-\frac{2n}{k}}-1.\] By taking \(r\leq\min\{r_{1},r_{2}\}\), where \(r_{2}=R_{0}(r_{0}^{2-\frac{2n}{k}}-R_{0}^{2-\frac{2n}{k}})^{\frac{1}{2}}a_{0}^{-\frac{1}{2}}\), we have \[u^{\varepsilon,r}\leq\overline{u}\quad\text{on }\partial\Omega_{r}.\] Since \(H_{k}[u^{\varepsilon,r}]=\varepsilon>0=H_{k}[\overline{u}]\) in \(\Omega_{r}\), it follows that \[u^{\varepsilon,r}\leq\overline{u}\quad\text{in }\Omega_{r}. \tag{3.8}\] By (3.7) and (3.8), we obtain \[-|z|^{2-\frac{2n}{k}}+R_{0}^{2-\frac{2n}{k}}-1+a_{0}\frac{|z|^{2}}{R_{0}^{2}}\leq u^{\varepsilon,r}\leq-|z|^{2-\frac{2n}{k}}+r_{0}^{2-\frac{2n}{k}}-1.\] This gives the \(C^{0}\) estimate (3.3). ### Gradient estimates Based on the key estimate (2.3), we prove the global gradient estimate in this subsection. #### 3.3.1. 
**Reducing global gradient estimates to boundary gradient estimates.** Since \(u^{\varepsilon,r}<0\) and \(f=\varepsilon\) is constant, by Theorem 2.5 we have \[\max_{\overline{\Omega}_{r}}P=\max_{\partial\Omega_{r}}P.\] #### 3.3.2. **Boundary gradient estimates.** To prove the boundary gradient estimates, we construct barriers near \(\partial\Omega\) and \(\partial B_{r}\) respectively. Since \(u^{\varepsilon,r}=\underline{u}=-1\) on \(\partial\Omega\) and \(u^{\varepsilon,r}\geq\underline{u}\) in \(\Omega_{r}\), we have \[|Du^{\varepsilon,r}|=u_{\nu}^{\varepsilon,r}\leq\underline{u}_{\nu}\quad\text{on }\partial\Omega, \tag{3.9}\] where \(\nu\) is the unit outer normal to \(\partial\Omega\). Let \(r_{3}\leq\min\{r_{1},r_{2},1\}\) and let \(h_{1}\) be the harmonic function in \(\Omega\setminus B_{r_{3}}\) with \(h_{1}=-1\) on \(\partial\Omega\) and \(h_{1}=-r_{3}^{2-\frac{2n}{k}}+r_{0}^{2-\frac{2n}{k}}-1\) on \(\partial B_{r_{3}}\). Then we have \(h_{1}\geq u^{\varepsilon,r}\) on \(\partial\Omega_{r_{3}}\). So \(h_{1}\geq u^{\varepsilon,r}\) in \(\Omega_{r_{3}}\), and it follows that \[u_{\nu}^{\varepsilon,r}\geq h_{1,\nu}>0\quad\text{on }\partial\Omega. \tag{3.10}\] That is, there exists a positive constant \(C\) such that \[0<C^{-1}\leq u_{\nu}^{\varepsilon,r}\leq C\quad\text{on }\partial\Omega. \tag{3.11}\] Let \(h_{2}\) be a harmonic function with \(h_{2}=\underline{u}\) on \(\partial B_{r}\) and \(h_{2}=\overline{u}=-\frac{1}{2}|z|^{2-\frac{2n}{k}}\) on \(\partial B_{2r}\). Let \[\tilde{h}_{2}(z)=r^{\frac{2n}{k}-2}(h_{2}(rz)+r^{2-\frac{2n}{k}})=r^{\frac{2n}{k}-2}h_{2}(rz)+1.\] Then \(\tilde{h}_{2}\) is a harmonic function in \(B_{2}\setminus B_{1}\) with \(\tilde{h}_{2}=a_{0}r^{\frac{2n}{k}}R_{0}^{-2}+r^{\frac{2n}{k}-2}(R_{0}^{2-\frac{2n}{k}}-1)\) on \(\partial B_{1}\) and \(\tilde{h}_{2}=-2^{1-\frac{2n}{k}}\) on \(\partial B_{2}\). Let \[\tilde{u}=r^{\frac{2n}{k}-2}u^{\varepsilon,r}(rz)+1, \tag{3.12}\] and \[\underline{\tilde{u}}=r^{\frac{2n}{k}-2}\underline{u}(rz)+1. \tag{3.13}\] By the maximum principle, we have \[\underline{\tilde{u}}\leq\tilde{u}\leq\tilde{h}_{2}\quad\text{in }B_{2}\setminus B_{1}.\] Note that \[\underline{\tilde{u}}=\tilde{u}=\tilde{h}_{2}=a_{0}r^{\frac{2n}{k}}R_{0}^{-2}+r^{\frac{2n}{k}-2}(R_{0}^{2-\frac{2n}{k}}-1)\quad\text{on }\partial B_{1}.\] We obtain \[D^{\prime}\tilde{u}=D^{\prime}\underline{\tilde{u}}=D^{\prime}\tilde{h}_{2}=0\quad\text{on }\partial B_{1},\] and \[0<c(n,k)\leq\underline{\tilde{u}}_{\nu}\leq\tilde{u}_{\nu}\leq\tilde{h}_{2,\nu}\leq\tilde{C}\quad\text{on }\partial B_{1},\] where \(\nu\) is the unit outer normal to \(\partial B_{1}\) and \(\tilde{C}\) is independent of \(r\) and \(\varepsilon\). So we obtain \[C^{-1}\leq|D\tilde{u}|\leq C\quad\text{on }\partial B_{1}.\] By (3.3), we have \[|Du^{\varepsilon,r}|\leq Cr^{1-\frac{2n}{k}}=C|z|^{1-\frac{2n}{k}}\leq C(-u^{\varepsilon,r})^{a}\quad\text{on }\partial B_{r}, \tag{3.14}\] where \(a=\frac{2n-k}{2(n-k)}\) is the exponent determined by the quantity \(P\) in Theorem 2.5, and \[|Du^{\varepsilon,r}|\geq C^{-1}r^{1-\frac{2n}{k}}\quad\text{on }\partial B_{r}. \tag{3.15}\] By (2.3), (3.3), (3.9) and (3.14), we obtain \[|Du^{\varepsilon,r}|\leq C(-u^{\varepsilon,r})^{a}\leq C\Big(|z|^{2-\frac{2n}{k}}-R_{0}^{2-\frac{2n}{k}}+1-a_{0}\frac{|z|^{2}}{R_{0}^{2}}\Big)^{a}\leq C|z|^{1-\frac{2n}{k}}\quad\text{in }\Omega_{r}.\]
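For later use (the relation \(r^{2n}\varepsilon=H_{k}[\tilde{u}]\) reappears in the double normal estimate below), it is worth recording how the rescaling (3.12) acts on the operator; a two-line verification:

```latex
% Differentiating (3.12) twice gives
\[
\tilde{u}_{i\bar{j}}(z)=r^{\frac{2n}{k}-2}\cdot r^{2}\,u^{\varepsilon,r}_{i\bar{j}}(rz)
                       =r^{\frac{2n}{k}}\,u^{\varepsilon,r}_{i\bar{j}}(rz),
\]
% and since H_k is homogeneous of degree k in the complex Hessian,
\[
H_{k}[\tilde{u}](z)=\big(r^{\frac{2n}{k}}\big)^{k}H_{k}[u^{\varepsilon,r}](rz)
                   =r^{2n}\varepsilon\qquad\text{in }B_{2}\setminus B_{1}.
\]
```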
#### 3.3.3. **Positive lower bound of \(|Du^{\varepsilon,r}|\).** Since \(\partial\Omega\) is starshaped with respect to the origin, we have \(t\cdot\nu>0\) on \(\partial\Omega\), where \(\nu\) is the unit outer normal to \(\partial\Omega\), \(t=(t_{1},\cdots,t_{2n})=(y_{1},\cdots,y_{n},x_{1},\cdots,x_{n})\), and \(z_{i}=\frac{1}{\sqrt{2}}(x_{i}+\sqrt{-1}y_{i})\). By (3.11), \(|Du^{\varepsilon,r}|\geq c\) on \(\partial\Omega\) for some uniform constant \(c>0\). Then we have \[\sum_{l=1}^{n}(z_{l}u_{l}^{\varepsilon,r}+\overline{z}_{l}u_{\bar{l}}^{\varepsilon,r})=\sum_{l=1}^{2n}t_{l}u_{t_{l}}^{\varepsilon,r}=t\cdot\nu\,|Du^{\varepsilon,r}|\geq c\min_{\partial\Omega}t\cdot\nu:=c_{1}>0\quad\text{on }\partial\Omega.\] Let \(F^{i\bar{j}}=\frac{\partial}{\partial u_{i\bar{j}}^{\varepsilon,r}}(\log H_{k}[u^{\varepsilon,r}])\) and \(L=F^{i\bar{j}}\partial_{i\bar{j}}\). Consider the function \[G:=2\mathrm{Re}\Big\{\sum_{l=1}^{n}z_{l}u_{l}^{\varepsilon,r}\Big\}+Au^{\varepsilon,r}-B|z|^{2},\] where \(A\), \(B\) are constants to be determined later. By a direct calculation, using (2.2) and the MacLaurin inequality, we have \[\mathcal{F}:=\sum_{i=1}^{n}F^{i\bar{i}}=\frac{\sum_{i=1}^{n}S_{k-1}(\lambda|i)}{S_{k}(\lambda)}=\frac{(n-k+1)S_{k-1}(\lambda)}{S_{k}(\lambda)}\geq k(C_{n}^{k})^{\frac{1}{k}}S_{k}^{-\frac{1}{k}}(\lambda)=k(C_{n}^{k})^{\frac{1}{k}}\varepsilon^{-\frac{1}{k}},\] where \(\lambda=\lambda(u_{i\bar{j}}^{\varepsilon,r})\). Hence, choosing \(B\) large enough, \[LG=(2+A)k-B\mathcal{F}\leq(2+A)k-Bk(C_{n}^{k})^{\frac{1}{k}}\varepsilon^{-\frac{1}{k}}<0\quad\text{in }\Omega_{r}.\] Since \(LG<0\), the minimum of \(G\) over \(\overline{\Omega}_{r}\) is attained on \(\partial\Omega_{r}\); by the boundary estimates (3.11) and (3.15) together with the \(C^{0}\) estimate (3.3), the constants \(A\) and \(B\) can be chosen so that \(G>0\) on \(\partial\Omega_{r}\). By the maximum principle, \[G\geq\min_{\partial\Omega_{r}}G>0.\] Thus we prove \(G>0\) in \(\overline{\Omega}_{r}\), and (3.6) is obtained. ### Second order estimates Based on the key estimate (2.4), we prove the global second order estimate in this subsection. #### 3.4.1. **The global second order estimates can be reduced to the boundary second order estimates.** By Theorem 2.6, we have \[\max_{\overline{\Omega}_{r}}H\leq\max_{\partial\Omega_{r}}H+C.\] So \[u_{\xi\bar{\xi}}^{\varepsilon,r}(-u)^{-\frac{n}{n-k}}\leq C(\max_{\partial\Omega_{r}}H+C)\leq C\Big(\max_{\partial\Omega_{r}}\big|\partial\bar{\partial}u^{\varepsilon,r}(-u^{\varepsilon,r})^{-\frac{n}{n-k}}\big|+1\Big). \tag{3.16}\] On the other hand, let \(D_{\tau}=\sum\limits_{i=1}^{2n}a_{i}\frac{\partial}{\partial t_{i}}\) with \(\sum\limits_{i=1}^{2n}a_{i}^{2}=1\). From \(Lu_{\tau\tau}^{\varepsilon,r}\geq 0\), we obtain \[u_{\tau\tau}^{\varepsilon,r}\leq\max_{\partial\Omega_{r}}|D^{2}u^{\varepsilon,r}|\quad\text{in }\Omega_{r}.\] Since \(u^{\varepsilon,r}\) is subharmonic, we have \[-(2n-1)\max_{\partial\Omega_{r}}|D^{2}u^{\varepsilon,r}|\leq u_{t_{i}t_{i}}^{\varepsilon,r}\leq\max_{\partial\Omega_{r}}|D^{2}u^{\varepsilon,r}|\quad\text{in }\Omega_{r}.\] Taking \(\tau=\frac{1}{\sqrt{2}}(\frac{\partial}{\partial t_{i}}\pm\frac{\partial}{\partial t_{j}})\), we get \[|u_{t_{i}t_{j}}^{\varepsilon,r}|\leq C\max_{\partial\Omega_{r}}|D^{2}u^{\varepsilon,r}|\quad\text{in }\Omega_{r}.\] Hence \[|D^{2}u^{\varepsilon,r}|\leq C\max_{\partial\Omega_{r}}|D^{2}u^{\varepsilon,r}|\quad\text{in }\Omega_{r}.\] #### 3.4.2. 
**Second order estimates on the boundary \(\partial\Omega_{r}\).** The second order estimate on \(\partial\Omega\) is almost the same as in [15], so we only need to prove the second order estimate on \(\partial B_{r}\). **Step 1. Pure tangential derivative estimates.** Near \(p\in\partial B_{r}\), we may assume \(p=(0,\cdots,0,r)\). Near \(\tilde{p}=(0,\cdots,0,1)\), \(\partial B_{1}\) can be represented as a graph \[x_{n}=\rho(t^{\prime})=\bigg(1-\sum_{i=1}^{2n-1}t_{i}^{2}\bigg)^{\frac{1}{2}},\] where \(t^{\prime}=(t_{1},\cdots,t_{2n-1})\). Let \(\tilde{u}\) and \(\underline{\tilde{u}}\) be the functions defined in (3.12) and (3.13). Since \(\tilde{u}\) is equal to a constant on \(\partial B_{1}\), we have \[\tilde{u}_{t_{i}t_{j}}(\tilde{p})=\tilde{u}_{x_{n}}(\tilde{p})\delta_{ij}.\] It follows that \[|\tilde{u}_{t_{i}t_{j}}(\tilde{p})|\leq C.\] Hence \[|u_{t_{i}t_{j}}^{\varepsilon,r}(p)|\leq Cr^{-\frac{2n}{k}}.\] Furthermore, we have \[\tilde{u}_{i\bar{j}}(\tilde{p})=\tilde{u}_{x_{n}}(\tilde{p})\delta_{ij},\quad 1\leq i,j\leq n-1. \tag{3.17}\] **Step 2. Tangential-normal derivative estimates.** To estimate the tangential-normal second order derivatives on \(\partial B_{r}\), we estimate \(\tilde{u}_{t_{\alpha}x_{n}}(\tilde{p})\) for \(\alpha=1,\cdots,2n-1\). Note that \(F^{i\bar{j}}\) and \(\tilde{u}_{i\bar{j}}\) are both Hermitian matrices and can be diagonalized by the same unitary matrix, so \(F^{i\bar{k}}\tilde{u}_{j\bar{k}}\) is also a Hermitian matrix. It follows that \[F^{i\bar{j}}(z_{r}\tilde{u}_{s}-\bar{z}_{s}\tilde{u}_{\bar{r}})_{i\bar{j}}=0.\] Since \(\tilde{u}(t^{\prime},\rho(t^{\prime}))\) is constant on \(\partial B_{1}(0)\), we have \[0=\tilde{u}_{t_{\alpha}}+\tilde{u}_{t_{2n}}\rho_{t_{\alpha}}=\tilde{u}_{t_{\alpha}}-\frac{t_{\alpha}}{\rho}\tilde{u}_{t_{2n}},\quad\alpha=1,\cdots,2n-1.\] That is, on \(\partial B_{1}\cap B_{\frac{1}{2}}(\tilde{p})\), \[x_{n}\tilde{u}_{x_{i}}-x_{i}\tilde{u}_{x_{n}}=0,\quad i=1,\cdots,n-1,\qquad x_{n}\tilde{u}_{y_{i}}-y_{i}\tilde{u}_{x_{n}}=0,\quad i=1,\cdots,n.\] It follows that \[y_{n}\tilde{u}_{x_{i}}-x_{i}\tilde{u}_{y_{n}}=0,\quad i=1,\cdots,n-1,\qquad y_{n}\tilde{u}_{y_{i}}-y_{i}\tilde{u}_{y_{n}}=0,\quad i=1,\cdots,n.\] To estimate \(\tilde{u}_{x_{i}x_{n}}(\tilde{p})\) for \(i=1,\cdots,n-1\), set \[g^{1}=2\mathrm{Re}(z_{i}\tilde{u}_{n}-\bar{z}_{n}\tilde{u}_{\bar{i}})=x_{i}\tilde{u}_{x_{n}}-x_{n}\tilde{u}_{x_{i}}+y_{i}\tilde{u}_{y_{n}}-y_{n}\tilde{u}_{y_{i}}.\] Note that \[F^{i\bar{j}}g^{1}_{i\bar{j}}=F^{i\bar{j}}(z_{i}\tilde{u}_{n}-\bar{z}_{n}\tilde{u}_{\bar{i}})_{i\bar{j}}+F^{i\bar{j}}(\bar{z}_{i}\tilde{u}_{\bar{n}}-z_{n}\tilde{u}_{i})_{i\bar{j}}=0.\] On \(\partial B_{1}(0)\cap B_{\frac{1}{2}}(\tilde{p})\), consider the barrier function \[\Phi=A(1-x_{n})\pm g^{1}.\] Since \(g^{1}\) is bounded on \(\partial B_{1}(0)\cap B_{\frac{1}{2}}(\tilde{p})\) and \(1-x_{n}\) is bounded from below on \(\partial B_{\frac{1}{2}}(\tilde{p})\cap B_{1}(0)\), we can choose a positive constant \(A\) such that \(\Phi\geq 0\) on \(\partial(\partial B_{1}(0)\cap B_{\frac{1}{2}}(\tilde{p}))\). 
It follows that \[|g^{1}_{x_{n}}(\tilde{p})|\leq C.\] On the other hand, at \(\tilde{p}\) we have \[g^{1}_{x_{n}}=-\tilde{u}_{x_{i}}-\tilde{u}_{x_{i}x_{n}}.\] Thus \[|\tilde{u}_{x_{i}x_{n}}(\tilde{p})|\leq C,\quad i=1,\cdots,n-1.\] To estimate \(\tilde{u}_{y_{i}x_{n}}(\tilde{p})\) for \(i=1,\cdots,n\), set \[g^{2}=2\mathrm{Im}(z_{i}\tilde{u}_{n}-\bar{z}_{n}\tilde{u}_{\bar{i}})=y_{i}\tilde{u}_{x_{n}}-x_{n}\tilde{u}_{y_{i}}+y_{n}\tilde{u}_{x_{i}}-x_{i}\tilde{u}_{y_{n}}.\] Proceeding similarly, we obtain \[|\tilde{u}_{y_{i}x_{n}}(\tilde{p})|\leq C,\quad i=1,\cdots,n.\] **Step 3. Double normal derivative estimate.** By the pure tangential derivative estimates on \(\partial B_{1}\), we have \[|\tilde{u}_{y_{n}y_{n}}(\tilde{p})|\leq C.\] To estimate \(\tilde{u}_{x_{n}x_{n}}(\tilde{p})\), it suffices to estimate \(\tilde{u}_{n\bar{n}}(\tilde{p})\). By rotating \(\{z_{1},\cdots,z_{n-1}\}\), we may assume \(\{\tilde{u}_{i\bar{j}}(\tilde{p})\}_{1\leq i,j\leq n-1}\) is diagonal. Then \[r^{2n}\varepsilon=H_{k}[\tilde{u}]=\tilde{u}_{n\bar{n}}S_{k-1}(\{\tilde{u}_{i\bar{j}}\}_{1\leq i,j\leq n-1})-\sum_{\beta=1}^{n-1}|\tilde{u}_{\beta\bar{n}}|^{2}S_{k-2}(\{\tilde{u}_{i\bar{j}}\}_{1\leq i,j\leq n-1}).\] By (3.17), we obtain \[S_{k-1}(\{\tilde{u}_{i\bar{j}}\}_{1\leq i,j\leq n-1})=S_{k-1}\Big(\{\underline{\tilde{u}}_{i\bar{j}}\}_{1\leq i,j\leq n-1}+\frac{1}{2}(\tilde{u}-\underline{\tilde{u}})_{x_{n}}I_{n-1}\Big)\geq S_{k-1}(\{\underline{\tilde{u}}_{i\bar{j}}\}_{1\leq i,j\leq n-1})\geq C_{n}^{k-1}(C_{n}^{k})^{\frac{1-k}{k}}\min_{\partial\Omega}H_{k}^{\frac{k-1}{k}}[\underline{\tilde{u}}]:=c_{1}.\] So \[\tilde{u}_{n\bar{n}}(\tilde{p})\leq C.\] Combining these three steps, and noting that \(\tilde{u}\) is subharmonic, we obtain \[|\partial\bar{\partial}\tilde{u}|\leq C\quad\text{on }\partial B_{1}.\] Hence \[|\partial\bar{\partial}u^{\varepsilon,r}|\leq Cr^{-\frac{2n}{k}}\quad\text{on }\partial B_{r}.\] By (3.16) and the \(C^{0}\) estimate, we have \[|\partial\bar{\partial}u^{\varepsilon,r}|\leq C|z|^{-\frac{2n}{k}}\quad\text{in }\Omega_{r}.\] ## 4. Proof of Theorem 1.1 ### Uniqueness The uniqueness follows from the comparison principle, Lemma 2.4. Let \(u,v\) be two solutions to (1.3) in \(\Omega\setminus\{0\}\). For any \(z_{0}\in\Omega\setminus\{0\}\), we first show \(u(z_{0})\leq v(z_{0})\). In fact, for any \(t\in(0,1)\), since \(u-tv=-(1-t)|z|^{2-\frac{2n}{k}}+O(1)\) as \(z\to 0\), there exists \(r\) sufficiently small such that \(z_{0}\in\Omega\setminus\overline{B_{r}}\) and \(u<tv\) on \(\partial B_{r}\). Note that \(u=-1<-t=tv\) on \(\partial\Omega\). By Lemma 2.4, we get \(u\leq tv\) in \(\Omega\setminus B_{r}\). Therefore \(u(z_{0})\leq tv(z_{0})\). Letting \(t\to 1\), we obtain \(u(z_{0})\leq v(z_{0})\). Hence \(u\leq v\) in \(\Omega\setminus\{0\}\). Similarly, we obtain \(u\geq v\) in \(\Omega\setminus\{0\}\). Therefore \(u=v\) in \(\Omega\setminus\{0\}\). ### Existence The existence follows from the uniform \(C^{2}\)-estimates for \(u^{\varepsilon,r}\). 
For \(K=\Omega\setminus B_{r_{0}}\) and the solution to (3.2), by the estimates (3.3)-(3.5) we have \[|u^{\varepsilon,r}|_{C^{1}(K)}+|\Delta u^{\varepsilon,r}|_{C^{0}(K)}\leq C(K).\] By the Evans-Krylov theory, we obtain, for any \(0<\alpha<1\), \[|u^{\varepsilon,r}|_{C^{2,\alpha}(K)}\leq C(K,\varepsilon).\] By compactness, we can find a sequence \(r_{i}\to 0\) such that \[u^{\varepsilon,r_{i}}\to u^{\varepsilon}\quad\text{in }C^{2}(K),\] where \(u^{\varepsilon}\) satisfies \[\begin{cases}H_{k}[u^{\varepsilon}]=\varepsilon&\text{in }K,\\ u=-1&\text{on }\partial\Omega,\end{cases}\] and \[-C-|z|^{2-\frac{2n}{k}}\leq u^{\varepsilon}(z)\leq-|z|^{2-\frac{2n}{k}}, \tag{4.1}\] \[|Du^{\varepsilon}(z)|\leq C|z|^{1-\frac{2n}{k}},\] \[|\partial\bar{\partial}u^{\varepsilon}(z)|\leq C|z|^{-\frac{2n}{k}}.\] Moreover, \[|u^{\varepsilon}|_{C^{2,\alpha}(K)}\leq C(K,\varepsilon).\] By the classical Schauder theory, \(u^{\varepsilon}\) is smooth. By the above estimates (4.1) for \(u^{\varepsilon}\), for any sequence \(\varepsilon_{j}\to 0\) there is a subsequence of \(\{u^{\varepsilon_{j}}\}\) converging to a function \(u\) in the \(C^{1,\alpha}\) norm on any compact subset of \(\Omega\setminus\{0\}\). Thus \(u\in C^{1,\alpha}(\Omega\setminus\{0\})\) and satisfies the estimates (1.4) and (1.5). By the convergence theorem for the complex \(k\)-Hessian operator proved by Trudinger-Zhang [36] (see also Lu [33]), \(u\) is a solution to (1.3). ## Acknowledgements: The second author was supported by the National Natural Science Foundation of China (grants 11721101 and 12141105) and the National Key Research and Development Project (grant SQ2020YFA070080). The third author was supported by NSFC grant No. 11901102.
2302.12909
Differentially Private Algorithms for the Stochastic Saddle Point Problem with Optimal Rates for the Strong Gap
We show that convex-concave Lipschitz stochastic saddle point problems (also known as stochastic minimax optimization) can be solved under the constraint of $(\epsilon,\delta)$-differential privacy with \emph{strong (primal-dual) gap} rate of $\tilde O\big(\frac{1}{\sqrt{n}} + \frac{\sqrt{d}}{n\epsilon}\big)$, where $n$ is the dataset size and $d$ is the dimension of the problem. This rate is nearly optimal, based on existing lower bounds in differentially private stochastic optimization. Specifically, we prove a tight upper bound on the strong gap via novel implementation and analysis of the recursive regularization technique repurposed for saddle point problems. We show that this rate can be attained with $O\big(\min\big\{\frac{n^2\epsilon^{1.5}}{\sqrt{d}}, n^{3/2}\big\}\big)$ gradient complexity, and $\tilde{O}(n)$ gradient complexity if the loss function is smooth. As a byproduct of our method, we develop a general algorithm that, given a black-box access to a subroutine satisfying a certain $\alpha$ primal-dual accuracy guarantee with respect to the empirical objective, gives a solution to the stochastic saddle point problem with a strong gap of $\tilde{O}(\alpha+\frac{1}{\sqrt{n}})$. We show that this $\alpha$-accuracy condition is satisfied by standard algorithms for the empirical saddle point problem such as the proximal point method and the stochastic gradient descent ascent algorithm. Further, we show that even for simple problems it is possible for an algorithm to have zero weak gap and suffer from $\Omega(1)$ strong gap. We also show that there exists a fundamental tradeoff between stability and accuracy. Specifically, we show that any $\Delta$-stable algorithm has empirical gap $\Omega\big(\frac{1}{\Delta n}\big)$, and that this bound is tight. This result also holds more specifically for empirical risk minimization problems and may be of independent interest.
Raef Bassily, Cristóbal Guzmán, Michael Menart
2023-02-24T21:50:02Z
http://arxiv.org/abs/2302.12909v2
Differentially Private Algorithms for the Stochastic Saddle Point Problem with Optimal Rates for the Strong Gap ###### Abstract We show that convex-concave Lipschitz stochastic saddle point problems (also known as stochastic minimax optimization) can be solved under the constraint of \((\epsilon,\delta)\)-differential privacy with _strong (primal-dual) gap_ rate of \(\tilde{O}\big(\frac{1}{\sqrt{n}}+\frac{\sqrt{d}}{n\epsilon}\big)\), where \(n\) is the dataset size and \(d\) is the dimension of the problem. This rate is nearly optimal, based on existing lower bounds in differentially private stochastic optimization. Specifically, we prove a tight upper bound on the strong gap via novel implementation and analysis of the recursive regularization technique repurposed for saddle point problems. We show that this rate can be attained with \(O\big(\min\big\{\frac{n^{2}\epsilon^{1.5}}{\sqrt{d}},n^{3/2}\big\}\big)\) gradient complexity, and \(\tilde{O}(n)\) gradient complexity if the loss function is smooth. As a byproduct of our method, we develop a general algorithm that, given a black-box access to a subroutine satisfying a certain \(\alpha\) primal-dual accuracy guarantee with respect to the empirical objective, gives a solution to the stochastic saddle point problem with a strong gap of \(\tilde{O}(\alpha+\frac{1}{\sqrt{n}})\). We show that this \(\alpha\)-accuracy condition is satisfied by standard algorithms for the empirical saddle point problem such as the proximal point method and the stochastic gradient descent ascent algorithm. Finally, to emphasize the importance of the strong gap as a convergence criterion compared to the weaker notion of primal-dual gap, commonly known as the _weak gap_, we show that even for simple problems it is possible for an algorithm to have zero weak gap and suffer from \(\Omega(1)\) strong gap. We also show that there exists a fundamental tradeoff between stability and accuracy. Specifically, we show that any \(\Delta\)-stable algorithm has empirical gap \(\Omega\big(\frac{1}{\Delta n}\big)\), and that this bound is tight. This result also holds more specifically for empirical risk minimization problems and may be of independent interest. ## 1 Introduction Stochastic (convex-concave) saddle point problems (SSP)1 (also referred to in the literature as stochastic minimax optimization problems) are an increasingly important model for modern machine learning, arising in areas such as stochastic optimization [16, 17, 18], robust statistics [1, 19], and algorithmic fairness [14, 15]. On the other hand, the reliance of modern machine learning on large datasets has led to concerns of user privacy. These concerns in turn have led to a variety of privacy standards, of which differential privacy (DP) has become the premier standard. However, for a variety of machine learning problems it is known that their differentially-private counterparts have provably worse rates. As such, characterizing the fundamental cost of differential privacy has become an important problem. Currently, the theory of solving SSPs under differential privacy has major limitations, compared to its non-private counterpart. To illustrate this point, we need to discuss the notions of accuracy used in the literature. 
In SSPs, the goal is to find an approximate solution of the problem \[\min_{w\in\mathcal{W}}\max_{\theta\in\Theta}\Big\{F_{\mathcal{D}}(w,\theta):=\mathbb{E}_{x\sim\mathcal{D}}[f(w,\theta;x)]\Big\}, \tag{1}\] where \(\mathcal{D}\) is an unknown distribution for which we have access to an i.i.d. sample \(S\). Given a (randomized) algorithm \(\mathcal{A}\) with output \([\mathcal{A}_{w}(S),\mathcal{A}_{\theta}(S)]\in\mathcal{W}\times\Theta\), two studied measures of performance are the _strong and weak gap_2, defined respectively as \[\mathrm{Gap}(\mathcal{A})=\mathbb{E}_{\mathcal{A},S}\left[\max_{\theta\in\Theta}\left\{F_{\mathcal{D}}(\mathcal{A}_{w}(S),\theta)\right\}-\min_{w\in\mathcal{W}}\left\{F_{\mathcal{D}}(w,\mathcal{A}_{\theta}(S))\right\}\right], \tag{2}\] \[\mathrm{Gap}_{\mathsf{weak}}(\mathcal{A})=\mathbb{E}_{\mathcal{A}}\left[\max_{\theta\in\Theta}\left\{\mathbb{E}_{S}\left[F_{\mathcal{D}}(\mathcal{A}_{w}(S),\theta)\right]\right\}-\min_{w\in\mathcal{W}}\left\{\mathbb{E}_{S}\left[F_{\mathcal{D}}(w,\mathcal{A}_{\theta}(S))\right]\right\}\right]. \tag{3}\] Footnote 2: The weak gap is sometimes stated with \(\mathbb{E}_{\mathcal{A}}[\cdot]\) taken inside the max. However [1] showed this was not necessary to obtain the stability implies generalization result used in various works. It is easy to see that the strong gap upper bounds the weak gap, and thus it is a stronger accuracy measure. On the other hand, even for simple problems, the difference between these measures can be \(\Omega(1)\); a fact we elaborate on in Section 6. We also note that the strong gap has a clear game-theoretic interpretation: if we consider \(\mathcal{A}_{w}(S)\) and \(\mathcal{A}_{\theta}(S)\) as the actions of two players in a (stochastic) zero-sum game, the strong gap upper bounds the most profitable unilateral deviation for either of the two players. In game theory this is known as an approximate Nash equilibrium. By contrast, there is no general guarantee associated with the weak gap. Non-privately, it is known how to achieve optimal rates w.r.t. the strong gap, and those rates are similar to those established for stochastic convex optimization (SCO) [11, 12]. However, for DP methods optimal rates are only known for the weak gap [1, 13, 14]. In a nutshell, the main limitation of these approaches is that (in order to amplify privacy) they make multiple passes over the data (e.g., by sampling with replacement stochastic gradients from the dataset), and the existing theory of generalization for SSPs is much more limited than it is for SCO [14, 15, 16]. Our approach largely circumvents the current limitations of generalization theory for SSPs, providing the first nearly-optimal rates for the strong gap in DP-SSP. ### 1.1 Contributions In this work, we establish the optimal rates on the strong gap for DP-SSP. In the following, we let \(n\) be the number of samples, \(d\) be the dimension, and \(\epsilon,\delta\) be the privacy parameters. Our main result is an \((\epsilon,\delta)\)-DP algorithm for SSP whose strong gap is \(\tilde{O}\big(\frac{1}{\sqrt{n}}+\frac{\sqrt{d}}{n\epsilon}\big)\). This rate is nearly optimal, due to matching lower bounds for differentially private SCO [1, 10]. These minimization lower bounds hold for saddle point problems since minimization problems are a special case of saddle point problems when \(\Theta\) is constrained to be a singleton. 
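To make the two criteria concrete, the strong gap of a candidate pair can be evaluated directly via the inner maximization and minimization in (2). A toy sketch for a bilinear objective \(F_{\mathcal{D}}(w,\theta)=\mu\,w\theta\) on \([-1,1]^{2}\); the problem instance is hypothetical, chosen only for illustration:

```python
import numpy as np

def strong_gap(F, w_hat, th_hat, grid):
    """Eq. (2) for a 1-D primal and dual variable:
    max_theta F(w_hat, theta) - min_w F(w, th_hat), both over a grid."""
    return max(F(w_hat, th) for th in grid) - min(F(w, th_hat) for w in grid)

mu = 0.3                                   # mean of the (hypothetical) data distribution
F = lambda w, th: mu * w * th              # population objective F_D(w, theta)
grid = np.linspace(-1.0, 1.0, 201)

print(strong_gap(F, 0.0, 0.0, grid))       # the saddle point (0,0): gap = 0
print(strong_gap(F, 1.0, 1.0, grid))       # (1,1): gap = 2*mu, an Omega(1) deviation
```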
For non-smooth loss functions, we show this rate can be obtained in gradient complexity \(O\big{(}\min\left\{\frac{n^{2}\epsilon^{1.5}}{\sqrt{d}},n^{3/2}\right\}\big{)}\). This improves even upon the previous best known running time for achieving analogous rates on the _weak gap_, which was \(n^{5/2}\) [15]. Furthermore, we show that if the loss function is smooth, this rate can be achieved in linear gradient complexity. In order to obtain an upper bound for this problem, we present a novel analysis of the recursive regularization algorithm of [1]. Our work is the first to show how the sequential regularization approach can be repurposed to provide an algorithmic framework for attaining optimal strong gap guarantees for DP-SSP. As a byproduct of our analysis, we show that empirical saddle point solvers which satisfy a certain \(\alpha\) accuracy guarantee can be used as a black box to obtain an \(\tilde{O}\left(\alpha+1/\sqrt{n}\right)\) guarantee on the strong (population) gap. This class of algorithms includes common techniques such as the proximal point method, the extragradient method, and stochastic gradient descent ascent (SGDA) [12, 13, 14]. This fact may be of interest independent of differential privacy, as to the best of our knowledge, existing algorithms which achieve the optimal \(1/\sqrt{n}\) rate on the strong population gap rely crucially on a one-pass structure which optimizes the population gap directly [16]. Under the additional assumption that the loss function is smooth, we present a one-pass algorithm which also obtains the optimal private rate on the strong gap. This algorithm is based on an adaptation of the tree method first seen in [1]. In comparison to the near linear time _iterative_ regularization algorithm of [13], which obtains bounds only for the _weak_ gap, our algorithm obtains near optimal guarantees for the _strong_ gap, improves log factors, and also does not require convexity-concavity of the loss function to ensure privacy. Our results stand in contrast to previous work on DP-SSPs, which has achieved optimal rates only for the weak gap and has crucially relied on "stability implies generalization" results for the weak gap. In this vein, we prove that even for simple problems, the strong and weak gap may differ by \(\Theta(1)\). We also elucidate the challenges of extending existing techniques to strong gap guarantees by showing a fundamental tradeoff between stability and empirical accuracy. Specifically, we show that even for the more specific case of empirical risk minimization, any \(\Delta\)-uniform argument stable algorithm must have empirical risk \(\Omega\left(\frac{1}{\Delta n}\right)\). We also show this bound is tight, and note that it may be of independent interest. Such a tradeoff was also investigated by [11], but their result only implies such a tradeoff for the specific case of \(\Delta=\frac{1}{\sqrt{n}}\) and their proof technique is unrelated to ours.

### Related Work

Differentially private stochastic optimization has been extensively studied for over a decade [1, 1, 14, 15, 16, 17, 18, 19, 15]. Among such problems, stochastic convex minimization (where problem parameters are measured in the \(\ell_{2}\)-norm) is perhaps the most widely studied, for which the optimal rate is known to be \(\tilde{O}(\frac{1}{\sqrt{n}}+\frac{\sqrt{d}}{n\epsilon})\) [1, 19]. Further, under smoothness assumptions such rates can be obtained in linear (in the sample size) gradient complexity [15].
Without smoothness, no linear time algorithms which achieve the optimal rates are known [11]. The study of stochastic saddle point problems under differential privacy is comparatively newer. In the non-private setting, optimal \(O(1/\sqrt{n})\) guarantees on the strong gap have been known as far back as [15]. Under privacy (without strong convexity/strong concavity), optimal rates are known only for the _weak gap_. These rates, \(\tilde{O}(\frac{1}{\sqrt{n}}+\frac{\sqrt{d}}{n\epsilon})\), have been obtained by several works [1, 14, 15]. The work of [13] additionally showed that under smoothness assumptions such a result could be obtained in near linear gradient complexity by leveraging accelerated methods [15, 16]. All of these results are for the weak gap and they rely crucially on the fact that, for the weak gap, \(\Delta\)-stability implies \(\Delta\)-generalization [13]. By contrast, for the strong gap (without strong convexity/strong concavity assumptions), the best stability-implies-generalization result is a \(\sqrt{\Delta}\) bound obtained by [1], provided the loss is smooth. As a result of this discrepancy, known bounds on the strong gap under privacy are worse. The best known rates for the strong gap are \(O\left(\min\left(\frac{d^{1/4}}{\sqrt{n\epsilon}},\frac{1}{n^{1/3}}+\frac{\sqrt{d}}{n^{2/3}\epsilon}\right)\right)\) [1]. This rate was obtained through a mixture of noisy stochastic extragradient and noisy inexact proximal point methods, avoiding stability arguments altogether and instead relying on one-pass algorithms which optimize the population loss directly. Without smoothness, we are not aware of any work which provides bounds on the strong gap under privacy, but one may note that a straightforward implementation of one-pass noisy SGDA leads to a rate of \(O\big{(}\frac{\sqrt{d}}{\sqrt{n\epsilon}}\big{)}\) in this setting. We give these details in Appendix A.2 and note this same algorithm establishes the optimal rate for SSPs under local differential privacy. Finally, under the stringent assumptions of \(\mu\)-strong convexity/strong concavity (\(\mu\)-SC/SC) and smoothness with constant condition number, \(\kappa\), optimal rates on the strong gap have been obtained [15]. Under these assumptions, the optimal rate of \(O\big{(}\frac{1}{\mu n}+\frac{d}{\mu n^{2}\epsilon^{2}}\big{)}\) was achieved by leveraging the fact that \(\Delta\)-stability implies \(\kappa\Delta\)-generalization [13]. The lower bound for this rate comes from lower bounds for the minimization setting [10, 1].

## 2 Preliminaries

Throughout, we consider the space \(\mathbb{R}^{d}\) endowed with the standard \(\ell_{2}\) norm \(\|\cdot\|\). Let the primal parameter space \(\mathcal{W}\) and the dual parameter space \(\Theta\) be compact convex sets such that \(\mathcal{W}\times\Theta\subset\mathbb{R}^{d}\) for some \(d>0\). Let \(\mathcal{D}\) be some distribution over a data domain \(\mathcal{X}\). Consider the _stochastic saddle-point problem_ given in equation (1) for some loss function \(f\) that is convex w.r.t. \(w\) and concave w.r.t. \(\theta\). We define the corresponding population loss and empirical loss functions as \(F_{\mathcal{D}}(w,\theta)=\mathop{\mathbb{E}}_{x\sim\mathcal{D}}\left[f(w,\theta;x)\right]\) and \(F_{S}(w,\theta)=\frac{1}{n}\sum_{x\in S}f(w,\theta;x)\) respectively. For some \(B>0\) we assume that \(\max_{z,z^{\prime}\in\mathcal{W}\times\Theta}\|z-z^{\prime}\|\leq B\).
To simplify notation, for vectors \(w\in\mathcal{W}\) and \(\theta\in\Theta\), we will use \([w,\theta]\) to denote their concatenation, noting \([w,\theta]\) is a vector in \(\mathbb{R}^{d}\). We primarily consider the case where \(f\) is \(L\)-Lipschitz, but will also consider the additional assumption of \(\beta\)-smoothness for certain results3. Specifically, these assumptions are that \(\forall w_{1},w_{2}\in\mathcal{W}\) and \(\forall\theta_{1},\theta_{2}\in\Theta\): Footnote 3: Throughout, any properties for \(f\) are considered as a function of \([w,\theta]\). No assumptions about \(f\) w.r.t. \(x\) are made. \[\text{Lipschitzness:} |f(w_{1},\theta_{1};x)-f(w_{2},\theta_{2};x)|\leq L\left\|[w_{1},\theta_{1}]-[w_{2},\theta_{2}]\right\|\] \[\text{Smoothness:} \left\|\nabla_{[w,\theta]}f(w_{1},\theta_{1};x)-\nabla_{[w,\theta]}f(w_{2},\theta_{2};x)\right\|\leq\beta\left\|[w_{1},\theta_{1}]-[w_{2},\theta_{2}]\right\|.\] Under such assumptions (in fact, smoothness is not necessary), a solution for problem (1) always exists [14], which we will refer to as a _saddle point_ from now on. Further, given an SSP (1), we will denote a saddle point as \([w^{*},\theta^{*}]\).

**Gap functions.** In addition to the strong and weak gap functions defined in equations (2) and (3), it will be useful to define the following _gap function_ expressed as a function of the parameter vector instead of the algorithm, \(\widehat{\operatorname{Gap}}(\bar{w},\bar{\theta})=\max_{\theta\in\Theta}\left\{F_{\mathcal{D}}(\bar{w},\theta)\right\}-\min_{w\in\mathcal{W}}\left\{F_{\mathcal{D}}(w,\bar{\theta})\right\}.\) We have the following useful fact regarding \(\widehat{\operatorname{Gap}}\) (see Appendix A for a proof). **Fact 1**.: _If \(f\) is \(L\)-Lipschitz then \(\widehat{\operatorname{Gap}}\) is \(\sqrt{2}L\)-Lipschitz._ Note the strong gap can be written as an expectation of the gap function. Further, since the gap function is zero if and only if \((\bar{w},\bar{\theta})\) is a solution for problem (1), the strong gap is considered the most suitable measure of accuracy for SSPs [10, 11]. We also define the empirical gap as \(\operatorname{Gap}_{S}(\mathcal{A})=\operatorname*{\mathbb{E}}_{\mathcal{A}}\left[\max_{\theta\in\Theta}\left\{F_{S}(\mathcal{A}_{w}(S),\theta)\right\}-\min_{w\in\mathcal{W}}\left\{F_{S}(w,\mathcal{A}_{\theta}(S))\right\}\right].\) We will consider at various points the notion of _generalization error_ with respect to the strong/weak gap, which refers to the difference between the strong/weak gap and the empirical gap. Note that because the empirical gap treats the dataset as a fixed quantity, there are no distinct strong and weak versions of the empirical gap.

**Saddle Operator.** Define the _saddle operator_ as \(g(w,\theta;x)=\left[\nabla_{w}f(w,\theta;x),-\nabla_{\theta}f(w,\theta;x)\right]\). Similarly define \(G_{\mathcal{D}}(w,\theta)=\operatorname*{\mathbb{E}}_{x\sim\mathcal{D}}[g(w,\theta;x)]\) and \(G_{S}(w,\theta)=\frac{1}{n}\sum_{x\in S}g(w,\theta;x)\). Note that the assumption on the smoothness of \(f\) implies the Lipschitzness of \(g\). We note that since the saddle operator can be computed using one computation of the gradient, we refer indistinctly to saddle operator complexity or gradient complexity when discussing the running time of our algorithms.

**Stability.** We will also use the notion of uniform argument stability frequently in our analysis [1].
**Definition 1**.: _A randomized algorithm \(\mathcal{A}:\mathcal{X}^{n}\mapsto\mathcal{W}\times\Theta\) satisfies \(\Delta\)-uniform argument stability if for any pair of adjacent datasets \(S,S^{\prime}\in\mathcal{X}^{n}\) it holds that \(\operatorname*{\mathbb{E}}_{\mathcal{A}}\left[\left\|\mathcal{A}(S)-\mathcal{A}(S^{\prime})\right\|\right]\leq\Delta\)._ A fact we will use is that the (constrained) regularized saddle point is stable. Specifically, for some \(\hat{w}\in\mathcal{W}\), \(\hat{\theta}\in\Theta\), and \(\lambda\geq 0\) consider the regularized objective function \[(w,\theta)\mapsto\frac{1}{n}\sum_{z\in S}f(w,\theta;z)+\frac{\lambda}{2}\|w-\hat{w}\|^{2}-\frac{\lambda}{2}\|\theta-\hat{\theta}\|^{2}. \tag{4}\] It is easy to see that this problem has a unique saddle point. The mapping which selects its output according to the unique solution of (4) has the following stability property. **Lemma 1**.: _[_1_, Lemma 1]_ _The algorithm which outputs the regularized saddle point with parameters \(\lambda>0\), \(\hat{w}\in\mathcal{W}\) and \(\hat{\theta}\in\Theta\), is \(\left(\frac{2L}{\lambda n}\right)\)-uniform argument stable w.r.t. \(S\)._

**Differential Privacy (DP) [10]:** An algorithm \(\mathcal{A}\) is \((\epsilon,\delta)\)-differentially private if for all datasets \(S\) and \(S^{\prime}\) differing in one data point and all events \(\mathcal{E}\) in the range of \(\mathcal{A}\), we have \(\operatorname*{\mathbb{P}}\left(\mathcal{A}(S)\in\mathcal{E}\right)\leq e^{\epsilon}\operatorname*{\mathbb{P}}\left(\mathcal{A}(S^{\prime})\in\mathcal{E}\right)+\delta\).

## 3 From Empirical Saddle Point to Strong Gap Guarantee via Recursive Regularization

Our approach for obtaining near optimal rates on the strong gap leverages the recursive regularization technique of [1]. In addition to adapting this algorithm to fit SSP problems, we also provide a novel analysis which differs substantially from the analysis presented in previous work [19, 2]. Our recursive regularization algorithm works by solving a series of regularized objectives, \(f^{(1)},...,f^{(T)}\), with increasingly large regularization parameters. Specifically, after solving the \(t\)'th objective to obtain \([\bar{w}_{t},\bar{\theta}_{t}]\), the algorithm creates a new objective \(f^{(t+1)}(w,\theta;x)=f^{(t)}(w,\theta;x)+2^{t}\lambda\left\|w-\bar{w}_{t}\right\|^{2}-2^{t}\lambda\left\|\theta-\bar{\theta}_{t}\right\|^{2}\) for the subsequent round. Notice that each subsequent objective is easier in the sense that the strong convexity parameter is larger. Our analysis will leverage the fact that approximate solutions to intermediate objectives do not need to obtain good bounds on the strong gap for the regularization parameter to be increased. This is in contrast to, for example, the _iterative_ regularization technique of [24], which finds \([w,\theta]\) that satisfies a near optimal (weak) gap bound before adding noise to only one of these parameters (depending on whether the primal or dual component of the solution is being obtained).

**Empirical Subroutine.** Recursive regularization utilizes a subroutine, \(\mathcal{A}_{\mathsf{emp}}\), which is roughly an approximate empirical saddle point solver. In addition to a dataset and Lipschitz loss function, \(\mathcal{A}_{\mathsf{emp}}\) takes as input an initial point and a bound, \(\hat{D}\), on the expected distance between the initial point and the empirical saddle point.
At round \(t\in[T]\) this distance is set to be \(\frac{B}{2^{t}}\), in order to obtain increasingly strong accuracy guarantees for each subproblem. Note also it can be verified that for all \(t\in[T]\), \(f^{(t)}\) is \(O(L)\)-Lipschitz due to the scaling of the regularization. Specifically, the accuracy guarantee of interest is the following. **Definition 2** (\(\hat{\alpha}\)-relative accuracy).: _Given a dataset \(S^{\prime}\in\mathcal{X}^{n^{\prime}}\), \(L^{\prime}\)-Lipschitz loss \(f^{\prime}\), and an initial point \([w^{\prime},\theta^{\prime}],\) we say that \(\mathcal{A}_{\mathsf{emp}}\) satisfies \(\hat{\alpha}\)-relative accuracy w.r.t. the empirical saddle point \([w_{S^{\prime}}^{*},\theta_{S^{\prime}}^{*}]\) of \(F_{S^{\prime}}(w,\theta)=\frac{1}{n^{\prime}}\sum_{x\in S^{\prime}}f^{\prime}(w,\theta;x)\) if whenever \(\mathbb{E}\left[\|[w^{\prime},\theta^{\prime}]-[w_{S^{\prime}}^{*},\theta_{S^{\prime}}^{*}]\|\right]\leq\hat{D}\), the output, \([\bar{w},\bar{\theta}]\), of \(\mathcal{A}_{\mathsf{emp}}\) satisfies \(\mathbb{E}\left[F_{S^{\prime}}(\bar{w},\theta_{S^{\prime}}^{*})-F_{S^{\prime}}(w_{S^{\prime}}^{*},\bar{\theta})\right]\leq\hat{D}\hat{\alpha}\)._ The relative accuracy guarantee for \(\mathcal{A}_{\mathsf{emp}}\) differs from the more standard gap guarantee, and is not necessarily implied by a bound on the empirical gap. The motivation for this notion of accuracy is twofold. First, when the loss function is additionally SC/SC, this guarantee is sufficient to provide a bound on the distance between the _output_ of \(\mathcal{A}_{\mathsf{emp}}\) and the saddle point, which will play a crucial role in our convergence proof for Algorithm 1. Second, while it is certainly true that a bound on the empirical gap implies the same bound on \(\mathbb{E}\left[F_{S}(\bar{w},\theta)-F_{S}(w,\bar{\theta})\right]\) for any given \([w,\theta]\), it is not necessarily the case that the gap itself enjoys a bound that is proportional to the initial distance to the saddle point. The reason is that the gap function is defined by a supremum that is taken w.r.t. the whole feasible set \(\mathcal{W}\times\Theta\), and thus the information of the evaluation of the objective w.r.t. particular points is lost. However, it is usually the case that saddle point solvers provide a bound of the form \(F_{S}(\bar{w},\theta)-F_{S}(w,\bar{\theta})\leq\|[w,\theta]-[w^{\prime},\theta^{\prime}]\|\,\hat{\alpha}\), for all \([w,\theta]\in\mathcal{W}\times\Theta\), and some initial point \([w^{\prime},\theta^{\prime}]\in\mathcal{W}\times\Theta\). Algorithms such as the proximal point method, extragradient method, and SGDA (with appropriately tuned learning rate) satisfy this condition, and thus satisfy the condition for relative accuracy [12, 13, 10].

**Guarantees of Recursive Regularization.** Given such an algorithm, recursive regularization achieves the following guarantee. **Theorem 1**.: _Let \(\mathcal{A}_{\mathsf{emp}}\) satisfy \(\hat{\alpha}\)-relative accuracy for any \((25L)\)-Lipschitz loss function and dataset of size \(n^{\prime}=\frac{n}{\log(n)}\). Then Algorithm 1, run with \(\mathcal{A}_{\mathsf{emp}}\) as a subroutine, satisfies_ \[\mathrm{Gap}(\mathcal{R})=O\left(\log(n)B\hat{\alpha}+\frac{\log^{3/2}(n)BL}{\sqrt{n}}\right).\] Recall that \(B\) is a bound on the diameter of the constraint set. In the following, we will sketch the proof of this theorem and highlight key lemmas. We defer the full proof to Appendix B.2. For simplicity, let us here consider the case where \(\hat{\alpha}=0\).
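Before turning to the proof, the following minimal Python sketch may help fix the structure of the recursive regularization loop. The interface of \(\mathcal{A}_{\mathsf{emp}}\), the function names, and the bookkeeping are illustrative assumptions on our part; the paper's Algorithm 1 is the authoritative description.

```python
import numpy as np

def recursive_regularization(S, g, A_emp, w0, theta0, B, lam, T):
    """Schematic sketch of the recursive-regularization loop (Algorithm 1).

    g(w, theta, x) returns the saddle operator of the base loss f;
    A_emp(batch, g_t, init, D_hat) is a black-box empirical saddle point
    solver assumed to satisfy the relative-accuracy guarantee of Definition 2.
    """
    batches = np.array_split(np.asarray(S), T)   # disjoint batches S_1, ..., S_T
    w, theta = w0, theta0
    anchors = []  # centers (weight 2^t, w_bar_t, theta_bar_t) of past regularizers

    for t in range(1, T + 1):
        def g_t(w_, th_, x, snap=tuple(anchors)):
            gw, gth = g(w_, th_, x)
            # saddle operator of f^(t): add gradients of the accumulated
            # regularizers 2^r * lam * (||w - w_r||^2 - ||theta - theta_r||^2)
            for wt, wb, thb in snap:
                gw = gw + 2.0 * wt * lam * (w_ - wb)
                gth = gth - 2.0 * wt * lam * (th_ - thb)
            return gw, gth

        # distance bound D_hat = B / 2^t shrinks by a factor of 2 each round
        w, theta = A_emp(batches[t - 1], g_t, (w, theta), B / 2 ** t)
        anchors.append((2 ** t, w, theta))  # next objective recenters here
    return w, theta
```

Here `lam` plays the role of \(\lambda=\Omega(\frac{L}{B\sqrt{n^{\prime}}})\) from the analysis; each round runs on a fresh disjoint batch, which is what makes the privacy argument of Theorem 2 a parallel composition.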
A crucial aspect of our proof is that we avoid the need to bound the strong gap of the actual iterates, \(\left\{\bar{w}_{t}\right\}_{t=1}^{T-1}\). Instead, we bound the strong gap of the _expected_ iterates, where the expectation is with respect to \(S_{t}\). More concretely, consider some \(t\in[T]\) and let \(\mathcal{B}\) be the algorithm which on input \([\bar{w}_{t-1},\bar{\theta}_{t-1}]\) outputs \(\mathop{\mathbb{E}}\limits_{S_{t}}\left[\mathcal{A}_{\mathsf{emp}}(S_{t},f^{t},[\bar{w}_{t-1},\bar{\theta}_{t-1}],\frac{B}{2^{t}})\right]\). Note \(\mathcal{B}\) is deterministic and data independent. As a result, it is possible to prove bounds on the strong gap of \(\mathcal{B}\). **Lemma 2**.: _Let \(S\sim\mathcal{D}^{n}\). For any \(\Delta\)-uniform argument stable algorithm \(\mathcal{A}\), it holds that_ \[\widehat{\mathrm{Gap}}\left(\mathop{\mathbb{E}}\limits_{\mathcal{A},S}\left[\mathcal{A}_{w}(S)\right],\mathop{\mathbb{E}}\limits_{\mathcal{A},S}\left[\mathcal{A}_{\theta}(S)\right]\right)\leq\mathrm{Gap}_{\mathsf{weak}}(\mathcal{A})\leq\mathop{\mathbb{E}}\limits_{\mathcal{A},S}\left[\mathrm{Gap}_{S}(\mathcal{A})\right]+\Delta L.\] The proof follows straightforwardly from an application of Jensen's inequality and the stability-implies-generalization result for the weak gap [1, Theorem 1]. We give full details in Appendix B.1. Note that, for this discussion, the LHS of the above is equal to \(\mathrm{Gap}(\mathcal{B})\) when we apply this lemma to the data batch \(S_{t}\) and subroutine \(\mathcal{A}_{\mathsf{emp}}\). In fact, running \(\mathcal{B}\) is infeasible. Instead, we show that the output of \(\mathcal{A}_{\mathsf{emp}}\) is close to the output of \(\mathcal{B}\). This in turn can be accomplished using the fact that bounded stability implies bounded variance. Concretely, we use the vector valued version of McDiarmid's inequality. **Lemma 3**.: _[_12_, Lemma 6]_5 _Let \(\mathcal{A}\) be deterministic and \(\Delta\)-uniform argument stable with respect to \(S\sim\mathcal{D}^{n}\). Then its output satisfies \(\mathop{\mathbb{E}}\nolimits\left[\left\|\mathcal{A}(S)-\mathop{\mathbb{E}}\nolimits_{\hat{S}\sim\mathcal{D}^{n}}\left[\mathcal{A}(\hat{S})\right]\right\|^{2}\right]\leq n\Delta^{2}\)._ Footnote 5: Although stated therein for the distance, the last step of their proof shows a squared distance bound can be obtained. Observe that the exact empirical saddle point is a deterministic quantity conditioned on the randomness of the \(t\)'th empirical objective. Using the fact that \((2^{t+1}\lambda)\)-regularization implies \(\left(\frac{L}{2^{t}\lambda n^{\prime}}\right)\)-stability of the empirical saddle point in conjunction with the above lemma, we obtain a (conditional) variance bound of \(\frac{L^{2}}{2^{2t}\lambda^{2}n^{\prime}}\). Under the setting of \(\lambda=\Omega(\frac{L}{B\sqrt{n^{\prime}}})\), we can ultimately prove that the distance between the output of \(\mathcal{A}_{\mathsf{emp}}\) and \(\mathcal{B}\) (at round \(t\)) is \(O(\frac{B}{2^{t}})\). Recalling that the strong gap of \(\mathcal{B}\) with respect to \(F_{\mathcal{D}}^{(t)}(w,\theta):=\mathop{\mathbb{E}}\nolimits_{x\sim\mathcal{D}}[f^{(t)}(w,\theta;x)]\) is at most \(\Delta L\) with \(\Delta=\frac{L}{2^{t}\lambda n^{\prime}}\) by Lemma 2, and \(F_{\mathcal{D}}^{(t)}\) is \((2^{t+1}\lambda)\)-SC/SC, the output of \(\mathcal{B}\) must in turn be close to the population saddle point.
Specifically, this distance is also bounded as \(\left(\frac{\Delta L}{2^{t}\lambda}\right)^{1/2}=\frac{L}{2^{t}\lambda\sqrt{n^{\prime}}}=O(\frac{B}{2^{t}})\). Thus we ultimately have that the distance between \([\bar{w}_{t},\bar{\theta}_{t}]\) and the population saddle point of \(F_{\mathcal{D}}^{(t)}\), \([w_{t}^{*},\theta_{t}^{*}]\), satisfies \(\mathbb{E}\left[\big{\|}[\bar{w}_{t},\bar{\theta}_{t}]-[w_{t}^{*},\theta_{t}^{*}]\big{\|}\right]=O(\frac{B}{2^{t}})\). These ideas also lead to a bound \(\mathbb{E}\left[\big{\|}[w_{t+1}^{*},\theta_{t+1}^{*}]-[\bar{w}_{t},\bar{\theta}_{t}]\big{\|}\right]=O(\frac{B}{2^{t}})\), although the argument in this case is more technical and thus deferred to the full proof. The upshot of this analysis is that as the level of regularization increases, the distance of the iterates to their respective population minimizers decreases in kind. One consequence of this fact is that \(\big{\|}[\bar{w}_{T},\bar{\theta}_{T}]-[w_{T}^{*},\theta_{T}^{*}]\big{\|}=\tilde{O}\left(\frac{B}{\sqrt{n}}\right)\), and thus by the Lipschitzness of the gap function the output of recursive regularization has a gap bound close to that of \([w_{T}^{*},\theta_{T}^{*}]\). Turning now towards the utility of \([w_{T}^{*},\theta_{T}^{*}]\), using the fact that \(F_{\mathcal{D}}\) is convex-concave we have \[\widehat{\mathrm{Gap}}(w_{T}^{*},\theta_{T}^{*})\leq\max_{w^{\prime}\in\mathcal{W},\theta^{\prime}\in\Theta}\left\{\langle G_{\mathcal{D}}(w_{T}^{*},\theta_{T}^{*}),[w_{T}^{*},\theta_{T}^{*}]-[w^{\prime},\theta^{\prime}]\rangle\right\}.\] Further, an expression for \(G_{\mathcal{D}}\) can be obtained using the definition of \(F_{\mathcal{D}}^{(T)}\), \[G_{\mathcal{D}}(w_{T}^{*},\theta_{T}^{*})=G_{\mathcal{D}}^{(T)}(w_{T}^{*},\theta_{T}^{*})-\lambda\sum_{t=0}^{T-1}2^{t}([w_{T}^{*},-\theta_{T}^{*}]-[\bar{w}_{t},-\bar{\theta}_{t}]).\] Plugging the latter into the former and using the triangle inequality and the fact that \([w_{T}^{*},\theta_{T}^{*}]\) is optimal for \(F_{\mathcal{D}}^{(T)}\), one can obtain a bound on the gap in terms of the distances discussed previously. \[\mathbb{E}\left[\widehat{\mathrm{Gap}}(w_{T}^{*},\theta_{T}^{*})\right]\leq B\cdot\mathbb{E}\left[\lambda\sum_{t=0}^{T-1}2^{t}\left\|[w_{T}^{*},\theta_{T}^{*}]-[\bar{w}_{t},\bar{\theta}_{t}]\right\|\right]\] \[\stackrel{{(i)}}{{\leq}}B\cdot\mathbb{E}\left[\lambda\sum_{t=0}^{T-1}2^{t}\left(\left\|[w_{t+1}^{*},\theta_{t+1}^{*}]-[\bar{w}_{t},\bar{\theta}_{t}]\right\|+\sum_{r=t+1}^{T-1}\big{\|}[w_{r+1}^{*},\theta_{r+1}^{*}]-[w_{r}^{*},\theta_{r}^{*}]\big{\|}\right)\right]\] \[\stackrel{{(ii)}}{{=}}O\left(B\sum_{t=0}^{T-1}2^{t}\lambda\mathbb{E}\left[\big{\|}[w_{t+1}^{*},\theta_{t+1}^{*}]-[\bar{w}_{t},\bar{\theta}_{t}]\big{\|}\right]+B\sum_{t=1}^{T-1}2^{t}\lambda\mathbb{E}\left[\big{\|}[\bar{w}_{t},\bar{\theta}_{t}]-[w_{t}^{*},\theta_{t}^{*}]\big{\|}\right]\right)\] \[=O\left(B\sum_{t=0}^{T-1}2^{t}\lambda\frac{B}{2^{t}}+B\sum_{t=1}^{T-1}2^{t}\lambda\frac{B}{2^{t}}\right)=O\left(T\lambda B^{2}\right)=O\left(\frac{\log_{2}(n)BL}{\sqrt{n^{\prime}}}\right),\] where step \((i)\) comes from a triangle inequality and step \((ii)\) is obtained from a series of algebraic manipulations which are expanded upon in the full proof. Finally, in the case where \(\hat{\alpha}>0\), extra steps are required to bound the distance of the output of \(\mathcal{A}_{\text{emp}}\) to the exact saddle point of \(F_{S}^{(t)}(w,\theta):=\frac{1}{n^{\prime}}\sum_{x\in S_{t}}f^{(t)}(w,\theta;x)\).
This is accomplished using the SC/SC property of \(F_{S}^{(t)}\) and the \(\hat{\alpha}\)-relative accuracy guarantee of \(\mathcal{A}_{\text{emp}}\).

## 4 Optimal Strong Gap Rate for Nonsmooth DP-SSP

With the guarantees of recursive regularization established, what remains is to show there exist \((\epsilon,\delta)\)-DP algorithms which achieve a sufficient accuracy on the empirical objective. Note this suffices to make the entire recursive regularization algorithm private. **Theorem 2**.: _Let \(\mathcal{A}_{\text{emp}}\) used in Algorithm 1 be \((\epsilon,\delta)\)-DP. Then Algorithm 1 is \((\epsilon,\delta)\)-DP._ This follows simply from post-processing and the parallel composition theorem for differential privacy, since each run of \(\mathcal{A}_{\text{emp}}\) is performed on a disjoint part of the dataset. In the non-smooth setting, one can obtain optimal rates on the empirical gap using noisy stochastic gradient descent ascent (noisy SGDA). We give this algorithm in detail in Appendix C.2. More briefly, noisy SGDA starts at \([w_{0},\theta_{0}]\in\mathcal{W}\times\Theta\) and takes parameters \(T,\eta>0\), where \(T\) is the number of iterations and \(\eta\) is the learning rate. New iterates are obtained via the update rule \([w_{t+1},\theta_{t+1}]=[w_{t},\theta_{t}]-\frac{\eta}{|M_{t}|}\sum_{x\in M_{t}}g(w_{t},\theta_{t};x)+\xi_{t}\), where \(\xi_{0},...,\xi_{T-1}\) are i.i.d. Gaussian noise vectors and \(M_{t}\) is a minibatch sampled uniformly with replacement from \(S\). The algorithm then returns the average iterate, \(\frac{1}{T}\sum_{t=0}^{T-1}[w_{t},\theta_{t}]\). Noisy SGDA can be used to obtain the following result. **Lemma 4**.: _There exists an \((\epsilon,\delta)\)-DP algorithm which satisfies \(\hat{\alpha}\)-relative accuracy with \(\hat{\alpha}=O\left(\frac{\log(n)L\sqrt{d\log(1/\delta)}}{n\epsilon}\right)\) and runs in \(O\left(\min\left\{\frac{n^{2}\epsilon^{1.5}}{\log^{2}(n)\sqrt{d\log(1/\delta)}},\frac{n^{3/2}}{\log^{3/2}(n)}\right\}\right)\) gradient evaluations._ Applying Theorem 1 then yields a near optimal rate on the strong gap. **Corollary 1**.: _There exists an algorithm, \(\mathcal{R}\), which is \((\epsilon,\delta)\)-DP, has gradient evaluations bounded by \(O\big{(}\min\left\{\frac{n^{2}\epsilon^{1.5}}{\log(n)\sqrt{d\log(1/\delta)}},\frac{n^{3/2}}{\sqrt{\log(n)}}\right\}\big{)}\) and satisfies_ \[\mathrm{Gap}(\mathcal{R})=O\left(\frac{\log^{3/2}(n)BL}{\sqrt{n}}+\frac{\log^{2}(n)BL\sqrt{d\log(1/\delta)}}{n\epsilon}\right).\]

## 5 One Pass Algorithm for the Smooth Case

In this section, we present a DP-SSP algorithm for smooth objectives with optimal rates that makes only \(n\) saddle operator evaluations, i.e. it runs in linear time. At a high level, Algorithm 2 can be viewed as an implementation of noisy SGDA, but where the stochastic saddle operator estimates are computed using a more involved estimation procedure, given by the tree structure. Each node in the tree is associated with a parameter vector and a stochastic saddle operator estimate. The iteration over the tree is given by \(\mathrm{DFS}\left[D\right]\), which traverses a binary tree of depth \(D\), expanding left nodes first. Traversing from node \(s\) to \(s^{\prime}\) adds "0" to the binary string \(s\) if the left node is expanded or "1" if the right node is expanded.
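Since Algorithm 2 is, at its core, noisy SGDA run with a different saddle operator oracle, it may help to first make the basic noisy SGDA template from Section 4 concrete before continuing with the tree. The following is a minimal sketch under assumed choices: the step size, batch size, and noise scale are placeholders that a private implementation must calibrate to the Gaussian mechanism as in Appendix C.2, and the projection onto \(\mathcal{W}\times\Theta\) is kept implicit in the paper's description.

```python
import numpy as np

def noisy_sgda(S, g, project, z0, eta, T, batch_size, sigma, rng):
    """Minimal sketch of noisy SGDA (illustrative interface, not the paper's code).

    g(z, x) returns the saddle operator [grad_w f, -grad_theta f] at z = [w, theta];
    sigma is a placeholder noise scale for the Gaussian mechanism.
    """
    z = np.asarray(z0, dtype=float)
    iterates = []
    for _ in range(T):
        iterates.append(z.copy())                       # average over z_0, ..., z_{T-1}
        idx = rng.integers(0, len(S), size=batch_size)  # minibatch, with replacement
        ghat = np.mean([g(z, S[i]) for i in idx], axis=0)
        xi = rng.normal(0.0, sigma, size=z.shape)       # i.i.d. Gaussian noise xi_t
        z = project(z - eta * ghat + xi)                # z_{t+1} = z_t - eta*ghat + xi_t
    return np.mean(iterates, axis=0)                    # return the average iterate
```

Algorithm 2 keeps this update but replaces the minibatch average `ghat` with the variance-reduced, tree-based estimate \(\nabla_{s}\) described next.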
The traversal from \(\varnothing\) to a leaf node \(s\) will generate an estimate of \(G_{\mathcal{D}}\) at that leaf node, at which point a noisy SGDA step is performed; the result is passed on to the next node in the depth first search. Note this means each parameter vector at a non-leaf node is equal to the parameter vector at some leaf node. The privacy guarantees of the algorithm are easily obtained. The number of samples used by the algorithm is at most \(bD=n\), and thus the algorithm is one pass. Privacy then follows easily from the parallel composition theorem for DP and the privacy guarantees of the Gaussian mechanism. The privacy guarantee also does not rely on convexity-concavity, which is desirable in practice [10]. **Theorem 3**.: _Let \(f\) be \(\beta\)-smooth and \(L\)-Lipschitz. Then Algorithm 2 is \((\epsilon,\delta)\)-DP._ The utility guarantees are more involved, with the technical challenge being obtaining a bound on the variance-reduced saddle operator estimates. This variance reduction analysis is somewhat atypical in the saddle point setting due to the fact that last-iterate convergence results for stochastic objectives are not known, whereas the benefit of variance reduction is generally applicable to the last iterate. However, in the private convex-concave setting, variance reduction can still serve the purpose of obtaining gradient estimates with error \(\tilde{O}(L^{2})\) instead of \(\tilde{O}(dL^{2})\). With this in mind, we have the following lemma. **Lemma 5**.: _Assume \(\frac{\log(n)\sqrt{d\log(1/\delta)}}{n\epsilon}\leq 1\). Then for any leaf node \([w_{s},\theta_{s}]\) it holds that \(\mathbb{E}\left[\nabla_{s}\right]=G_{\mathcal{D}}(w_{s},\theta_{s})\) and \(\mathbb{E}\left[\left\|G_{\mathcal{D}}(w_{s},\theta_{s})-\nabla_{s}\right\|^{2}\right]\leq 3\log(n)L^{2}\)._ Using the tree structure to obtain variance-reduced gradient (or in our case saddle operator) estimates is a standard approach by now in DP optimization [1, 1, 1, 2], and so, here, we provide intuition for this lemma and defer the full proof to Appendix D.2. We remark that, in comparison to previous works, we remove the outer loop typically seen in the tree algorithm, slightly simplifying the algorithm. The saddle operator estimates use the fact that (unconditionally) unbiased estimates of the saddle operator can be obtained via a sum of saddle operator variation estimates [18]. Specifically, let \(s(i)\) denote the binary string formed from the first \(i\) components of the binary string \(s\) (e.g., for \(s=01110\), \(s(2)=01\)). Define \(s(0)=\varnothing\). We have for any leaf node \(s\) \[G_{\mathcal{D}}(w_{s(D)},\theta_{s(D)})=\mathop{\mathbb{E}}_{x\sim\mathcal{D}}\left[g(w_{\varnothing},\theta_{\varnothing};x)\right]+\sum_{j=1}^{D}\mathop{\mathbb{E}}_{x\sim\mathcal{D}}\left[g(w_{s(j)},\theta_{s(j)};x)-g(w_{s(j-1)},\theta_{s(j-1)};x)\right].\] The variance of the saddle operator estimate can then be controlled using smoothness and minibatching, since \(\mathbb{E}\left[\left\|G_{\mathcal{D}}(w_{s(j)},\theta_{s(j)})-G_{\mathcal{D}}(w_{s(j-1)},\theta_{s(j-1)})-\Delta_{s(j)}\right\|^{2}\right]\) is \[\tilde{O}\left(\beta\left(\frac{1}{|S_{s}|}+\frac{d}{|S_{s}|^{2}}\right)\mathbb{E}\left[\left\|[w_{s(j)},\theta_{s(j)}]-[w_{s(j-1)},\theta_{s(j-1)}]\right\|^{2}\right]\right).\] The tree structure specifically allows for more efficient use of samples.
At every leaf node, the tree ensures the "path" to \([w_{s(D)},\theta_{s(D)}]\), that is the vectors \(\left\{[w_{s(j)},\theta_{s(j)}]\right\}_{j=1}^{D}\), contains only \(\log(n)\) points. This is possible because the distance between consecutive nodes decreases geometrically as \(j\) increases, thus allowing the algorithm to form paths such that \(\left\|[w_{\varnothing},\theta_{\varnothing}]-[w_{s},\theta_{s}]\right\|=\Theta(B)\). However, as \(j\)_decreases_, \(|S_{s(j)}|\) increases geometrically, ensuring the overall variance in the gradient estimate remains \(\tilde{O}(L^{2})\). After bounding the error of the saddle operator estimates, the utility of the algorithm is obtained from the convergence of noisy SGDA. **Theorem 4**.: _Assume \(\frac{\log(n)\sqrt{d\log(1/\delta)}}{n\epsilon}\leq 1\) and \(\beta\leq\frac{L}{B}\), and that \(f\) is a convex-concave function which is \(L\)-Lipschitz and \(\beta\)-smooth. Then Algorithm 2, \(\mathcal{T}\), has gradient complexity \(O(n)\) and satisfies \(\mathrm{Gap}(\mathcal{T})=O\Bigg{(}\log(n)BL\left(\frac{1}{\sqrt{n}}+\frac{\sqrt{d\log(1/\delta)}}{n\epsilon}\right)\Bigg{)}\)._ Bounds also hold for \(\beta>\frac{L}{B}\) at the cost of the bound not having the optimal dependence on \(B\) and \(L\). See Appendix D.1 for this more detailed statement and the full proof.

## 6 On the Limitations of Previous Approaches

Prior work on DP-SSPs has largely focused on the weak gap criterion. In this section, we further investigate both the importance of, and the challenges in, bounding the strong gap rather than the weak gap. We start by considering a natural question. Do there exist cases where the strong and weak gap differ substantially? We answer this question affirmatively in the following. **Proposition 1**.: _There exists a function \(f\) with range \([-1,+1]\) and algorithm \(\mathcal{A}\) such that \(\mathrm{Gap}(\mathcal{A})-\mathrm{Gap}_{\mathsf{weak}}(\mathcal{A})=2\)._ Our construction shows that this result holds even for a simple one dimensional bilinear problem. Proof.: Consider the loss function \(f(w,\theta;x)=w\theta\), where \(w,\theta,x\in[-1,1]\). Let \(\mathcal{D}\) be the uniform distribution over \(\{\pm 1\}\). For \(\{x_{1},\ldots,x_{n}\}\sim\mathcal{D}^{n}\) consider the algorithm \(\mathcal{A}\) which outputs \(\bar{w}\) as the mode of the first half of the samples in \(S\) and similarly \(\bar{\theta}\) is set as the mode of the second half of the samples in \(S\)6. Note \(\bar{w}\) and \(\bar{\theta}\) are independent and distributed uniformly over \(\{\pm 1\}\) (under the randomness from \(\mathcal{D}\)). Footnote 6: Without much loss of generality, we assume that \(n\) is divisible by 2 but not by 4, so that the mode of each half of the data are well-defined and belong to \(\{-1,+1\}\). Now, since \(\mathcal{A}\) is a deterministic function of the dataset, the randomness in \(\bar{w},\bar{\theta}\) comes only from \(S\). Thus for the weak gap we have \(\max\limits_{\theta\in[-1,1]}\{\mathbb{E}_{S}\left[\bar{w}\theta\right]\}-\min\limits_{w\in[-1,1]}\{\mathbb{E}_{S}\left[w\bar{\theta}\right]\}\), which evaluates to \(\max_{\theta\in[-1,1]}\{\mathbb{E}_{S}\left[\bar{w}\right]\theta\}-\min_{w\in[-1,1]}\{w\,\mathbb{E}_{S}\left[\bar{\theta}\right]\}=0\).
However, for the strong gap one can see that \(\underset{S}{\mathbb{E}}\left[\max\limits_{\theta\in[-1,1]}\{\bar{w}\theta\}-\min\limits_{w\in[-1,1]}\left\{w\bar{\theta}\right\}\right]=\underset{S}{\mathbb{E}}\left[\left|\bar{w}\right|+\left|\bar{\theta}\right|\right]=2\), where the first equality comes from evaluating \(\theta=\mathsf{sgn}(\bar{w})\) and \(w=-\mathsf{sgn}(\bar{\theta})\) in the maximization and minimization operators. Observe that the generalization error w.r.t. the strong gap of this algorithm is always \(0\) because the loss function does not depend on the random sample from \(\mathcal{D}\). The discrepancy between the gaps instead comes from the fact that having the expectation w.r.t. \(S\) inside the max/min changes the function over which the dual/primal adversary is maximizing/minimizing. Specifically, note here that the weak gap measures the ability of \(\theta\) to maximize the function \(\theta\mapsto\bar{w}\theta\) for \(\bar{w}=0\), but note \(\bar{w}=0\) does not occur for _any_ realization of the dataset \(S\). One might further observe that a key attribute of this construction is the high variance of the parameter vectors. One can show such behavior is in fact necessary to see such a separation; the full proof of the following statement is given in Appendix E.1. **Proposition 2**.: _Let \(\mathcal{A}\) be an algorithm such that \(\mathop{\mathbb{E}}_{\mathcal{A},S}\left[\left\|\mathcal{A}(S)-\mathbb{E}_{\hat{S}\sim\mathcal{D}^{n},\mathcal{A}}\mathcal{A}(\hat{S})\right\|^{2}\right]\leq\tau^{2}\). Then, if \(f\) is \(L\)-Lipschitz, it holds that \(\mathrm{Gap}(\mathcal{A})-\mathrm{Gap}_{\mathsf{weak}}(\mathcal{A})\leq L\tau.\)_

**Tradeoff between Accuracy and Stability.** An additional consequence of Proposition 2 (in conjunction with Lemma 3) is that \(\Delta\)-uniform argument stability implies a \(\sqrt{n}\Delta L\) generalization bound w.r.t. the strong gap that does not rely on smoothness (in contrast to the \(\sqrt{L\beta\Delta}\) bound of [1], which does). We leave determining tight bounds for stability implies generalization on the strong gap as an interesting direction for future work. In this section, however, we show that stronger upper bounds are likely necessary to obtain a more direct algorithm for DP-SSPs. In fact, our key result holds even for empirical risk minimization (ERM) problems. That is, for \(f:\mathcal{W}\times\mathcal{X}\mapsto\mathbb{R}\) and \(S\in\mathcal{X}^{n}\), consider the problem of minimizing the excess empirical risk \(F_{S}(w)-\min_{w\in\mathcal{W}}\left\{F_{S}(w)\right\}\), where \(F_{S}(w)=\frac{1}{n}\sum_{x\in S}f(w;x)\). We have the following. **Theorem 5**.: _For any (possibly randomized) algorithm \(\mathcal{A}:\mathcal{X}^{n}\mapsto\mathcal{W}\) which is \(\Delta\)-uniform argument stable, there exists a \(0\)-smooth \(L\)-Lipschitz loss function, \(f:\mathcal{W}\times\mathcal{X}\mapsto\mathbb{R}\), and dataset \(S\in\mathcal{X}^{n}\) such that \(\mathbb{E}[F_{S}(\mathcal{A}(S))-\min_{w\in\mathcal{W}}\left\{F_{S}(w)\right\}]=\Omega\left(\frac{B^{2}L}{\Delta n}\right)\) provided \(\Delta\geq\frac{B}{\sqrt{\min\{n,d\}}}\)._ The proof can be found in Appendix E.2. Lemma 1 shows this bound is tight for both ERM and empirical saddle point problems. Generalization bounds are only useful when it is possible to obtain good empirical performance. Thus, the implication of this bound is that generalization error which is \(O(\Delta)\) is necessary to obtain the optimal \(O\left(1/\sqrt{n}\right)\) statistical rate.
To elaborate, let \(H(\Delta)\) characterize some (potentially suboptimal) generalization bound for \(\Delta\)-stable algorithms and assume \(H(\Delta)=\omega(\Delta)\). To then bound the sum of empirical risk and generalization error, Theorem 5 implies \(F_{S}(\mathcal{A}(S))-F_{S}(w^{*})+H(\Delta)=\Omega\left(\frac{1}{\Delta n}+H(\Delta)\right)=\omega\left(\frac{1}{\Delta n}+\Delta\right).\) Note the RHS is asymptotically larger than \(\frac{1}{\sqrt{n}}\) (i.e. not optimal) for any \(\Delta\).

## Acknowledgements

RB's and MM's research is supported by NSF CAREER Award 2144532 and NSF Award AF-1908281. CG's research was partially supported by INRIA Associate Teams project, FONDECYT 1210362 grant, ANID Anillo ACT210005 grant, and National Center for Artificial Intelligence CENIA FB210017, Basal ANID.
2304.07734
Stability of galaxies across morphological sequence
We investigate the stability of nearby disc galaxies and galaxies at redshift ($z$) equal to 4.5. We explore the connection between the stability parameter $(Q_{RW})$, star formation rate ($SFR$), gas fraction $(f^{Gas})$, and the time scale for growth of gravitational instabilities $(\tau)$. We find that, despite differences in morphology $91$ $\%$ of the nearby galaxies have a minimum value of stability parameter ($Q^{Min}_{RW}$) greater than $1$ indicating stability against the growth of axisymmetric instabilities. The spirals in our sample have higher median star formation rate, lower median $Q_{RW}$, a lower $f^{Gas}$ and small time scale for growth of gravitational instabilities than irregular galaxies. We find that the gravitational instabilities in spirals convert a large fraction of gas into stars quickly, depleting the gas reservoirs. On the other hand, star formation occurs more gradually over longer timescales in irregulars with a higher gas fraction. We then compare the stability of the nearby galaxies with galaxies at $z\,=\,4.5$. We find that net stability levels in the nearby galaxies and the galaxies at $z\,=\,4.5$ are primarily driven by the stellar disc suggesting the presence of an inherent mechanism that self-regulates the stability. Finally, upon removing the contribution of the dark matter to the total potential, the median $Q_{RW}$ for the nearby galaxies and galaxies at $z \,= \,4.5$ remains unchanged indicating that the baryons can self-regulate the stability levels, at least in a statistical sense.
K. Aditya
2023-04-16T09:21:24Z
http://arxiv.org/abs/2304.07734v1
# Stability of galaxies across morphological sequence ###### Abstract We investigate the stability of nearby disc galaxies and galaxies at redshift (\(z\)) equal to 4.5. We explore the connection between the stability parameter (\(Q_{RW}\)), star formation rate (\(SFR\)), gas fraction (\(f^{Gas}\)), and the time scale for growth of gravitational instabilities (\(\tau\)). We find that, despite differences in morphology, 91 % of the nearby galaxies have a minimum value of the stability parameter (\(Q_{RW}^{Min}\)) greater than 1, indicating stability against the growth of axisymmetric instabilities. The spirals in our sample have a higher median star formation rate, a lower median \(Q_{RW}\), a lower \(f^{Gas}\), and a smaller time scale for growth of gravitational instabilities than irregular galaxies. We find that the gravitational instabilities in spirals convert a large fraction of gas into stars quickly, depleting the gas reservoirs. On the other hand, star formation occurs more gradually over longer timescales in irregulars with a higher gas fraction. We then compare the stability of the nearby galaxies with galaxies at \(z=4.5\). We find that net stability levels in the nearby galaxies and the galaxies at \(z=4.5\) are primarily driven by the stellar disc, suggesting the presence of an inherent mechanism that self-regulates the stability. Finally, upon removing the contribution of the dark matter to the total potential, the median \(Q_{RW}\) for the nearby galaxies and galaxies at \(z=4.5\) remains unchanged, indicating that the baryons can self-regulate the stability levels, at least in a statistical sense. keywords: Physical data and processes: instabilities - galaxies: kinematics and dynamics - galaxies: structure - galaxies: star formation

## 1 Introduction

Observations of galaxies reveal a wide range of morphological features and physical properties, which are encoded into Hubble's classification scheme of galaxies. Different physical processes like tidal interaction, galaxy mergers, fragmentation of the gas disc, turbulence, and feedback play an important role in shaping the observed morphology of the galaxies. The tidal perturbations due to a passing satellite galaxy can alter the shape of both the target galaxy and the satellite, leading to the formation of tidal tails and spiral arms (Toomre and Toomre, 1972). Also, the tidal perturbation arising from the high-speed head-on collision of a companion galaxy with a target disc galaxy is responsible for ring formation and enhancement of star formation activity (Renaud et al., 2018). Apart from direct tidal encounters, major and minor mergers can transform the spiral galaxies into elliptical galaxies (Barnes and Hernquist, 1992; Mihos and Hernquist, 1994; Bournaud et al., 2007; Pedrosa et al., 2014). Further, the transformation of the disc galaxies into elliptical galaxies is accompanied by the quenching of the star formation activity (Martig et al., 2009). Scannapieco et al. (2009) consider the effect of various physical processes like feedback, cooling of the gas, and a multi-phase treatment of the interstellar medium (ISM) in hydrodynamical simulations. They find that the survival of disc galaxies in the \(\Lambda\)CDM scenario is strongly affected by major mergers and disc instabilities. Mihos et al. (1995), using numerical simulations, showed that mergers of disc galaxies with low-mass satellites can trigger the formation of strong bars and also give rise to bending instabilities.
High-resolution observations of the ISM reveal that the ISM is inherently turbulent (Leroy et al., 2016; Brunetti et al., 2021; Leroy et al., 2021; Meidt et al., 2023). Bacchini et al. (2020) show that feedback from supernova explosions predominantly drives the turbulence observed in the ISM. Agertz et al. (2015) study hydro + N-body simulations including stellar feedback, which drives the ISM turbulence. They find that models without stellar feedback show complete disc fragmentation, whereas the feedback-driven models are stable on all scales. The disc fragmentation due to gravitational instabilities is consistent with the earlier theoretical formulation of Goldreich and Lynden-Bell (1965). They consider the stability of a stratified self-gravitating sheet and show that the instabilities can break up the sheet into masses of definite sizes if \(\frac{\pi G\rho}{4\omega^{2}}\) is greater than a number between 0.7 and 1.8, where \(\omega\) is the angular velocity and \(\rho\) is the density of the medium. Furthermore, using the JWST observations of nearby galaxies, Meidt et al. (2023) have shown that the filamentary structures observed in the nearby galaxies are consistent with the models of disc fragmentation driven by Jean's instabilities. Studies by Wang & Silk (1994) and Pandey & Van De Bruck (1999) have shown that axisymmetric gravitational instabilities primarily drive star formation activity. Li et al. (2005) show that the formation of the star clusters in numerical simulations results from axisymmetric gravitational instability quantified by the Toomre Q parameter (Toomre, 1964). Also, observationally, it has been seen that the number of young stellar objects increases with decreasing Toomre Q (Gruendl & Chu, 2009; Dawson et al., 2013). Toomre instabilities typically are 2D instabilities, but a recent study by Meidt (2022) shows that the fragmentation of the molecular clouds is brought about by 3D instabilities. Gravitational instabilities are thus at the heart of different physical processes that dictate the observed structure, morphology, and physical properties of the galaxies. The observed physical properties of galaxies, i.e., the surface brightness profile \(\Sigma\), the epicyclic frequency \(\kappa\), and the radial velocity dispersion \(\sigma\), constitute essential ingredients for quantifying the stability/instability levels of the disc galaxies against the growth of axisymmetric gravitational instabilities. The condition for quantifying the stability of a galaxy against the growth of axisymmetric instabilities derived by Toomre (1964) is written as \[Q=\frac{\kappa\sigma}{\pi G\Sigma}. \tag{1}\] A value of \(Q>1\) indicates that the galaxy is stable against the growth of axisymmetric instabilities. The \(Q\) parameter proposed by Toomre has been modified by Jog (1996); Rafikov (2001); Romeo & Wiegert (2011); Romeo & Falstad (2013) to include the effect of multiple self-gravitating components. The stability parameter has also been further modified to include physical processes like the effects of turbulence (Hoffmann & Romeo, 2012; Agertz et al., 2015) and the three-dimensional structure of the ISM (Meidt, 2022). The multi-component stability parameter, apart from accounting for the self-gravity of the multiple mass components, also allows us to discern if the gravitational instabilities are driven by stars or by gas.
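For a sense of scale, consider equation (1) evaluated with illustrative round numbers broadly typical of the solar neighbourhood (quoted here purely for illustration, not measurements used in this work): \(\kappa\approx 36\) km s\({}^{-1}\) kpc\({}^{-1}\), \(\sigma\approx 30\) km s\({}^{-1}\), and \(\Sigma\approx 50\,M_{\odot}\,\)pc\({}^{-2}\). With \(G\approx 4.3\times 10^{-3}\) pc (km s\({}^{-1}\))\({}^{2}\,M_{\odot}^{-1}\),

\[Q=\frac{\kappa\sigma}{\pi G\Sigma}\approx\frac{(36)(30)\ {\rm km^{2}\,s^{-2}\,kpc^{-1}}}{\pi\times(4.3\times 10^{-3})\times 50\ {\rm km^{2}\,s^{-2}\,pc^{-1}}}\approx\frac{1080}{675}\approx 1.6,\]

i.e. a disc hovering only moderately above the \(Q=1\) threshold for stability against axisymmetric perturbations.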
The \(Q\) parameter derived by Jog (1996); Rafikov (2001); Romeo & Wiegert (2011); Romeo & Falstad (2013) provides a simple method to quantify the stability levels in the galaxies. Romeo & Mogotsi (2017) analyze the stability of a sample of 12 nearby star-forming galaxies from the THINGS survey (Leroy et al., 2008) using the multi-component stability parameter (\(Q_{RW}\)) derived by Romeo & Falstad (2013). They find that the median value of \(Q_{RW}\) for the galaxies lies between 2 and 3, with a global median equal to 2.2. Also, using a sample of 34 spiral galaxies, Romeo & Mogotsi (2018) point out that the mass-weighted average value of the stellar stability parameter \(<Q_{*}>\) is constant across spiral galaxies of all morphological types (Sa - Sd). In this work, we will explore the role of gravitational instabilities in galaxies with diverse physical properties and morphologies by addressing the following questions:

1. How do the stability levels quantified by the two-component stability parameter \(Q_{RW}\) vary as a function of the radius and morphology?
2. Out of the stars and the gas, which is the dominant mass component driving the stability levels in the galaxies?
3. What is the minimum value of the stability parameter (\(Q_{RW}^{Min}\)) for the nearby galaxies? Are the nearby galaxies stable against non-axisymmetric instabilities and gas dissipation?
4. How do the star formation rates in galaxies which are susceptible to the growth of axisymmetric instabilities (\(Q_{RW}^{Min}<1\)) compare with those in galaxies which are stable against the growth of axisymmetric instabilities (\(Q_{RW}^{Min}>1\))?
5. Where does the galaxy become unstable, i.e., attain a minimum value of \(Q_{RW}\)? Are the most unstable regions driven by stars or by gas?
6. How does the stability in nearby galaxies compare with the galaxies observed at high redshift (\(z=4.5\))?
7. What is the role of dark matter in regulating stability? Can baryons self-regulate the stability?
8. What is the connection between \(Q_{RW}\), gas fraction (\(f^{Gas}\)) and star formation rate (\(SFR\)) with the time scale for growth of instabilities (\(\tau\))?

In order to answer the above questions, we calculate the stability of a sample of 175 nearby galaxies with diverse morphologies and physical properties from the SPARC galaxy catalog (Lelli et al., 2016) using the two-component stability criterion (\(Q_{RW}\)) derived by Romeo & Wiegert (2011). The two-component stability criterion considers the self-gravity of the stars and the gas on an equal footing and includes a correction for the disc thickness. The galaxies in the SPARC catalog span an unprecedented range of morphologies and physical properties, from lenticular galaxies (S0) to blue compact dwarfs (BCDs), rotation velocities (\(\sim 20\) km s\({}^{-1}\) to \(\sim 300\) km s\({}^{-1}\)), luminosities (\(\sim 10^{7}L_{\odot}\) to \(\sim 10^{12}L_{\odot}\)), and gas content \(0.01\leq(M_{HI}/L_{[3.6]})\leq 10\). We use the data available in the SPARC galaxy catalog to homogeneously calculate and benchmark the stability levels in nearby galaxies. We then compare the stability of the nearby galaxies with the cold, rotation-supported galaxies at \(z=4.5\) observed by Rizzo et al. (2020, 2021). Since the two-component stability criterion \(Q_{RW}\) considers the self-gravity of both the stars and gas, it enables us to ascertain the dominant mass component driving the stability levels.
Further, \(Q_{RW}>1\) ensures stability against axisymmetric instabilities, but a higher value of \(Q_{RW}>Q_{critical}\) is needed for stabilizing the disc against non-axisymmetric perturbations and gas dissipation. Griv & Gedalin (2012) find that \(Q_{critical}\approx 2\) is needed for stability against non-axisymmetric instabilities, and further Elmegreen (2011) estimates \(Q_{critical}\approx 2-3\) for stability against gas dissipation. We examine if the nearby galaxies are critically stable by estimating the minimum value of the stability parameter (\(Q_{RW}^{Min}\)). In order to estimate where the galaxy is susceptible to the growth of instabilities, we compute the radii at which \(Q_{RW}\), \(Q_{Stars}\) and \(Q_{Gas}\) attain their minimum values, given by \((R/R_{D})_{Min(Q_{RW})}\), \((R/R_{D})_{Min(Q_{Stars})}\) and \((R/R_{D})_{Min(Q_{Gas})}\) respectively, where \(R_{D}\) is the scalelength corresponding to the exponential surface density of the stellar disc. We then bin the galaxies by their morphological type and measure the stability/instability levels by computing \(Q_{RW}\) as a function of the morphological type. We will then discuss the role of the gas fraction and the dark matter potential in regulating the stability levels. Finally, we note that, with the advent of the James Webb Space Telescope (JWST) and Square Kilometer Array (SKA), it will be possible to study the evolution of gravitational instabilities as a function of both morphological type and redshift. The paper is organized as follows: we discuss the data and methods in §2. We present the results in §3 and discuss the implication of our results in §4. We will finally summarize our results in §5.

## 2 Data & Method

In this work, we consider a sample of 175 galaxies taken from the _Spitzer_ Photometry and Accurate Rotation Curves (SPARC) database1 (Lelli et al., 2016). The sample spans a wide range of morphologies from spirals to lenticulars and blue compact dwarfs, with stellar photometry in the 3.6 \(\mu\)m band and high-quality Hi + H\(\alpha\) rotation curves. The near-infrared band traces the stellar mass distribution very well, as the mass-to-light ratio (\(\gamma^{Star}\)) in 3.6 \(\mu\)m is fairly constant for galaxies with varying masses and morphologies (McGaugh and Schombert, 2014). We adopt a value of \(\gamma^{Star}\) equal to 0.5 following McGaugh and Schombert (2014). Footnote 1: [http://astroweb.cwru.edu/SPARC/](http://astroweb.cwru.edu/SPARC/) In order to compute the stability, we use the two-component stability criterion introduced by Romeo and Wiegert (2011) \[\frac{1}{Q_{RW}}=\begin{cases}\dfrac{W_{\sigma}}{T_{Stars}Q_{Stars}}+\dfrac{1}{T_{Gas}Q_{Gas}}&\text{if }T_{Stars}Q_{Stars}\geq T_{Gas}Q_{Gas}\\ \dfrac{1}{T_{Stars}Q_{Stars}}+\dfrac{W_{\sigma}}{T_{Gas}Q_{Gas}}&\text{if }T_{Stars}Q_{Stars}<T_{Gas}Q_{Gas}\end{cases} \tag{2}\] and the weight function \(W_{\sigma}\) is given by \[W_{\sigma}=\frac{2\sigma_{Stars}\sigma_{Gas}}{\sigma_{Stars}^{2}+\sigma_{Gas}^{2}}. \tag{3}\] The thickness correction is defined as \[T\approx 0.8+0.7\frac{\sigma_{z}}{\sigma_{R}} \tag{4}\] where \(\sigma_{z}\) and \(\sigma_{R}\) are the vertical and the radial velocity dispersions, and \(\sigma_{Stars}\) and \(\sigma_{Gas}\) are the velocity dispersions of the stars and gas, respectively.
\(Q_{Stars}\) and \(Q_{Gas}\) correspond to the Toomre Q of the stellar and the gaseous disc. The Toomre Q parameter is defined as \(Q_{i}=\frac{\kappa\sigma_{i}}{\pi G\Sigma_{i}}\), where \(\kappa\) is the epicyclic frequency, and \(\Sigma_{i}\) and \(\sigma_{i}\) are the surface density and the radial velocity dispersion of either stars or gas. A value of \(Q_{RW}>1\) indicates that the galaxy is stable against the growth of axisymmetric perturbations. The epicyclic frequency \(\kappa\) at a radius R is defined as \[\kappa^{2}(R)=\left(R\frac{d\Omega^{2}(R)}{dR}+4\Omega^{2}(R)\right) \tag{5}\] where \(\Omega\) is the angular frequency defined as \(\Omega^{2}(R)=\frac{1}{R}\frac{d\Phi_{Total}}{dR}=\frac{V_{Rot}^{2}}{R^{2}}\), where \(\Phi_{Total}\) is the total gravitational potential and \(V_{Rot}\) the total rotation velocity. We use the observed rotation curve from the SPARC catalog to compute the value of \(\kappa\). We use the value of the inclination-corrected stellar luminosity as a function of radius from the SPARC database and multiply it by \(\gamma^{Star}\) to derive the stellar surface density profile (\(\Sigma_{Stars}\)). We compute the radial velocity dispersion of the stars following the methods detailed in Leroy et al. (2008); Romeo and Mogotsi (2017) and Villanueva et al. (2021). The radial velocity dispersion is defined as \[\sigma_{Stars}(R)=\frac{1}{0.6}\sqrt{\frac{2\pi GR_{D}\Sigma_{Stars}(R)}{7.3}}. \tag{6}\] In the above equation, \(R_{D}\) is the disc scalelength and is obtained by fitting the surface brightness profile with an exponential function \(\Sigma_{Stars}(R)=\Sigma_{0}e^{-R/R_{D}}\). We derive the gas surface density (\(\Sigma_{Gas}\)) using the value of the circular velocity of gas (\(V_{Gas}\)) given in the SPARC database (Mestel, 1963; Mera et al., 1996; Binney and Tremaine, 2011), \[\Sigma_{Gas}=\frac{V_{Gas}^{2}}{2\pi GR}. \tag{7}\] The value of \(V_{Gas}\) has been multiplied by 1.33 to correct for the presence of helium and other metals. We use a constant value of gas dispersion equal to 10 km s\({}^{-1}\), which is consistent with the observations of the gas dispersion in nearby galaxies. Mogotsi et al. (2016) measure a mean Hi dispersion equal to \(11.7\pm 2.3\) km s\({}^{-1}\) for a sample of nearby galaxies taken from the high-resolution THINGS Hi survey (Walter et al., 2008); similarly, Tamburro et al. (2009) measure a Hi dispersion equal to \(10\pm 2\) km s\({}^{-1}\) (also see Romeo et al., 2020). With all the ingredients needed for computing the two-component stability parameter (\(Q_{RW}\)) in place, we will present the results and analysis in the following section.

## 3 Results

In Figure 1, we show the radial profiles of the properties of the galaxies in the SPARC sample that are needed for computing the stability parameter (\(Q_{RW}\)). The median rotation velocity varies between \(56.2^{+224.4}_{-13.8}\leq\frac{V(R)}{\text{km}\,\text{s}^{-1}}\leq 146^{+280.4}_{-74.6}\), i.e. the radially binned 16\({}^{th}\) percentile of the rotation velocity lies between 13.8 km s\({}^{-1}\) and 74.6 km s\({}^{-1}\), whereas the 84\({}^{th}\) percentile value of the rotation velocity varies between 224 km s\({}^{-1}\) and 280 km s\({}^{-1}\), and the median rotation velocity varies between 56 km s\({}^{-1}\) and 146 km s\({}^{-1}\).
The lower limits are the minimum and the maximum of the 16\({}^{th}\) percentile, and the upper limits are the minimum and the maximum of the 84\({}^{th}\) percentile \(\left(Min[Med.]^{+Min(84^{th})}_{-Min(16^{th})}\leq Q\leq Max[Med.]^{+Max(84^{th})}_{-Max(16^{th})}\right)\). The median epicyclic frequency lies between \(18.9^{+35.8}_{-12.3}\leq\frac{\kappa(R)}{\rm km\,s^{-1}\,kpc^{-1}}\leq 154.8^{+433.3}_{-54.8}\). The radially binned median stellar surface density varies between \(6.4^{+7.3}_{-5.6}\leq log_{10}(\frac{\Sigma_{Stars}}{M_{\odot}\,kpc^{-2}})\leq 8.3^{+8.9}_{-7.3}\). The median gas surface density ranges between \(5.2^{+5.9}_{-4.5}\leq log_{10}(\frac{\Sigma_{Gas}}{M_{\odot}\,kpc^{-2}})\leq 6.6^{+6.7}_{-6.3}\). The local median dispersion of the stars varies between \(6.7^{+20.3}_{-2.2}\leq\frac{\sigma_{Stars}}{\rm km\,s^{-1}}\leq 74.2^{+161.5}_{-18.6}\). We use a constant gas dispersion equal to 10 km s\({}^{-1}\), motivated by the observations of the Hi dispersion in nearby galaxies (see the discussion in §2). The observed scatter in the input parameters reflects the diversity of the physical properties of the galaxies in the SPARC catalog.

### Radial Variation of \(Q_{RW}\)

In the first panel of Figure 2, we show the radial variation of the two-component stability parameter (\(Q_{RW}\)), followed by the stability parameter of the stars (\(Q_{Stars}\)) and the gas (\(Q_{Gas}\)) in the second and the third panels respectively. The median value of \(Q_{RW}\) varies between 2.32 and 4.3 \(\left(2.32^{+3.4}_{-1.6}\leq Q_{RW}^{Median}\leq 4.3^{+7.3}_{-2.8}\right)\), whereas the median of \(Q_{Stars}\) varies between 2.8 and 3.8 \(\left(2.8^{+4.3}_{-1.8}\leq Q_{Stars}^{Median}\leq 3.8^{+6.4}_{-2.5}\right)\). The local median of \(Q_{Gas}\) varies between 4.0 and 158.2 \(\left(4.0^{+12.0}_{-2.1}\leq Q_{Gas}^{Median}\leq 158.2^{+552}_{-40}\right)\). The median value of \(Q_{Gas}\) is high at the center and is comparable to the value of \(Q_{Stars}\) at the outer radius (\(>3.5R_{D}\)). The high stability of the gas disc at the center is due to a large value of the epicyclic frequency and a small value of the gas surface density. The median value of \(\Sigma_{Gas}\) at the center is only 0.08% of the median value of \(\Sigma_{Stars}\) at the center. So, the contribution of the gas disc to the total self-gravity at the center is negligible compared to the contribution of the stellar disc. Hence, \(Q_{RW}\) in the inner region up to \(2R_{D}\) follows \(Q_{Stars}\); see Figure 3. In other words, the net stability in the inner region is driven by the stellar disc. The gas dispersion (\(\sigma_{Gas}=10\) km s\({}^{-1}\)) at \(2R_{D}\) is comparable to the value of the stellar dispersion (\(\sigma_{Stars}=9.2\) km s\({}^{-1}\)), but such a small value of \(\sigma_{Gas}\) is insufficient to affect the total stability. In the following section (§3.2), we test the effect of adopting a lower value of gas dispersion motivated by the observed velocity dispersion of the molecular gas. Beyond \(2R_{D}\), the median stability of the stars \(\left(Q_{Stars}^{Med}(2R_{D})=2.9\right)\) exceeds the two-component stability parameter \(\left(Q_{RW}^{Med}(2R_{D})=2.75\right)\), as the surface density of the stellar disc falls off faster than the epicyclic frequency (\(\kappa\)) and the radial velocity dispersion (\(\sigma_{Stars}\)).
The median epicyclic frequency (\(\kappa\)) and the stellar dispersion (\(\sigma_{Stars}\)) at \(2R_{D}\) are 14.5% and 12.4% of their corresponding values at the center. However, the value of the stellar surface density at \(2R_{D}\) is only 1.7% of its value at the center. Finally, we find that the median values of \(Q_{Gas}\) are consistent with the critical surface density of the gas disc. Lin et al. (1993); Wang & Silk (1994); Boissier et al. (2003); Burkert & Hartmann (2013) show that star formation takes place above a threshold gas surface density, called the critical gas surface density. The critical gas density is defined as (Wang & Silk 1994)

\[\Sigma_{critical}=\gamma\frac{\kappa\sigma_{Gas}}{\pi G}\,,\quad\gamma=\left(1+\frac{\Sigma_{\rm Stars}\sigma_{\rm Gas}}{\Sigma_{\rm Gas}\sigma_{\rm Stars}}\right)^{-1}. \tag{8}\]

The critical gas density can be further written as \(\Sigma_{Gas}/\Sigma_{Critical}=1/(\gamma Q_{Gas})\). The median value of \(\gamma\) for the galaxies in the SPARC sample is equal to 0.25. The median value of \(Q_{Gas}^{Min}\) is equal to 2.45, which gives a value of \(\Sigma_{Gas}/\Sigma_{Critical}\approx 0.6\), consistent with the values obtained by Boissier et al. (2003) for a sample of 16 spiral galaxies. Thus, a higher value of \(\Sigma_{Gas}/\Sigma_{Critical}\), driven by a smaller value of \(Q_{Gas}\), will induce favorable conditions for the onset of gravitational instabilities.

Figure 1: Radial variation of the physical properties of the galaxies in the SPARC catalog. The solid line indicates the median and the shaded region indicates the \(16^{th}\) and the \(84^{th}\) percentile obtained by radially binning the data.

### Effect of \(\sigma_{Gas}\) on \(Q_{RW}\)

In Figure 3, we see that \(Q_{RW}\) in the inner radii is strongly regulated by the stellar component, whereas in the outer region, the effect of \(Q_{Gas}\) on \(Q_{RW}\) becomes important. This effect arises primarily due to the negligible gas surface densities in the inner region compared to the stellar surface density. Since the epicyclic frequency \(\kappa\) has a similar effect on both the stellar and the gas disc, it becomes imperative to check the effect of the gas dispersion on the radial variation of \(Q_{RW}\). We show the effect of adopting a smaller value of gas dispersion, equal to 6 km s\({}^{-1}\), on the median stability of the galaxies in the SPARC catalog in Figure 3. A gas dispersion equal to 6 km s\({}^{-1}\) is consistent with the velocity dispersion of the molecular gas measured in the Milky Way, \(\sigma=4.4\pm 1.2\) km s\({}^{-1}\) (Marasco et al., 2017). From Figure 3, it is evident that a smaller gas dispersion lowers the value of \(Q_{Gas}\); see the green dashed line in Figure 3. However, \(Q_{RW}\) in the inner region is still primarily driven by the stellar component despite the small value of the gas dispersion. A small value of \(Q_{Gas}\) nevertheless makes the value of \(Q_{RW}\) smaller in the outer region of the galaxies when \(\sigma_{Gas}=6\) km s\({}^{-1}\). Thus, a cold gas disc aided by a high gas surface density can potentially decrease the value of \(Q_{RW}\), making the outer region of nearby galaxies more susceptible to the growth of axisymmetric instabilities. Overall, we find that the global median of \(Q_{RW}\) is largely unaffected by a smaller value of gas dispersion: \(Q_{RW}\) changes from 3.0 (\(\sigma_{Gas}=10\) km s\({}^{-1}\)) to 2.8 upon adopting the lower gas dispersion (\(\sigma_{Gas}=6\) km s\({}^{-1}\)).
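Putting the ingredient relations of §2 together with Eq. (8), a minimal numpy sketch of the pipeline could look as follows; the unit convention for the gravitational constant and the finite-difference derivative are our own choices for illustration, not prescriptions from the text.

```python
import numpy as np

G = 4.301e-6  # kpc (km/s)^2 / M_sun (assumed unit convention)

def epicyclic_frequency(r, v_rot):
    """kappa(R) of Eq. (5), with Omega^2 = V_rot^2 / R^2 and a
    finite-difference derivative along the sampled radii (kpc, km/s)."""
    omega2 = (v_rot / r) ** 2
    return np.sqrt(r * np.gradient(omega2, r) + 4.0 * omega2)

def sigma_stars(surf_stars, r_d):
    """Radial stellar dispersion of Eq. (6) (Leroy et al. 2008)."""
    return np.sqrt(2.0 * np.pi * G * r_d * surf_stars / 7.3) / 0.6

def surf_gas(r, v_gas):
    """Gas surface density of Eq. (7); V_gas is first scaled by 1.33
    to account for helium and metals, as stated in the text."""
    v = 1.33 * v_gas
    return v ** 2 / (2.0 * np.pi * G * r)

def toomre_q(kappa, sigma, surf):
    """One-component Toomre parameter Q = kappa * sigma / (pi G Sigma)."""
    return kappa * sigma / (np.pi * G * surf)

def gas_to_critical(q_gas, surf_stars, surf_g, sig_stars, sig_gas=10.0):
    """Sigma_Gas / Sigma_critical = 1 / (gamma * Q_Gas), from Eq. (8)."""
    gamma = 1.0 / (1.0 + surf_stars * sig_gas / (surf_g * sig_stars))
    return 1.0 / (gamma * q_gas)
```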
### Critical Stability

A value of \(Q_{RW}>1\) indicates that the galaxy is stable against axisymmetric instabilities. However, a higher value of \(Q_{RW}>Q_{critical}\) is needed to stabilize the galaxy against non-axisymmetric instabilities (Griv & Gedalin, 2012) and gas dissipation (Elmegreen, 2011). Griv & Gedalin (2012) and Elmegreen (2011) estimate the value of \(Q_{critical}\) to be between 2 and 3 for the galaxy to be stable against non-axisymmetric instabilities and gas dissipation. In order to understand if the galaxies in the local universe are critically stable, we inspect the minimum values of the total stability given by \(Q_{RW}^{Min}\), and the minimum values of the stability parameter for the stars and the gas given by \(Q_{Stars}^{Min}\) and \(Q_{Gas}^{Min}\) respectively. A value of \(Q_{RW}^{Min}\) in the range of \(2-3\) indicates that the galactic disc is critically stable, i.e. stable against gas dissipation and non-axisymmetric instabilities, as pointed out by Griv & Gedalin (2012) and Elmegreen (2011).

The median value of \(Q_{RW}^{Min}\) for the nearby galaxies is equal to 2.2, whereas the median values of \(Q_{Stars}^{Min}\) and \(Q_{Gas}^{Min}\) are equal to 2.4 and 2.45 respectively. We note that 91% of the galaxies in the SPARC catalog have \(Q_{RW}^{Min}>1\), and 94% of the galaxies have \(Q_{Stars}^{Min}>1\) and \(Q_{Gas}^{Min}>1\). This indicates that 91% of the galaxies in the SPARC catalog are stable against axisymmetric instabilities at all radii. Further, 56% of the galaxies in the sample have \(Q_{RW}^{Min}\) in the range of \(2-3\), which ensures critical stability at all radii. The SPARC sample consists of 81 irregular (\(Type=8-11\)) and 94 spiral galaxies (\(Type=0-7\)). We note that 63 out of the 81 irregular galaxies have \(Q_{RW}^{Min}>2\), while only 36 out of the 94 spiral galaxies have \(Q_{RW}^{Min}>2\). Further, 15 galaxies in the SPARC catalog have \(Q_{RW}^{Min}<1\), of which 5 are irregular galaxies and 10 are spiral galaxies. We note that a large number of the irregular galaxies in the SPARC catalog are stable against gas dissipation and non-axisymmetric instabilities.

### What about galaxies with \(Q_{RW}^{Min}<1\)?

A direct consequence of \(Q_{RW}<1\) is that the galactic disc becomes unstable against the growth of axisymmetric instabilities. In order to investigate the effect of \(Q_{RW}^{Min}<1\) for the galaxies in our sample, we study the star formation rate, which is a direct outcome of gravitational instability. We query the near ultra-violet (NUV) magnitudes from the Galaxy Evolution Explorer (GALEX)\({}^{2}\) database using the astroquery\({}^{3}\) service.

Figure 2: We show the radial variation of the two-component stability parameter (\(Q_{RW}\)) in the first panel, and the stability of the stellar and the gas components given by \(Q_{Stars}\) and \(Q_{Gas}\) in the second and the third panels respectively. The solid black line shows the median obtained by radially binning the data and the shaded region shows the median + 84\({}^{th}\) percentile and the median − 16\({}^{th}\) percentile in each radial bin.

Figure 3: The solid lines depict the radial variation of the median stability parameters when the gas dispersion is equal to 10 km s\({}^{-1}\), whereas the dashed lines depict the radial variation of the median stability parameters when the gas dispersion is equal to 6 km s\({}^{-1}\). The black dashed line shows the value of marginal stability, \(Q_{RW}=1\).
We find the NUV magnitudes for 120 galaxies out of the 175 galaxies in the SPARC catalog and convert the GALEX magnitudes to the star formation rate following the prescription given by Kennicutt Jr (1998)

Footnote 2: [https://galex.stsci.edu/](https://galex.stsci.edu/)

Footnote 3: [https://www.astropy.org/astroquery/](https://www.astropy.org/astroquery/)

\[SFR(M_{\odot}yr^{-1})=1.4\times 10^{-28}L_{NUV}({\rm ergs\,s^{-1}Hz^{-1}}). \tag{9}\]

(A short numerical sketch of this conversion is given below.) In Figure 4, we show the star formation rate calculated using the NUV magnitudes for the 120 nearby galaxies versus \(Q_{RW}^{Min}\). The plot shows a negative relationship between \(Q_{RW}^{Min}\) and the star formation rate. The points marked with _triangles_ represent galaxies that have \(Q_{RW}^{Min}<1\). The median value of the star formation rate (SFR) for galaxies with \(Q_{RW}^{Min}<1\) is equal to \(0.4\,M_{\odot}yr^{-1}\), whereas the SFR for galaxies with \(Q_{RW}^{Min}>1\) is equal to \(0.07\,M_{\odot}yr^{-1}\). As expected, the galaxies with \(Q_{RW}^{Min}<1\) have a higher star formation rate than those with \(Q_{RW}^{Min}>1\). Further, we report a negative relation between the star formation rates and the minimum value of the stability parameter, given by

\[log(SFR/M_{\odot}yr^{-1})=-0.65-1.69\,log(Q_{RW}^{Min}). \tag{10}\]

We thus establish an anti-correlation between the global star formation rate and the stability parameter. Our results are consistent with the results previously obtained by Westfall et al. (2014), who show that the star formation surface densities anti-correlate with the value of \(Q_{RW}\) at \(1.5R_{D}\) for a sample of 27 galaxies. Further, we can see from Figure 4 that the spiral galaxies (Type = 0 - 7) and the irregular galaxies (Type = 8 - 11) arrange themselves in two distinct clusters. Typically, the spiral galaxies have a higher median star formation rate (\(SFR=0.2\,M_{\odot}yr^{-1}\)) compared to the irregular galaxies, which have a median star formation rate equal to \(0.02\,M_{\odot}yr^{-1}\). The observed star formation rates in the spiral and irregular galaxies are consistent with the measured values of \(Q_{RW}\): the spiral galaxies typically have a lower median stability (\(Q_{RW}^{Min}=1.7\)) and hence a higher star formation rate compared to the irregular galaxies (\(Q_{RW}^{Min}=2.6\)).

### Where do galaxies become unstable?

In order to answer the above question, we estimate the radius at which \(Q_{RW}\), \(Q_{Stars}\), and \(Q_{Gas}\) attain their minimum values, given by \((R/R_{D})_{Min(Q_{RW})}\), \((R/R_{D})_{Min(Q_{Stars})}\) and \((R/R_{D})_{Min(Q_{Gas})}\) respectively. The median value of \((R/R_{D})_{Min(Q_{RW})}\) is equal to 2.8, and the median values of \((R/R_{D})_{Min(Q_{Stars})}\) and \((R/R_{D})_{Min(Q_{Gas})}\) are equal to 1.4 and 5.1 respectively. The stellar disc has minimum stability close to 1.4 times the disc scalelength, i.e. at a point where the stellar density becomes e\({}^{-1.4}\) of its value at the center. On the other hand, the gas disc is much more extended than the stellar disc and has minimum stability at 5.1 times the disc scalelength of the stars. However, the composite stability parameter, taking into account the self-gravity of both the stars and the gas, attains a minimum value at 2.8 times the disc scalelength of the stars. This highlights the importance of taking into account the self-gravity of the stars and the gas consistently.
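As the numerical sketch promised above, the following turns a GALEX NUV apparent magnitude into an SFR via Eq. (9); the AB-magnitude zero point and the example magnitude and distance are our own assumptions for the demonstration, not values taken from the text.

```python
import numpy as np

def nuv_to_sfr(m_nuv_ab, distance_mpc):
    """SFR from a GALEX NUV AB magnitude via Eq. (9) (Kennicutt 1998).

    Assumes the standard AB zero point, f_nu = 10**(-(m + 48.6)/2.5)
    in erg s^-1 cm^-2 Hz^-1, and no dust correction.
    """
    f_nu = 10.0 ** (-(m_nuv_ab + 48.6) / 2.5)   # erg/s/cm^2/Hz
    d_cm = distance_mpc * 3.086e24               # Mpc -> cm
    l_nuv = 4.0 * np.pi * d_cm**2 * f_nu         # erg/s/Hz
    return 1.4e-28 * l_nuv                       # M_sun/yr, Eq. (9)

# Hypothetical example: m_NUV = 14 at 10 Mpc.
print(nuv_to_sfr(14.0, 10.0))
```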
If one were to consider the stability of just the stellar disc, the galaxies would have minimum stability at \(1.4R_{D}\), close to the center; on the other hand, if one were to consider just the gas disc, the galaxies would have minimum stability at \(5.1R_{D}\), away from the center. However, the composite stability parameter indicates that the galaxies attain minimum stability at an intermediate value equal to \(2.8R_{D}\).

### Stability versus morphological type

In order to understand how stability varies from one morphological type to another, we bin the galaxies according to their morphological type and compute \(Q_{RW}^{Min}\), \(Q_{Stars}^{Min}\), and \(Q_{Gas}^{Min}\). Further, we measure the radius at which the galaxies attain the minimum values of \(Q_{RW}\), \(Q_{Stars}\), and \(Q_{Gas}\). We present the results in Figure 5. In Figure 5, we adopt the convention for galaxy classification following Lelli et al. (2016): Type 0 = S0, 1 = Sa, 2 = Sab, 3 = Sb, 4 = Sbc, 5 = Sc, 6 = Scd, 7 = Sd, 8 = Sdm, 9 = Sm, 10 = Im, 11 = BCD. Galaxy types numbered 1 - 7 are spirals, and 8 - 11 are irregular galaxies. In the top panel of Figure 5, we show the median values of \(Q_{RW}^{Min}\), \(Q_{Stars}^{Min}\), and \(Q_{Gas}^{Min}\) for the galaxies in each morphological bin. In the bottom panel, we show the median value of \(R/R_{D}\) at which \(Q_{RW}\), \(Q_{Stars}\) and \(Q_{Gas}\) attain their minimum values as a function of the morphology. We leave out the galaxies with Type = 0 and Type = 1, as there are only 2 galaxies of Type = 0 and 3 galaxies of Type = 1.

Sab (Type = 2) galaxies have \(Q_{RW}^{Min}=2.0\), and BCDs (Type = 11) have \(Q_{RW}^{Min}=2.6\). Further, we find that the Sab (Type = 2) galaxies attain their minimum \(Q_{RW}\) at \(3.9R_{D}\), whereas the BCDs (Type = 11) and Im (Type = 10) attain their minimum \(Q_{RW}\) at \(1.2R_{D}\) and \(1.8R_{D}\) respectively. The median radius at which \(Q_{RW}\) attains its minimum value ranges between \(1.2R_{D}\) for BCDs (Type = 11) and \(4.9R_{D}\) for Sb (Type = 3) galaxies. Although both the spiral and the irregular galaxies in the sample are stable against the growth of axisymmetric instabilities (\(Q_{RW}^{Min}>1\)), we find that the spiral galaxies are characterized by a smaller value of \(Q_{RW}^{Min}=1.7\) and a smaller median gas fraction (\(f^{Gas}=0.1\)) compared to the irregular galaxies, which have a median \(Q_{RW}^{Min}=2.6\) and \(f^{Gas}=0.6\). The small values of \(Q_{RW}^{Min}\) and \(f^{Gas}\) observed in the spiral galaxies are consistent with the star formation rates observed in these galaxies. A small value of \(f^{Gas}\) in spiral galaxies indicates that the gravitational instabilities efficiently convert the available gas into stars.

### \(Q_{Stars}\) versus morphological type

The median \(Q_{Stars}^{Min}\) varies between 1.4 for Sbc (Type = 4) galaxies and 3.2 for Sm (Type = 9) galaxies. The median value of \(Q_{Stars}^{Min}\) for Sab (Type = 2) is equal to 2.2, and for BCDs (Type = 11) it is equal to 2.5. The stellar disc, in the case of the Im and BCDs, attains \(Q_{Stars}^{Min}\) at \(0.8R_{D}\) and \(0.9R_{D}\) respectively. The Im (Type = 10) galaxies become unstable at \(0.8R_{D}\), whereas the Sab (Type = 2) galaxies become unstable at \(2.6R_{D}\). We note that the median \(Q_{Stars}^{Min}\) for the spiral galaxies (\(Q_{Stars}^{Min}=1.9\)) is lower than the \(Q_{Stars}^{Min}\) obtained for the irregular galaxies (\(Q_{Stars}^{Min}=2.8\)).

### \(Q_{Gas}\) versus morphological type

The median \(Q_{Gas}^{Min}\) is equal to 1.7 for Sc (Type = 5) and Scd (Type = 6) galaxies and is equal to 6.8 for BCDs (Type = 11).
The Sm and Im galaxies attain \(Q_{Gas}^{Min}\) at \(4.1R_{D}\), and the Sab galaxies at \(8.4R_{D}\). Further, \(Q_{Gas}\) attains its minimum value outside \(3R_{D}\), beyond which the contribution of the stellar surface density becomes negligible. Also, we note that across morphological types from Sa to BCDs, \(Q_{Stars}\) attains its minimum between \(0.8R_{D}\) and \(2.6R_{D}\), whereas \(Q_{Gas}\) attains its minimum value outside \(3R_{D}\). If one considers just the stellar disc and neglects the gas disc, the stellar disc attains its minimum values close to the center. On the other hand, if we consider just the gas disc, neglecting the contribution of the stellar disc, we find that the gas disc becomes unstable away from the center. However, when we use the two-component stability parameter (\(Q_{RW}\)), taking into account the contributions of both the stellar and the gas disc on an equal footing, we find that \(Q_{Gas}\) drives the value of \(Q_{RW}\) outwards.

We find that the irregular galaxies are more stable than the spiral galaxies, consistent with the higher gas fractions observed in the irregular galaxies compared to the spirals. This suggests that the gravitational instabilities are inefficient at converting the gas into stars in irregular galaxies, consistent with the observed star formation rates (\(SFR=0.02\,M_{\odot}yr^{-1}\)). On the other hand, the spiral galaxies have a lower median \(Q_{RW}^{Min}\), a lower gas fraction, and a higher star formation rate (\(SFR=0.2\,M_{\odot}yr^{-1}\)), indicating that the gravitational instabilities efficiently convert gas into stars. Finally, we note that the irregular galaxies attain \(Q_{RW}^{Min}\) at \(2.4R_{D}\), closer to the center, whereas the spiral galaxies attain minimum stability at \(3.6R_{D}\), farther from the center.

Figure 5: In the top panel we show \(Q_{RW}^{Min}\), \(Q_{Stars}^{Min}\) and \(Q_{Gas}^{Min}\) as a function of the morphological type. In the lower panel, we show the radius at which \(Q_{RW}\), \(Q_{Stars}\) and \(Q_{Gas}\) attain their minimum values as a function of the morphology. The 'black' marker indicates the median value of the quantity. We use the following convention for the galaxy classification: Type 0 = S0, 1 = Sa, 2 = Sab, 3 = Sb, 4 = Sbc, 5 = Sc, 6 = Scd, 7 = Sd, 8 = Sdm, 9 = Sm, 10 = Im, 11 = BCD. The galaxies are color-coded according to the gas fraction (\(f^{Gas}\)).

## 4 Discussion & Conclusions

**Stability of high redshift galaxies**

The galaxies observed at high redshift are precursors to the galaxies in the local universe. Comparing the stability of the galaxies in the local universe to those observed at high redshift provides important information about how gravitational instabilities evolve as a function of redshift. We compare the \(Q_{RW}\) obtained for the local galaxies with a sample of 6 galaxies observed at a redshift \(z=4.5\), taken from Rizzo et al. (2020, 2021). The value of the rotation to random motion ratio (\(V/\sigma\)) for the galaxies at \(z=4.5\) lies between \(7-15\), indicating that galaxies in the early universe are rotation supported and dynamically cold. We take the structural and kinematic properties of the galaxies at \(z=4.5\) from Rizzo et al. (2020, 2021) and compute their stability following the methods detailed in §2. From Figure 6, we see that the galaxies at \(z=4.5\) are closer to marginal stability (\(Q_{RW}=1\)), with a median \(Q_{RW}\) equal to 0.98. We note that the median \(Q_{RW}\) at \(z=4.5\) is lower than the \(Q_{RW}\) of the nearby galaxies.
The low stability of the galaxies at \(z=4.5\) is consistent with their high star formation rates, of the order of \(10^{2}-10^{3}\,M_{\odot}yr^{-1}\) (Rizzo et al., 2020, 2021). The SFR in the galaxies at \(z=4.5\) is significantly higher than in the nearby galaxies in the SPARC catalog, which have a median star formation rate equal to \(0.07\,M_{\odot}yr^{-1}\). However, we note that the nearby galaxies in the SPARC catalog which have \(Q_{RW}\leq 1\) have a median star formation rate equal to \(0.4\,M_{\odot}yr^{-1}\). Further, we find that in the case of SPT0113-46 and SPT2132-58, the \(Q_{RW}\) values closely follow \(Q_{Gas}\), whereas \(Q_{RW}\) for all the other galaxies at \(z=4.5\) is driven by \(Q_{Stars}\). In other words, even in the galaxies observed at \(z=4.5\), the stellar disc drives the net stability, similar to the trend observed in the local galaxies. The consonance between \(Q_{RW}\) and \(Q_{Stars}\) indicates that once the stellar disc has formed, the stability levels are driven primarily by the stars, suggesting the presence of an inherent mechanism that self-regulates the stability.

Further, we inspect the value of the gas fraction in the nearby spiral galaxies with \(Q_{RW}<1\). We find that the spiral galaxies (Type = 1 - 7) which have \(Q_{RW}^{Min}<1\) have a gas fraction between 0.06 and 0.34, whereas the galaxies at \(z=4.5\) have a median gas fraction equal to 0.5 and \(Q_{RW}<1\). Thus, a large gas fraction, a small median value of \(Q_{RW}\), and a high star formation rate suggest that the galaxies observed at \(z=4.5\) are currently undergoing star formation. The nearby spiral galaxies have a relatively higher \(Q_{RW}\), a lower gas fraction, and a star formation rate lower than the galaxies at \(z=4.5\), suggesting that the nearby spirals have reached a threshold star formation rate.

**Role of the dark matter on stability levels**

In order to gain insight into the self-regulation mechanism, we derive the \(Q_{RW}\) of the 'star + gas' disc by neglecting the contribution of the dark matter to the total potential. The effect of the dark matter on \(Q_{RW}\) is encapsulated in the epicyclic frequency (\(\kappa\)). The total epicyclic frequency, derived from the observed rotation curve, can be written by adding in quadrature the contributions from the stars, the gas, and the dark matter halo: \(\kappa_{Total}^{2}=\kappa_{Stars}^{2}+\kappa_{Gas}^{2}+\kappa_{DarkMatter}^{2}\). This is possible as \(\kappa\) is defined in terms of the angular frequency (\(\Omega\)), which in turn is defined in terms of the potential, and the total potential can be written as \(\Phi_{Total}=\Phi_{Stars}+\Phi_{Gas}+\Phi_{DarkMatter}\) (see Equation 5 and the paragraph beneath it). We derive \(Q_{RW}\) for the sample of nearby galaxies and the galaxies at \(z=4.5\) by neglecting the contribution of the dark matter, i.e. setting \(\kappa_{Total}^{2}=\kappa_{Gas}^{2}+\kappa_{Stars}^{2}\). From Figure 7, we can see that the dark matter has a negligible effect on the median stability of the galaxies at \(z=4.5\) (see the red lines). The global median of \(Q_{RW}\) in the presence of dark matter is equal to 0.99, and upon removing the contribution of the dark matter from the total potential, the median \(Q_{RW}\) changes to 0.93. Genzel et al.
(2017) in their observational study of high redshift galaxies (\(z=0.6-2.6\)) find that the massive disc galaxies at high redshift are dominated by baryons, with a negligible dark matter fraction. On the other hand, in the case of the nearby galaxies, the overall stability decreases upon eliminating the dark matter's contribution to the total potential. The median value of \(Q_{RW}\) decreases from 3.0 in the presence of dark matter to 2.3 in the absence of dark matter. Although the overall stability curve is lowered upon eliminating the dark matter's contribution to the total potential, the nearby galaxies continue to be stable against the growth of axisymmetric instabilities. This indicates that the 'star + gas' disc can self-regulate the stability levels, at least in a statistical sense.

**What about low surface brightness galaxies?**

Garg & Banerjee (2017) showed that the low surface brightness galaxies become susceptible to the growth of gravitational instabilities (\(Q_{RW}^{Min}=0.7-1.5\)) upon removing the contribution of the dark matter. We find that the LSBs in the SPARC catalog have a median stability equal to 2.4, and upon removing the contribution of the dark matter to the total potential, the \(Q_{RW}^{Min}\) becomes 1.5, consistent with the previous study by Garg & Banerjee (2017); see also Narayanan & Banerjee (2022).

**Connection between SFR, \(f^{Gas}\) and \(Q_{RW}\)**

In this study, we find that the galaxies which have a smaller value of \(Q_{RW}\) have a higher star formation rate and a relatively lower gas fraction. The spiral galaxies in our sample have a lower median stability (\(Q_{RW}^{Min}=1.7\)) compared to the irregular galaxies (\(Q_{RW}^{Min}=2.6\)). Further, the stability levels are consistent with the star formation rates in the spiral galaxies (\(SFR=0.2\,M_{\odot}yr^{-1}\)) and the irregular galaxies (\(SFR=0.02\,M_{\odot}yr^{-1}\)). We then inspect the gas fraction in the nearby galaxies and find that the spiral galaxies have a median \(f^{Gas}\) equal to 0.1, and the irregular galaxies have a median \(f^{Gas}=0.6\). A small gas fraction and a high SFR in the spiral galaxies suggest that the gravitational instabilities efficiently convert the gas into stars, depleting the gas reservoir. On the other hand, the higher gas reserves in the irregular galaxies suggest that the gravitational instabilities are inefficient at converting the gas into stars.

In order to get a better picture of how \(Q_{RW}^{Min}\) is connected with \(f^{Gas}\) and the SFR, we compute the time scale for the growth of gravitational instabilities, which measures how quickly the gas is converted into stars. The time scale for the growth of gravitational instabilities is given by (Talbot Jr & Arnett, 1975; Leroy et al., 2008; Wong, 2009)

\[\tau=\frac{2\pi}{\omega_{J}}. \tag{11}\]

In the above equation, \(\omega_{J}\) is the growth rate of the gravitational instabilities and is given as

\[\omega_{J}=\frac{\pi G\Sigma_{Gas}}{\sigma_{Gas}}\left(1+\frac{\Sigma_{Stars}\sigma_{Gas}}{\Sigma_{Gas}\sigma_{Stars}}\right). \tag{12}\]

(A short numerical sketch of Eqs. (11)-(12) is given at the end of this section.) In Figure 8, we show the time scale for the growth of gravitational instabilities as a function of the gas fraction (panel 1), the star formation rate (panel 2), and the two-component stability parameter (panel 3). From the first panel, we see that the galaxies in which gravitational instabilities persist for a more extended period have a higher gas fraction. On the other hand, the galaxies that sustain instabilities for a short time have a lower gas fraction.
Further, from the second and the third panels, we see that the galaxies in which gravitational instabilities persist for an extended time have lower star formation rates and a higher value of \(Q_{RW}^{Min}\). Thus, instabilities characterized by \(Q_{RW}\) close to marginal stability, acting over short time scales, are responsible for depleting the gas reservoirs. On the other hand, instabilities persisting for a longer period in galaxies characterized by a higher \(Q_{RW}\) go along with a larger gas fraction and a lower star formation rate. The median value of \(\tau\) for the irregular galaxies is equal to \(214\,Myr\), and \(\tau\) for the spiral galaxies is equal to \(104\,Myr\). The values of \(\tau\) obtained for the nearby spiral galaxies in this study are consistent with the values of \(\tau\) obtained by Wong (2009) for 16 galaxies taken from the THINGS sample (\(\tau=100\,Myr\)). The gravitational instabilities act over short time scales in the spiral galaxies (\(\tau=104\,Myr\)) and efficiently convert the available gas into stars, which explains why the spiral galaxies have a small gas fraction (\(f^{Gas}=0.1\)) and a high star formation rate (\(SFR=0.2\,M_{\odot}yr^{-1}\)). In the irregular galaxies, on the other hand, the gravitational instabilities persist for a longer time (\(214\,Myr\)) and convert the gas into stars more gradually, which explains why the irregular galaxies have a higher gas fraction (\(f^{Gas}=0.6\)) and a lower star formation rate compared to the spiral galaxies.

Finally, we note that the gravitational instabilities persist in the star-forming galaxies at \(z=4.5\) for only \(6\,Myr\), and their median gas fraction is equal to 0.5. A small value of \(Q_{RW}\), a high star formation rate, and a large gas fraction indicate that the galaxies at \(z=4.5\) are in an active star-forming stage, unlike the nearby galaxies, which have possibly reached their threshold gas fraction and star formation rate. This suggests that spiral galaxies possibly undergo intense star formation in multiple short bursts before exhausting their gas reserves and reaching a threshold star formation rate and stability level. Also, unlike the spiral galaxies, the irregular galaxies convert gas into stars more gradually, over a much longer time scale. The mechanism uniting the instability levels and the star formation rates with the gas fraction is consistent with the recent results from the FIRE-2 simulations (Hopkins et al., 2018). Using the FIRE-2 simulations, Parul et al. (2023) and Flores Velazquez et al. (2021) show that star formation in Milky Way-like galaxies occurs in short bursts at high redshifts and proceeds more steadily at low redshifts. The results obtained in this work suggest a simple mechanism in which galaxies characterized by \(Q_{RW}\) close to marginal stability levels undergo intense star formation activity over a short time scale, depleting the gas reserves; in the second scenario, the star formation proceeds more slowly over longer time scales in galaxies with a relatively higher value of \(Q_{RW}\), gradually converting the available gas into stars.

Figure 7: The 'blue' and the 'red' lines indicate the radially binned median \(Q_{RW}\) for the galaxies from the SPARC catalog and the galaxies at \(z=4.5\) respectively. The dashed lines indicate the median \(Q_{RW}\) upon removing the dark matter's contribution to the total potential.
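As promised above, here is a minimal numpy sketch of Eqs. (11)-(12); the unit conventions (G in kpc (km s\({}^{-1}\))\({}^{2}\) M\({}_{\odot}^{-1}\), so that \(\tau\) converts to Myr via 1 kpc/(km s\({}^{-1}\)) \(\approx\) 977.8 Myr) and the example inputs are our own assumptions.

```python
import numpy as np

G = 4.301e-6  # kpc (km/s)^2 / M_sun (assumed unit convention)
KPC_PER_KMS_IN_MYR = 977.8  # 1 kpc/(km/s) expressed in Myr

def instability_timescale(surf_gas, surf_stars, sigma_gas, sigma_stars):
    """Growth timescale of gravitational instabilities, Eqs. (11)-(12).

    Surface densities in M_sun/kpc^2, dispersions in km/s; returns Myr.
    """
    # Eq. (12): growth rate of the instabilities.
    omega_j = (np.pi * G * surf_gas / sigma_gas) * \
              (1.0 + surf_stars * sigma_gas / (surf_gas * sigma_stars))
    # Eq. (11): tau = 2*pi / omega_J, converted from kpc/(km/s) to Myr.
    return 2.0 * np.pi / omega_j * KPC_PER_KMS_IN_MYR

# Hypothetical example values, not taken from the text:
print(instability_timescale(5e6, 5e7, 10.0, 30.0))  # ~210 Myr
```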
## 5 Summary

In this work, we have studied the stability of the nearby galaxies which are a part of the SPARC catalog, using the two-component stability criterion proposed by Romeo & Wiegert (2011). We find:

1. The net stability in the galaxies is primarily driven by the stellar disc.

2. Despite their diverse morphological properties, 91% of the galaxies in the SPARC galaxy catalog have \(Q_{RW}^{Min}>1\), indicating stability against axisymmetric instabilities. Further, at least 50% of the galaxies have \(Q_{RW}^{Min}\) between \(2-3\), indicating critical stability against non-axisymmetric instabilities and gas dissipation.

3. The galaxies with \(Q_{RW}^{Min}<1\) have a median star formation rate equal to \(0.4\,M_{\odot}yr^{-1}\), which is higher than that of the galaxies with \(Q_{RW}^{Min}>1\) (\(SFR=0.07\,M_{\odot}yr^{-1}\)). Further, we find that the median star formation rate in the spiral galaxies (\(SFR=0.2\,M_{\odot}yr^{-1}\)) is higher than in the irregular galaxies (\(SFR=0.02\,M_{\odot}yr^{-1}\)). We note that galaxies with higher star formation rates and lower stability parameters have a lower gas fraction.

4. The stellar disc attains minimum stability close to the center, at \(1.4R_{D}\), whereas the gas disc attains its minimum at \(5.1R_{D}\). However, the net stability of the galaxies determined by the two-component stability parameter \(Q_{RW}\) attains its minimum value at an intermediate radius equal to \(2.8R_{D}\).

5. Finally, we compare the stability levels of the nearby galaxies in the SPARC catalog with the dynamically cold disc galaxies observed at \(z=4.5\). We find that the galaxies at \(z=4.5\) are characterized by low stability and a high star formation rate compared with the nearby galaxies.

6. We find that galaxies with \(Q_{RW}^{Min}<1\) in the SPARC catalog have a median \(SFR=0.4\,M_{\odot}yr^{-1}\), whereas galaxies with \(Q_{RW}^{Min}<1\) at \(z=4.5\) have a median SFR equal to \(7\times 10^{2}\,{\rm M_{\odot}yr^{-1}}\). This indicates that the gravitational instabilities in the galaxies can self-regulate the stability levels. Also, the galaxies at \(z=4.5\) have \(f^{Gas}=0.5\), while the nearby galaxies have a median \(f^{Gas}=0.1\). This suggests that the galaxies at \(z=4.5\) are in an active star-forming stage, whereas the nearby galaxies have reached a threshold star formation level and gas fraction.

7. In order to better understand how the galaxies can self-regulate the stability levels, we derive the stability of the galaxies at \(z=4.5\) and those in the SPARC catalog by neglecting the contribution of the dark matter to the total potential. The stability levels of the galaxies at \(z=4.5\) are unchanged upon eliminating the dark matter's contribution from the total potential. On the other hand, in the case of the nearby galaxies, the global median of \(Q_{RW}\) changes from 3.0 to 2.3 upon eliminating the dark matter's contribution from the total potential. The galaxies in the SPARC catalog remain stable against axisymmetric instabilities upon removing the dark matter's contribution from the total potential, suggesting that the baryons can regulate the stability levels, at least statistically.

8. In this study, we find that the galaxies which have a higher value of \(Q_{RW}\) typically also have a higher gas fraction and a lower star formation rate. So, to understand how the star formation rate and the two-component parameter are related to the gas fraction, we measure the time scale for the growth of gravitational instabilities, which measures how quickly gas is converted into stars.
We find that gravitational instabilities act for a short period in the spiral galaxies, efficiently converting the gas into stars, which explains why the spirals have a relatively smaller gas fraction than the irregular galaxies. In the irregular galaxies, on the other hand, the gravitational instabilities persist over a more extended period and convert the available gas into stars more gradually.

Figure 8: In the first panel, we plot the time scale for the growth of gravitational instabilities (\(\tau\)) as a function of \(f^{Gas}\). In the second panel, we show \(\tau\) as a function of the star formation rate (SFR), and we plot \(\tau\) as a function of \(Q_{RW}^{Min}\) in the third panel. The vertical dashed line indicates \(f^{Gas}=0.5\).

## 6 Acknowledgement

Aditya would like to thank the referee for their insightful comments that improved the quality of this manuscript. This work uses data from the Spitzer Photometry and Accurate Rotation Curves (SPARC) database (Lelli et al., 2016). Aditya is supported by a DST-SERB grant [CRG/2021/005174].

## 7 Data Availability

The data used in this study is publicly available at [http://astroweb.cwru.edu/SPARC/](http://astroweb.cwru.edu/SPARC/).
2301.02092
DepthP+P: Metric Accurate Monocular Depth Estimation using Planar and Parallax
Current self-supervised monocular depth estimation methods are mostly based on estimating a rigid-body motion representing camera motion. These methods suffer from the well-known scale ambiguity problem in their predictions. We propose DepthP+P, a method that learns to estimate outputs in metric scale by following the traditional planar parallax paradigm. We first align the two frames using a common ground plane which removes the effect of the rotation component in the camera motion. With two neural networks, we predict the depth and the camera translation, which is easier to predict alone compared to predicting it together with rotation. By assuming a known camera height, we can then calculate the induced 2D image motion of a 3D point and use it for reconstructing the target image in a self-supervised monocular approach. We perform experiments on the KITTI driving dataset and show that the planar parallax approach, which only needs to predict camera translation, can be a metrically accurate alternative to the current methods that rely on estimating 6DoF camera motion.
Sadra Safadoust, Fatma Güney
2023-01-05T14:53:21Z
http://arxiv.org/abs/2301.02092v1
# DepthP+P: Metric Accurate Monocular Depth Estimation using Planar and Parallax

###### Abstract

Current self-supervised monocular depth estimation methods are mostly based on estimating a rigid-body motion representing camera motion. These methods suffer from the well-known scale ambiguity problem in their predictions. We propose DepthP+P, a method that learns to estimate outputs in metric scale by following the traditional planar parallax paradigm. We first align the two frames using a common ground plane which removes the effect of the rotation component in the camera motion. With two neural networks, we predict the depth and the camera translation, which is easier to predict alone compared to predicting it together with rotation. By assuming a known camera height, we can then calculate the induced 2D image motion of a 3D point and use it for reconstructing the target image in a self-supervised monocular approach. We perform experiments on the KITTI driving dataset and show that the planar parallax approach, which only needs to predict camera translation, can be a metrically accurate alternative to the current methods that rely on estimating 6DoF camera motion.

## 1 Introduction

Understanding the 3D structure of a scene is fairly easy for human beings. We can easily reason about our surroundings and decompose them into different objects. Having this ability is crucial for autonomous vehicles to be able to drive in different environments. Training deep networks for estimating depth has proven successful in computer vision research. However, many such methods are supervised and require ground truth depth, which is costly to acquire. Another line of work uses a stereo setup that must be carefully calibrated. Both of these approaches cannot use the vast amount of unlabeled videos that are easily available for training. On the other hand, self-supervised monocular depth estimation methods that do not rely on stereo supervision do not suffer from these limitations and, in practice, have been closing the gap with their supervised or stereo counterparts.

Current self-supervised monocular depth estimation approaches all follow the same basic idea proposed in []. They use a pose network to estimate the ego-motion between a source frame and the target frame, and a depth network to estimate the depth of the target image. These estimations can then be used to sample pixels from the source image to synthesize the target frame. The difference between the target frame and the synthesized frame can be used as the source of supervision for training the networks.

In this paper, we propose another approach to synthesize the target image. Our approach, **DepthP+P**, illustrated in Fig. 1, uses the traditional planar parallax formulation [], which decomposes the motion into a planar homography and a residual parallax. Consider a plane in the scene and its motion represented by a homography from the source to the target image. By first warping the source image according to this homography, the motion of the plane is canceled. Then the residual image motion depends on two factors: (1) the deviations of the scene structure from the plane, i.e. the depth of points and their perpendicular distance to the plane, and (2) only the translational motion of the camera. Autonomous driving is a perfect use case for this approach because there is typically a planar surface in front of the vehicle, i.e. the road.
However, it is important to note that the plane in the planar parallax formulation does not necessarily have to be a real plane; it can also be a virtual plane, but choosing the road as the planar surface makes it easier to implement in practice. Moreover, our approach does not rely on the availability of a plane to predict depth during inference.

In this approach, we first align the road plane between the source and target images. This is achieved by calculating the homography between the road regions in the two frames and then warping the source frame according to the homography to obtain the aligned image. By doing so, the road regions in the aligned image and the target image match. The residual motion between the aligned image and the target image can be explained as follows: We first estimate the depth of each pixel with a monocular depth network and back-project the pixels into 3D. Then, using a known camera height, we can calculate the perpendicular distance of each point to the road. In addition, we estimate the translation between the camera origins. Note that this is different from the typical monocular depth approach [11, 12], which needs to estimate both the rotation and translation components. Finally, the target image can be synthesized from the aligned image using the calculated residual parallax, as shown in Fig. 2.

The planar parallax approach for self-supervised monocular estimation has a number of advantages over the previous paradigm. Firstly, it is much easier to optimize because it removes the ambiguities associated with predicting rotational camera motion [13]. Secondly, it can produce metrically accurate outputs. Previous monocular depth methods can estimate depth and motion only up to a scale. Typically, during inference, ground truth depth data is used to scale the predicted depth values such that the median of the predicted depth is equal to that of the ground truth depth [13]. Our approach is able to predict metrically accurate depth without needing ground truth depth data, by only assuming a known camera height.

Figure 1: **Overview of our Approach.** Using the source image \(\mathbf{I}_{s}\) and the target image \(\mathbf{I}_{t}\), we first calculate the homography \(\mathbf{H}\) that aligns the road plane across these two images. We then warp \(\mathbf{I}_{s}\) according to \(\mathbf{H}\) and obtain the aligned image \(\mathbf{I}_{w}\). The aligned image \(\mathbf{I}_{w}\) and the target image \(\mathbf{I}_{t}\) are input to the pose network which estimates the camera translation \(\mathbf{t}\) only. The depth network takes \(\mathbf{I}_{t}\) and produces a metrically accurate depth map \(\mathbf{\hat{D}}\).

## 2 Related Work

### Self-Supervised Monocular Depth

View Synthesis: Garg et al. [1] were the first to propose a method that uses view synthesis as an objective for depth estimation from single images. Monodepth [1] uses Spatial Transformer Networks (STNs) to synthesize the images in a fully-differentiable way. SfmLearner generalizes view synthesis to temporally consecutive images by using another network to predict the relative pose between them. Zhan et al. [1] use stereo sequences to perform view synthesis using temporally consecutive pairs as well as the left-right pairs, enabling them to benefit from both monocular and stereo supervision. In addition to image reconstruction, they also use feature reconstruction as supervision. Similarly, by going beyond pixel-wise reconstruction error, Mahjourian et al.
[1] propose to use a 3D point cloud alignment loss to enforce the estimated point clouds and the camera pose to be temporally consistent. Wang et al. use direct visual odometry in a differentiable manner to solve for ego-motion using the estimated depth. In addition to depth and camera pose, several methods estimate optical flow for residual motion. After predicting the camera motion, GeoNet estimates the remaining object motion using optical flow. In order to prevent the errors of camera pose or depth predictions from propagating to flow estimations, DF-Net enforces consistency between optical flow and the flow induced by the depth and pose predictions. GLNet uses an epipolar constraint for optical flow, along with other geometric constraints, further improving the performance. EPC++ proposes a holistic 3D motion parser that uses predicted depth, pose, and optical flow to estimate segmentation masks for dynamic objects and their motion as well as background motion. Ranjan et al. jointly train networks for depth, pose, optical flow, and motion segmentation so that they can use geometric constraints on the static regions and generic optical flow on moving objects. MonoDepthSeg proposes to jointly estimate depth, independently moving regions, and their motion with an efficient architecture.

Some approaches keep the original framework with a depth and a pose network but improve the performance with better loss functions, improved network architectures, and innovative design choices. When estimating depth at multiple scales, Monodepth2 proposes to first upsample the estimated low-scale depths to the input image size and then calculate the photometric loss at that scale. Monodepth2 also proposes to calculate the minimum of the reprojection errors per pixel instead of averaging them when synthesizing the target image from multiple views, to prevent blurry depth estimations. PackNet changes the architecture of the depth network and uses 3D convolutions to learn to preserve spatial information, using symmetrical 3D packing and unpacking blocks for predicting depth.

Scale Ambiguity: Self-supervised monocular depth estimation models suffer from the scale ambiguity problem, and the depth and pose outputs of such models are in an unknown scale. The median scaling technique used by many previous methods does not actually solve this problem because it relies on ground truth depth data during inference, which is not always easily available. Bian et al. introduce a loss to minimize normalized differences of depth maps across the entire sequence. This makes the estimations globally scale-consistent. However, although this means that the predictions are at the same scale, that specific scale is still unknown, and median scaling is still required during evaluation. There are a number of monocular methods that can output depth estimations in absolute scale. Roussel et al. use a network that was pre-trained with stereo pairs on one dataset and finetune it on another dataset while maintaining the metric scale. Guizilini et al. [] propose a version of their PackNet that uses ground truth camera velocity and the timestamps of images to enforce the estimations to be metrically accurate. Bartoccioni et al. [] supervise their depth predictions with a sparse LiDAR. However, all of these approaches rely on ground truth data from extra sensors during training.
There are a few other methods that do not require additional supervision and only use the camera height to achieve depth estimations in metric units, similar to the proposed method. DNet [] estimates the ground plane during inference and, using the real height of the camera, recovers the scale of the predictions. However, it needs a ground plane to be visible during test time. In other words, DNet does not train its depth outputs to be in absolute scale; rather, it recovers the scale of the estimations with another module during test time. Wagstaff and Kelly [] train a network that learns the metric scale during training using the camera height. They introduce a plane segmentation network and propose a three-staged training procedure for training the depth estimation model in metric scale. First, they train an unscaled depth network and then use it to train the plane segmentation network. Finally, they train a new metrically accurate depth network using the pre-trained plane segmentation network. Similar to [], we also learn the metric scale during training, but we do not need a multi-stage process, nor do we rely on the existence of a ground plane during inference, differently from previous work [].

### Planar Parallax

The Planar Parallax paradigm, also called Plane + Parallax (P+P), has been used to understand the 3D structure of a scene from multiple images by decomposing the motion into a planar homography and a residual parallax. Sawhney [] proposes a formulation for the residual parallax that uses depth and distance to the plane. Irani et al. [] use this formulation to derive a rigidity constraint between pairs of points over multiple images. Irani et al. [] derive trifocal constraints and use them to propose a simple method for new view synthesis. In a follow-up work [], they extend the planar parallax method to more than two uncalibrated frames. More recently, MR-Flow [] uses P+P to refine the optical flow estimations with rigidity constraints. Chaney et al. [] use P+P to estimate the height of points in the scene with event-based cameras. We propose a method to use the P+P formulation within the view synthesis framework for self-supervised monocular depth estimation.

## 3 Methodology

Despite the success of current self-supervised monocular depth estimation approaches, they suffer from scale ambiguity, i.e. the estimated depth values are in an unknown scale. Therefore, in order to evaluate and compare these methods, they are usually normalized using the median scaling approach []. Here, we propose an approach that predicts depth maps in metric scale without using any ground truth depth supervision.

### DepthP+P

Our approach is based on the Planar Parallax decomposition, which has been studied in detail before []. We first introduce it here to establish our notation and then build our method to predict depth following that notation.

Notation: Let \(\Pi\) be a 3D plane and \(\mathbf{H}\) be the homography aligning \(\Pi\) between the target image \(\mathbf{I}_{t}\) and the source image \(\mathbf{I}_{s}\). Let \(\mathbf{p}\) and \(\mathbf{p}^{\prime}\) be the images of the 3D point \(\mathbf{x}=[\mathbf{X},\mathbf{Y},\mathbf{Z}]^{T}\) on \(\mathbf{I}_{t}\) and \(\mathbf{I}_{s}\) respectively. As shown on the left in Fig. 2, we can warp \(\mathbf{p}^{\prime}\) by the homography \(\mathbf{H}\) and obtain the image point \(\mathbf{p}_{w}\):

\[\mathbf{p}_{w}\sim\mathbf{H}\mathbf{p}^{\prime} \tag{1}\]

where we omit the conversion to homogeneous coordinates.
Note that by warping the source image \(\mathbf{I}_{s}\), we obtain the aligned image \(\mathbf{I}_{w}\) such that the plane \(\Pi\) matches between them. The displacement between \(\mathbf{p}_{w}\) and \(\mathbf{p}\) can be computed as follows:

\[\mathbf{p}_{w}-\mathbf{p}=\frac{\gamma}{\mathbf{d}_{c}-\gamma\mathbf{t}_{z}}(\mathbf{t}_{z}\mathbf{p}-\mathbf{K}\mathbf{t}) \tag{2}\]

where \(\mathbf{K}\) is the camera intrinsic matrix, \(\mathbf{t}=[\mathbf{t}_{x},\mathbf{t}_{y},\mathbf{t}_{z}]^{T}\) is the translation vector between \(\mathbf{I}_{t}\) and \(\mathbf{I}_{s}\), and \(\mathbf{d}_{c}\) is the distance from the source-view camera to the plane \(\Pi\). The structure is represented by \(\gamma=\frac{\mathbf{h}}{\mathbf{Z}}\), where \(\mathbf{h}\) is the distance of \(\mathbf{x}\) to \(\Pi\). Note that when \(\mathbf{x}\) lies on the plane \(\Pi\), i.e. \(\mathbf{h}=0\), we will have \(\mathbf{p}_{w}=\mathbf{p}\).

DepthP+P: Following the typical self-supervised monocular depth approach, our framework has two networks, one for estimating depth and another for estimating the translation between frames. Note that, unlike other methods, we do not need to estimate the rotation between the two views. Precisely, our pose network takes the aligned image \(\mathbf{I}_{w}\) and the target image \(\mathbf{I}_{t}\) and outputs the translation vector \(\mathbf{t}\). The depth network takes the target image \(\mathbf{I}_{t}\) and outputs the depth map \(\hat{\mathbf{D}}\) for \(\mathbf{I}_{t}\). For every pixel \(\mathbf{p}=[x,y]\), let \(\hat{\mathbf{D}}(\mathbf{p})\) denote its estimated depth. We backproject \(\mathbf{p}\) using the camera intrinsics and the estimated depth to obtain the corresponding 3D point \(\hat{\mathbf{x}}\) in the camera coordinate system as follows:

\[\hat{\mathbf{x}}=\hat{\mathbf{D}}(\mathbf{p})\ \mathbf{K}^{-1}\ [x,y,1]^{T}. \tag{3}\]

Figure 2: **Visualization of the Planar Parallax.** _Left:_ The 3D point \(\mathbf{x}\) is projected to points \(\mathbf{p}\) and \(\mathbf{p}^{\prime}\) on the target image \(\mathbf{I}_{t}\) and the source image \(\mathbf{I}_{s}\) respectively. Using the homography \(\mathbf{H}\) induced by the plane \(\Pi\), the point \(\mathbf{p}^{\prime}\) will be transformed to the point \(\mathbf{p}_{w}\) on the target image. _Right:_ Calculating \(\mathbf{h}\), the distance of \(\mathbf{x}\) to \(\Pi\), using the camera height \(\mathbf{d}_{c}\) and the normal vector \(\mathbf{N}\) of the plane. \(\mathbf{C}\) and \(\mathbf{C}^{\prime}\) are the camera centers of \(\mathbf{I}_{t}\) and \(\mathbf{I}_{s}\), and \(\mathbf{Z}\) is the depth of the point.

Therefore, as demonstrated on the right in Fig. 2, we have the following:

\[\mathbf{\hat{h}}=\mathbf{d}_{c}-\mathbf{N}^{T}\mathbf{\hat{x}},\qquad\mathbf{\hat{\gamma}}=\frac{\mathbf{\hat{h}}}{\mathbf{\hat{D}}(\mathbf{p})} \tag{4}\]

where \(\mathbf{N}\) is the normal vector of the plane \(\Pi\), \(\mathbf{\hat{h}}\) is the estimated distance of the point \(\mathbf{\hat{x}}\) to the plane \(\Pi\) and \(\mathbf{\hat{\gamma}}\) is our estimate of the structure variable \(\gamma\). As a result, we obtain all the parameters required to use Eq. (2) to reconstruct the target image \(\mathbf{I}_{t}\) by warping the aligned image \(\mathbf{I}_{w}\), resulting in \(\mathbf{\hat{I}}_{w}\) (a pixel-level sketch of this computation is given below).
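To make the geometry of Eqs. (2)-(4) concrete, the following is a minimal numpy sketch of the residual-parallax warp for a single pixel; the intrinsics, depth and translation values at the bottom are illustrative placeholders only, and in the actual pipeline the same computation would run densely over the image with differentiable sampling.

```python
import numpy as np

def residual_parallax(p, depth, K, t, N, d_c):
    """Warped location p_w from Eq. (2) for one pixel.

    p: pixel [x, y]; depth: estimated depth D(p); K: 3x3 intrinsics;
    t: translation [tx, ty, tz]; N: plane normal; d_c: camera height.
    """
    p_h = np.array([p[0], p[1], 1.0])
    # Eq. (3): back-project the pixel to a 3D point.
    x3d = depth * (np.linalg.inv(K) @ p_h)
    # Eq. (4): height above the plane and structure variable gamma.
    h = d_c - N @ x3d
    gamma = h / depth
    # Eq. (2): residual image motion after plane alignment.
    disp = gamma / (d_c - gamma * t[2]) * (t[2] * p_h - K @ t)
    return p_h + disp  # homogeneous p_w; sample I_w here to form I_t(p)

# Illustrative values only (not from the paper):
K = np.array([[720.0, 0.0, 620.0], [0.0, 720.0, 180.0], [0.0, 0.0, 1.0]])
p_w = residual_parallax([400, 250], depth=15.0, K=K,
                        t=np.array([0.0, 0.0, 1.2]),
                        N=np.array([0.0, 1.0, 0.0]), d_c=1.65)
```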
In other words, for each pixel \(\mathbf{p}\) on \(\mathbf{I}_{t}\), we calculate \(\mathbf{p}_{w}\) using Eq. (2) according to the depth and translation predicted by our two networks, and then inverse warp \(\mathbf{I}_{w}\) to obtain \(\mathbf{\hat{I}}_{w}\), the reconstruction of \(\mathbf{I}_{t}\):

\[\mathbf{I}_{t}(\mathbf{p})\approx\mathbf{\hat{I}}_{w}(\mathbf{p})=\mathbf{I}_{w}(\mathbf{p}_{w}) \tag{5}\]

We minimize the difference between \(\mathbf{I}_{t}\) and \(\mathbf{\hat{I}}_{w}\) for supervision, as explained in Section 3.2.

In order to obtain the aligned images, we perform a pre-processing step on the dataset. We calculate a homography for every pair of consecutive frames by using the road as the plane \(\Pi\) and warp the frames according to the calculated homographies. In other words, we calculate a homography \(\mathbf{H}\) for every target image \(\mathbf{I}_{t}\) and source image \(\mathbf{I}_{s}\) pair, and then warp \(\mathbf{I}_{s}\) according to \(\mathbf{H}\) to obtain the warped source image \(\mathbf{I}_{w}\). We explain the details of this pre-processing step in Section 4.1.

### Self-Supervised Training Loss

In our approach, we define our photometric loss function as a linear combination of the L1 distance and the structural similarity (SSIM) [] to minimize the difference between the target image \(\mathbf{I}_{t}\) and the reconstructed image \(\mathbf{\hat{I}}_{w}\). Our photometric loss is therefore defined as follows:

\[\mathcal{L}_{\text{photo}}(\mathbf{p})=(1-\alpha)\left|\mathbf{I}_{t}(\mathbf{p})-\mathbf{\hat{I}}_{w}(\mathbf{p})\right|+\frac{\alpha}{2}\left(1-\text{SSIM}(\mathbf{I}_{t},\mathbf{\hat{I}}_{w})(\mathbf{p})\right) \tag{6}\]

where we set \(\alpha=0.85\). Note that for every target image we consider two aligned images: one from warping the previous frame, and one from warping the next frame. We use the per-pixel minimum reprojection error introduced in [] and calculate the minimum of \(\mathcal{L}_{photo}\) for each pixel across the previous and next aligned images. We also define \(\mathcal{L}_{smooth}\) as an edge-aware smoothness loss over the mean-normalized inverse depth estimates [] to encourage the depth predictions to be locally smooth. Our total loss function is a combination of \(\mathcal{L}_{smooth}\) and \(\mathcal{L}_{photo}\), averaged over all \(N\) pixels:

\[\mathcal{L}=\frac{1}{N}\sum_{\mathbf{p}}\lambda\ \mathcal{L}_{smooth}(\mathbf{p})+\min_{w}(\mathcal{L}_{photo}(\mathbf{p})) \tag{7}\]

where \(\min_{w}\) calculates the minimum over the previous and next aligned frames and \(\lambda\) is a hyperparameter controlling the effect of the loss terms (a PyTorch-style sketch of this objective is given below).

### Network Architecture

Our depth network is based on the U-Net architecture []. We use a ResNet [] pre-trained on ImageNet [] as the encoder for our depth network, and for the decoder we use an architecture similar to the one used by []. The difference is that we directly estimate depth by multiplying the output of the last sigmoid layer by 250, which is the maximum depth value that can be predicted, instead of estimating disparity as in []. The depth network takes as input a single target image \(\mathbf{I}_{t}\) and outputs the per-pixel depth estimates. Note that the output of our depth decoder is in metric scale. In DepthP+P, our second network takes \(\mathbf{I}_{w}\) and \(\mathbf{I}_{t}\) and outputs only the translation vector between the views.
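As the sketch promised after Eq. (7), here is a minimal PyTorch-style version of the photometric objective; the 3×3 average-pooling SSIM window is a simplification of our own, not a detail specified above.

```python
import torch
import torch.nn.functional as F

def ssim(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Simplified per-pixel SSIM using a 3x3 average-pooling window."""
    mu_x, mu_y = F.avg_pool2d(x, 3, 1, 1), F.avg_pool2d(y, 3, 1, 1)
    var_x = F.avg_pool2d(x * x, 3, 1, 1) - mu_x ** 2
    var_y = F.avg_pool2d(y * y, 3, 1, 1) - mu_y ** 2
    cov = F.avg_pool2d(x * y, 3, 1, 1) - mu_x * mu_y
    num = (2 * mu_x * mu_y + c1) * (2 * cov + c2)
    den = (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    return (num / den).clamp(0, 1)

def photometric(target, recon, alpha=0.85):
    """Per-pixel photometric error of Eq. (6)."""
    l1 = (target - recon).abs().mean(1, keepdim=True)
    s = ssim(target, recon).mean(1, keepdim=True)
    return (1 - alpha) * l1 + (alpha / 2) * (1 - s)

def min_reprojection(target, recon_prev, recon_next):
    """Per-pixel minimum of Eq. (7) over the two aligned frames; to be
    combined with the weighted smoothness term and averaged over pixels."""
    return torch.minimum(photometric(target, recon_prev),
                         photometric(target, recon_next))
```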
### Network Architecture

Our depth network is based on the U-Net architecture []. We use a ResNet [] pre-trained on ImageNet [] as the encoder for our depth network, and for the decoder we use an architecture similar to the one used by []. The difference is that we directly estimate depth by multiplying the output of the last sigmoid layer by 250, which is the maximum depth value that can be predicted, instead of estimating disparity as in []. The depth network takes as input a single target image \(\mathbf{I}_{t}\) and outputs the per-pixel depth estimates. Note that the output of our depth decoder is in metric scale. In DepthP+P, our second network takes \(\mathbf{I}_{w}\) and \(\mathbf{I}_{t}\) and outputs only the translation vector between the views. The network is similar to the pose network proposed in [], except that its output is a 3-element vector representing the translation, which is in metric scale in our case.

## 4 Experiments

### Dataset

#### KITTI:

We use the Eigen split [] of the KITTI dataset [] to train and evaluate our model. We use all of the images in the split for which we could accurately estimate the homography for aligning the road between consecutive images, as explained in the next paragraph. This results in 45000 training and 1769 validation samples. We evaluate our model on the 697 test images in the split using the original ground truth provided by LiDAR. We also report results using the improved ground truth for 652 test images provided by Uhrig et al. []. They use a stereo-reconstruction method to remove the outliers in LiDAR points and increase the ground truth density by accumulating laser scans, which results in high-quality ground truth data. The camera height in this dataset is \(\mathbf{d}_{c}=1.65\) m and we assume that the road is completely horizontal, i.e. \(\mathbf{N}=[0,1,0]^{T}\).

#### Pre-processing the dataset for DepthP+P:

In order to use our P+P approach, we need to calculate the homography between consecutive frames and warp the source frame according to the estimated homographies. Since we work on driving scenarios in KITTI, we choose the "road" as our plane \(\Pi\), which is visible in most of the frames. For calculating the homography, we need to find a set of (at least 4) corresponding pairs of road pixels between a source view \(\mathbf{I}_{s}\) and the target view \(\mathbf{I}_{t}\), i.e. two consecutive images. For this purpose, we compute the optical flow between \(\mathbf{I}_{s}\) and \(\mathbf{I}_{t}\) using [] to find the corresponding pixels. We then use [] to select only the pixels that belong to the semantic class "road". Using the corresponding pairs of road pixels, we estimate the homography \(\mathbf{H}\) using OpenCV's RANSAC-based robust method. We do this to find the homography \(\mathbf{H}\) for all consecutive pairs of frames in KITTI. Note that for any consecutive pair of frames \(\mathbf{I}_{1},\mathbf{I}_{2}\) and the homography \(\mathbf{H}\) between them, we use \(\mathbf{H}\) to warp \(\mathbf{I}_{1}\) towards \(\mathbf{I}_{2}\) and also use \(\mathbf{H}^{-1}\) to warp \(\mathbf{I}_{2}\) towards \(\mathbf{I}_{1}\).
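Since the text only names OpenCV's RANSAC-based method, a plausible sketch of this pre-processing step is given below; the function name and the reprojection threshold are our assumptions.

```python
import cv2
import numpy as np

def align_source_to_target(I_s, road_src_pts, road_dst_pts):
    """Warp the source image so that the road plane aligns with the target view.

    road_src_pts / road_dst_pts: Nx2 float arrays of corresponding "road"
    pixels in the source and target images, obtained from optical flow
    restricted to the road class. Threshold and names are our assumptions.
    """
    # RANSAC-based robust homography estimation, as mentioned in the text.
    H, _inliers = cv2.findHomography(road_src_pts, road_dst_pts,
                                     cv2.RANSAC, ransacReprojThreshold=3.0)
    h, w = I_s.shape[:2]
    # Warp the source image with H to obtain the aligned image I_w.
    I_w = cv2.warpPerspective(I_s, H, (w, h))
    return I_w, H
```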
### Depth Estimation Results

In Table 1, we report the depth estimation results of our method on the KITTI Eigen split using both the original and the improved ground truth. To the best of our knowledge, this is the first time that a deep learning model has been trained with view synthesis through the planar parallax paradigm (Eq. (2)). All of the previous methods are trained based on estimating the pose, whereas our method introduces a novel approach. We can see that our method achieves significantly better results than the initial models that predict pose and depth. After the initial proposal of SfMLearner by Zhou et al. [], several improvements have been proposed to improve its performance. Therefore, we believe that similar improvements can follow our model as future work to make it perform better than our initial proposal as well as the other state-of-the-art models that are trained to estimate the full pose.

Table 1: **Quantitative Results for Monocular Training on KITTI.** This table compares our proposed approach, **DepthP+P**, to previous approaches on the KITTI dataset that were trained only with monocular supervision. The **Scale** column specifies whether the method can estimate depth in metric scale. We provide results with the original and improved ground truth (GT column). We show the results for the input resolution \(640\times 192\). The best method in each column is shown in bold.

| GT | Method | Scale | Abs Rel | Sq Rel | RMSE | RMSE log | \(\delta<1.25\) | \(\delta<1.25^{2}\) | \(\delta<1.25^{3}\) |
|:--|:--|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
| Original | Zhou et al. [20] | ✗ | 0.183 | 1.595 | 6.709 | 0.270 | 0.734 | 0.902 | 0.959 |
| Original | Yang et al. [20] | ✗ | 0.182 | 1.481 | 6.501 | 0.267 | 0.725 | 0.906 | 0.963 |
| Original | Mahjourian et al. [20] | ✗ | 0.163 | 1.240 | 6.220 | 0.250 | 0.762 | 0.916 | 0.968 |
| Original | Yin et al. [20] | ✗ | 0.149 | 1.060 | 5.567 | 0.226 | 0.796 | 0.935 | 0.975 |
| Original | Wang et al. [20] | ✗ | 0.151 | 1.257 | 5.583 | 0.228 | 0.810 | 0.936 | 0.974 |
| Original | Zou et al. [20] | ✗ | 0.150 | 1.124 | 5.507 | 0.223 | 0.806 | 0.933 | 0.973 |
| Original | Yang et al. [20] | ✗ | 0.162 | 1.352 | 6.276 | 0.252 | - | - | - |
| Original | Ranjan et al. [20] | ✗ | 0.148 | 1.149 | 5.464 | 0.226 | 0.815 | 0.935 | 0.973 |
| Original | Luo et al. [20] | ✗ | 0.141 | 1.029 | 5.350 | 0.216 | 0.816 | 0.941 | 0.976 |
| Original | Chen et al. [20] | ✗ | 0.135 | 1.070 | 5.230 | 0.210 | 0.841 | 0.948 | 0.980 |
| Original | Godard et al. [20] | ✗ | **0.110** | 0.831 | 4.642 | **0.187** | **0.883** | **0.962** | **0.982** |
| Original | Guizilini et al. [20] | ✗ | 0.111 | **0.785** | **4.601** | 0.189 | 0.878 | 0.960 | **0.982** |
| Original | Safadoust et al. [20] | ✗ | **0.110** | 0.792 | 4.700 | 0.189 | 0.881 | 0.960 | **0.982** |
| Original | Xue et al. [20] | ✓ | 0.118 | 0.925 | 4.918 | 0.199 | 0.862 | 0.953 | 0.979 |
| Original | Wagstaff and Kelly [20] | ✓ | 0.123 | 0.996 | 5.253 | 0.213 | 0.840 | 0.947 | 0.978 |
| Original | DepthP+P (Ours) | ✓ | 0.152 | 1.322 | 6.185 | 0.239 | 0.781 | 0.920 | 0.970 |
| Improved | Zhou et al. [20] | ✗ | 0.176 | 1.532 | 6.129 | 0.244 | 0.758 | 0.921 | 0.971 |
| Improved | Mahjourian et al. [20] | ✗ | 0.134 | 0.983 | 5.501 | 0.203 | 0.827 | 0.944 | 0.981 |
| Improved | Yin et al. [20] | ✗ | 0.132 | 0.994 | 5.240 | 0.193 | 0.833 | 0.953 | 0.985 |
| Improved | Wang et al. [20] | ✗ | 0.126 | 0.866 | 4.932 | 0.185 | 0.851 | 0.958 | 0.986 |
| Improved | Ranjan et al. [20] | ✗ | 0.123 | 0.881 | 4.834 | 0.181 | 0.860 | 0.959 | 0.985 |
| Improved | Luo et al. [20] | ✗ | 0.120 | 0.789 | 4.755 | 0.177 | 0.856 | 0.961 | 0.987 |
| Improved | Godard et al. [20] | ✗ | 0.085 | 0.468 | 3.672 | 0.128 | 0.921 | 0.985 | 0.995 |
| Improved | Safadoust et al. [20] | ✗ | 0.085 | 0.458 | 3.779 | 0.131 | 0.919 | 0.985 | **0.996** |
| Improved | Guizilini et al. [20] | ✗ | **0.078** | **0.420** | **3.485** | **0.121** | **0.931** | **0.986** | **0.996** |
| Improved | DepthP+P (Ours) | ✓ | 0.134 | 1.042 | 5.566 | 0.199 | 0.820 | 0.946 | 0.983 |

Note that [] is not trained to estimate metrically accurate depth. Instead, its depth network outputs depth in an unknown scale, and then during inference, it needs a ground plane to be visible in the image to recover the scale of the network. When the ground plane is not visible in the image, [] fails completely, as shown in Fig. 3. As can be seen in this figure, this image from the KITTI dataset does not have a ground plane, and [] cannot recover the scale and produces completely wrong estimates. While our method needs a ground plane during training, it does not rely on the availability of the ground plane during inference, and therefore it can still perform well. For reference, the absolute relative (Abs Rel) error of [] on Fig. 3 is 1.178, while our model achieves an error of 0.252. [] achieves better results by using a pre-trained plane segmentation network in addition to the depth network, while our approach can achieve comparable results without a separate segmentation network.

Figure 3: **Qualitative Comparison.** We compare the absolute relative error of our depth estimation method (**left**) with DNet [] (**right**) on an image from the KITTI dataset without a ground plane. The colorbar on the right shows the values of the absolute relative error metric. We cap the max error at the value of 1.0 for visualization. We can see that [] completely fails to estimate the metric depth due to the wrong scale recovery because there is no ground plane in the image, while our model does not have this issue and can still perform well. The absolute relative error for [] is 1.178 while it is 0.252 for our method.

DepthP+P can also be trained with additional stereo supervision. In the proposed approach, we obtain monocular supervision from the P+P paradigm. In addition, using the known camera baseline and the estimated depth, we can warp the other image in the stereo setup to the input image for an additional supervision signal. In Table 2, we report the performances of the methods that also use stereo supervision for training. Using stereo supervision significantly improves the performance of our DepthP+P model, outperforming all methods except for Monodepth2 []. We show that by using a ResNet50 backbone instead of ResNet18, DepthP+P can obtain comparable results to Monodepth2 [].

Table 2: **Quantitative Results on KITTI with Additional Stereo Supervision.** We compare DepthP+P to previous approaches that use additional stereo supervision on KITTI. Stereo supervision significantly improves the results of the DepthP+P model. By using ResNet50, our model performs on par with Monodepth2 [].

| GT | Method | Abs Rel | Sq Rel | RMSE | RMSE log | \(\delta<1.25\) | \(\delta<1.25^{2}\) | \(\delta<1.25^{3}\) |
|:--|:--|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
| Original | Li et al. [] | 0.183 | 1.730 | 6.570 | 0.268 | - | - | - |
| Original | Zhan et al. [] | 0.135 | 1.132 | 5.585 | 0.229 | 0.820 | 0.933 | 0.971 |
| Original | Luo et al. [] | 0.128 | 0.935 | 5.011 | 0.209 | 0.831 | 0.945 | **0.979** |
| Original | Godard et al. [] | **0.106** | **0.818** | **4.750** | **0.196** | **0.874** | **0.957** | **0.979** |
| Original | DepthP+P (ResNet18) | 0.110 | 0.907 | 4.888 | 0.199 | 0.867 | 0.954 | **0.979** |
| Original | DepthP+P (ResNet50) | **0.106** | 0.900 | 4.828 | 0.198 | 0.871 | 0.954 | **0.979** |
| Improved | Zhan et al. [] | 0.130 | 1.520 | 5.184 | 0.205 | 0.859 | 0.955 | 0.981 |
| Improved | Luo et al. [] | 0.123 | 0.754 | 4.453 | 0.172 | 0.863 | 0.964 | 0.989 |
| Improved | Godard et al. [] | **0.080** | **0.466** | **3.681** | **0.127** | **0.926** | **0.985** | **0.995** |
| Improved | DepthP+P (ResNet18) | 0.088 | 0.572 | 3.905 | 0.138 | 0.911 | 0.981 | 0.994 |
| Improved | DepthP+P (ResNet50) | 0.084 | 0.543 | 3.784 | 0.134 | 0.916 | 0.982 | **0.995** |

## 5 Conclusion and Future Work

In this paper, we presented a new approach to self-supervised monocular depth estimation following the traditional planar parallax paradigm. We showed that our approach is able to produce metrically accurate depth estimates by using a known camera height. Unlike previous methods that rely on estimating the full rigid-body motion of the camera, our method only needs to estimate the camera translation. We discussed the advantage of our method compared to the other scale-aware depth prediction methods. We see our approach as a first step to unlocking the potential of plane and parallax for efficient and metrically accurate depth estimation. An exciting future direction can focus on detecting moving foreground objects by checking for violations of the plane and parallax constraints [].
2303.14089
Optimizing the Procedure of CT Segmentation Labeling
In Computed Tomography, machine learning is often used for automated data processing. However, increasing model complexity is accompanied by increasingly large volume datasets, which in turn increases the cost of model training. Unlike most work that mitigates this by advancing model architectures and training algorithms, we consider the annotation procedure and its effect on the model performance. We assume three main virtues of a good dataset collected for a model training to be label quality, diversity, and completeness. We compare the effects of those virtues on the model performance using open medical CT datasets and conclude, that quality is more important than diversity early during labeling; the diversity, in turn, is more important than completeness. Based on this conclusion and additional experiments, we propose a labeling procedure for the segmentation of tomographic images to minimize efforts spent on labeling while maximizing the model performance.
Yaroslav Zharov, Tilo Baumbach, Vincent Heuveline
2023-03-24T15:52:42Z
http://arxiv.org/abs/2303.14089v1
# Optimizing the Procedure of CT Segmentation Labeling

###### Abstract

In Computed Tomography, machine learning is often used for automated data processing. However, increasing model complexity is accompanied by increasingly large volume datasets, which in turn increases the cost of model training. Unlike most work that mitigates this by advancing model architectures and training algorithms, we consider the annotation procedure and its effect on the model performance. We assume three main virtues of a good dataset collected for a model training to be _label quality_, _diversity_, and _completeness_. We compare the effects of those virtues on the model performance using open medical CT datasets and conclude, that _quality_ is more important than _diversity_ early during labeling; the _diversity_, in turn, is more important than _completeness_. Based on this conclusion and additional experiments, we propose a labeling procedure for the segmentation of tomographic images to minimize efforts spent on labeling while maximizing the model performance.

Keywords: Segmentation, Computed Tomography, Labeling.

## 1 Introduction

Neural networks for computer vision have achieved results on par with human experts [2]. However, these models require large datasets to be trained, which leads to constant pressure to collect and label more training data. The bottleneck of collecting and sharing data becomes less prominent with increased automation, cheaper storage, and faster networks. Labeling, however, becomes more expensive as the data resolution grows and tasks become more complex. Subject-matter experts (e.g., medical practitioners) are, therefore, required to spend many hours as labeling experts at the cost of other duties. Multiple vectors of research are focused on the question of how to minimize the amount of labeled data required for training: transfer learning [13], self-supervised learning [4], active learning [10], etc. While only active learning modifies the labeling procedure, the others mostly assume the dataset is given.

In application areas of Computed Tomography (CT), the number of datasets is rapidly increasing due to advances in instrumentation and efficient digital pixel array detectors [7]. Due to the diversity of biological or technical specimens, and due to strict data requirements of the medical area (e.g., anonymity and disease representation), task-specific datasets are often collected. To help the community meet this growing demand for datasets, we consider the question of how to acquire the best dataset with minimal effort. To define what the best dataset is, we note that dataset collection is not the end of the process, but rather an intermediate step toward producing a good model. The value of a dataset collected to train a model, therefore, can be measured as the performance of a model trained on this dataset. We focus this paper on medical CT segmentation; however, our conclusions hold for other CT segmentation tasks, since we never impose assumptions that are specific to medical imaging data.

We assume that the main virtues of a good dataset collected for training are _quality_, _diversity_, and _completeness_. The _quality_ itself could be separated into the data quality and the label quality. In the case of data collected for model training, the data quality is defined by the application of the model. We will, therefore, exclude it from consideration.
The labeling quality is defined by the imperfectness of the annotation [8], where the segmentation labels are not accurately aligned with the actual anatomy presented in the images. Hereinafter, _quality_ refers to the label quality. The _diversity_ is understood as the ability to represent the general variety of samples controlled by known parameters (e.g., a patient's age, sex, and anamnesis). We understand _completeness_ as the ability to represent the natural fluctuations of human morphology (e.g., even twins can have slightly different morphological features). From the point of view of the data manifold, _diversity_ is the ability to sparsely cover the whole manifold with representative prototypical examples. In contrast, _completeness_ is the ability to populate it densely and represent the distribution nuances.

In this work, we want to identify the relative importance of these virtues for model performance. If we had all three virtues fulfilled to the maximal extent in a dataset, no model would be required, since any possible example would already be incorporated into the dataset, and the prediction function would be just a lookup table. In practice, however, we train models to interpolate the full manifold from the sparse points provided in the training dataset. Of course, improving a dataset by each virtue makes a better model. However, given limited time, experts balance them guided by intuition and often even implicitly. In contrast to the concurrent work [5], we focus on the segmentation task specifically, and consider only fully supervised training, as opposed to weak supervision as a way to vary label granularity. We also propose a labeling procedure that optimizes the effort.

## 2 Method

### Datasets Preparation & Model Training

In this study we choose the brain tumor, heart, and liver tasks from the Medical Decathlon segmentation datasets [1]. For the sake of simplicity, we joined all available classes to obtain binary segmentation; however, our private experience shows that the results hold for multiclass segmentation. For each dataset, we took the openly available markup and set aside 20% of the available data as the test set. The test set is selected once per dataset and never altered. We call the other 80% the train+val set, as it will be split again later on. In medical datasets, it is typical to have a relatively small number of volumes collected from a representative variety of patients [9]. Hence, we assume that the portion of random volumes used for training could be a proxy for the _diversity_, and we specify it as a number \(\in(0,1]\) representing this portion. Although this subsampling also affects _completeness_, as we show in Section 3.3 and especially in Figure 5, the model responds differently to diversity and completeness. We conclude that this is a plausible and sufficient proxy for the purposes of the qualitative comparisons presented in this paper. We follow [12], and always train a 2D model on slices, instead of a 3D one on volumes. Based on the assumption that adjacent slices represent small variations of roughly the same morphology, we use the portion of the slices used for training as a proxy for the _completeness_, which is reported as a number \(\in(0,1]\) as well. Finally, as a proxy for the _quality_ of the dataset, we take a subset of equidistant slices and interpolate the labels between them using the nearest neighbor approach.
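As a rough sketch of this quality proxy (the function names are ours; the paper does not provide code), the label volume can be rebuilt from every \(k\)-th slice and compared to the original via IoU:

```python
import numpy as np

def interpolate_labels(labels, step):
    """Nearest-neighbor interpolation of a label volume from equidistant slices.

    labels: (num_slices, H, W) binary mask volume; step: keep every `step`-th
    slice and fill the rest from the nearest kept slice. Illustrative only.
    """
    n = labels.shape[0]
    kept = np.arange(0, n, step)
    # For each slice index, find the nearest annotated (kept) slice.
    nearest = kept[np.argmin(np.abs(np.arange(n)[:, None] - kept[None, :]), axis=1)]
    return labels[nearest]

def label_iou(a, b):
    """IoU between two binary mask volumes; used here as the quality measure."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 1.0
```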
Varying the distance between slices, and therefore the interpolation errors, allows us to manipulate the label _quality_, which we report as a percent \(\in[0,100]\) representing the IoU between the label after interpolation and the original label.

To measure the model performance for some virtue value, we:

1. modify the train+val part of the dataset to model some virtue:
   * sample a portion of volumes to model _diversity_;
   * sample a portion of slices containing a mask to model _completeness_;
   * sample an equidistant set of slices and interpolate markup between them to model _quality_.
2. split the resulting data into train and val, at a ratio of 80 to 20.
3. upsample the train set so that the number of labeled slices is always equal to 80% of the labeled slices in the original train+val set;
4. fit the model on train, select the best snapshot on val, and measure the model quality on test.

We hypothesized that, while tuning the model and optimizer hyperparameters can change the model performance, it will not change the relative importance of the different dataset virtues for the model performance. Therefore, we always train the same model (UNet [11] with ResNet-18 [3] as the backbone), with the same optimizer (Adam [6] with a 3e-4 learning rate), for the same number of epochs (100 epochs, with a 10-epoch cooldown for early stopping). For each measurement, the median of 5 runs on random train+val splits is reported.
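A minimal sketch of this fixed training setup, assuming the `segmentation_models_pytorch` package and a standard binary segmentation loss (the paper does not specify its implementation details, and `train_loader` is a placeholder for a 2D-slice data loader):

```python
import torch
import segmentation_models_pytorch as smp

# UNet with a ResNet-18 backbone, as described above; binary segmentation.
model = smp.Unet(encoder_name="resnet18", encoder_weights="imagenet",
                 in_channels=1, classes=1)
optimizer = torch.optim.Adam(model.parameters(), lr=3e-4)
loss_fn = torch.nn.BCEWithLogitsLoss()  # our assumption; the loss is not specified

for epoch in range(100):
    for slices, masks in train_loader:  # 2D slices, per Section 2.1 (assumed loader)
        optimizer.zero_grad()
        loss = loss_fn(model(slices), masks)
        loss.backward()
        optimizer.step()
    # After each epoch: evaluate on val, keep the best snapshot,
    # and apply early stopping with a 10-epoch cooldown.
```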
### Results Interpretation

The target of optimizing the labeling procedure is to obtain the model with the best performance given a certain available effort budget for labeling. For example, an expert can roughly segment 10 volumes, or spend the same time precisely segmenting 3 volumes. For the experiments, we devise custom proxies of the effort measure. We leave the empirical measurement of the effort (e.g., as elapsed time) for future research; however, we consulted with experts involved in the segmentation process, and they concur with our estimation.

To find the optimal strategy we consider a plot where the horizontal axis describes the labeling effort spent, and the vertical axis represents the model performance (to make the plots clearer we normalize the model performance to the quality of the model trained on the unaltered data). For the same amount of effort spent pursuing different virtues, we will have different model qualities. The optimal labeling strategy is represented by a convex polyline that passes through the points on the plot in such a way that no points lie above it. Following this trajectory provides the best possible dataset at any given moment.

To understand the optimal strategy, we consider another plot. On the horizontal axis, we plot the model performance, and on the vertical axis the values of the compared virtues. For each point on the optimal trajectory, we add one point per virtue in comparison. Therefore, each vertically aligned set of points represents the virtues required to achieve a specific model performance. This way, moving along the horizontal axis, we can see which virtue should be pursued earlier on to stay on the optimal trajectory.

## 3 Results

### What is More Important, _Quality_ or _Diversity_?

To compare _quality_ and _diversity_, we define effort as the portion of the volumes used (as a measure of _diversity_) multiplied by _quality_. E.g., 0.1 of the volumes segmented with 80% IoU will result in 8% effort. The sampled plot with the optimal trajectory is shown in Figure 1.

Figure 1: Optimal trajectory w.r.t. _quality_ and _diversity_. The variation of the _quality_ is represented by color, of _diversity_ by the marker shape.

From this plot, we observe that the optimal trajectory connects the high-quality points, while low-quality points always fall far below the line. We show in Figure 2 how _quality_ and _diversity_ drive the optimal trajectory. From this plot we conclude that _quality_ is more important early on, even though we never use an IoU worse than 75% (which could be an admissible quality for small-area labels). However, as _quality_ reaches around 90%, increasing _diversity_ becomes as important or even more important than increasing _quality_.

Figure 2: Importance of the _diversity_ and _quality_. The orange line represents label _quality_, and blue represents _diversity_.

### How Much Labeling _Quality_ is Enough?

Increasing labeling quality up to 100% is challenging if not impossible. But where is a meaningful threshold of the labeling quality, after which the model performance stagnates? To investigate it, we plot the model performance against the labeling quality (see Figure 3). We also plot the performance of the model trained on sparsely segmented slices (each 5th, 10th, and 15th). We conclude that, first, if one cannot achieve good interpolation quality, it may be even harmful to interpolate, and, second, in accordance with the previous section, one should aim for 90% labeling quality before aiming for either _completeness_ or _diversity_.

Figure 3: Saturation of the model performance with the increase of the label _quality_. Additionally, the performance of the model trained on a small portion of 1.0 quality slices is represented with horizontal lines.

### What is More Important, _Diversity_ or _Completeness_?

To compare _diversity_ and _completeness_, we define the effort as the total percentage of slices segmented. E.g., if we sample 6 volumes from 10 available (_diversity_ = 0.6) and segment each 10th slice (_completeness_ = 0.1), then the effort is 0.06. The optimal trajectory plot is shown in Figure 4. The clearest demonstration of the importance of _diversity_ is in the bottom-right part: 0.2 _diversity_ with 1.0 _completeness_ is much worse than vice versa. In Figure 5 we demonstrate the importance of _diversity_ and _completeness_ for the optimal trajectory. Not only is _diversity_ more important early on, but _completeness_ also contributes less to the model performance (note the steeper growth of _completeness_ with the performance growth). Therefore, finding more diverse examples and segmenting more volumes should be preferred to segmenting more random variations and segmenting slices more densely/interpolating them.

Figure 4: Optimal trajectory w.r.t. _completeness_ and _diversity_. Variation of _diversity_ is represented by color, of _completeness_ by the marker shape.

Figure 5: Importance of the _diversity_ and _completeness_. The orange line represents label _completeness_, and blue represents _diversity_.

### How Much Data _Diversity_ is Enough?

How, then, do we define a reasonable limit at which to stop searching for _diversity_ and start increasing _completeness_? We plot the performance of the model w.r.t. _diversity_ in Figure 6, where each line represents a different set of virtues. Since the total amount of data possibly available is unknown, we cannot define a numerical limit. Instead, we note that all lines saturate at approximately the same point of increasing diversity. Hence, we can detect where to stop increasing _diversity_ and start increasing _completeness_ by continuously updating a segmentation model while expanding the dataset.

Figure 6: Saturation of the model performance with the increase of the _diversity_. No matter how other virtues vary (different colors), saturation comes at the same point.
## 4 Conclusion

In this paper, we have compared the importance of different ways to spend labeling efforts and presented a way to optimize the segmentation labeling procedure. We minimized the effort required to obtain a model of a specific quality. In general, we conclude that _quality_ is more important than _diversity_, which is more important than _completeness_. Based on our experiments, we propose the following procedure to minimize the effort spent labeling volumetric data for segmentation:

1. Start with segmenting slices, without interpolation. Aim for the maximal quality affordable without pixel hunting, at least 90%.
2. Decide on your time budget and distribute the slices to segment as evenly through diverse volumes as possible. Keep in mind, though, that the structure of interest may impose a minimal number of slices per volume to capture all parts of the structure.
3. Train a model as early in the process as possible. This allows, first, deciding which areas require more markup (by means of active learning, or just by an expert assessment of predictions), and, second, recognizing the moment when model performance starts to saturate w.r.t. _diversity_.
4. After hitting the saturation w.r.t. _diversity_, increase the _completeness_ either by adding more volumes or by interpolating more slices to squeeze out the last percent of performance.

We leave a rigorous study of the exact conversion between the theoretical effort metrics presented in this paper and empirical efforts (e.g., time spent) for future research.

#### 4.0.1 Acknowledgment.

Data used in this paper was published in [1] and is available under a Creative Commons license CC-BY-SA4.0.
2305.13468
Fluid pulsation modes and tidal deformability of anisotropic strange stars in light of the GW$170817$ event
The effects of the anisotropy on the fluid pulsation modes adopting the so-called Cowling approximation and tidal deformability of strange quark stars are investigated by using the numerical integration of the hydrostatic equilibrium, nonradial oscillations, and tidal deformability equations, being these equations modified from their standard form to include the anisotropic effects. The fluid matter inside the compact stars is described by the MIT bag model equation of state. For the anisotropy profile, we consider a local anisotropy that is both regular at the center and null at the star's surface. We find that the effect of the anisotropy is reflected in the fluid pulsation modes and tidal deformability. Finally, we analyze the correlation between the tidal deformability of the GW$170817$ event with the anisotropy.
José D. V. Arbañil, Cesar V. Flores, César H. Lenzi, Juan M. Z. Pretel
2023-05-22T20:27:49Z
http://arxiv.org/abs/2305.13468v2
# Fluid pulsation modes and tidal deformability of anisotropic strange stars in light of the GW170817 event

###### Abstract

The effects of the anisotropy on the fluid pulsation modes, adopting the so-called Cowling approximation, and on the tidal deformability of strange quark stars are investigated by using the numerical integration of the hydrostatic equilibrium, non-radial oscillation, and tidal deformability equations, these equations being modified from their standard form to include the anisotropic effects. The fluid matter inside the compact stars is described by the MIT bag model equation of state. For the anisotropy profile, we consider a local anisotropy that is both regular at the center and null at the star's surface. We find that the effect of the anisotropy is reflected in the fluid pulsation modes and tidal deformability. Finally, we analyze the correlation between the tidal deformability of the GW170817 event and the anisotropy.

## 1 Introduction

With all the recent detections of gravitational signals coming from the merger of binary systems reported by the LIGO-Virgo Collaboration (LVC) [1; 2; 3; 4; 5; 6; 7; 8; 9; 10], we can say that we live at the beginning of a new golden age in general relativity, the age of gravitational wave astronomy. In this sense, it is essential to invest our best efforts in studying the new quantitative, qualitative, and even exotic physical characteristics that could be present in future multi-messenger observations. Among those phenomena, it is well known that compact stars can be present as components of binary systems, so the behavior of a compact star prior to, during, and after the merger cannot be ignored. For example, when the binary system is very close, the tidal interaction plays an important role [10]; it could be a natural route to obtaining information about the equation of state (EOS) from the signals emitted during the merger of two compact stars.

The theoretical framework used to investigate the oscillation frequencies of stars is asteroseismology. This theory is a powerful tool that gives us a firm path in the search for traces of physics inside compact stars [11; 12; 13; 14]. In this way, the oscillation frequencies of such stars, namely the \(f\)- and \(p\)-modes [15; 16], would give us information about the composition and internal structure of such spherical objects (see, e.g., [17; 18; 19; 20; 21] and their references).

An important aspect to be analyzed in the study of compact objects is the tidal deformability [22; 23; 24; 25; 26]. As previously mentioned, this parameter gives us information about the equation of state hidden in the signals emitted by compact stars. Moreover, using the dimensionless tidal deformability, we can nowadays place some limits on the theory using the observational data of the event GW170817.

In this regard, we investigate the non-radial oscillation modes and tidal deformability of anisotropic strange quark stars. As reported, theoretical evidence indicates that anisotropies can emerge in highly dense media, for instance, such as those appearing in phase transitions [27], a pion condensed phase [28], a solid or superfluid nucleus [29; 30], or in the presence of a type-3A superfluid [31]. Since establishing a connection between the internal composition of the compact star and the results reported by observation is the purpose of many works, the tidal deformability results found in this work are analyzed in light of the deformability parameter obtained from the event GW170817; see Ref. [32].
This article is structured as follows: In Sec. 2, we present the Einstein field equation, the energy-momentum tensor, the stellar structure equations, the non-radial oscillation equations, and the tidal deformability equations together with their boundary conditions. In Sec. 3 we show the numerical method employed, the EOS, the anisotropic profile, and the scaling solution for the non-radial oscillation and tidal deformability equations. Moreover, in this section, we plot the change of the oscillation frequencies and tidal deformability with the anisotropy. Finally, in Sec. 4 we conclude. Throughout the entire work, in order to simplify our equations and also for numerical reasons, we employ units such that \(G=1=c\).

## 2 General Relativistic Formulation

### Einstein field equation

We start by writing the Einstein field equation in the presence of an anisotropic fluid: \[G^{\mu}_{\varphi}=8\pi T^{\mu}_{\varphi}, \tag{1}\] where the Greek indices \(\mu\), \(\varphi\), etc. run from 0 to 3; \(G^{\mu}_{\varphi}\) is the Einstein tensor, and \(T^{\mu}_{\varphi}\) represents the energy-momentum tensor, which is given by \[T^{\mu}_{\varphi}=(\rho+p_{t})u^{\mu}u_{\varphi}+p_{t}g^{\mu}_{\varphi}-\sigma k^{\mu}k^{\nu}g_{\nu\varphi}, \tag{2}\] with \(\rho\), \(p_{t}\), and \(\sigma=p_{t}-p_{r}\) being respectively the energy density, the tangential pressure, and the anisotropic pressure parameter, where \(p_{r}\) is the radial pressure. Besides, \(u_{\varphi}\) is the four-velocity of the fluid, \(k_{\varphi}\) denotes the radial unit vector, and \(g_{\mu\varphi}\) stands for the metric tensor. These 4-vectors must satisfy the following conditions: \[k_{\varphi}k^{\varphi}=1,\ \ \ \ u_{\varphi}k^{\varphi}=0,\ \ \ \ u_{\varphi}u^{\varphi}=-1. \tag{3}\]

### Static background equations

The unperturbed spherically symmetric line element, in Schwarzschild-like coordinates, is expressed in the form \[ds^{2}=-e^{2\Phi}dt^{2}+e^{2\Psi}dr^{2}+r^{2}(d\theta^{2}+\sin^{2}\theta d\phi^{2}), \tag{4}\] where the metric functions \(\Phi=\Phi(r)\) and \(\Psi=\Psi(r)\) depend on the radial coordinate \(r\) alone. Considering the spacetime metric (4) and the metric potential \[e^{-2\Psi}=\left(1-\frac{2m}{r}\right), \tag{5}\] the non-null components of the field equations (1) can be placed into the form \[m^{\prime}=4\pi\rho r^{2}, \tag{6}\] \[p^{\prime}_{r}=-(p_{r}+\rho)\left[4\pi rp_{r}+\frac{m}{r^{2}}\right]e^{2\Psi}+\frac{2\sigma}{r}, \tag{7}\] \[\Phi^{\prime}=-\frac{p^{\prime}_{r}}{\rho+p_{r}}+\frac{2\sigma}{r(\rho+p_{r})}. \tag{8}\] The function \(m(r)\) is the mass enclosed within the sphere of radius \(r\). Eqs. (6) and (7) represent respectively the mass conservation and the hydrostatic equilibrium equation [33; 34], modified from their original form to include the anisotropic factor [35]. This set of equations is known as the stellar structure equations. The prime (\({}^{\prime}\)) over a function denotes differentiation with respect to \(r\). To obtain the stellar equilibrium configurations, we integrate Eqs. (6)-(8) from the origin up to the radial coordinate where the radial pressure vanishes. In other words, the solution starts at the center of the star (\(r=0\)), where \[m(0)=0,\ \ \ \Psi(0)=0,\ \ \ \Phi(0)=\Phi_{c},\ \ \ \rho(0)=\rho_{c}, \tag{9}\] and the stellar surface (\(r=R\)) is determined by \[p_{r}(R)=0. \tag{10}\]
In addition, at \(r=R\), the interior spacetime metric connects smoothly with the Schwarzschild vacuum exterior solution, so that \[e^{2\Phi}=e^{-2\Psi}=1-\frac{2M}{R}, \tag{11}\] with \(M\) standing for the total mass of the star.

### Non-radial oscillation equations within the Cowling approximation

In non-radial oscillations of compact stars, the Cowling formalism [36; 37] is often used to calculate the oscillation frequencies (see, e.g., Refs. [38; 39]), since this method provides oscillation frequencies in good agreement with those obtained from the full general relativistic approach. In fact, in typical stellar models, discrepancies between the two methods of less than 20% and 10% are found for the \(f\)- and \(p_{1}\)-modes, respectively [40]. This good precision justifies its use to study, for example, the fluid pulsation modes of neutron stars in the presence of slow [41] and rapid [42] rotation, crust elasticity [43], internal anisotropy [44], and \(d\) dimensions [45].

To investigate the pulsation modes of anisotropic strange stars, the metric functions are kept fixed in the Cowling approximation, i.e., \(\delta g_{\mu\nu}=0\) [39]. In addition, the equations describing the fluid pulsation are obtained by perturbing the conservation equation of the energy-momentum tensor (2). We hence obtain \(\delta\left(\nabla_{\mu}T^{\mu\nu}\right)=0\). Projecting this relation both along the four-velocity \(u_{\nu}\) and orthogonally to it by employing the operator \(\mathcal{P}^{\nu}_{\mu}=\delta^{\nu}_{\mu}+u^{\nu}u_{\mu}\), we get respectively: \[u^{\nu}\nabla_{\nu}\delta\rho+\nabla_{\nu}\left(\left[(\rho+p_{t})\delta^{\nu}_{\mu}-\sigma k^{\nu}k_{\mu}\right]\delta u^{\mu}\right)+(\rho+p_{t})a_{\nu}\delta u^{\nu}-\nabla_{\nu}u_{\mu}\delta\left(\sigma k^{\nu}k^{\mu}\right)=0, \tag{12}\] \[\left(\delta\rho+\delta p_{t}\right)a_{\mu}+\left(\rho+p_{t}\right)u^{\nu}\left(\nabla_{\nu}\delta u_{\mu}-\nabla_{\mu}\delta u_{\nu}\right)+\nabla_{\mu}\delta p_{t}+u_{\mu}u^{\nu}\nabla_{\nu}\delta p_{t}-\mathcal{P}^{\nu}_{\mu}\nabla_{\alpha}\delta\left(\sigma k^{\alpha}k_{\nu}\right)=0, \tag{13}\] with \(a_{\mu}=u^{\nu}\nabla_{\nu}u_{\mu}\) being the four-acceleration. We take the Lagrangian fluid displacement vector components in the form \[\xi^{i}=\left(e^{-\Psi}W,-V\partial_{\theta},-\frac{V}{\sin^{2}\theta}\partial_{\phi}\right)\frac{Y_{\ell m}}{r^{2}}, \tag{14}\] with \(W=W(t,r)\) and \(V=V(t,r)\) being functions of the coordinates \(t\) and \(r\), and \(Y_{\ell m}=Y_{\ell m}(\theta,\phi)\) being the spherical harmonics. In such a way, the perturbation of the four-velocity through the Lagrangian perturbation vector \(\delta u^{\mu}=\left(0,\delta u^{r},\delta u^{\theta},\delta u^{\phi}\right)\) can be expressed as \[\delta u^{\mu}=\left(0,e^{-\Psi}\partial_{t}W,-\partial_{t}V\partial_{\theta},-\frac{\partial_{t}V}{\sin^{2}\theta}\partial_{\phi}\right)\frac{Y_{\ell m}e^{-\Phi}}{r^{2}}. \tag{15}\] Considering \(u^{\mu}=\left(e^{-\Phi},0,0,0\right)\), \(k^{\mu}=\left(0,e^{-\Psi},0,0\right)\), \(\sigma=\sigma(p_{r},\mu)\), \(W(t,r)=W(r)e^{i\omega t}\), and \(V(t,r)=V(r)e^{i\omega t}\) in Eqs. (12) and (13),
we arrive at the following system of equations: \[W^{\prime}=\frac{d\rho}{dp_{r}}\left[(1+\mathcal{X})\left(1+\frac{\partial\sigma}{\partial p_{r}}\right)^{-1}\frac{\omega^{2}r^{2}V}{e^{2\Phi-\Psi}}+\Phi^{\prime}W\right]-\mathcal{X}\left[\left(1+\frac{d\rho}{dp_{r}}\right)\frac{2W}{r}+\ell(\ell+1)e^{\Psi}V\right]-\ell(\ell+1)e^{\Psi}V, \tag{16}\] \[V^{\prime}=V\left[-\frac{\sigma^{\prime}}{\rho+p_{r}+\sigma}-\left(\frac{d\rho}{dp_{r}}+1\right)\left(\Phi^{\prime}+\frac{2}{r}\right)\frac{\mathcal{X}}{1+\mathcal{X}}+\frac{2}{r}\frac{\partial\sigma}{\partial p_{r}}+\left(1+\frac{\partial\sigma}{\partial p_{r}}\right)^{-1}\left(\frac{\partial^{2}\sigma}{\partial p_{r}^{2}}p_{r}^{\prime}+\frac{\partial^{2}\sigma}{\partial p_{r}\partial\mu}\mu^{\prime}\right)\right]+2V\Phi^{\prime}-\left(1+\frac{\partial\sigma}{\partial p_{r}}\right)\frac{1}{1+\mathcal{X}}\frac{e^{\Psi}W}{r^{2}}, \tag{17}\] where we have defined \(\mathcal{X}=\sigma/(\rho+p_{r})\) and, following Ref. [44], we consider \(\delta\sigma=(\partial\sigma/\partial p_{r})\delta p_{r}\), with \(\delta\mu=0\) (\(\mu=2m/r\)) and \(\omega\) representing the oscillation eigenfrequency. These two differential equations reduce to those found in [39] when \(\sigma=0\).

To solve Eqs. (16) and (17) from the center (\(r=0\)) toward the stellar surface (\(r=R\)), we need to impose boundary conditions. Thus, at \(r=0\), we consider that the functions \(W\) and \(V\) assume the respective forms \[W=Cr^{\ell+1},\qquad V=-C\frac{r^{\ell}}{\ell}, \tag{18}\] with \(C\) representing a dimensionless constant. Moreover, at \(r=R\) (where \(p_{r}=0\)) it is found that \[\left[1+\mathcal{X}\right]\frac{\omega^{2}V}{e^{2\Phi}}+\left[1+\frac{\partial\sigma}{\partial p_{r}}\right]\left[\frac{r\Phi^{\prime}}{2}-\mathcal{X}\right]\frac{2W}{e^{\Psi}r^{3}}=0. \tag{19}\] Hereafter, to compare our results with those shown in the literature, e.g., Refs. [44; 46], we restrict our results to the quadrupolar modes (\(\ell=2\)).

### Tidal deformability

The theory of tidal Love numbers is frequently applied in the context of binary compact star systems. In this scheme, the gravitational effects caused by one star can result in the deformation of its companion. Such a deformation, produced by an external field, can be measured through the tidal deformability parameter \(\lambda\). This parameter can be expressed as follows: \[\lambda=-\frac{Q_{ij}}{\epsilon_{ij}}, \tag{20}\] with \(Q_{ij}\) and \(\epsilon_{ij}\) representing the quadrupole moment and an external tidal field; see Refs. [47; 26; 24]. The relation that directly connects the tidal deformability parameter and the quadrupolar Love number \(k_{2}\) is given by \[k_{2}=\frac{3}{2}\lambda R^{-5}. \tag{21}\] Furthermore, the dimensionless tidal deformability \(\Lambda\), as a function of the Love number \(k_{2}\), follows from the relation \[\Lambda=\frac{2k_{2}}{3C^{5}}, \tag{22}\] where \(C=M/R\) represents the compactness parameter. \(k_{2}\) can also be written in terms of \(C\). Thus, we have \[k_{2}=\frac{8C^{5}}{5}(1-2C)^{2}\left[2+2C(y_{R}-1)-y_{R}\right]\Big\{2C\left[6-3y_{R}+3C(5y_{R}-8)\right]+4C^{3}\left[13-11y_{R}+C(3y_{R}-2)+2C^{2}(1+y_{R})\right]+3(1-2C)^{2}\left[2-y_{R}+2C(y_{R}-1)\right]\ln(1-2C)\Big\}^{-1}, \tag{23}\] with the function \(y_{R}=y(r=R)\).
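As a small numerical aid, Eqs. (21)-(23) can be evaluated directly once \(C\) and \(y_{R}\) are known (for strange stars, \(y_{R}\) must first include the surface correction introduced below); a sketch, with names of our choosing:

```python
import math

def love_number_k2(C, yR):
    """Quadrupolar tidal Love number k2(C, y_R), Eq. (23); valid for C < 1/2."""
    num = (8.0 / 5.0) * C**5 * (1 - 2*C)**2 * (2 + 2*C*(yR - 1) - yR)
    den = (2*C*(6 - 3*yR + 3*C*(5*yR - 8))
           + 4*C**3*(13 - 11*yR + C*(3*yR - 2) + 2*C**2*(1 + yR))
           + 3*(1 - 2*C)**2*(2 - yR + 2*C*(yR - 1))*math.log(1 - 2*C))
    return num / den

def dimensionless_tidal_deformability(C, yR):
    """Eq. (22): Lambda = 2 k2 / (3 C^5)."""
    return 2.0 * love_number_k2(C, yR) / (3.0 * C**5)
```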
In addition, the function \(y(r)\) satisfies the Riccati differential equation \[y^{\prime}r+y^{2}+y\left(C_{0}r-1\right)+C_{1}r^{2}=0, \tag{24}\] where \[C_{0}=\frac{2m}{r^{2}}e^{2\Psi}+4\pi e^{2\Psi}\left(p_{r}-\rho\right)r+\frac{2}{r}, \tag{25}\] \[C_{1}=4\pi e^{2\Psi}\left[4\rho+4p_{r}+4p_{t}+\frac{p_{r}+\rho}{Ac_{s}^{2}}\left(c_{s}^{2}+1\right)\right]-\frac{6}{r^{2}}e^{2\Psi}-4\Phi^{\prime 2}, \tag{26}\] with \(c_{s}^{2}=\frac{dp_{r}}{d\rho}\) and \(A=\frac{dp_{t}}{dp_{r}}\). Comparing Eqs. (25) and (26) with the forms presented in Refs. [48; 49], we see that our \(C_{0}\) and \(C_{1}\) are, respectively, in agreement and in contradiction with those presented in these two works. Note that the first-order differential equation (24) is derived from the second-order differential equation for the function \(H\) in the quadrupolar case (\(\ell=2\)), Eq. (A11) of Appendix A, by using \(y=rH^{\prime}/H\). Moreover, if we consider \(p_{t}=p_{r}\) (i.e., \(A=1\)), Eq. (24) reduces to the isotropic case (see [25]). In particular, for strange quark stars--where the energy density at the star's surface is finite and non-null--a correction term is required in the calculation of \(y_{R}\). Thus, due to this energy discontinuity, \(y_{R}\) becomes [50; 51; 52; 53] \[y_{R}\longrightarrow y_{R}-\frac{4\pi R^{3}\rho_{s}}{M}, \tag{27}\] where \(\rho_{s}\) represents the energy density difference between the internal and external regions.

## III Results

### Numerical method

To investigate the influence of anisotropy on the oscillation spectrum and tidal deformability of strange stars--once the EOS and the anisotropy profile are defined--the stellar structure equations (5)-(8), the non-radial oscillation equations (16)-(17), and the tidal deformability equations (22)-(26) are integrated from the center (\(r=0\)) to the star's surface (\(r=R\)). We first integrate Eqs. (5)-(8) from the center to the star's surface through the fourth-order Runge-Kutta method, for different values of \(\kappa\) and \(\rho_{c}\). Once the parameters \(p_{r}\), \(p_{t}\), \(\rho\), \(m\), and \(\Phi\) are determined, Eq. (8) is solved by using the shooting method: the integration begins with a trial value of \(\Phi_{c}\), and if the equality shown in Eq. (11) is not attained after the numerical solution, \(\Phi_{c}\) is corrected until this condition is satisfied. The numerical solutions of the non-radial oscillation and tidal deformability equations are respectively described below:

* The fluid perturbation mode equations, Eqs. (16)-(17), are integrated from the center to the star's surface. The integration begins by taking into account the correct value of \(\Phi_{c}\) in the stellar structure equations, for a particular value of \(\kappa\) and \(\rho_{c}\), and a trial value of \(\omega^{2}\). If the equality (19) is not attained after the numerical integration, \(\omega^{2}\) is corrected in the next integration until this condition is satisfied.
* The tidal deformability equations, Eqs. (22)-(26), are integrated along the radial coordinate \(r\), which goes from \(0\) to \(R\). The integration starts by employing the correct value of \(\Phi_{c}\) in the stellar structure equations for a particular value of \(\kappa\) and \(\rho_{c}\).
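A schematic implementation of the background integration might look as follows. We work in the dimensionless tilde variables introduced in the scaling discussion below (effectively setting \(\mathcal{B}=1\)), use the EOS of Eq. (28) and the anisotropy profile of Eq. (29) defined in the next subsection, and all function names are our own; this is a sketch, not the authors' code.

```python
import numpy as np

# MIT bag model EOS, Eq. (28), in units with B = 1: rho = 3 p_r + 4.
def rho_of_p(p):
    return 3.0 * p + 4.0

def structure_rhs(r, y, kappa):
    """Right-hand side of Eqs. (6)-(8): y = (m, p_r, Phi)."""
    m, p, Phi = y
    rho = rho_of_p(p)
    e2psi = 1.0 / (1.0 - 2.0 * m / r)            # from Eq. (5)
    sigma = kappa * p * (1.0 - 1.0 / e2psi)      # anisotropy profile, Eq. (29)
    dm = 4.0 * np.pi * rho * r**2
    dp = -(p + rho) * (4.0 * np.pi * r * p + m / r**2) * e2psi + 2.0 * sigma / r
    dPhi = -dp / (rho + p) + 2.0 * sigma / (r * (rho + p))
    return np.array([dm, dp, dPhi])

def integrate_star(rho_c, kappa, dr=1e-4):
    """Fourth-order Runge-Kutta from the center until p_r drops to zero."""
    r = 1e-6
    p_c = (rho_c - 4.0) / 3.0                    # invert the EOS at the center
    y = np.array([0.0, p_c, 0.0])                # m(0)=0; Phi_c later fixed by Eq. (11)
    while y[1] > 1e-10:                          # stop once the surface is crossed
        k1 = structure_rhs(r, y, kappa)
        k2 = structure_rhs(r + dr/2, y + dr*k1/2, kappa)
        k3 = structure_rhs(r + dr/2, y + dr*k2/2, kappa)
        k4 = structure_rhs(r + dr, y + dr*k3, kappa)
        y = y + dr * (k1 + 2*k2 + 2*k3 + k4) / 6.0
        r += dr
    return r, y[0]                               # radius R and mass M in B = 1 units
```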
### Equation of state and anisotropic profile

To depict the strange quark fluid that makes up the compact object, the MIT bag model EOS is employed. This EOS describes a fluid containing only up, down, and strange quarks that are massless and non-interacting, confined by a bag constant \(\mathcal{B}\). For the anisotropic fluid analyzed, we assume that the radial pressure and the energy density are related by the equality: \[p_{r}=\frac{1}{3}\left(\rho-4\,\mathcal{B}\right). \tag{28}\] This EOS is widely employed because of the theoretical possibility that strange matter can be the ground state of strongly interacting matter and could appear in compact stars [54]. In [55] this hypothesis is verified for a bag constant in the range of \(57\) to \(94\,[\mathrm{MeV/fm^{3}}]\). Following [56], we employ \(\mathcal{B}=60\,[\mathrm{MeV/fm^{3}}]\).

For the anisotropic pressure profile, inspired by [56; 57; 58; 59; 60], we use the quasilocal form \(\sigma=\sigma(p_{r},\Psi)\). It depends on quantities that carry information about both the state of the fluid and the geometry at a particular interior point of the spacetime. Thus, we consider the anisotropic profile: \[\sigma=\kappa p_{r}\left(1-e^{-2\Psi}\right), \tag{29}\] with \(\kappa\) being a dimensionless anisotropic constant. The relation (29) was used, for instance, to investigate the influence of the anisotropy on the radial oscillations of polytropic stars [57; 58] and strange stars [56], the non-radial oscillations of neutron stars [44], the magnetic field structure [59], and slowly rotating neutron stars [60].

### Scaling solution of the non-radial oscillation and tidal deformability equations

In the literature, it has been reported that when a linear EOS is used to describe the fluid of a star, e.g., the MIT bag model EOS, both the stellar structure and radial oscillation equations admit a scaling law for several star properties [54; 56; 61]. This means that if a star's properties are known for a given \(\mathcal{B}\), these properties can be found for another value \(\mathcal{B}^{\prime}\). For the stellar structure, non-radial oscillation, and tidal deformability equations a scaling law can also be used. This can be realized through the following variables: \[\tilde{p}_{r}=\frac{p_{r}}{\mathcal{B}},\quad\tilde{\rho}=\frac{\rho}{\mathcal{B}},\quad\tilde{\sigma}=\frac{\sigma}{\mathcal{B}},\quad\tilde{m}=m\sqrt{\mathcal{B}},\] \[\tilde{r}=r\sqrt{\mathcal{B}},\quad\tilde{\omega}=\frac{\omega}{\sqrt{\mathcal{B}}},\quad\tilde{W}=\frac{W}{e}, \tag{30}\] \[\tilde{V}=\frac{V}{f},\quad\tilde{C}_{0}=\frac{C_{0}}{\sqrt{\mathcal{B}}},\quad\tilde{C}_{1}=\frac{C_{1}}{\mathcal{B}},\quad\tilde{y}=y,\] where \(f=\sqrt{\mathcal{B}}e\), with \(f\) and \(e\) positive and non-null. Considering this scaling law, the stellar structure, non-radial oscillation, and tidal deformability equations keep their original form. Thus, knowing the properties of a star for a certain value of \(\mathcal{B}\), the properties of another star with a different value \(\mathcal{B}^{\prime}\) can be determined considering the scale: \[\frac{\rho_{c}^{\prime}}{\mathcal{B}^{\prime}}=\frac{\rho_{c}}{\mathcal{B}},\quad M^{\prime}\sqrt{\mathcal{B}^{\prime}}=M\sqrt{\mathcal{B}},\quad R^{\prime}\sqrt{\mathcal{B}^{\prime}}=R\sqrt{\mathcal{B}},\] \[\frac{\omega^{\prime}}{\sqrt{\mathcal{B}^{\prime}}}=\frac{\omega}{\sqrt{\mathcal{B}}},\quad\Lambda^{\prime}=\Lambda, \tag{31}\] with \(\rho_{c}\) being the central energy density.
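A short sketch of how the scaling of Eq. (31) can be applied in practice (the function name is ours): a configuration computed once, say at \(\mathcal{B}=60\,\mathrm{MeV/fm^{3}}\), is rescaled to any other bag constant.

```python
import math

def rescale_configuration(M, R, omega, Lambda, B_old, B_new):
    """Apply Eq. (31): M sqrt(B), R sqrt(B), and omega / sqrt(B) are invariant,
    and the dimensionless tidal deformability Lambda does not change."""
    s = math.sqrt(B_old / B_new)
    return M * s, R * s, omega / s, Lambda
```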
### Oscillation spectrum of the anisotropic strange stars

The frequency and the eigenfrequency normalized by the average density \(\sqrt{M/R^{3}}\), versus the total mass \(M/M_{\odot}\), are respectively presented in the left and right panels of Fig. 1 for five values of \(\kappa\). The top and bottom panels show the results for the \(f\)- and \(p_{1}\)-modes, respectively. In the left panels, in the \(f\)-mode frequency case, we note that the curves decrease with the increment of the total mass until attaining a minimum value; after this point the curves turn anticlockwise and grow with \(M/M_{\odot}\). In turn, in the \(p_{1}\)-mode frequency case, we find that the curves decrease monotonically with the increment of \(M/M_{\odot}\). In the right panels, the normalized eigenfrequencies of the \(f\)- and \(p_{1}\)-modes decay monotonically with the growth of \(M/M_{\odot}\).

Furthermore, from the figures, we can also see that the anisotropy affects the pulsation modes of the fluid. We find that both the \(f\)- and \(p_{1}\)-modes change with \(\kappa\). For larger \(\kappa>0\) (more negative \(\kappa<0\)), stars have larger (smaller) \(f_{f}\), \(f_{p_{1}}\), \(\omega_{f}\left(R^{3}/M\right)^{0.5}\), and \(\omega_{p_{1}}\left(R^{3}/M\right)^{0.5}\). This change in frequency is associated with the fact that the radial pressure changes with the anisotropy; see [56].

Figure 1: Upper panels: Oscillation frequency \(f_{f}\) (left side) and normalized frequency \(\omega_{f}\) (right side) as functions of the total gravitational mass for the \(f\)-mode. Meanwhile, the frequencies corresponding to the \(p_{1}\)-mode are shown in the lower plots. We have used five values for the anisotropy parameter \(\kappa\), where the isotropic solution is represented by the black curve. It can be observed that the \(f\)-mode frequencies increase (decrease) because of a positive (negative) anisotropy. Something similar occurs in the case of the \(p_{1}\)-mode frequencies; however, the impact of anisotropy is more significant only in the high-mass branch.

### Tidal deformability of the anisotropic strange stars

The dimensionless tidal deformability as a function of the total mass is shown on the top of Fig. 2 for different values of \(\kappa\). These results are contrasted with the value \(\Lambda_{1.4}=190^{+390}_{-120}\) obtained by LVC [10]. In all curves, we note that the tidal deformability decreases monotonically with the increment of the total mass. On the other hand, the effects of anisotropy on the tidal deformability are also observed. We find that for a larger (more negative) value of \(\kappa>0\) (\(\kappa<0\)), greater (smaller) values of \(\Lambda\) are derived for the same mass. All these curves are within the range of \(\Lambda_{1.4}\) reported by LVC in [10]. On the bottom of Fig. 2, it is possible to see in more detail the effect of the anisotropic parameter on \(\Lambda_{1.4}\): the dimensionless tidal deformability undergoes a slight increment (decrement) with the increase (decrease) of the dimensionless anisotropic constant.

The top and bottom panels of Fig. 3 respectively show the oscillation frequencies \(f_{f}\) and \(f_{p_{1}}\) against the dimensionless tidal deformability for different values of \(\kappa\). These results are contrasted with the \(\Lambda_{1.4}=190^{+390}_{-120}\) reported by LVC; see [10]. In the figure, we note that the \(f\)-mode (\(p_{1}\)-mode) frequency decreases (increases) monotonically with the increment of the dimensionless tidal deformability.
In addition, within the interval delimited by the observation, we note that the frequency as a function of the deformability exhibits an almost linear behavior.

Figure 2: Top: Dimensionless tidal deformability versus the total mass for several values of \(\kappa\). Bottom: \(\Lambda_{1.4}\) as a function of the dimensionless anisotropic constant \(\kappa\). The vertical and horizontal dashed straight lines represent \(\Lambda_{1.4}=190^{+390}_{-120}\) reported by LVC in Ref. [10].

Figure 3: Oscillation frequencies \(f_{f}\) and \(f_{p_{1}}\) against the tidal deformability for different values of \(\kappa\), plotted in the top and bottom panels. The vertical dashed straight lines mark the tidal deformability \(\Lambda_{1.4}=190^{+390}_{-120}\) from the event GW170817 estimated in Ref. [10].

The data obtained by LVC allowed the authors of [8] to establish some constraints on \(\Lambda_{1}\) and \(\Lambda_{2}\), the dimensionless tidal deformabilities of the binary system, where \(\Lambda_{1}\) is the dimensionless tidal deformability parameter of the star with the higher mass in the binary system and \(\Lambda_{2}\) represents the same parameter of the companion star. In Fig. 4, we plot the diagram \(\Lambda_{1}\times\Lambda_{2}\), where the curves \(\Lambda_{1}-\Lambda_{2}\) are obtained by first choosing a value of \(M_{1}\) and then determining \(M_{2}\) via the chirp mass \(\mathcal{M}=1.188\,M_{\odot}\) [8], defined by \(\mathcal{M}=(M_{1}\,M_{2})^{3/5}/(M_{1}+M_{2})^{1/5}\). Moreover, the values considered for \(M_{1}\) and \(M_{2}\) are within the ranges \(1.36\leq M_{1}/M_{\odot}\leq 1.60\) and \(1.17\leq M_{2}/M_{\odot}\leq 1.36\), respectively. We also represent the lines of \(50\%\) and \(90\%\) credibility levels related to the GW170817 event established by LVC in the low-spin prior scenario. Either for \(\kappa>0\) or \(\kappa<0\), we clearly note the influence of the anisotropic parameter on the tidal deformability. All the curves derived are within the confidence lines taken from Ref. [8].

Figure 4: Dimensionless tidal deformabilities for the components of the GW170817 event for different cases of the anisotropic parameter \(\kappa\). The yellow line represents the LIGO-Virgo confidence curves [8], and the dotted diagonal line denotes the values that correspond to \(\Lambda_{1}=\Lambda_{2}\).

Finally, we study the dimensionless parameter \(\tilde{\Lambda}\), which is measurable through the gravitational-wave event of a binary system. \(\tilde{\Lambda}\) is obtained as follows [22]: \[\tilde{\Lambda}=\frac{16}{13}\frac{\left(M_{1}+12M_{2}\right)M_{1}^{4}\Lambda_{1}+\left(M_{2}+12M_{1}\right)M_{2}^{4}\Lambda_{2}}{\left(M_{1}+M_{2}\right)^{5}}. \tag{32}\] As can be seen, it is calculated using the masses and dimensionless tidal deformabilities of the stars forming the binary system.
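A small sketch of this computation, with the chirp-mass constraint used to pair \(M_{2}\) with a chosen \(M_{1}\); the function names and the root-finding bracket are our choices, not the authors'.

```python
from scipy.optimize import brentq

M_CHIRP = 1.188  # chirp mass of GW170817 in solar masses [8]

def companion_mass(m1):
    """Solve (m1 m2)^(3/5) / (m1 + m2)^(1/5) = M_CHIRP for m2.

    The left-hand side is monotonic in m2, so a single root lies in the
    bracket; note that m2 <= m1 only holds for m1 above the equal-mass value.
    """
    f = lambda m2: (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2 - M_CHIRP
    return brentq(f, 0.5, 2.0)

def combined_tidal_deformability(m1, m2, lam1, lam2):
    """Eq. (32): mass-weighted combination of the two deformabilities."""
    num = (m1 + 12 * m2) * m1**4 * lam1 + (m2 + 12 * m1) * m2**4 * lam2
    return 16.0 / 13.0 * num / (m1 + m2) ** 5
```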
To describe the fluid inside the star we assume the MIT bag model equation of state and to the anisotropy factor we employ the relation \(\sigma=\kappa p_{r}(1-e^{-2\Psi})\). Regarding the fluid pulsation modes, it is noted that \(f\)-mode changes considerably with the anisotropy, in contrast with the \(p_{1}\)-mode frequencies do not change much in the presence of anisotropy. We also study the compatibility of dimensionless tidal deformability anisotropic strange stars with observational data reported by the LVC from the GW170817 event. In this scenario, we noted that the results reported in this article are within the set of observational data considered in this work. It is important to highlight that other anisotropy pressure profiles can be used in strange stars [46; 56], analyzing their dimensionless tidal deformability we can investigate the viability of the anisotropic profile and put some constraints using the same approach followed here. It should be noted that the deformability value increases with \(\kappa\) and decreases with \(-\kappa\). This is in agreement with the study of polytropic stars investigated in [58]. However, it is in discrepancy with studies reported in [48] and [62], where the deformability profile is also investigated and how it changes with anisotropy \(\sigma=\kappa(\rho+p_{r})(\rho+3p_{r})r^{2}e^{2\Psi}/3\). From this, we can understand that the deformability increases or decreases with \(\kappa\) depending on the type of anisotropic profile employed. Additionally, it should be noted that in the literature we find works where the deformability parameter of strange stars is analyzed under different contexts. For example, in the reference [51] this parameter is analyzed considering quark matter in the color-flavor-locked (CFL) phase of color superconductivity, in [63] this factor is investigated taking into account isospin effects in strange quark matter, and in the article [64] the tidal deformability are studied under the hypothesis that the quasiparticle model includes the non-perturbing characteristics of quantum chromodynamics in the low-density region. In the works in question, as well as in the present study, the light from the event GW170817 is used to set limits to the study of strange stars under the backgrounds aforementioned. Finally, it can be mentioned that the detectability of the oscillations modes is an important issue to be considered. This detectability is in a strong relationship with the parameters of the detectors, the most important being the sensitivity and frequency range. Moreover, it has to be considered that there are future planned upgrades for the actual operating LIGO-Virgo, in this case, the upgraded detector is called LIGO-Voyager [65]. In addition to this, it is well known that the scientific community has taken seriously the idea to build more technologically advanced gravitational wave detectors, we can mention the third-generation detectors: Einstein Telescope [66], Cosmic Explorer [67] and NEMO [68]. In this sense, the NEMO detector has a sensitivity of \(10^{-24}\,[\mathrm{Hz}]^{-1}\). There Figure 4: Dimensionless tidal deformabilities for the components of the GW170817 event for different cases of anisotropic parameter \(k\). The yellow line represents the LIGO-Virgo confidence curves [8], and the dotted diagonal line denotes the values that correspond to \(\Lambda_{1}=\Lambda_{2}\). 
As can be seen, with all the planned detectors, the observation of the oscillation modes of a compact star is a matter of time, and theoretical research in this direction is very important.

###### Acknowledgements.

JDVA thanks Universidad Privada del Norte and Universidad Nacional Mayor de San Marcos for the financial support - RR N\({}^{0}\) 005753-2021-R/UNMSM under the project number B21131781. JMZP acknowledges financial support from the PCI program of the Brazilian agency Conselho Nacional de Desenvolvimento Científico e Tecnológico-CNPq.

## Appendix A Tidal deformability equations for the anisotropic case

To derive the differential equations used to investigate the dimensionless tidal deformability in the anisotropic case, we start by considering the perturbed field equations:

\[\delta G^{\mu}_{\ \nu}=8\pi\delta T^{\mu}_{\ \nu}, \tag{A1}\]

and, following Thorne and Campolattaro's work [69], we use the linear perturbation of the background metric tensor of the form

\[g^{(*)}_{\alpha\beta}=g_{\alpha\beta}+h_{\alpha\beta}, \tag{A2}\]

with \(g_{\alpha\beta}\) and \(h_{\alpha\beta}\) standing for the unperturbed metric tensor and the linearized metric perturbation, respectively. With these specializations, \(h_{\alpha\beta}\) can be written as [69; 70]:

\[h_{\alpha\beta}=\mathrm{diag}\left[He^{2\Phi},He^{2\Psi},r^{2}K,r^{2}K\sin^{2}\theta\right]Y_{\ell m}, \tag{A3}\]

where \(H=H(r)\) and \(K=K(r)\) depend on the radial coordinate, and \(Y_{\ell m}=Y_{\ell m}(\theta,\phi)\) is a function of the angular coordinates. Expanding the fluid perturbation variables in terms of \(Y_{\ell m}\), from the perturbed field equations (A1) we find:

\[\left[e^{-2\Psi}\left(K^{\prime\prime}-K^{\prime}\Psi^{\prime}-\frac{H^{\prime}}{r}+\frac{3K^{\prime}}{r}-\frac{H}{r^{2}}+\frac{2H}{r}\Psi^{\prime}\right)-\frac{H\ell(\ell+1)}{2r^{2}}+\frac{K}{r^{2}}-\frac{K\ell(\ell+1)}{2r^{2}}\right]Y_{\ell m}=-8\pi\delta\rho, \tag{A4}\]

\[\left[e^{-2\Psi}\left(K^{\prime}\Phi^{\prime}-2H\frac{\Phi^{\prime}}{r}-\frac{H^{\prime}}{r}+\frac{K^{\prime}}{r}-\frac{H}{r^{2}}\right)+\frac{K}{r^{2}}+\frac{\ell(\ell+1)(H-K)}{2r^{2}}\right]Y_{\ell m}=8\pi\delta p_{r}, \tag{A5}\]

\[\left[rHe^{2\Phi}\Psi^{\prime}\Phi^{\prime}-rHe^{2\Phi}\Phi^{\prime 2}-rHe^{2\Phi}\Phi^{\prime\prime}+\frac{re^{2\Phi}H^{\prime}\Psi^{\prime}}{2}-\frac{3}{2}re^{2\Phi}H^{\prime}\Phi^{\prime}-\frac{re^{2\Phi}H^{\prime\prime}}{2}-\frac{re^{2\Phi}K^{\prime}\Psi^{\prime}}{2}\right.\]
\[\left.+\frac{re^{2\Phi}K^{\prime}\Phi^{\prime}}{2}+\frac{re^{2\Phi}K^{\prime\prime}}{2}+He^{2\Phi}\Psi^{\prime}-He^{2\Phi}\Phi^{\prime}-H^{\prime}e^{2\Phi}+K^{\prime}e^{2\Phi}\right]\frac{e^{-2(\Psi+\Phi)}}{r}Y_{\ell m}=8\pi\delta p_{t}, \tag{A6}\]

\[\left[\frac{H\Phi^{\prime}}{r^{2}}+\frac{H^{\prime}}{2r^{2}}-\frac{K^{\prime}}{2r^{2}}\right]\partial_{\theta}Y_{\ell m}=0. \tag{A7}\]

Substituting Eq. (A7), which gives \(K^{\prime}=2H\Phi^{\prime}+H^{\prime}\) and \(K^{\prime\prime}=2H^{\prime}\Phi^{\prime}+2H\Phi^{\prime\prime}+H^{\prime\prime}\), into the difference of Eq. (A4) and Eq. (A5) and into Eq. (A6), we obtain, respectively:
\[-2Y_{\ell m}e^{-2\Psi}H\Psi^{\prime}\Phi^{\prime}-Y_{\ell m}e^{-2\Psi}H^{\prime}\Psi^{\prime}+2Y_{\ell m}e^{-2\Psi}H^{\prime}\Phi^{\prime}+2Y_{\ell m}e^{-2\Psi}H\Phi^{\prime\prime}+Y_{\ell m}e^{-2\Psi}H^{\prime\prime}+\frac{2H}{r}Y_{\ell m}e^{-2\Psi}\Psi^{\prime}\]
\[+\frac{6H}{r}Y_{\ell m}e^{-2\Psi}\Phi^{\prime}+\frac{2H^{\prime}}{r}Y_{\ell m}e^{-2\Psi}-\frac{H\ell(\ell+1)Y_{\ell m}}{r^{2}}-2Y_{\ell m}e^{-2\Psi}H\Phi^{\prime 2}-Y_{\ell m}e^{-2\Psi}\Phi^{\prime}H^{\prime}=-8\pi\left(\delta\rho+\delta p_{r}\right), \tag{A8}\]

\[\frac{H}{r}e^{-2\Psi}Y_{\ell m}\left(\Psi^{\prime}+\Phi^{\prime}\right)=8\pi\delta p_{t}. \tag{A9}\]

For the perturbation of the radial pressure \(p_{r}=p_{r}(p_{t},\Psi)\), we have

\[\delta p_{r}=\frac{\partial p_{r}}{\partial p_{t}}\delta p_{t}, \tag{A10}\]

where it is considered that \(\delta\Psi=0\). In addition, \(\delta\rho\) is defined by considering the equation of state \(\rho=\rho(p_{r})\). In this way, replacing Eqs. (A9) and (A10) into Eq. (A8), we obtain:

\[H^{\prime\prime}+C_{0}H^{\prime}+C_{1}H=0, \tag{A11}\]

where the functions \(C_{0}=C_{0}(r)\) and \(C_{1}=C_{1}(r)\) are calculated as functions of the background quantities as follows:

\[C_{0}=\frac{2m}{r^{2}}e^{2\Psi}+4\pi e^{2\Psi}\left(p_{r}-\rho\right)r+\frac{2}{r}, \tag{A12}\]

\[C_{1}=4\pi e^{2\Psi}\left[4\rho+4p_{r}+4p_{t}+\frac{p_{r}+\rho}{Ac_{s}^{2}}\left(c_{s}^{2}+1\right)\right]-\frac{\ell(\ell+1)}{r^{2}}e^{2\Psi}-4\Psi^{\prime 2}, \tag{A13}\]

with \(c_{s}^{2}=dp_{r}/d\rho\) and \(A=dp_{t}/dp_{r}\).
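For readers who want to reproduce this kind of calculation, the sketch below (ours, not the authors' code) integrates Eq. (A11) with a fixed-step RK4 once \(C_{0}(r)\) and \(C_{1}(r)\) have been tabulated from a background solution; the returned \(y(R)=R\,H^{\prime}(R)/H(R)\) is the standard ingredient of the Love number \(k_{2}\) and hence of \(\Lambda\). The constant-coefficient self-check at the end has the exact regular solution \(H=r^{2}\), for which \(y(R)=2\).

```python
# Minimal sketch: integrate H'' + C0(r) H' + C1(r) H = 0 outward, starting
# from the regular behaviour H ~ r^l near the centre (l = 2). C0 and C1 are
# user-supplied callables built from a previously computed background profile
# (m, Psi, p_r, rho, c_s^2, A); the overall amplitude cancels in y(R).

def surface_y(C0, C1, r0, R, n=20000, ell=2):
    h = (R - r0) / n
    H, dH = r0**ell, ell * r0**(ell - 1)   # regular series start, off-centre
    def rhs(r, H, dH):
        return dH, -C0(r) * dH - C1(r) * H
    r = r0
    for _ in range(n):
        k1H, k1d = rhs(r, H, dH)
        k2H, k2d = rhs(r + h/2, H + h/2*k1H, dH + h/2*k1d)
        k3H, k3d = rhs(r + h/2, H + h/2*k2H, dH + h/2*k2d)
        k4H, k4d = rhs(r + h, H + h*k3H, dH + h*k3d)
        H += h/6 * (k1H + 2*k2H + 2*k3H + k4H)
        dH += h/6 * (k1d + 2*k2d + 2*k3d + k4d)
        r += h
    return R * dH / H   # y(R) = R H'(R)/H(H), matched to the exterior solution

# Self-check: C0 = 2/r, C1 = -6/r^2 has the exact regular solution H = r^2,
# so the printed value must be ~2.0.
print(surface_y(lambda r: 2.0/r, lambda r: -6.0/r**2, 0.01, 10.0))
```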
2308.02599
Branched Latent Neural Maps
We introduce Branched Latent Neural Maps (BLNMs) to learn finite dimensional input-output maps encoding complex physical processes. A BLNM is defined by a simple and compact feedforward partially-connected neural network that structurally disentangles inputs with different intrinsic roles, such as the time variable from model parameters of a differential equation, while transferring them into a generic field of interest. BLNMs leverage latent outputs to enhance the learned dynamics and break the curse of dimensionality by showing excellent generalization properties with small training datasets and short training times on a single processor. Indeed, their generalization error remains comparable regardless of the adopted discretization during the testing phase. Moreover, the partial connections significantly reduce the number of tunable parameters. We show the capabilities of BLNMs in a challenging test case involving electrophysiology simulations in a biventricular cardiac model of a pediatric patient with hypoplastic left heart syndrome. The model includes a 1D Purkinje network for fast conduction and a 3D heart-torso geometry. Specifically, we trained BLNMs on 150 in silico generated 12-lead electrocardiograms (ECGs) while spanning 7 model parameters, covering cell-scale and organ-level. Although the 12-lead ECGs manifest very fast dynamics with sharp gradients, after automatic hyperparameter tuning the optimal BLNM, trained in less than 3 hours on a single CPU, retains just 7 hidden layers and 19 neurons per layer. The resulting mean square error is on the order of $10^{-4}$ on a test dataset comprised of 50 electrophysiology simulations. In the online phase, the BLNM allows for 5000x faster real-time simulations of cardiac electrophysiology on a single core standard computer and can be used to solve inverse problems via global optimization in a few seconds of computational time.
Matteo Salvador, Alison Lesley Marsden
2023-08-04T04:04:58Z
http://arxiv.org/abs/2308.02599v3
# Branched Latent Neural Maps

###### Abstract

We introduce Branched Latent Neural Maps (BLNMs) to learn finite dimensional input-output maps encoding complex physical processes. A BLNM is defined by a simple and compact feedforward partially-connected neural network that structurally disentangles inputs with different intrinsic roles, such as the time variable from model parameters of a differential equation, while transferring them into a generic field of interest. BLNMs leverage latent outputs to enhance the learned dynamics and break the curse of dimensionality by showing excellent in-distribution generalization properties with small training datasets and short training times on a single processor. Indeed, their in-distribution generalization error remains comparable regardless of the adopted discretization during the testing phase. Moreover, the partial connections, in place of a fully-connected structure, significantly reduce the number of tunable parameters. We show the capabilities of BLNMs in a challenging test case involving biophysically detailed electrophysiology simulations in a biventricular cardiac model of a pediatric patient with hypoplastic left heart syndrome. The model includes a 1D Purkinje network for fast conduction and a 3D heart-torso geometry. Specifically, we trained BLNMs on 150 in silico generated 12-lead electrocardiograms (ECGs) while spanning 7 model parameters, covering cell-scale and organ-level physics as well as electrical dyssynchrony. Although the 12-lead ECGs manifest very fast dynamics with sharp gradients, after automatic hyperparameter tuning the optimal BLNM, trained in less than 3 hours on a single CPU, retains just 7 hidden layers and 19 neurons per layer. The resulting mean square error is on the order of \(10^{-4}\) on an independent test dataset comprised of 50 additional electrophysiology simulations. In the online phase, the BLNM allows for 5000x faster real-time simulations of cardiac electrophysiology on a single core standard computer and can be employed to solve inverse problems via global optimization in a few seconds of computational time. This paper provides a novel computational tool to build reliable and efficient reduced-order models for digital twinning in engineering applications. The Julia implementation is publicly available under MIT License at [https://github.com/StanfordCBCL/BLNM.jl](https://github.com/StanfordCBCL/BLNM.jl).

**Keywords:** Branched Latent Neural Maps, Scientific Machine Learning, Numerical Simulations, Cardiac Electrophysiology, Congenital Heart Disease

## 1 Introduction

Learning complex input-output maps behind physical processes in a reliable manner has significant implications in any field of science and engineering. In particular, when these physical processes are described via mechanistic models, the numerical resolution of the underlying differential equations may be challenging and computationally demanding, even for a single instance of model parameters [42, 44]. In the past few years, several methods in the field of model order reduction, partially or entirely based on Neural Networks (NNs), have been proposed to mitigate the high computational cost of physics-based solvers, with the aim of producing accurate and efficient model evaluations for many-query applications [15, 29, 34, 48, 52, 68], which involve sensitivity analysis, parameter estimation, forward and inverse uncertainty quantification, and optimization [14, 59, 64].
However, many intrusive [44] and non-intrusive [20] reduced-order models either fail or struggle to effectively reproduce phenomena that manifest fast and irregular dynamics while spanning an elaborate solution manifold. In this paper, we propose a novel computational tool, which we term Branched Latent Neural Maps (BLNMs), to accurately and efficiently learn generic input-output relationships, even in the presence of sharp features and significant variability. BLNMs are based on feedforward partially-connected NNs [23] to separate the contributions coming from unrelated inputs, such as space and time variables with respect to physics-based scalar parameters. The output of BLNMs is given by relevant scalar or vector fields of interest, as well as additional latent variables, which serve the purpose of enhancing the learned dynamics. The presence of partial connections allows for a significant reduction in the number of tunable parameters while ensuring excellent in-distribution generalization properties during the testing phase, even on different mesh resolutions than those used during the training stage.

Several Machine Learning methods have been recently proposed to tackle cardiac electromechanics while exploiting physics-based knowledge [6, 17, 32, 50, 54, 62]. In this paper, we demonstrate the performance of BLNMs in the setting of cardiovascular modeling [11, 35, 38, 61] and congenital heart disease [31, 67], where multiphysics and multiscale phenomena interact in the context of understudied pathological conditions in the field of computational cardiology. Specifically, we consider a patient-specific heart-torso geometry of a pediatric case with hypoplastic left heart syndrome (HLHS) [13]. We perform biventricular-Purkinje 3D-1D electrophysiology simulations to compute in-silico 12-lead electrocardiograms (ECGs) while spanning cell-scale through tissue-level parameter variability of a biophysically detailed mathematical model of electrophysiology. A BLNM trained on 150 electrophysiology simulations in less than 3 hours on a single CPU, endowed with 7 hidden layers and 19 neurons per layer (2,398 tunable parameters), retains an approximation error on the order of \(10^{-4}\) on 50 additional 12-lead ECGs unseen by the NN. Moreover, it enables faster than real-time numerical simulations during the online phase, which allows one to accurately and efficiently solve inverse problems. Indeed, this task would be unaffordable using a biophysically detailed electrophysiology model, given the computational cost of these numerical simulations and the number of queries that are required to solve a nonlinear optimization problem. BLNMs are lightweight, compact, easy to train architectures, able to precisely capture the fast time scales of 12-lead ECGs while spanning cell-to-organ model variability. Moreover, they can be queried in fractions of seconds to generate new predictions. Overall, BLNMs provide a novel computational tool for the generation of accurate and efficient standalone surrogate models that can be applied for digital twinning in computational science.

## 2 Methods

We describe the methodological details behind BLNMs for time-dependent processes, as well as the mathematical and numerical models adopted for the application of simulated cardiac electrophysiology in a congenital heart disease patient.
### Branched Latent Neural Maps

Given a generic high-fidelity model \(\mathcal{M}_{\mathrm{HF}}\) expressed in terms of an input-output map between model parameters and a time-dependent process, we derive a surrogate model \(\mathcal{M}_{\mathrm{BLNM}}\) by building a feedforward partially-connected NN that explores model \(\mathcal{M}_{\mathrm{HF}}\) parametric variability while structurally separating the role of time and model parameters. We depict the BLNM architecture in Figure 1, showing that different levels of disentanglement are allowed, ranging from the first hidden layer to the output layer. This disentanglement enables BLNMs to generalize well over different grids during testing even if the training stage is performed on a specific finite dimensional resolution (see Section 3.3). The surrogate model reads:

\[\mathbf{z}(t)=\mathcal{BLNM}\left(t,\mathbf{\theta};\mathbf{w}\right)\text{ for }t\in[0,T]. \tag{1}\]

This feedforward partially-connected NN is represented by weights and biases \(\mathbf{w}\in\mathbb{R}^{N_{\mathrm{w}}}\), and defines a map \(\mathcal{BLNM}\colon\mathbb{R}^{1+N_{p}}\to\mathbb{R}^{N_{\mathrm{z}}}\) from time \(t\) and model parameters \(\mathbf{\theta}\in\mathbf{\Theta}\subset\mathbb{R}^{N_{p}}\) to a state vector \(\mathbf{z}(t)=[\mathbf{z}_{\mathrm{physical}}(t),\mathbf{z}_{\mathrm{latent}}(t)]^{T}\). Indeed, the state vector \(\mathbf{z}(t)\in\mathbb{R}^{N_{\mathrm{z}}}\) contains the physical fields of interest \(\mathbf{z}_{\mathrm{physical}}(t)\), as well as latent temporal variables \(\mathbf{z}_{\mathrm{latent}}(t)\) without a direct physical representation, which enhance the learned dynamics of the BLNM. These non-dimensional latent variables \(\mathbf{z}_{\mathrm{latent}}(t)\) are not accounted for in the loss function during the training stage, as in neural differential equations [9, 49, 52], but enrich the generalization of BLNMs while mapping the whole solution manifold, by selectively and properly acting in areas with steep gradients. During the optimization process of the NN tunable parameters, we minimize the Mean Square Error (MSE), that is:

\[\mathcal{L}(\widetilde{\mathbf{z}}_{\mathrm{physical}}(t),\widetilde{\mathbf{z}}_{\mathrm{obs}}(t);\widehat{\mathbf{w}})=\operatorname*{arg\,min}_{\widehat{\mathbf{w}}}\left[||\widetilde{\mathbf{z}}_{\mathrm{physical}}(t)-\widetilde{\mathbf{z}}_{\mathrm{obs}}(t)||_{\mathrm{L}^{2}(0,T)}^{2}\right], \tag{2}\]

where \(\widetilde{\mathbf{z}}_{\text{physical}}(t)\in\left[-1,1\right]^{N_{z_{\text{physical}}}}\) and \(\widetilde{\mathbf{z}}_{\text{obs}}(t)\in\left[-1,1\right]^{N_{z_{\text{physical}}}}\) represent model \(\mathcal{M}_{\text{BLNM}}\) outputs and observations in non-dimensional form. Time \(\widetilde{t}\in\left[0,1\right]\) and model parameters \(\widetilde{\boldsymbol{\theta}}\in\left[-1,1\right]^{N_{\mathcal{P}}}\) are also normalized during the training phase of model \(\mathcal{M}_{\text{BLNM}}\).

Figure 1: Sketch of Branched Latent Neural Maps with different disentanglement levels between inputs, involving time variable \(t\) and model parameters \(\mathbf{\theta}\), and outputs, i.e. generic fields of interest, including both physical \(\mathbf{z}_{\mathrm{physical}}(t)\) and latent \(\mathbf{z}_{\mathrm{latent}}(t)\) temporal quantities. Partial connections are depicted in light grey, whereas full connections are outlined in black.
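The structure of Eq. (1) can be made concrete with a few lines of NumPy. The sketch below is ours (the reference implementation is the authors' Julia package BLNM.jl): two input branches process \(t\) and \(\mathbf{\theta}\) separately up to the disentanglement level, after which shared layers produce the \(N_{\mathrm{z}}\) outputs. The layer shapes loosely echo the optimal configuration found later (19 neurons, one latent output) but are otherwise arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def dense(n_in, n_out):
    """Random initialization for one tanh layer (weights, bias)."""
    return rng.standard_normal((n_out, n_in)) / np.sqrt(n_in), np.zeros(n_out)

def blnm_forward(t, theta, branch_t, branch_p, trunk):
    x_t = np.atleast_1d(np.asarray(t, dtype=float))
    x_p = np.asarray(theta, dtype=float)
    for W, b in branch_t:            # branched (partially-connected) layers
        x_t = np.tanh(W @ x_t + b)
    for W, b in branch_p:
        x_p = np.tanh(W @ x_p + b)
    x = np.concatenate([x_t, x_p])   # merge at the disentanglement level
    for W, b in trunk[:-1]:          # shared fully-connected layers
        x = np.tanh(W @ x + b)
    W, b = trunk[-1]
    return W @ x + b                 # [z_physical ; z_latent]

# Disentanglement level 2, N_z = 10 (9 physical leads + 1 latent variable):
branch_t = [dense(1, 9), dense(9, 9)]
branch_p = [dense(7, 10), dense(10, 10)]
trunk = [dense(19, 19), dense(19, 19), dense(19, 19), dense(19, 19), dense(19, 10)]
z = blnm_forward(0.3, np.zeros(7), branch_t, branch_p, trunk)
print(z.shape)  # (10,)
```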
### Cardiac electrophysiology

We reconstruct a heart-torso model of a 7 year old female pediatric patient with HLHS from computerized tomography (CT) images. Images and associated clinical data were obtained under an IRB-approved protocol at Stanford University. In Figure 2 we show an example of an electrophysiology simulation and in silico derived 12-lead ECGs on this patient-specific geometry.

#### 2.2.1 Mathematical model

We model cardiac electrophysiology in the heart-Purkinje system by considering the biophysically detailed monodomain equation [7, 43] coupled with the ten Tusscher-Panfilov ionic model [65], represented here in compact form:

\[\left\{\begin{array}{ll}\dfrac{\partial u}{\partial t}+\mathcal{I}_{\text{ion}}(u,\boldsymbol{w},\boldsymbol{z})-\nabla\cdot(\boldsymbol{D}_{\text{M}}\nabla u)=\mathcal{I}_{\text{app}}(\mathbf{x},t)&\text{ in }\Omega\times(0,T],\\ (\boldsymbol{D}_{\text{M}}\nabla u)\cdot\mathbf{n}=0&\text{ on }\partial\Omega\times(0,T],\\ \dfrac{d\boldsymbol{w}}{dt}=\boldsymbol{H}(u,\boldsymbol{w},\boldsymbol{z})&\text{ in }\Omega\times(0,T],\\ \dfrac{d\boldsymbol{z}}{dt}=\boldsymbol{G}(u,\boldsymbol{w},\boldsymbol{z})&\text{ in }\Omega\times(0,T],\\ u(\mathbf{x},0)=u_{0}(\mathbf{x}),\;\boldsymbol{w}(\mathbf{x},0)=\boldsymbol{w}_{0}(\mathbf{x}),\;\boldsymbol{z}(\mathbf{x},0)=\boldsymbol{z}_{0}(\mathbf{x})&\text{ in }\Omega.\end{array}\right. \tag{3}\]

In the following, we denote Equation (3) as model \(\mathcal{M}_{\text{HF}}\). \(T=T_{\text{HB}}=600\) ms corresponds to the final simulation time, given by a single heartbeat. The computational domain \(\Omega=\Omega_{\text{purk}}\cup\Omega_{\text{myo}}\) is represented by the one-way coupled 1D Purkinje network and 3D biventricular patient-specific geometry.

Figure 2: Example of heart-torso electrophysiology simulation in the patient-specific cardiac model and corresponding 12-lead simulated ECGs.

Transmembrane potential \(u\) describes the propagation of the electric signal at the Purkinje and myocardial level, vector \(\mathbf{w}=(w_{1},\ldots,w_{M})\) defines the probability density functions of \(M=12\) gating variables, which represent the fraction of open channels across the membrane of a single cardiomyocyte, and vector \(\mathbf{z}=(z_{1},\ldots,z_{P})\) introduces the concentrations of \(P=6\) relevant ionic species. Among them, sodium \(Na^{+}\), intracellular calcium \(Ca^{2+}\) and potassium \(K^{+}\) play an important role in the physiological processes [2] dictating heart rhythmicity or sarcomere contractility, and are generally targeted by pharmaceutical therapies [4]. Right hand sides \(\mathbf{H}(u,\mathbf{w},\mathbf{z})\) and \(\mathbf{G}(u,\mathbf{w},\mathbf{z})\), which describe the dynamics of the gating variables and ionic concentrations respectively, along with ionic current \(\mathcal{I}_{\rm ion}(u,\mathbf{w},\mathbf{z})\), derive from the mathematical formulation of the ten Tusscher-Panfilov ionic model [65]. The action potential is triggered in the left and right bundle branches by an external applied current \(\mathcal{I}_{\rm app}(\mathbf{x},t)\). The diffusion tensor is expressed as \(\mathbf{D}_{\rm M}=D_{\rm iso}\mathbf{I}+D_{\rm ani}\mathbf{f}_{0}\otimes\mathbf{f}_{0}\) in \(\Omega_{\rm myo}\) and \(\mathbf{D}_{\rm M}=D_{\rm purk}\mathbf{I}\) in \(\Omega_{\rm purk}\), where \(\mathbf{f}_{0}\) expresses the biventricular fiber field [40]. \(D_{\rm ani},D_{\rm iso},D_{\rm purk}\in\mathbb{R}^{+}\) represent the anisotropic, isotropic and Purkinje conductivities, respectively.
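As a small illustration of the tensor above, the following sketch (ours) assembles \(\mathbf{D}_{\rm M}\) at a point given a fiber direction \(\mathbf{f}_{0}\) and the two conductivities; the numerical values are roughly mid-range entries of Table 1, chosen only for the example.

```python
import numpy as np

def diffusion_tensor(f0, D_iso, D_ani):
    """Myocardial diffusion tensor D_M = D_iso * I + D_ani * f0 (x) f0."""
    f0 = np.asarray(f0, dtype=float)
    f0 = f0 / np.linalg.norm(f0)   # fiber directions are unit vectors
    return D_iso * np.eye(3) + D_ani * np.outer(f0, f0)

# Roughly mid-range conductivities from Table 1 (mm^2/ms), fiber along x:
D = diffusion_tensor([1.0, 0.0, 0.0], 0.0069, 0.0207)
print(D.round(4))
```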
We impose the condition of an electrically isolated domain by prescribing homogeneous Neumann boundary conditions on \(\partial\Omega\), where \(\mathbf{n}\) is the outward unit normal vector to the boundary. The ECG signals \(u_{\rm e}\) are computed in each lead location \(\mathbf{x}_{\rm e}\) following [58]:

\[u_{\rm e}(\mathbf{x}_{\rm e})=-\int_{\Omega}\nabla u\cdot\nabla\frac{1}{||\mathbf{x}-\mathbf{x}_{\rm e}||}dV, \tag{4}\]

where \(e=\{V_{1},V_{2},V_{3},V_{4},V_{5},V_{6}\}\) and \(e=\{LA,RA,F\}\) define 6 precordial leads and 3 limb leads located on the pediatric patient-specific torso model, respectively. From this information, we retrieve 3 bipolar limb leads as:

\[I=LA-RA\quad II=F-RA\quad III=F-LA, \tag{5}\]

and 3 augmented limb leads as:

\[aVL=(I-III)/2\quad aVR=-(I+II)/2\quad aVF=(II+III)/2. \tag{6}\]

The set \(ECG=\{V_{1},V_{2},V_{3},V_{4},V_{5},V_{6},I,II,III,aVL,aVR,aVF\}\) defines a 12-lead ECG, which is a comprehensive representation of the electrical activity in the heart [7].
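Equations (5)-(6) amount to a handful of subtractions and averages; the helper below (ours, with hypothetical naming) assembles the full 12-lead set from the six precordial leads and the three recorded limb electrodes, and works equally well on NumPy arrays of time samples.

```python
def twelve_lead_ecg(V1, V2, V3, V4, V5, V6, LA, RA, F):
    """Assemble the 12-lead ECG from 9 recorded signals, per Eqs. (5)-(6)."""
    I = LA - RA              # Eq. (5): bipolar limb leads
    II = F - RA
    III = F - LA
    aVL = (I - III) / 2.0    # Eq. (6): augmented limb leads
    aVR = -(I + II) / 2.0
    aVF = (II + III) / 2.0
    return {"V1": V1, "V2": V2, "V3": V3, "V4": V4, "V5": V5, "V6": V6,
            "I": I, "II": II, "III": III, "aVL": aVL, "aVR": aVR, "aVF": aVF}
```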
In Table 1 we report descriptions, ranges and units for the 7 model parameters that we explore via latin hypercube sampling to generate the dataset of 200 electrophysiology simulations.

\begin{table} \begin{tabular}{l l l l} \hline Parameter & Description & Range & Units \\ \hline \(G_{\text{CaL}}\) & Maximal \(Ca^{2+}\) current conductance & [1.99e-5, 7.96e-5] & cm ms\({}^{-1}\)\(\mu\)F\({}^{-1}\) \\ \(G_{\text{Na}}\) & Maximal \(Na^{+}\) current conductance & [7.42, 29.68] & nS pF\({}^{-1}\) \\ \(G_{\text{Kr}}\) & Maximal rapid delayed rectifier current conductance & [0.08, 0.31] & nS pF\({}^{-1}\) \\ \(D_{\text{ani}}\) & Anisotropic conductivity & [0.008298, 0.033192] & mm\({}^{2}\) ms\({}^{-1}\) \\ \(D_{\text{iso}}\) & Isotropic conductivity & [0.002766, 0.011064] & mm\({}^{2}\) ms\({}^{-1}\) \\ \(D_{\text{purk}}\) & Purkinje conductivity & [1.0, 3.5] & mm\({}^{2}\) ms\({}^{-1}\) \\ \(t_{\text{LV}}^{\text{stim}}\) & Purkinje left bundle stimulation time & [0, 100] & ms \\ \hline \end{tabular} \end{table} Table 1: Parameter space sampled via latin hypercube for the numerical simulations performed with model \(\mathcal{M}_{\text{HF}}\).

#### 2.2.2 Numerical discretization

We perform space discretization of model \(\mathcal{M}_{\text{HF}}\) using \(\mathbb{P}_{1}\) Finite Elements. The biventricular tetrahedral mesh is comprised of 933,916 cells and 158,277 DOFs. The average mesh size is \(h=1\) mm. We generate the Purkinje network for both ventricles using the fractal tree and projection algorithm proposed in [57]. We initiate the left and right bundles from the endocardial locations near the atrioventricular node. The left bundle consists of 14,820 elements (14,821 DOFs), whereas the right bundle has 67,456 elements (67,457 DOFs). Following the approach adopted in [63], we use non-Gaussian quadrature rules to recover convergent conduction velocities in the cardiac tissue [37, 69]. We consider a transmural variation of ionic conductances to differentiate epicardial, myocardial and endocardial properties according to [65]. For time discretization, we first update the variables of the ionic model and then the transmembrane potential by employing an Implicit-Explicit numerical scheme [11, 41, 53]. Specifically, in the monodomain equation, the diffusion term is treated implicitly and the ionic term is treated explicitly. Moreover, the ionic current is discretized by means of the Ionic Current Interpolation scheme [27]. We employ a fixed time step \(\Delta t=0.1\) ms. The fiber architecture is prescribed according to the Bayer-Blake-Plank-Trayanova algorithm with \(\alpha_{\text{epi}}\) = \(-60^{\circ}\), \(\alpha_{\text{endo}}\) = \(60^{\circ}\), \(\beta_{\text{epi}}\) = \(20^{\circ}\) and \(\beta_{\text{endo}}\) = \(-20^{\circ}\) [3].

#### 2.2.3 Integration with Branched Latent Neural Maps

In the present application, BLNMs are used to learn in silico ECGs while spanning relevant parameters of the monodomain equation and ten Tusscher-Panfilov ionic model. The vector \(\boldsymbol{\theta}\) corresponds to the 7 model parameters \(\boldsymbol{\theta}_{\text{EP}}=[G_{\text{CaL}},G_{\text{Na}},G_{\text{Kr}},D_{\text{ani}},D_{\text{iso}},D_{\text{purk}},t_{\text{LV}}^{\text{stim}}]^{T}\) reported in Table 1. The vector of physical variables \(\mathbf{z}_{\text{physical}}(t)\) contains the \(\mathbf{z}_{\text{leads}}(t)\) precordial and limb leads recordings, that is \(\mathbf{z}_{\text{leads}}(t)=[V_{1}(t),V_{2}(t),V_{3}(t),V_{4}(t),V_{5}(t),V_{6}(t),LA(t),RA(t),F(t)]^{T}\). We note that these recordings are considered in their non-dimensional form \(\widetilde{\mathbf{z}}_{\text{leads}}(t)\in[-1,1]^{N_{\text{physical}}}\) during the training and testing phases. The same holds for time \(\widetilde{t}\in[0,1]\) and model parameters \(\widetilde{\boldsymbol{\theta}}_{\text{EP}}\in[-1,1]^{N_{\mathcal{P}}}\).

### Parameter estimation

We employ model \(\mathcal{M}_{\text{BLNM}}\) in the setting of inverse problems. Specifically, we perform parameter calibration for \(\widetilde{\boldsymbol{\theta}}_{\text{EP}}\in[-1,1]^{N_{\mathcal{P}}}\) to match the BLNM's physical outputs \(\widetilde{\mathbf{z}}_{\text{physical}}(t)\) to observations \(\widetilde{\mathbf{z}}_{\text{obs}}(t)\) coming from model \(\mathcal{M}_{\text{HF}}\) by minimizing the MSE, all in non-dimensional form, that is:

\[\mathcal{L}(\widetilde{\mathbf{z}}_{\text{physical}}(t),\widetilde{\mathbf{z}}_{\text{obs}}(t))=||\widetilde{\mathbf{z}}_{\text{physical}}(t)-\widetilde{\mathbf{z}}_{\text{obs}}(t)||_{\text{L}^{2}(0,T)}^{2}. \tag{7}\]

We randomly initialize \(\widetilde{\boldsymbol{\theta}}_{\text{EP}}^{\text{init}}\) in the \([-1,1]^{N_{\mathcal{P}}}\) hypercube and we aim to recover model \(\mathcal{M}_{\text{HF}}\) parameters \(\widetilde{\boldsymbol{\theta}}_{\text{EP}}^{\text{HF}}\). We run a single trial of an Adaptive Differential Evolution algorithm for global optimization [70], which leads to a set of tuned model parameters \(\widetilde{\boldsymbol{\theta}}_{\text{EP}}^{\text{DE}}\) via BLNMs.
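The calibration loop can be sketched as follows. The paper uses an Adaptive Differential Evolution algorithm [70]; the code below (ours) implements plain DE/rand/1/bin instead, which has the same structure: evolve a population in the \([-1,1]^{7}\) hypercube and score candidates by the MSE of Eq. (7), with `model` standing for the trained BLNM surrogate evaluated on the time grid `t`.

```python
import numpy as np

def calibrate(model, z_obs, t, n_pop=40, n_gen=200, F=0.8, CR=0.9, seed=0):
    """Plain DE/rand/1/bin over theta in [-1, 1]^7, minimizing Eq. (7)."""
    rng = np.random.default_rng(seed)
    dim = 7
    pop = rng.uniform(-1.0, 1.0, (n_pop, dim))
    def mse(theta):
        return np.mean((model(t, theta) - z_obs) ** 2)
    cost = np.array([mse(p) for p in pop])
    for _ in range(n_gen):
        for i in range(n_pop):
            a, b, c = pop[rng.choice(n_pop, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), -1.0, 1.0)
            trial = np.where(rng.random(dim) < CR, mutant, pop[i])
            c_trial = mse(trial)
            if c_trial < cost[i]:      # greedy selection
                pop[i], cost[i] = trial, c_trial
    return pop[np.argmin(cost)]
```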
### Software and hardware

We employ 3D Slicer [12] for the manual segmentation of the medical images in order to reconstruct the heart-torso geometry. Meshing of this anatomic model is carried out using the TetGen library available in the SimVascular open-source software [66]. All electrophysiology simulations with model \(\mathcal{M}_{\text{HF}}\) are performed using svFSIplus [71], a C++ high-performance computing multiphysics and multiscale finite element solver for cardiac and cardiovascular modeling, on 336 cores of the Stanford Research Computing Center. This solver is part of the SimVascular software suite for patient-specific cardiovascular modeling [66]. We train model \(\mathcal{M}_{\text{BLNM}}\) by using BLNM.jl [22, 45, 46], a new, in-house Julia library for Scientific Machine Learning, which is made publicly available under MIT License at [https://github.com/StanfordCBCL/BLNM.jl](https://github.com/StanfordCBCL/BLNM.jl) with this work. This public repository also contains the dataset encompassing all the electrophysiology simulations used for the training and testing phases.

## 3 Results

We report numerical results related to the electrophysiology simulations that were run to generate the training, validation and testing datasets for BLNMs. Then, we explain the technical details behind the automatic BLNM hyperparameter tuning method and show the properties and results associated with model \(\mathcal{M}_{\text{BLNM}}\).

### Electrophysiology simulations

We ran 200 numerical simulations on the patient-specific heart-torso model (see Figure 2) and collected the corresponding simulated 12-lead ECGs. In Figure 3 we depict the 200 precordial and limb lead recordings that are employed for training, validation and testing of the BLNMs. In Figure 4 we show the corresponding 12-lead ECGs, where the limb leads are algebraically manipulated according to Equations (5) and (6). In Figure 5 we report a representative output from the 3D electrophysiology simulation, namely activation times for 8 random samples from the whole dataset.

Figure 3: Full dataset containing 200 in silico precordial and limb leads recordings.

We notice that by exploring relevant parameters affecting cardiac function at the cell level and organ scale, we are able to generate a broad set of plausible 12-lead ECGs and different patterns in the activation sequence for this pediatric patient. In particular, we remark that the simulated 12-lead ECGs produce sharp gradients during the QRS complex (ventricular depolarization) and T wave propagation (ventricular repolarization). Moreover, they manifest high variability among different instances of the model parameters.

Figure 4: Full dataset containing 200 simulated 12-lead ECGs.

### Hyperparameter tuning

We perform hyperparameter tuning by employing \(K\)-fold (\(K=5\)) cross validation over 150 electrophysiology simulations. We consider a hypercube as a search space for the number of layers, number of neurons, number of states \(N_{\text{z}}\) and disentanglement level in the BLNM structure. Given the limited dimension of the search space, we employ 50 instances of latin hypercube sampling and select the configuration providing the lowest MSE. The Julia implementation is based on Hyperopt.jl [1], a package to perform parallel hyperparameter optimization. The different NNs associated with each \(K\)-fold are simultaneously trained via Message Passing Interface (MPI) on 5 physical cores of a standard workstation computer. We also exploit Hyper-Threading over 7 additional virtual cores with Open Multi-Processing (OpenMP) to speed up computations. For each configuration of hyperparameters, we sample the dataset with a fixed time step of \(\Delta t=5.0\) ms and we perform 10,000 iterations of the second-order BFGS optimizer [18]. In Table 2 we report the initial hyperparameter ranges for tuning and the final optimized values. In Table 3 we detail the computational times and resources that we employ to generate electrophysiology simulations and to train NNs.
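The tuning procedure, carried out in the paper with Hyperopt.jl, reduces to two ingredients that are easy to sketch in a language-agnostic way (ours, illustrative; `train_and_score` stands in for one BFGS training-plus-validation run): a latin hypercube over the integer ranges of Table 2 and a \(K\)-fold split of the 150 training simulations.

```python
import numpy as np

def latin_hypercube(ranges, n, rng):
    """One stratified integer sample per slice and dimension, shuffled."""
    dims = []
    for lo, hi in ranges:
        u = (rng.permutation(n) + rng.random(n)) / n   # stratified in [0, 1)
        dims.append(np.floor(lo + u * (hi - lo + 1)).astype(int))
    return np.stack(dims, axis=1)

def tune(train_and_score, n_sims=150, n_confs=50, K=5, seed=0):
    rng = np.random.default_rng(seed)
    # layers, neurons, number of states, disentanglement level (cf. Table 2)
    ranges = [(1, 8), (10, 30), (9, 12), (1, 8)]
    folds = np.array_split(rng.permutation(n_sims), K)
    best, best_mse = None, np.inf
    for cfg in latin_hypercube(ranges, n_confs, rng):
        cfg[3] = min(cfg[3], cfg[0])   # level cannot exceed the number of layers
        scores = []
        for k in range(K):
            val = folds[k]
            trn = np.concatenate([folds[j] for j in range(K) if j != k])
            scores.append(train_and_score(cfg, trn, val))
        if np.mean(scores) < best_mse:
            best, best_mse = cfg.copy(), np.mean(scores)
    return best, best_mse
```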
Generating the dataset of biophysically detailed and anatomically accurate electrophysiology simulations and reaching the final BLNM configuration require less than 2 days of computation time. Each electrophysiology simulation runs in approximately 10 minutes but requires hundreds of cores to achieve this performance. On the other hand, training a single NN defining a BLNM requires 10 minutes to 3 hours on a single CPU, depending on the specific architecture.

\begin{table} \begin{tabular}{c c c c c c} \hline \hline & \multicolumn{4}{c}{Hyperparameters} & Trainable parameters \\ BLNM & layers & neurons & number of states & disentanglement level & \# parameters \\ \hline tuning & \{1... 8\} & \{10... 30\} & \{9... 12\} & \{1... \(N_{\text{layers}}\)\} & \\ final & 7 & 19 & 10 & 2 & 2,398 \\ \hline \hline \end{tabular} \end{table} Table 2: Hyperparameter ranges and selected values for the final training stage.

Figure 5: Simulated activation times in 8 different electrophysiology simulations that are randomly extracted from the full dataset.

\begin{table} \begin{tabular}{l c c} \hline \hline Task & Computational resources & Execution time \\ \hline Dataset generation using \(\mathcal{M}_{\text{HF}}\) (200 simulations) & 336 cores & 1 day \\ \(\mathcal{M}_{\text{BLNM}}\) hyperparameter tuning (50 configurations, 10,000 iterations) & 5 cores & 20 hours \\ \(\mathcal{M}_{\text{BLNM}}\) final training (50,000 iterations) & 1 core & 2 hours and 30 minutes \\ \hline \end{tabular} \end{table} Table 3: Summary of the computational times and resources to generate the electrophysiology simulations with model \(\mathcal{M}_{\text{HF}}\) and to train model \(\mathcal{M}_{\text{BLNM}}\). We always tune NN parameters with the BFGS optimizer, by employing either 5 cores or serial execution on an Intel(R) Core(TM) i7-8700 3.20GHz CPU. We sample in silico 12-lead ECGs with a fixed time step \(\Delta t=5.0\) ms.

### Branched Latent Neural Maps

We showcase the features of BLNMs by means of different test cases. In Table 4 we analyze the influence of the training set size on the computational times and MSE. We consider the optimal NN architecture obtained from hyperparameter tuning (see Section 3.2). We notice that the total training time scales linearly with the dimensionality of the dataset. Moreover, the training costs on a single CPU are quite modest, approximately ranging from 1 to 3 hours. The training MSE is small, on the order of \(10^{-4}\), and comparable regardless of the number of electrophysiology simulations. On the other hand, the testing loss drops to \(6\cdot 10^{-4}\) with 100 numerical simulations. Given the sharp temporal dynamics of 12-lead ECGs and the number and ranges of model parameters covered by model \(\mathcal{M}_{\text{HF}}\), BLNMs provide excellent in-distribution generalization properties with a relatively small amount of training data, especially when compared to the significant variability and complex dynamics encompassed by the dataset.

\begin{table} \begin{tabular}{c c c c} \hline Number of simulations & Training loss (MSE) & Testing loss (MSE) & Training time \\ \hline 50 & 0.000599 & 0.018932 & 50 minutes \\ 100 & 0.000293 & 0.000589 & 1 hour and 45 minutes \\ 150 & 0.000340 & 0.000454 & 2 hours and 30 minutes \\ \hline \end{tabular} \end{table} Table 4: MSE and computational times associated with training datasets of increasing size for the optimal NN architecture (7 layers, 19 neurons per layer, 10 states, disentanglement level 2). We use a fixed time step \(\Delta t=5.0\) ms. We employ 1 core of a standard computer endowed with an Intel(R) Core(TM) i7-8700 3.20GHz CPU.

In Table 5 and Figure 6 we study the effect of different testing time steps on the BLNM prediction accuracy. We see that the MSE remains approximately the same on finer and coarser grids with respect to the fixed time step used for training, that is \(\Delta t=5.0\) ms. This means that BLNMs show little sensitivity to time discretization, even if the training stage is performed on a specific finite dimensional representation of the encoded physical process.

\begin{table} \begin{tabular}{c c c c} \hline \hline Training time step [ms] & Testing time step [ms] & Training loss (MSE) & Testing loss (MSE) \\ \hline 5.0 & 0.1 & 0.000348 & 0.000459 \\ 5.0 & 1.0 & 0.000348 & 0.000458 \\ 5.0 & 5.0 & 0.000340 & 0.000454 \\ 5.0 & 10.0 & 0.000337 & 0.000452 \\ 5.0 & 20.0 & 0.000333 & 0.000445 \\ \hline \hline \end{tabular} \end{table} Table 5: Testing errors associated with different sampling time steps on in silico 12-lead ECGs for the optimal NN architecture (7 layers, 19 neurons per layer, 10 states, disentanglement level 2). We consider 50,000 BFGS iterations and 150 electrophysiology simulations for the training stage.

Figure 6: BLNM predictions (solid) and ground truth (points) for 1 randomly selected 12-lead ECG in the testing set. Different colors represent different testing time steps, namely 0.1, 10.0 and 20.0 ms (left to right), respectively. We show the time evolution of three relevant leads, i.e. \(I\), \(aVF\) and \(V_{2}\). We employ \(\Delta t=5.0\) ms for training.

In Figure 7 we compare the BLNM predictions with the ground truth for 5 randomly selected testing samples. BLNMs manifest good agreement with observations, even in the presence of sharp peaks and gradients during the QRS complex and T wave propagation.

Figure 7: BLNM predictions (solid) and ground truth (points) for 5 randomly selected 12-lead ECGs in the testing set.

In Table 6 we see the impact of varying the total number of states on the resulting MSEs. Adding latent outputs to the 9 physical outputs representing precordial and limb lead recordings allows us to significantly reduce both training and testing errors. Specifically, the training error is approximately halved, whereas the testing error is reduced by two orders of magnitude. This means that the dynamics of 12-lead ECGs can be reproduced more accurately in the presence of a suitable number of latent variables.

\begin{table} \begin{tabular}{c c c} \hline \hline Number of states & Training loss (MSE) & Testing loss (MSE) \\ \hline 9 & 0.000758 & 0.097660 \\ 10 & 0.000340 & 0.000454 \\ 11 & 0.000358 & 0.000754 \\ \hline \hline \end{tabular} \end{table} Table 6: Training and testing errors associated with different numbers of states on in silico 12-lead ECGs for the optimal NN architecture (7 layers, 19 neurons per layer, disentanglement level 2). We consider 50,000 BFGS iterations and 150 electrophysiology simulations for the training stage, with \(\Delta t=5.0\) ms.

In particular, from Figure 8 we notice that the additional latent variable selected by the hyperparameter tuning process enhances the BLNM learned dynamics by selectively acting on the QRS complex, that is ventricular depolarization, and on the T wave, that is ventricular repolarization. Similar considerations hold even for sub-optimal NN architectures. In Figure 9 we depict the training and testing errors with respect to the number of latent outputs by considering four different BLNMs with fewer/more layers and/or neurons per layer than the optimal set of hyperparameters. We see that adding one latent output always entails a significant reduction in both loss functions. On the other hand, two latent outputs contribute a small reduction of the training error while sometimes leading to overfitting. This means that a single latent output is sufficient to capture the required additional features for this specific application.

Figure 8: Time evolution of the BLNM latent variable for 5 randomly selected 12-lead ECGs in the testing set.

We also train a standard feedforward fully-connected NN with 9 physical outputs, i.e. without latent outputs, 7 layers and 19 neurons per layer, that is, the optimal configuration for BLNMs. This NN accounts for 2,631 trainable parameters. We employ the BFGS optimizer and we perform 50,000 epochs over the usual 150 electrophysiology simulations, sampled with \(\Delta t=5.0\) ms. The training error is \(7\cdot 10^{-3}\) while the testing error is 3.1. This shows that BLNMs outperform standard NNs in terms of training and testing errors while employing fewer tunable parameters and shorter training times. Moreover, the high testing MSE indicates that the standard NN does not generalize well across different discretizations.

Furthermore, we quantitatively compare BLNMs against latent neural differential equations [5, 49]. We perform hyperparameter tuning using the same ranges reported in Table 2, except for the disentanglement level, which is not present given the feedforward fully-connected structure of the NN in this framework. Following the approach of BLNMs, we employ the BFGS optimizer and we perform 10,000 epochs, sampling the training and validation sets with a fixed time step \(\Delta t=5.0\) ms. We discretize the latent neural differential equations in time using the forward Euler method, by considering \(\Delta t=5.0\) ms. The optimal configuration of hyperparameters found during \(K\)-fold (\(K=5\)) cross validation, which is given by 7 layers, 26 neurons per layer and 9 states (i.e., no latent variables), has a validation loss equal to 54.3. Indeed, we notice that latent neural differential equations fail to capture the QRS complex and the T wave, which are the most important features of 12-lead ECGs.

Figure 9: Training and testing errors vs. number of latent outputs associated with four different NN architectures. We consider 50,000 BFGS iterations, 150 and 50 electrophysiology simulations for training and testing, respectively, with \(\Delta t=5.0\) ms.

### Parameter estimation

We employ the final BLNM to perform parameter calibration against the testing set, which is comprised of 50 electrophysiology simulations. In Figure 10 we report the box plots showing the distribution of the errors, given by the absolute difference between each parameter \(\widetilde{\theta}_{\text{EP}}^{\text{HF}}\) from model \(\mathcal{M}_{\text{HF}}\) and each estimated parameter \(\widetilde{\theta}_{\text{EP}}^{\text{DE}}\) with model \(\mathcal{M}_{\text{BLNM}}\), in non-dimensional form. We notice that all the errors are small and lie within the \([0,0.09]\) range. This is possible given the small approximation error (\(\sim 10^{-4}\)) provided by the BLNM with respect to the high-fidelity electrophysiology simulations. We show that BLNMs can be used to match unseen observations coming from model \(\mathcal{M}_{\text{HF}}\), while also retrieving all 7 cell-to-organ model parameters. Performing a single instance of global optimization requires 7 seconds of computations in serial execution on an Intel(R) Core(TM) i7-8700 3.20GHz CPU.

Figure 10: Box plots showing the distribution of the errors for all model parameters.

## 4 Discussion

Many efforts in the Scientific Machine Learning community are devoted to learning or mapping physical processes, within a certain range of variability, by means of NNs. This can be performed either by learning the time [5, 9, 24, 49, 56], space [39, 51] and space-time [19, 25, 34, 52, 68] dynamics via different forms of neural differential equations, or by mapping the whole solution manifold with physics-informed or data-driven neural maps [26, 29, 47, 48]. These involve the use of feedforward fully-connected, recurrent, convolutional or graph neural networks, as well as encoders and decoders based on these architectures. BLNMs blend and share mathematical properties coming from both classes of numerical methods. Indeed, this novel neural map encodes the whole output of interest by spanning model variability in a supervised fashion, while structurally disentangling inputs of different nature, such as time and model parameters of a differential equation. The level of separation between different categories of inputs can be properly tuned, ranging from the first hidden layer to the outputs of a feedforward partially-connected NN. BLNMs are simple, lightweight architectures, easy and fast to train, that effectively reproduce challenging processes with sharp gradients and fast dynamics in complex solution manifolds. While autoencoders generally exploit latent variables between the encoder and the decoder in order to perform dimensionality reduction [16, 55, 60], BLNMs are endowed with additional latent outputs that act in specific regions of the simulated process to locally enhance the learned dynamics. This principle is similar to what is done in latent/augmented neural differential equations [9, 49, 52], where the NN defines a novel set of differential equations encoding the dynamics of a specific state vector, which contains both physical and latent variables. The latter are generally not considered in the loss function, as in BLNMs, but allow one to find better dynamics for the physical variables that are targeted during the optimization process. BLNMs exploit these latent variables as a lifting in the output dimension in order to better map the whole solution manifold directly, without routing them through a system of differential equations. This enables faster training than neural differential equations, as we do not have to replicate the NN structure over different time steps and we do not need to compute gradients over a chain of NNs during backpropagation. BLNMs require backpropagation over a single NN, where the presence of partial connections significantly reduces the number of tunable parameters with respect to latent neural differential equations.
Similar considerations hold for the online inference process, which can be carried out with BLNMs by simply querying the NN without solving one or multiple differential equations. This provides a speed-up in the testing phase of BLNMs in comparison to neural differential equations. Moreover, latent neural differential equations generally struggle to reproduce sharp and irregular features, as we showed in Section 3.3. On the other hand, while recent computational tools based on neural differential equations or deep neural operators enable space-time extrapolation [10, 52, 68, 72], learning the input-output map via BLNMs currently allows for excellent in-distribution generalization only. Indeed, while testing BLNMs for out-of-distribution generalization, i.e. by considering model parameters outside the training range and longer simulation times, we notice that they provide reasonable approximations only in the neighborhood right outside the training range and fail to perform time extrapolation. In particular, after the maximum training time, BLNMs provide the trivial zero solution followed by a divergent behavior. Future studies should aim to improve the performance of BLNMs for out-of-distribution generalization. Another important feature of BLNMs is that they present a comparable performance among different discretizations during the testing phase. This last property is also shared by neural operators [26], which learn maps between infinite dimensional function spaces. Nevertheless, BLNMs focus on a specific finite dimensional grid during the training stage and are able to generalize over different resolutions during the testing phase. Moreover, compared to BLNMs, neural operators of different categories, such as Fourier, low-rank, graph-based or deep operators, necessitate a more complex structure within the layers of the NN, which increases training and testing times [28, 29]. BLNMs present several differences with respect to both physics-informed neural networks (PINNs) [47] and associated recent extensions [8, 30, 33, 36]. While both BLNMs and PINNs share a data-driven term in the loss function, the former method encodes latent outputs that enhance the learned dynamics but does not enforce any physics-based knowledge, while the latter focuses on physical outputs only but also incorporates a physics-driven loss function based on the strong form of differential equations. BLNMs focus on a specific mesh during training and present similar generalization errors over both coarser and finer grids during testing. On the other hand, PINNs are mesh-less and require a suitable distribution of training points in the parameter space in order to generalize well during the testing phase. BLNMs present a partially connected structure that allows for reduced complexity (i.e. number of trainable parameters) while structurally separating flows of information coming from inputs that are intrinsically different, whereas PINNs are normally based on fully-connected NNs. Both methods can potentially handle different sets of inputs and outputs, such as space and time variables, scalar and vector fields from parameterized differential equations, model-based or geometrically-based parameters.
Furthermore, in this specific application for cardiac electrophysiology, the 12-lead ECGs are obtained by a space integral over the gradient of the transmembrane potential coming from the monodomain equation (see Equation (4)), which makes a direct use of PINNs unfeasible because the physics-based part cannot be incorporated as the residual of a differential equation written in strong form. On the other hand, BLNMs can properly handle scenarios for model discovery or when the mathematical formulation cannot be seamlessly enforced in the loss function. All the aforementioned aspects characterizing BLNMs are demonstrated on a challenging real-world application in the field of cardiac modeling. Specifically, a reduced-order model of in silico 12-lead ECGs spanning 7 cell-to-organ model parameters is learned from biophysically detailed and anatomically accurate electrophysiology simulations on a patient-specific heart-torso geometry of a pediatric patient with HLHS, a complex form of congenital heart disease. BLNMs accurately reproduce the outputs of this high-fidelity electrophysiology model and can be readily employed in many-query applications, such as robust and global parameter estimation.

## 5 Conclusions

We introduced BLNMs, a novel computational tool for arbitrary functional mapping. BLNMs structurally disentangle inputs with different intrinsic roles, such as time and model parameters, by means of feedforward partially-connected NNs. These partial connections can be propagated from the first hidden layer throughout the outputs according to the chosen disentanglement level. Furthermore, BLNMs may be endowed with latent variables in the output space, which enhance the learned dynamics of the neural map. The novelties of this work reside both in the methods and in their application to congenital heart disease, which is understudied in the field of computational cardiology. Indeed, we apply BLNMs in a challenging test case, that is, learning the 12-lead ECGs of a pediatric patient with HLHS by covering a large range of 7 significant cell-to-organ model parameters. We demonstrate that BLNMs retain a small number of tunable parameters while accurately encoding complex, irregular and highly variable dynamics. Moreover, thanks to the efficient Julia implementation, leveraging different NN libraries and optimization tools, these neural maps can be trained in a fast manner even on a single CPU. BLNMs require small training datasets and do not degrade in accuracy when tested on a different discretization than the one used for training. Furthermore, they can be effectively employed for parameter estimation, as demonstrated using the whole testing set of the high-fidelity numerical simulations. This parameter calibration process can be carried out within a few seconds, i.e. almost in real time, on a single core standard computer, by considering global optimization in the parameter space. In future works, we aim to use BLNMs to match patient-specific data with numerical simulations by also leveraging computational tools from global sensitivity analysis and robust parameter estimation with uncertainty quantification. Moreover, we plan to incorporate geometrical features within BLNMs, so that we can cover anatomical variability and do not need to re-train the NN on every new patient.
Finally, although we showcased and tested BLNMs in a specific application involving time processes only, this paper paves the way for several extensions of the presented approach to space-time processes, while also structurally disentangling different sets of parameters, such as the ones describing geometric variability from scalar and vector values related to a single geometry. Furthermore, integrating a physics-based loss or a multifidelity approach, as recently proposed in the framework of deep operator networks [21], may improve the performance and generalization of BLNMs, especially for multiscale and multiphysics problems with known physical laws and properties.

## Acknowledgements

This project has been funded by the NSF SSI grant 1663671, CDSE grant 2105345 and NIH grants R01EB029362, R01LM013120. We acknowledge Additional Ventures Foundation, Stanford Cardiovascular Institute and the Vera Moulton Wall Center for pulmonary vascular disease at Stanford University. We thank Dr. Fanwei Kong for the segmentation and mesh generation of the patient-specific heart-torso model.
2304.02894
Affect as a proxy for literary mood
We propose to use affect as a proxy for mood in literary texts. In this study, we explore the differences in computationally detecting tone versus detecting mood. Methodologically we utilize affective word embeddings to look at the affective distribution in different text segments. We also present a simple yet efficient and effective method of enhancing emotion lexicons to take both semantic shift and the domain of the text into account producing real-world congruent results closely matching both contemporary and modern qualitative analyses.
Emily Öhman, Riikka Rossi
2023-04-06T06:53:23Z
http://arxiv.org/abs/2304.02894v2
# Affect as a proxy for literary mood

###### Abstract

We propose to use affect as a proxy for mood in literary texts. In this study, we explore the differences in computationally detecting tone versus detecting mood. Methodologically we utilize affective word embeddings to look at the affective distribution in different text segments. We also present a simple yet efficient and effective method of enhancing emotion lexicons to take both semantic shift and the domain of the text into account, producing real-world congruent results closely matching both contemporary and modern qualitative analyses.

+ Footnote †: journal: [https://jdmdh.episciences.org](https://jdmdh.episciences.org)

## 1 Introduction

In this study, we explore how the literary concept of mood can be studied and detected with computational methods. We propose to use affect as a proxy for mood and test our hypothesis first quantitatively on different segmentations of the texts, and then qualitatively against expert close readings of the same texts. We use multiple different natural language processing (NLP) approaches to attempt to identify the origins, construction, and location of mood. For this purpose, we have collected a corpus of nearly 1000 literary works published in Finnish around the year 1900. Our paper utilizes many common NLP methods and approaches from computational literary studies (CLS) and affective computing, but besides our pilot study [14], to our knowledge, it is the first study to combine emotion detection/sentiment analysis with the study of mood in texts.

We focus on _mood_ as it is one of the more ephemeral yet pervasive aspects of a literary text. Mood is typically described as the atmosphere that the author creates through their word choices, style, and use of imagery, and can sometimes even include _tone_. The line between tone and mood can be difficult to draw, particularly when approaching the topic with computational tools, but succinctly the difference can be explained as _mood_ being about how the reader feels about the text, and _tone_ being about how the implied author feels about it and uses words to convey their own attitude towards a topic or subject1 [12].

Footnote 1: On the concepts of tone and mood in literary studies, see Richards [1929], Ngai [2005], Flatley [2008].

Although many different literary tools contribute to the creation of mood, perhaps the most important one, which influences all the other tools as well, is the choice of words. Typically in literature, the intentionality of word choice is higher than in most other text genres (e.g. social media posts) [13, 14]. Literature is therefore a good subject for analyzing the use and intensity of emotion-associated words in text. As there are so many components that help create mood, in a larger sense mood is not reducible to a single aspect but generated by a set of textual elements; however, we can use the importance of word choice to our advantage. If we look at the affective distribution of the words in texts using different text segments, we can attempt to pinpoint where in a text mood is created and how that links to affect.
We suggest that the computational study of the valence of the lexicon of a literary text can be valuable in providing an accurate picture of the distribution of positive and negative valence in a text continuum, and thus help us to better understand the relationship between the linguistic qualities of a text and its perceived emotional effects, particularly the mood of a text. Furthermore, we contribute to the discussion of the most suitable tools for interdisciplinary work, including the debate about lexicon-based methods versus machine learning as the most "accurate" [van Atteveldt et al., 2021, Ohman, 2021, Teodorescu and Mohammad, 2022].

## 2 Background & Previous Work

Around the turn of the millennium, the "affective turn" took place [Smith, 2011, Armstrong, 2014]. It was a shift in attitude regarding the importance of affect in humanities and social science research, including literary studies [Kim and Bianco, 2007]. Literature can be considered a domain where the affective functions of language are of principal importance [Hogan, 2011]. The affective turn has led to a significant increase in research that focuses on the affective side of text in literature [Armstrong, 2014]. The affective power of literary texts has been acknowledged since Aristotle's _Poetics_, but in the 20th century, many schools of thought such as formalism, new criticism, structuralism and post-structuralism oriented attention toward the formal and structural aspects of texts, whereas the study of emotions was excluded and considered susceptible to researchers' subjective emotions. Research topics range from the study of literature and empathy [Keen, 2007] to the study of literature and cognition [Hogan, 2011], negative affects and tone in texts [Ngai, 2005] to empirical perspectives [Sklar, 2013, Van Lissa et al., 2016], and even emotions specific to Finnish literature [Rossi, 2020, Rossi and Lyytikainen, 2022]. Recent studies on emotions in Finnish literature demonstrate that Finnish literature presents us with a rich body of work for developing the general theory of literature and emotions and for studying the ways in which genre-specific emotional effects vary culturally and historically [Rossi and Lyytikainen, 2022]. While this research has opened up new perspectives, it has also demonstrated that there are a number of gaps and complex questions to resolve.

Along with the affective turn, the question of a text's overall emotional tone or mood has aroused vivid interest (e.g. Ngai 2005, Lyytikainen 2017, and Rossi 2020). However, a systematic theory of how tone and mood are created and triggered in the reader is still in the works. We suggest that a study of the emotional valence of the lexicon measured quantitatively provides a new approach that can help with understanding the components of a text's mood [Ohman and Rossi, 2021, 2022]. Furthermore, it has been shown that lexicon-based methods can achieve better results in emotion classification tasks than machine learning models, especially when the text segment size is optimized [Ohman, 2021, Teodorescu and Mohammad, 2022]. Parallel to the affective turn, sentiment analysis became an active field of research within natural language processing and computer science [Mantyla et al., 2018]. Sentiment analysis and emotion detection have been used with literary works with varying levels of success; Kim and Klinger (2018) provide a substantive overview of sentiment analysis and emotion detection as used in CLS.
Although many of the papers cited in the review are interesting and innovative, virtually none of them deal with topics that are common in more traditional literary studies. Common CLS topics are genre classification by emotion, story-type, sentiment tracking, and sentiment recognition (see e.g. Sprugnoli et al., 2016; Schmidt and Burghardt, 2018; Amano et al., 2023). As interesting and innovative as previous CLS work is, it is typically of little use to literary scholars studying affect. These CLS approaches rarely work in harmony with literary analysis in the traditional sense and usually do not even touch upon the topics that interest literary scholars such as tone, mood, and emotion evocation. We hope that our efforts will contribute to merging the talents and knowledge within CLS, digital humanities, NLP, as well as traditional literary studies. Although there are exceptions (see e.g. Hu et al., 2021; Herrmann et al., 2019), it is somewhat rare for studies within the field of CLS to have literary experts working on the project, and many such projects rely heavily on the analysis of the quantitative results conducted by experts of NLP rather than experts of literature. This is not a problem only in CLS, but in many other interdisciplinary fields, particularly those with a computational element (Bartlett et al., 2018). This is why we think it is imperative to conduct CLS (and other interdisciplinary studies) with domain experts and not just NLP knowledge and literary data. CLS as a field has been criticized for providing either obvious results or ephemeral results that are not robust enough for repeat analysis (Da, 2019). Furthermore, there seems to be a pervasive belief that state-of-the-art methods from NLP are the most accurate when used on non-standard unstructured data, regardless of the research question or the downstream application of such methods. It is commonly suggested that machine learning methods are more accurate than lexicon-based ones (see e.g. van Atteveldt et al., 2021), but several recent papers have suggested this is not the case, especially when dealing with emotional arcs and ideal bin sizes in verbose domains (Ohman, 2021; Teodorescu and Mohammad, 2022). We have taken the criticism of both camps to heart in an effort to produce robust and reliable but also useful and interpretable results. ## 3 Data Our data collection was simple and straightforward. Although there are R packages and Python libraries in existence for handling Project Gutenberg downloads, Project Gutenberg discourages mass downloads using such methods, thus we used their recommended method of filtering works using http queries and downloading a smaller set of books at a time. We downloaded the first2 1000 books from Project Gutenberg3, with two filter criteria: (1) the language was Finnish, and (2) the text was in utf-8 plain text format. There is currently no method for filtering out translated works, so we estimate that the dataset consists of approximately 50% translated texts. The data is publicly available4. Footnote 2: Presumably in order of entry to the database. 
Footnote 3: [https://www.gutenberg.org/](https://www.gutenberg.org/) Footnote 4: [https://github.com/esohman/FinLit-corpus](https://github.com/esohman/FinLit-corpus) Footnote 5: [https://libraries.io/pypi/gutenberg-cleaner](https://libraries.io/pypi/gutenberg-cleaner) We used the simple gutenberg-cleaner5 to get rid of the preamble and the legal text at the end of the book, and then created a regex to extract key information such as the title, the name of the author, the year of publication, and whether the book was originally written in Finnish. The translation status of the book was extracted based on whether the terms _suomentaja_, _suomennettu_, _suomentanut_, or any version of _kääntäjä/käännös/käännetty_ etc.6 were present within the first ten lines of text after the preamble was removed. Although we filtered books based on their encoding, a fairly large number of the books were not actually utf-8 encoded and had to be decoded and re-encoded. We used automatic encoding detection and tried to convert the texts to utf-8, but for some works this failed, and in the end, due to these encoding issues, our final corpus consists of 975 books instead of 1000. A vast majority (95+%) were written or translated between the years 1850 and 1925 and over 90% after 1880, with only a few instances of older texts, which means that the language used in the texts can be considered Modern Finnish [11]. The final data consists of 2,938,032 sentences and 41,417,116 tokens. ### The Emotion Intensity Lexicon We used the Finnish Emotion Intensity Lexicon (FEIL) [14] as a starting point for detecting affective terms7. FEIL is based on the NRC emotion lexicon [15] and emotion intensity lexicon [15] and has been adapted for Finnish. It lists words alongside the emotions they are associated with as well as the intensity of the associated emotion as a number between 0 and 1. The emotions roughly correlate with Plutchik's wheel of emotions [23]; the lexicon contains the emotions _anger, anticipation, disgust, fear, joy, sadness_, and _trust_. We follow the best practices and ethical guidelines as set forth by Mohammad [2022, 23]. Footnote 7: [https://github.com/Helsinki-NLP/SELF-FEIL](https://github.com/Helsinki-NLP/SELF-FEIL) ## 4 Method Finnish is a fairly easy language to work with in terms of NLP. There are numerous high-quality resources that are actively maintained, and new tools are constantly being developed for various written standards of Finnish and Finnish-adjacent languages. Nearly all of these resources are also open-source and freely available [1]. Therefore, we were privileged enough to have several different lemmatizers and tokenizers at our disposal, some even specifically made for older Finnish texts. We also added to this list by creating a Finnish version of the chapterize Python package8. Footnote 8: The original: [https://pypi.org/project/chapterize/](https://pypi.org/project/chapterize/) and the Finnish version: [https://github.com/esohman/chapterize-fi](https://github.com/esohman/chapterize-fi) Once the data was cleaned, we preprocessed it by lemmatizing, splitting it into paragraphs, and tokenizing the texts. We attempted to fine-tune Finnish BERT [24] to work with our texts (as per Gururangan et al., 2022), but the vocabulary was not improved sufficiently to work with data so different from the original training data for Finnish BERT. In order to find the best tool for our data we tried multiple different lemmatization tools. 
These tools included the Turku Neural Parser [17], murre [18, 1], and both the _experimental_ and _news_ Finnish spaCy models. In the end we settled for the Turku Neural Parser as the results were the most accurate (see table 1 for an example) and all words were not only parsed but parsed correctly in context as well. In the example, the Turku Neural Parser was the only one able to correctly parse the nonstandard form _kahvians_ (standard form: _kahviansa_, the partitive case of 'coffee' with a 3rd person singular/plural possessive suffix). Incidentally, in the dissertation of Airio [19], _kahviansa_ is discussed as an example of "parasite words" since it can be mistakenly split into _kahvi_ (coffee) and _ansa_ (trap), something none of the lemmatizers did. With careful optimism, we take this as a demonstration of how good lemmatizers for morphologically complex languages have become in the past decade. The different lemmatizers had different strengths and weaknesses. For example, spaCy is very easy to use and install; it is reasonably fast for this amount of text, and it also produces other information that most of the other lemmatizers leave out. It is therefore great for exploring other aspects of the data such as NER and similarity scores. However, the Turku Neural Parser produced the best results in an easy-to-use pipeline9. Omorfi (Pirinen, 2015) and FinPos (Silfverberg et al., 2016) might also have been good candidates for lemmatizers, but these proved difficult to install on the platforms available for this project. Footnote 9: [http://turkunlp.org/Turku-neural-parser-pipeline/](http://turkunlp.org/Turku-neural-parser-pipeline/) Once we had the texts lemmatized and tokenized, we attempted to identify the first three paragraphs of each text. This was a more complex process than expected as, even after removing the preambles/headers and footers, miscellaneous metadata of various shapes cluttered the start of the book. There was very little uniformity or even commonly recurring patterns of where the actual text of the book or chapters starts. This led us to create a Finnish version of the chapterize package for Python, which enabled us to split texts into chapters and recognize opening paragraphs when used in conjunction with the sentence and paragraph ids provided by the conllu metadata generated by the Turku Neural Parser. We used two different text sections as targets for overall mood detection: the first three paragraphs of each book, and the first 200 tokens from each chapter in each book. The size of the bins was chosen based on findings by Teodorescu and Mohammad (2022) that suggest that even at a few hundred tokens, lexicon-based methods can excel at estimating emotion arcs beyond current state-of-the-art machine learning capabilities. They suggest that lexicon-based approaches are more suitable "for applications where simple, interpretable, low-cost, and low-carbon-footprint systems are desired" (Teodorescu and Mohammad, 2022). Previous studies (Ohman and Rossi, 2021) have made it clear that although the results from lexicon-based emotion detection can be very accurate in terms of real-world congruency, certain words can easily obfuscate the results. One of the literary works used for quality assessment and proof-of-concept, the novel _Rautatie_ (tr. 
as _The Railroad_) by Juhani Aho, focuses on the novelty of a railroad track coming to a peripheral rural village and naturally contains many instances of the word _railroad_. The fact that the word itself was strongly associated with _trust_ in the lexicon skewed the results, which were not representative of the level of overall _trust_ in the novel. Therefore we opted to remove the term from the lexicon. On the other hand, FEIL contains mostly contemporary words and their contemporary emotion associations. It is reasonable to assume that these words and their associations would have been subject to semantic shift. Hence we needed to ensure, first and foremost, that the most common words in our texts that in our subjective opinion have an emotion association are indeed in the lexicon, and secondly that the most common words that match words in the emotion lexicon are labeled correctly for emotions at reasonable intensities. These two steps are iterative and continuous in that they should be repeated whenever the lexicon or lemmatization is altered. Table 1: The example sentence ("The provost sits down in his rocking chair, stands his pipe on the floor against the table leg, and starts drinking his coffee") as lemmatized by the different tools: Original, Murre (hist.), spaCy (news_lg), spaCy (exp. /w Voikko), and the Turku Neural Parser. Manually changing emotion labels or intensity scores or adding new words to the lexicon should be avoided. If necessary, such additive alterations require the use of multiple annotators who are not the authors and cross-checking the results using inter-annotator agreement scores (van Atteveldt et al., 2021). For this reason, we did not want to introduce new biases into the lexicon based on our own interpretations of emotion intensities. Thus we developed an alternative method that would allow us to easily and objectively add words to the lexicon. To enable this we created word embeddings of our texts and used them to look up words in the lexicon with high cosine similarity to the words we wanted to introduce to the lexicon (as per e.g. Maas et al. 2011; Yu et al. 2017; Ye et al. 2018). This led to the associations for e.g. the words _kirkas, valkoinen_, and _valkea_ (clear, white, white/light/bright, often referring to fire or morning light) being identical. The words that needed to be added were relatively few, so we were able to manually check that their emotion associations and intensities made sense. For future projects, we intend to employ human annotators in conjunction with word embeddings. Nonetheless, this approach alone showed much promise and was very accurate within the small sample size in both recognizing semantically similar words and in mitigating issues with semantic shift. Ultimately we ended up removing 128 entries from the lexicon and adding 203 tokens, including _rakastaa_, to love. As the FEIL lexicon was based on an English lexicon, the exclusion of such a central emotional term demonstrates the issue with noun and verb distinctions in Finnish compared to English. The noun and verb forms are often the same in English, unlike in Finnish where the forms are distinct (cf. to love/a love, to run/a run vs. rakastaa/rakkaus, juosta/juoksu). Many such instances were addressed in the creation of FEIL with the addition of verb forms copying the intensity scores and emotion associations of the English words, yet many verb forms are still missing from the lexicon. 
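To make the lexicon-adaptation procedure above concrete, here is a minimal sketch of the two steps involved: propagating intensities to a new word from its most similar in-lexicon neighbour in the corpus embedding space, and computing the normalized per-segment scores used in the next section. All names (`expand_lexicon`, `emotion_scores`, the 0.7 similarity threshold) are hypothetical illustrations rather than the project's actual code, and single-nearest-neighbour copying is only one plausible reading of the procedure:

```python
import numpy as np

def expand_lexicon(lexicon, vectors, candidates, min_sim=0.7):
    """Copy emotion intensities to a new word from its most similar
    in-lexicon neighbour in the corpus word-embedding space.

    lexicon:    {word: {emotion: intensity}}, a FEIL-style mapping
    vectors:    {word: np.ndarray}, embeddings trained on the corpus
    candidates: words we would like to introduce to the lexicon
    """
    lex_words = [w for w in lexicon if w in vectors]
    mat = np.stack([vectors[w] for w in lex_words])
    mat = mat / np.linalg.norm(mat, axis=1, keepdims=True)
    additions = {}
    for cand in candidates:
        if cand in lexicon or cand not in vectors:
            continue
        v = vectors[cand] / np.linalg.norm(vectors[cand])
        sims = mat @ v                     # cosine similarities to all lexicon words
        best = int(np.argmax(sims))
        if sims[best] >= min_sim:          # keep only confident matches
            additions[cand] = dict(lexicon[lex_words[best]])
    return additions                       # to be checked manually before merging

def emotion_scores(tokens, lexicon, per=1000):
    """Sum emotion intensities over a lemmatized segment, normalized
    per `per` tokens for inter-text comparability."""
    totals = {}
    for tok in tokens:
        for emo, val in lexicon.get(tok, {}).items():
            totals[emo] = totals.get(emo, 0.0) + val
    n = max(len(tokens), 1)
    return {emo: per * s / n for emo, s in totals.items()}
```

Copying from a single nearest neighbour would explain why words such as _kirkas_, _valkoinen_, and _valkea_ end up with identical associations; averaging over several neighbours is the obvious alternative if more smoothing is desired.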
We used this domain- and period-specific version of FEIL to tabulate normalized (per token count, for inter-text comparability) intensity scores for each target text. Other future projects should include checking that both noun and verb forms are found in the lexicon. Figure 1: Co-occurrence of emotions in FEIL. The co-occurrence matrix shows that emotion associations are not linear. A word associated with _anger_ is quite likely (0.5) to also be associated with _fear_, but a word associated with _fear_ is slightly less likely to be associated with _anger_ (0.42). Words associated with _sadness_ are highly likely to also be associated with _fear_; however, due to the very low number of _sadness_-associated words in the lexicon, _fear_-associated words are much less likely to simultaneously also evoke _sadness_ (0.09). _Joy_ is also the only purely positive emotion in the lexicon, which means it is more likely to be associated with many positive emotions beyond _joy_ itself due to the annotation process conducted with best-worst scaling [16] (ranking words associated with a specific emotion in terms of least to most associated). As mentioned earlier, tone and mood can be difficult to distinguish from each other and can even be intertwined to different degrees, especially if we focus on words alone. However, we argue that the tone of a literary text tends to shift much more within and between chapters, and therefore by focusing on the first paragraphs of each chapter, or even the opening paragraphs of the first chapter only, we can get a fairly accurate idea of the mood of the text due to its more stable nature, with a smaller risk of it being confused with tone. It is well-established that first impressions matter in literature, and beginnings tend to shape the experience of reading and set the mood for the whole text. Tone might vary within one text depending on changing narrative viewpoints or even narrators, or switching from descriptive language to dialogue. Theories of perception (e.g. Perry 1979) argue that the openings play a crucial role in creating a text's overall emotional disposition. The affective language and emotional effects created at the beginning of a text modify and adjust the reader's general emotional orientation by shaping modes of perception and organization of information. For instance, the melancholic mood created in the beginning of Aho's _Rautatie_ (Railroad), or the strong effects of disgust in the beginning of Sillanpaa's _Hurskas kurjuus_ (Meek Heritage), are likely to influence how the reader experiences later reactions and feelings triggered by narrative events. ## 5 The Mood in Selected Texts As the results mostly consist of a dataframe with emotion intensity scores for each text, they do not easily lend themselves to visual representations. Therefore, we are focusing the presentation of our results on a small subset of the data. We chose four texts based mostly on the second author's area of expertise and previous in-depth analyses of the affective landscapes of these texts. The scores for the chosen texts' first three paragraphs are presented in table form in table 2 and for both approaches comparatively in figures 2 and 3. ### An overview of the selected texts The first one is Juhani Aho's breakthrough novel _Rautatie_ (tr. as _The Railroad_, 1884). From the perspective of the implied reader, the novel evokes emotional effects of melancholia and nostalgia, which are characteristic of Aho's work. 
\begin{table} \begin{tabular}{l l l l l l l l} \hline **title** & **anger** & **anticipation** & **disgust** & **fear** & **joy** & **sadness** & **trust** \\ **Putkinotko** & 4.22 & 8.67 & 5.42 & 11.75 & 15.69 & 1.47 & 22.07 \\ **Kauppa-Lopo** & 12.45 & 5.71 & 7.40 & 19.11 & 7.66 & 0 & 0 \\ **Rautatie** & 8.56 & 21.15 & 0 & 10.50 & 13.81 & 5.52 & 16.74 \\ **Hurskas kurjuus** & 20.33 & 9.90 & 11.25 & 27.73 & 9.07 & 7.29 & 15.05 \\ \hline \end{tabular} \end{table} Table 2: Normalized emotion scores for each novel’s first three paragraphs. The second one is Minna Canth's _Kauppa-Lopo_ (no translation, the title refers to the protagonist's nickname, 1889), a tragic story of poverty and illness. The beginning of the novella, set in prison, underlines the anti-hero's ugly appearance, but the narrative contrasts the physical ugliness with an inner goodness: she is described as good-hearted and compassionate towards other people. The story was met with anger by the contemporary audience, and the critics did not value her social criticism. Canth's naturalism was considered "poor art" by her contemporaries, and she was accused of being an admirer of disgust, "destroying the laws of beauty, unfolding ugliness in every sense" [12, 13]. The third one is Frans Emil Sillanpaa's _Hurskas kurjuus_ (tr. as _Meek Heritage_, literally "Sacred Misery", 1919), which begins with a shocking prologue which anticipates the death of the protagonist: it describes the execution of a poor tenant farmer who had ended up as a Red Guard soldier in the Finnish Civil War (1918). Despite the negative emotions and the tragic events, the narrator also expresses trust and comfort in the future of the Finnish nation. When Sillanpaa's novel appeared it was met with confusion. The fourth and last one is _Putkinotko_ (no translation, 1919-20) by Joel Lehtonen. Like Sillanpaa's _Sacred Misery_, this novel tracks the tensions that escalated in the Finnish Civil War in 1918. The novel's protagonist, a good-hearted yet self-willed tenant farmer, refuses to obey the landlord and instead resorts to illegal distilling to support the family. The novel is emotionally ambivalent: the idyllic descriptions of Finnish summer nature and the comic elements are likely to arouse positive emotions, while the unembellished description of poverty intends to evoke moral anger and sadness over social inequality. ### Results The results for the texts are presented in figures 2 and 3. In figure 2 we can see some patterns emerge. In particular, the laconism of _Rautatie_ is evident when compared to the other authors, and the strong early emotional impact of _Hurskas kurjuus_ becomes very apparent, with _fear_ and _anger_, but also _sadness_, being particularly notable. _Fear_ and to some extent _anger_ are also highly present in _Kauppa-Lopo_, likely due to the descriptions of the prison environment and the appearance of the protagonist. _Trust_ and _joy_ are the most notable emotions in _Putkinotko_, perhaps due to the detailed descriptions of the idyllic landscape that dominate the opening chapter. Figure 2: Emotion word distribution in the first three paragraphs per 1000 words. When comparing the two different segmentations, first paragraph-only vs. opening paragraphs of each chapter, the latter shows a stronger prevalence of positive emotions. We take this to indicate that the intended opening mood for these novels is constructed on negative emotions intended to evoke strong feelings in the reader. 
In the latter segmentation, the emotion distribution is more varied and seemingly converges on the distribution of emotion words in the lexicon. ## 6 Discussion Our computationally derived results correspond highly with qualitative evaluations of the same target texts in terms of established interpretations of mood, specifically when compared to the emotion word distribution of the first three paragraphs of a literary text. Comparing the valency and intensity of emotions in the opening paragraphs of all chapters with those in the opening paragraphs of the book alone, the emotions approach the distribution of emotions in the lexicon in the former segmentation. That is, they become muddled because they are not focused enough. Only minor effects of the authors' style can be discerned when applying this method across all chapters. Not only are the differences between chapters reduced with the former, all-chapters, approach, but the differences between the texts also become less clear. This could also be in part because the focus and narrative choices become more varied, and therefore the results average out and start to converge on the distribution of emotions in the lexicon. We recommend that the quest for mood should begin with the opening paragraphs of a text. The qualitative analysis of the selected works, summarized above, has paid attention to various aspects of depicting and triggering emotions in literary texts: 1. the characters' and the narrators' emotions 2. the emotional effects targeted at the implied reader 3. the empirical readers' reactions in contemporary reception 4. the texts' tone, the organizing feeling of a literary work, which is never reducible to a reader's emotional response to a text, nor to a text's internal representations of feeling (on the concept of the emotional effect, see Lyytikainen 2017; on the notion of tone, see Ngai 2005, 28). Figure 3: Emotion word distribution in the first 200 tokens of each chapter per 1000 words. It should be noted that evoking emotional effects in literature is not restricted to emotion-associated words or to direct descriptions of the character's emotions. All facets of the narrative, from the description of objects to the narrative point of view and style, including tropes and even the rhythm of the text, are important aspects in triggering emotional effects in the reader. As an example, the melancholic tone of Juhani Aho's text is not generated by themes of separation and loss alone but also by Aho's style, which favors fragmentation and loosening of syntax, with a recurring mannerism of three points "...", as a sign of hesitation and withdrawal, even evoking a depressive loss of contact. The qualitative analysis demonstrates that the selected texts depict and trigger negative emotions in particular: feelings of deception, fear, anxiety, disgust and hatred, anger, moral indignation and melancholia. This can be partly explained by genre-specific emotional effects: a critical naturalist novel tends to shock and challenge its reader by representing and inciting strong negative emotions, which confirm the effect of the reality of a text and direct the reader's attention to the social defects described. For instance, the emotion of _disgust_, a genre-specific emotion of the naturalist novel, is a named emotion and salient in Sillanpaa's and Canth's novels in particular [Rossi, 2007, 2017, 2020]. The salience of negative emotions can be explained by the importance of negative emotions in literature and art in general. As discussed by Menninghaus et al. 
[2017], negative emotions are an important resource for the arts, since negative emotions have been shown to be particularly powerful in securing attention, intense emotional involvement, and high memorability, which is precisely what artworks strive for. Narrative plots routinely involve social conflicts and both represent and elicit negative emotions in response to such conflicts: failing marriages, unhappy love, long separations, adultery, betrayed friendship, and the like. In narratives, happiness is generally not described in great detail but rather evoked as a peak moment to be challenged, or as a goal to strive for: Canth and Jotuni depict the characters' desire for happiness and love, which is not possible in the society of their time; Sillanpaa's narrator expresses trust in and hope for the future of Finland, and Aho concludes _The Railroad_ with the normative happy ending of a fairy tale - yet this happiness is only evoked, not described in detail. ## 7 Future Work This dataset will be used for more robust detection of tone and mood in Finnish literature using many novel approaches. The approaches developed for this dataset will also be used with other literary datasets in many other languages. Our preliminary studies show that the "big data" results support qualitative analyses and further justify the use of purely lexicon-based methods and affect as a proxy for mood when dealing with larger collections of text where word choice is an important factor in evoking affective states in the reader. Explicitly, we can see that the choice of emotion-associated words in the first three paragraphs correlates highly with established analyses of mood in the selected texts. We hope to add established emotion categories from literary affect studies (see e.g. Hogan 2011) to the lexicon as a measure to further improve the usability of the FEIL lexicon for the literary domain [Ohman, 2020]. Additionally, we would like to expand on the methodologies used in this exploratory study, hopefully create further and more robust approaches to tone and mood detection in literature, and perhaps also incorporate intentionality detection [Guo et al., 2009] to further separate tone from mood. ## Acknowledgements This work was supported by JSPS KAKENHI Grant Number 22K18154.
2306.02035
Pythagoras Superposition Principle for Localized Eigenstates of 2D Moiré Lattices
Moiré lattices are aperiodic systems formed by a superposition of two periodic lattices with a relative rotational angle. In optics, the photonic moiré lattice has many appealing properties, such as its ability to localize light, and has thus attracted much attention to the exploration of the features of such structures. One fundamental research area for photonic moiré lattices is the properties of eigenstates, particularly the existence of localized eigenstates and the localization-to-delocalization transition in the energy band structure. Here we propose an accurate algorithm for the eigenproblems of aperiodic systems by combining plane wave discretization and spectral indicator validation under the higher-dimensional projection, allowing us to explore energy bands of fully aperiodic systems. A localization-delocalization transition regarding the intensity of the aperiodic potential is observed, and a novel Pythagoras superposition principle for localized eigenstates of 2D moiré lattices is revealed by analyzing the relationship between the aperiodic eigenstates and their corresponding periodic ones. This principle sheds light on the physics of localization in moiré lattices.
Zixuan Gao, Zhenli Xu, Zhiguo Yang, Fangwei Ye
2023-06-03T07:27:31Z
http://arxiv.org/abs/2306.02035v3
# Pythagoras Superposition Principle for Localized Eigenstates of 2D Moire Lattices ###### Abstract Moire lattices are aperiodic systems formed by a superposition of two periodic lattices with a relative rotational angle. In optics, the photonic moire lattice has many appealing properties, such as its ability to localize light, and has thus attracted much attention to the exploration of the features of such structures. One fundamental research area for photonic moire lattices is the properties of eigenstates, particularly the existence of localized eigenstates and the localization-to-delocalization transition in the energy band structure. Here we propose an accurate algorithm for the eigenproblems of aperiodic systems by combining plane wave discretization and spectral indicator validation under the higher-dimensional projection, allowing us to explore energy bands of fully aperiodic systems. A localization-delocalization transition regarding the intensity of the aperiodic potential is observed, and a novel Pythagoras superposition principle for localized eigenstates of 2D moire lattices is revealed by analyzing the relationship between the aperiodic eigenstates and their corresponding periodic ones. This principle sheds light on the physics of localization in moire lattices. ## I Introduction The structural geometrical properties of natural or artificial systems profoundly impact the properties of waves that are allowed to propagate in them. Thus, a fascinating range of phenomena stemming from the geometrical properties of material landscapes, such as their periodicity, is continually being discovered in diverse areas of physics, including mechanics, acoustics, optics, electronics, solid-state physics, and physics of matter waves [1; 2; 3; 4; 5; 6; 7; 8; 9; 10]. Recently, moire systems [11; 12; 13; 14; 15; 16] have drawn much attention due to their unusual electronic, optical and magnetic properties, and their potential for designing novel materials with tailored functionalities [17; 18]. A moire system is a system that involves two or more periodic structures with different lattice constants or orientations, which interact with each other to form a spatial moire pattern. These systems can arise in a variety of fields such as materials science, condensed matter physics, optics, and electronics. For example, in condensed-matter physics, the moire systems are revealing a wealth of profound physical effects that have established a new area of research referred to as twistronics [3]. Moire patterns are also crucial in all areas of physics related to wave propagation, such as Bose-Einstein condensates and optics [8; 9], where they afford the possibility to explore the phenomena that arise because of the transition from aperiodic (incommensurate) to periodic (commensurate) geometries, occurring at specific values of the rotation angle, in contrast to aperiodic quasicrystal systems. Photonic moire lattices can be created by the superposition of two rotated square or hexagonal sublattices [19; 20]. Recent experiments reported the observation of the 2D localization-delocalization transition (LDT) [19] of light waves when one tunes the twisting angles or the depth of the modulation of the constituent sublattices. In one dimension the LDT effect had been observed for both light [21] and matter waves [22]. The localization phenomenon is due to the band flattening of the moire pattern in the incommensurate (namely, aperiodic) phase [19]. 
Theoretically, the properties of localized eigenstates and the existence of the LDT in the eigenvalue spectrum for a fixed moire lattice remain less explored due to the difficulty in calculating eigenproblems for aperiodic systems. Traditional crystalline approximant methods [23; 24] are slowly convergent and cannot provide results accurate enough for physical understanding. In this Article, we develop an efficient method to solve the aperiodic problems, which allows us to explore the properties of localized eigenstates in moire lattices, and by this method we reveal the Pythagoras superposition principle between an aperiodic system and its periodic crystalline approximants. In the paraxial approximation, the propagation of an extraordinarily polarized beam in a photorefractive medium with an optically induced refractive index is described by the Schrodinger-like equation for the dimensionless field amplitude \(\phi(\mathbf{r},z)\)[25]: \[\mathrm{i}\frac{\partial\phi}{\partial z}=-\frac{1}{2}\nabla^{2}\phi+\frac{E_{0}}{1+I(\mathbf{r})}\phi \tag{1}\] where \(\mathbf{r}=(x,y)\) and \(I(\mathbf{r})=|p_{1}v(\mathbf{r})+p_{2}v(S\mathbf{r})|^{2}\) is the intensity of the moire lattice induced by two ordinarily polarized mutually coherent periodic sublattices, \(v(\mathbf{r})\) and \(v(S\mathbf{r})\). Here \(S\) is a 2D rotational matrix such that \(S(\theta)\mathbf{r}\) rotates the vector \(\mathbf{r}\) by a counterclockwise angle \(\theta\). \(p_{1}\) and \(p_{2}\) are the amplitudes of the first and second sublattices, and \(p_{1}/p_{2}\) is defined as the lattice ratio. The potential term \(V(\mathbf{r})=E_{0}/(1+I(\mathbf{r}))\) describes the optical response of the photorefractive crystal [19; 20], where \(E_{0}\) describes its strength. Notably, the moire structures can be periodic or aperiodic, depending on the twisting angle \(\theta\). For the moire lattices composed of two square sublattices considered here, the structure is periodic when \(\theta\) is a Pythagorean angle and aperiodic otherwise [20]. Throughout this article, the sublattices of the moire systems have the fixed parameter \(p_{1}/p_{2}=1\). To visualize the intriguing properties of moire lattices, Fig. 1 presents the first eigenstates of aperiodic systems for different \(E_{0}\) with \(\theta=\pi/6\) and of periodic crystalline approximants with the Pythagorean angle \(\theta=\arcsin(451/901)\), which approximates \(\pi/6\) with error \(\sim 2\times 10^{-4}\pi\), together with the results of aperiodic systems with \(\theta=0.6435\), which approximates the Pythagorean angle \(\arcsin(3/5)\). These data describe the results in the domain \([-33.3401,33.3401]^{2}\). Here the aperiodic systems are calculated by the projection indicator (PI) method described below, while the periodic systems are solved by the plane wave method [26]. The photonic lattice actually acts as an effective potential that can trap or diffuse light during its propagation, and a larger modulus indicates a stronger effective potential. These results clearly show a more localized tendency with the increase of \(E_{0}\). At \(E_{0}=7\) and \(10\) both the aperiodic and periodic systems demonstrate mode localization. We note that, although the lattice for (e-h) is periodic, the width of the wavepacket for \(E_{0}=7,\theta=\arcsin(451/901)\) is \(2.813\), which is much less than the period \(T=\sqrt{901/2}\,\pi\), implying that the eigenstate of the periodic approximant is localized. 
This localized eigenstate shows that a localized light field with a central ring shape in the transverse plane is obtained, indicating a localized high-order light mode [27]. In contrast, for \(\theta=0.6435\), which is close to the Pythagorean angle \(\arcsin(3/5)\), the eigenstates are delocalized, since the lattice can be approximated by a periodic system with the much smaller period \(T=\sqrt{5/2}\,\pi\). Interestingly, the eigenstate of the aperiodic system features many sharp peaks and looks completely different from that of its periodic companion, as a comparison between \(\theta=\pi/6\) and \(\theta=\arcsin(451/901)\) shows. Different from the non-localized light fields, the peaks of the localized moire light field are more like a superposition of several ring-shaped localized light fields than side lobes around a central main lobe, indicating a weak diffusing effect [27]. Each peak corresponds to a localized light field in its neighborhood, and the photonic moire lattices have some ability to trap the light during its propagation. This is counterintuitive, taking into account that the difference in rotation angles of the two systems is negligible, and it illustrates the discontinuity of the eigenstates with respect to the rotation angle. Thus, generally, this signals the breakdown of the traditional crystalline approximant method [23; 24] for solving the eigenstates of the aperiodic systems, and the error cannot be controlled due to the simultaneous Diophantine approximation [28]. Another commonly employed approach for addressing similar problems is the continuum model Hamiltonian method [1]. This method solves the Hamiltonian structure [14] of the moire system in a specific region. Instead of truncating the traditional domain in physical space, the continuum model Hamiltonian method truncates the momentum-space lattice at the first shell, thereby incurring a momentum-space truncation error similar to that of the conventional crystalline approximant method. This motivates us to propose an efficient and accurate method for aperiodic eigenvalue problems that circumvents errors such as the simultaneous Diophantine error. ## II Method Here we develop the projection indicator (PI) method to solve the eigenproblem of the Schrodinger equation \[E\psi=-\frac{1}{2}\nabla^{2}\psi+\frac{E_{0}}{1+I(\mathbf{r})}\psi, \tag{2}\] by a combination of the projection method [29] and the indicator method with plane wave discretizations. For simplicity, one chooses the projection matrix \(\mathbf{P}\) so that the period in each direction is \(2\pi\), expressed as \[\mathbf{P}=2\begin{bmatrix}\cos\gamma&-\sin\gamma&\cos\gamma&\sin\gamma\\ \sin\gamma&\cos\gamma&-\sin\gamma&\cos\gamma\end{bmatrix} \tag{3}\] with \(\gamma=\theta/2\), where the projection direction is parallel to the shortest periodic edge of the 4D periodic system. By the projection matrix \(\mathbf{P}\), one can obtain the transition from 2D to 4D spaces, \(\mathbf{q}=\mathbf{P}^{\top}\mathbf{r}\), with \(\mathbf{q}=(q_{1},q_{2},q_{3},q_{4})\). Then substituting \(\phi=e^{-\mathrm{i}Ez}\psi\) into Eq. (1) leads to a 4D eigenproblem, \[E\psi=-\frac{1}{2}\sum_{i,j=1}^{4}\frac{\partial^{2}\psi}{\partial q_{i}\partial q_{j}}\left(\frac{\partial q_{i}}{\partial x}\frac{\partial q_{j}}{\partial x}+\frac{\partial q_{i}}{\partial y}\frac{\partial q_{j}}{\partial y}\right)+\tilde{V}(\mathbf{q})\psi. 
\tag{4}\] Here \(\tilde{V}(\mathbf{q})\) represents the potential function \(V(\mathbf{r})\) after being lifted to 4D space, which implies that if \(\mathbf{q}=\mathbf{P}^{\top}\mathbf{r}\) holds, then \(\tilde{V}(\mathbf{q})=V(\mathbf{r})\). \(\tilde{V}(\mathbf{q})\) is a periodic function on \([0,2\pi]^{4}\); thus Eq. (4) is a 4D periodic eigenproblem. The numerical solution \(\tilde{\psi}_{N}\) of Eq. (4) can be expanded using the plane wave expansion as \[\tilde{\psi}_{N}(\mathbf{q})=\sum_{\mathbf{k}\in\Omega}\psi_{\mathbf{k}}e^{\mathrm{i}\langle\mathbf{k},\mathbf{q}\rangle}. \tag{5}\] Here \(\langle\cdot,\cdot\rangle\) denotes the standard inner product between vectors, \(\Omega=\mathbb{Z}^{4}\cap\{||\mathbf{k}||_{\infty}\leq N\}\) is the basis space, \(N\) represents the number of spectral modes in each dimension, and \(\psi_{\mathbf{k}}\) are the Fourier expansion coefficients. Let \(\mathbf{k}=(k_{1},\cdots,k_{4})\). Eq. (4) can be transformed into \[E\psi_{\mathbf{k}}=\frac{1}{2}\sum_{i=1}^{2}\sum_{j=1}^{4}\sum_{l=1}^{4}\left(\psi_{\mathbf{k}}k_{j}k_{l}\frac{\partial q_{j}}{\partial r_{i}}\frac{\partial q_{l}}{\partial r_{i}}\right)+\mathcal{F}\{V\psi\}_{\mathbf{k}}, \tag{6}\] where \(\mathcal{F}\{\cdot\}\) denotes the Fourier transform and \(\mathcal{F}\{V\psi\}_{\mathbf{k}}\) is the Fourier coefficient of \(\mathcal{F}\{V\psi\}\) with the frequency \(\mathbf{k}\). Collect all the Fourier expansion coefficients \(\psi_{\mathbf{k}}\) into a column vector \(\vec{\psi}\). Hence, one can form a matrix \(\mathbf{A}\) to transform Eq. (6) into a matrix eigenproblem \(\mathbf{A}\vec{\psi}=E\vec{\psi}\). Due to the enormous size of \(\mathbf{A}\), which is not sparse, it cannot be stored explicitly. Therefore, we use a matrix-free preconditioned Krylov subspace method [30], which only requires the matrix-vector product to be evaluated in each iteration, making it a more efficient approach. Once the eigenvector \(\vec{\psi}\) is obtained, the four-dimensional eigenfunction \(\psi(\mathbf{q})\) can be approximated using Eq. (5). By the projection matrix \(\mathbf{P}\), \(\tilde{\psi}_{N}\) can be transformed back to the 2D space, which implies that \[\tilde{\psi}_{N}(\mathbf{q})=\sum_{\mathbf{k}\in\Omega}\psi_{\mathbf{k}}e^{\mathrm{i}\langle\mathbf{k},\mathbf{q}\rangle}=\sum_{\mathbf{k}\in\Omega}\psi_{\mathbf{k}}e^{\mathrm{i}\langle\mathbf{P}\mathbf{k},\mathbf{r}\rangle}:=\psi_{N}(\mathbf{r}). \tag{7}\] Here the second equality uses \(\langle\mathbf{k},\mathbf{P}^{\top}\mathbf{r}\rangle=\langle\mathbf{P}\mathbf{k},\mathbf{r}\rangle\). The 2D aperiodic function \(\psi_{N}(\mathbf{r})\) as the eigenfunction and \(E\) as the eigenvalue are the numerical results of the original eigenproblem Eq. (2) returned by the PI. Therefore, this approach is equivalent to solving the higher-dimensional periodic lattice problem within the subspace of the original problem, applying the plane wave method. On the other hand, if the twisting angle is such that the moire lattices restore periodicity, one can solve the corresponding eigenproblem directly in the original (2D) space, using the plane wave method, thanks to the Floquet-Bloch theorem [31]. Due to the singularity of the eigenvalue problem, especially when \(E_{0}\) is large, the error in the numerical calculation may lead to some spurious eigenstates. In order to sift out pseudo-eigenstates, a spectral indicator method [32; 33; 34] is adopted. 
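Before turning to the indicator, a rough illustrative sketch of the discretization just described is given below (Python, not the authors' code). It builds \(\mathbf{P}\) of Eq. (3), the kinetic term of Eq. (6), which for each mode \(\mathbf{k}\) reduces to \(\frac{1}{2}|\mathbf{P}\mathbf{k}|^{2}\), and the matrix-free action of \(\mathbf{A}\) with the potential applied by FFTs; the plane-wave sublattice profile used for \(\tilde{V}\) is an assumption chosen so that the 4D lift is exact, and need not match the paper's actual \(v(\mathbf{r})\).

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, eigsh

def projection_matrix(theta):
    """P of Eq. (3), with gamma = theta / 2."""
    g = theta / 2.0
    return 2.0 * np.array([[np.cos(g), -np.sin(g),  np.cos(g), np.sin(g)],
                           [np.sin(g),  np.cos(g), -np.sin(g), np.cos(g)]])

def kinetic_diagonal(M, P):
    """Kinetic term of Eq. (6) for every 4D mode k (numpy FFT ordering);
    summing the derivative products collapses it to (1/2)|P k|^2."""
    k = np.fft.fftfreq(M, d=1.0 / M)              # integer frequencies
    K = np.array(np.meshgrid(k, k, k, k, indexing="ij"))
    PK = np.tensordot(P, K, axes=(1, 0))          # P k, shape (2, M, M, M, M)
    return 0.5 * np.sum(PK**2, axis=0)

def lifted_potential(M, E0):
    """Illustrative 2*pi-periodic lift V~(q) = E0 / (1 + I(q)); the intensity
    assumed here (plane-wave sublattices, p1 = p2 = 1) is a stand-in chosen
    so the lift is exact, not necessarily the paper's exact profile."""
    q = np.linspace(0.0, 2.0 * np.pi, M, endpoint=False)
    Q = np.meshgrid(q, q, q, q, indexing="ij")
    I = (np.cos(Q[0]) + np.cos(Q[1]) + np.cos(Q[2]) + np.cos(Q[3])) ** 2
    return E0 / (1.0 + I)

M, theta, E0 = 12, np.pi / 6, 1.0
P = projection_matrix(theta)
kin = kinetic_diagonal(M, P)
Vq = lifted_potential(M, E0)

def matvec(v):
    """Matrix-free action of A on a vector of Fourier coefficients:
    diagonal kinetic part, plus the potential applied in q-space via FFTs
    (equivalently, a 4D convolution in k-space)."""
    psi = v.reshape((M,) * 4)
    return (kin * psi + np.fft.fftn(Vq * np.fft.ifftn(psi))).ravel()

# A is Hermitian, so a Krylov eigensolver can target the lowest eigenvalues
# without ever storing the dense matrix.
A = LinearOperator((M**4, M**4), matvec=matvec, dtype=complex)
E, _ = eigsh(A, k=6, which="SA")
```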
Define the indicator [35] \[\mathrm{Ind}=\|\mathbf{Q}(\mathbf{Q}\mathbf{f}/\|\mathbf{Q}\mathbf{f}\|)\| \tag{8}\] where the matrix \(\mathbf{Q}\) is the spectral projection \[\mathbf{Q}=\frac{1}{2\pi i}\int_{\Gamma}(\mathbf{A}-s\mathbf{I})^{-1}ds, \tag{9}\] with \(\mathbf{I}\) being the identity matrix. The indicator becomes \(1\) if there exists at least one eigenvalue inside the square enclosed by \(\Gamma\). To evaluate the integral, the closed path is divided uniformly into four parts and the composite trapezoidal rule is employed to compute \(\mathbf{Q}\mathbf{f}\), where \(\mathbf{f}\) takes the potential function \(V\) directly. The numerical approximation of \(\mathbf{Q}\mathbf{f}\) can be obtained via a certain quadrature rule \[\mathbf{Q}\mathbf{f}\approx\frac{1}{2\pi i}\sum_{j=1}^{n_{0}}\omega_{j}\mathbf{r}_{j}. \tag{10}\] Here \(\{\omega_{j}\}\) are quadrature weights and \(\{\mathbf{r}_{j}\}\) are the solutions of the linear systems \[(\mathbf{A}-s_{j}\mathbf{I})\mathbf{r}_{j}=\mathbf{f},\quad j=1,2,\ldots,n_{0}, \tag{11}\] where \(\{s_{j}\}\) are the quadrature nodes on \(\Gamma\). For simplicity, the piecewise trapezoid formula is chosen for the contour integral Eq. (9). The size of \(\Gamma\) is small, such that only a few sample points guarantee high accuracy. Since the spectral projection method provides many solutions of Eq. (4), we can take a small domain around each eigenvalue and calculate the indicator value to validate the correctness of this eigenvalue. Detailed steps of the PI are included in Algorithm 1. ``` Data:\(d\)-dimensional quasi-periodic potential \(V\), the number of bases \(N\) in each dimension, the number of eigenvalues \(M\), the step size \(\delta\), and the threshold value \(\epsilon\) 1 Determine the basis space and the test space \(\Omega\) 2 Compute the first \(M\) eigenpairs of \(\mathbf{A}\) as \(\{(E_{m},u_{m})\}_{m=1}^{M}\) 3 for \(m=1\) to \(M\) do 4 Set \(\omega=[E_{m}-\delta/2,E_{m}+\delta/2]^{2}\) and \(\mathbf{f}\) by \(V\) 5 Compute the indicator \(\mathrm{Ind}=\|\mathbf{Q}(\mathbf{Q}\mathbf{f}/\|\mathbf{Q}\mathbf{f}\|)\|\) by Eq. (9) 6 if \(\mathrm{Ind}<\epsilon\) then 7 Delete the eigenpair \((E_{m},u_{m})\) 8 end if 9 Project the eigenfunctions \(u_{m}\) back into the \(d\)-dimensional space 10 end for ``` **Algorithm 1** Projection indicator method ## III Accuracy performance To validate the accuracy performance of the PI method, the absolute errors of the first eigenvalues are evaluated and presented in Fig. 2, where panels (a) and (b) correspond to 1D and 2D moire systems, respectively. Here the 1D moire system has the potential \(V(x)=E_{0}/(1+I(x))\) with \(I(x)=(\cos(2x\cos\theta)+\cos(2x\sin\theta))^{2}+1\), and the rotation angle is set as \(\pi/6\). The results for the three cases \(E_{0}=1,3/2\) and \(7\) are presented with the increase of \(N\). In the low strength cases \(E_{0}=1,3/2\), the eigenstates are delocalized and one can see the spectral convergence of the numerical approximation. Conversely, in the strong case \(E_{0}=7\), the numerical approximation converges more slowly and more nodes are needed to achieve high accuracy. These results demonstrate that the larger \(E_{0}\), the more singular the system, and the slower the convergence of the numerical results. In 1D, a small value of \(N\) can achieve an accuracy of approximately \(10^{-10}\) for \(E_{0}=1\). For 2D systems, rapid convergence is observed for both cases \(E_{0}=1\) and \(E_{0}=3/2\). These results demonstrate the high accuracy and the fast convergence of the PI method for low strength cases. 
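Referring back to Algorithm 1, the validation step of Eqs. (8)-(11) might look as follows; this is a schematic sketch under stated assumptions (a generic `matvec` such as the one above, unpreconditioned GMRES with default tolerances, four trapezoid nodes per edge of \(\Gamma\)), not the authors' implementation.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def apply_Q(matvec, f, center, half_width, nodes=4):
    """Trapezoidal approximation of Q f (Eqs. (9)-(11)): contour integral of
    (A - sI)^{-1} f over the square Gamma around `center`, counterclockwise."""
    n = f.size
    corners = center + half_width * np.array([1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j, 1 + 1j])
    Qf = np.zeros(n, dtype=complex)
    for a, b in zip(corners[:-1], corners[1:]):        # the four edges of Gamma
        s_nodes = a + (b - a) * np.linspace(0.0, 1.0, nodes + 1)
        sols = []
        for s in s_nodes:                              # Eq. (11): (A - s I) r = f
            shifted = LinearOperator((n, n), dtype=complex,
                                     matvec=lambda v, s=s: matvec(v) - s * v)
            r, _ = gmres(shifted, f)                   # a preconditioner would help here
            sols.append(r)
        h = (b - a) / nodes                            # trapezoid weights of Eq. (10)
        Qf += h * (0.5 * sols[0] + sum(sols[1:-1]) + 0.5 * sols[-1])
    return Qf / (2.0j * np.pi)

def indicator(matvec, f, center, half_width):
    """Ind of Eq. (8); it stays near 1 only if an eigenvalue lies inside Gamma."""
    Qf = apply_Q(matvec, f, center, half_width)
    return np.linalg.norm(apply_Q(matvec, Qf / np.linalg.norm(Qf), center, half_width))

# Usage against a candidate eigenvalue E_m, with f built from the potential
# as in Algorithm 1 (the 1.25e-4 half-width is illustrative):
# keep = indicator(matvec, Vq.ravel().astype(complex), E_m, 1.25e-4) > eps
```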
For systems with large strength \(E_{0}\), the eigenfunctions tend to be localized. This makes it more challenging for the numerical calculation to achieve high accuracy. Consequently, if the numerical error in the eigenfunction is significant, the wave function will be unable to remain unchanged during the time evolution. By examining the wave propagation behavior governed by the time-dependent Schrodinger-like equation Eq. (1), one can validate the accuracy of the eigenstates obtained through the PI. Fig. 3 presents the results of the 1D moire system at different times. The initial condition takes the eigenfunctions for \(E_{0}=4\) and \(7\), which are calculated using the PI with \(N=50\). One can observe that for the case of \(E_{0}=7\) the wave function remains unaltered regardless of the propagation time. However, for the smaller strength \(E_{0}=4\), the wave function exhibits slight oscillatory changes during propagation. This behavior shows the stable propagation of the localized eigenstate, and the results demonstrate that the PI method provides accurate solutions even when the eigenfunctions are localized. To further validate the accuracy of the PI for localized eigenstates, we calculate the first 6 eigenvalues of the 2D aperiodic systems with \(E_{0}=4\) and \(7\) and display the results in Fig. 4. In the spectral projection step, 24 and 32 spectral nodes are used in each dimension, and the spectral indicator sets the size of \(\Gamma\) to be \(2.5\times 10^{-4}\). Notably, the overall errors are less than \(10^{-3}\), serving as evidence of the high accuracy of the PI. The indicator-only method is used to verify the accuracy of the PI results, which involves searching for eigenvalues across the entire domain. For the results obtained through the spectral projection step, one finds that less than half of the eigenvalues are validated by the indicator method. This highlights the necessity of the indicator test in ensuring accuracy. Figure 3: Wave propagation of 1D aperiodic potentials at \(E_{0}=4\) and \(7\). (a,b,c,d) correspond to the results at \(t=0,10,100\) and \(1000\). Figure 2: Error of the first eigenvalue of moiré systems with the increase of \(N\) for \(E_{0}=1,3/2\) and \(7\): (a) the 1D case; (b) the 2D case. ## IV Pythagoras Superposition Principle We are now ready to study the energy bands and the LDT phenomenon of 2D photonic moire lattices. The LDT in moire lattices with respect to the change of the lattice ratio was discussed in Wang _et al._ [19]. However, the effect of varying \(E_{0}\) on localization remains unclear. The PI is used to calculate the eigenstates of the moire system. Through the location of the peaks in the localized eigenstates, one can deduce the localization positions of the wavepackets propagated in this system. Here we consider the 2D aperiodic systems at \(E_{0}=7\) and \(\theta=\pi/6\) and attempt to understand the misconvergence of the simultaneous Diophantine approximation. Fig. 5(a-d) show the \(1^{\text{st}},3^{\text{rd}},41^{\text{st}}\) and \(109^{\text{th}}\) eigenstates for \(|\psi|^{2}\), with \(N=30\). The two eigenstates of (c,d) are at the middle and bottom of the first energy band of the aperiodic system, where the degree of localization decreases with the increase of the mode number. A quantitative description of the localization degree of the eigenstates in a given region \(U\) is the integral form factor (IFF) [19; 36], expressed by \[\text{IFF}=\frac{\left(\int_{U}|\psi|^{4}d^{2}\mathbf{r}\right)^{1/2}}{\int_{U}|\psi|^{2}d^{2}\mathbf{r}}. 
\tag{12}\] A larger IFF means a more localized state of the eigenfunction \(\psi\). Fig. 5(e) displays the IFFs of all eigenstates in the first energy band, where states A-D correspond to panels (a-d), respectively. One observes a decreasing tendency of the IFF value with increasing mode index. Figure 4: The eigenvalues obtained by the PI and the indicator-only method. The first six eigenvalues of 2D aperiodic systems with \(E_{0}=4\) and \(7\) are displayed. The projection method uses plane waves \(N=24\) and \(32\) along one direction. The indicator method uses intervals of size \(2.5\times 10^{-4}\). Figure 5: Results of the 2D aperiodic system at \(E_{0}=7\) and \(\theta=\pi/6\). (a,b) Contours of the first and third eigenstates; (c,d) Contours of the eigenstates at the middle (\(41^{\text{st}}\)) and bottom (\(109^{\text{th}}\)) of the first energy band; (e) The IFFs of all the eigenstates in the first energy band as a function of eigenvalue. States A-D correspond to the eigenstates of panels (a-d); (f,g) Enlarged plots of the first eigenstate in two domains \([0,40]^{2}\) and \([40,80]^{2}\) with the \(y=x\) cuts; (h) The peak sites of interior wavepackets (red circles) in panel (a), which are located at the nodes of the \(T\) mesh. The first eigenstate of the aperiodic system is the only one with an IFF bigger than \(0.05\). In order to verify the localization character, the enlarged plots of the eigenstate are presented in Fig. 5(f,g), together with the \(y=x\) cuts for \(|\psi|^{2}\). The exponential decay of the wavepackets can be observed, demonstrating the exponential localization characteristics of the eigenfunction. Fig. 5(a-d) thus verifies the mode transition from localization into delocalization with increasing mode index in the energy spectrum. This is in agreement with the experimental demonstration [19] of the LDT in 2D photonic moire lattices, which revealed the mechanics of wave localization based on the flat-band structure, in contrast to the schemes based on light diffusion in photonic quasicrystals requiring disordered media [37, 38]. The difference between aperiodic eigenstates and their periodic approximants can also be illustrated by the phase structures of the first eigenstates. Fig. 6 displays the results for the localized system with parameters \(E_{0}=7\) and \(\theta=\pi/6\) and its periodic approximant. For localized systems, as depicted in Fig. 6(a), the phase does not change very frequently with space. In some areas, there also exist abrupt changes in phase, which implies that the moire lattice can also preserve the topological phase structure of the localized light mode. In Fig. 6(b), by contrast, the phase of the periodic approximant changes periodically and strongly. These differences show that direct periodic approximation may not preserve all physical properties. Moreover, Fig. 1 indicates that the crystalline approximants do not converge to the aperiodic system. Consequently, the IFFs of the aperiodic system are significantly smaller than the results of its periodic counterpart. In order to connect the moire lattice and its crystalline approximants, we introduce a Pythagoras triple \((a,b,c)\) to represent a Pythagorean angle such that \(\sin\theta=a/c\). Let the central cell be \([0,T]^{2}\), where \(T=\sqrt{c}\,\pi\) or \(\sqrt{c/2}\,\pi\) is the period. 
Due to the symmetry of the potential, the first eigenfunction is composed of wavepackets located at the four corners of the cell for the period \(\sqrt{c/2}\,\pi\), or of wavepackets at the corners and the center of the cell for the period \(\sqrt{c}\,\pi\). Table 1 lists the \(c\), \(\theta\) and \(T\) values of the 9 periodic approximants to the moire angle (with period \(T_{i}<150\), \(i=1,\cdots,9\)). Fig. 5(h) displays that the peak sites of those interior wavepackets are all located at the nodes of the \(T\) mesh (grid points \(nT_{i}/2\) for integer \(n\)) for the first eigenstate shown in Fig. 5(a). One can clearly observe that these happen to be the packet sites of all these periodic approximants. This correspondence clearly appears as a superposition principle for localized eigenstates over the Pythagorean angles. This is to say that an eigenstate of the moire lattice can be considered as the summation of the eigenstates of its crystalline approximants, and the weight of each approximant depends on the Diophantine error between the twist angles. The periodic systems near \(\theta=\pi/6\) have large periods, while the periodic system with the smallest period, \(\theta=\arcsin(3/5)\), is near \(\theta=0.6435\). By the superposition principle, when \(E_{0}\) is large the eigenstates of the former systems show a sparse peak distribution, while the peaks of the eigenstates for \(\theta=0.6435\) are very dense. This conclusion is consistent with the numerical results in Fig. 1. To provide a theoretical understanding of the Pythagoras superposition principle, we consider the simplified 1D aperiodic system whose equation is Eq. (2). Here the use of the 1D system is due to its intuitive physical picture, and the corresponding Schrodinger-like equation is also easy to solve in the incremental space \((q_{1},q_{2})\). Fig. 7(a-c) shows the contours of the first eigenstates for the rotation angles \(\pi/6,\pi/4\) and \(\pi/8\) in the \(\mathbf{q}\) space, where some localized regions of javelin shape can be observed; we denote them as the localization in higher dimensions. The projection lines defined by \(q_{2}=\tan(\pi/12)q_{1},q_{2}=\tan(\pi/8)q_{1}\) and \(q_{2}=\tan(\pi/16)q_{1}\) describe the physical solutions of the aperiodic systems. Since each projection line is not parallel to the javelin-shaped domains, it intersects with many domains, leading to the wavepackets in the lower-dimensional physical space. The localization phenomenon is independent of the rotation angle. Fig. 7(d) shows these packets for \(x<300\), where peaks A-C are due to the periodic systems with \(c=12545,3361\) and \(901\); correspondingly, the periods are \(\sqrt{12545/2}\,\pi,\sqrt{3361}\,\pi\) and \(\sqrt{901}\,\pi\). These are in agreement with the analysis based on the Pythagoras triples. The periodic system due to peak A has the rotation angle \(\arcsin(6273/12545)\approx 0.16668\pi\), an error of \(10^{-5}\pi\) to the moire angle \(\pi/6\). This small Diophantine error results in a strongly localized wavepacket, as observed from the figure. Figure 6: The phase structure of the first eigenstate obtained by the PI: (a) the aperiodic localized system, (b) the periodic approximant. Fig. 7(e,f) illustrates the intersecting lines corresponding to peaks A and B in panel (d), where the black dashed lines are the projection lines of the periodic systems, slightly different from the projection line of the aperiodic system. 
In the enlarged subplots, the purple triangles represent the locations of these peaks, and the black circles are the corresponding peak locations of the periodic systems. Due to the periodic approximation, the black line crosses the central axis of the javelin-shaped domain. One can observe that in each panel the triangle and circle symbols are very close, demonstrating that the locations of the wavepackets for the aperiodic systems do have a relation to the approximating periodic systems. Let \(\varepsilon\) be the distance between the two symbols, representing the error in the locations of the wavepackets. This error can be roughly estimated as \(\varepsilon\approx T\Delta\theta\), where \(T\) is the period of the periodic system and \(\Delta\theta\) is the difference between the twist angles of the aperiodic and periodic systems. The \(\varepsilon\) values of peaks A-C are \(0.0115,0.0332\) and \(0.0427\), respectively. The \(T\Delta\theta\) values are \(0.0115,0.0313\) and \(0.0428\), which are high precision approximations of \(\varepsilon\). Hence \(T\Delta\theta\) can characterize the error in the localizations of the wavepackets. ## V Conclusion To summarize, we propose a highly efficient PI algorithm, which is a combination of the projection method and the indicator method, for aperiodic eigenproblems of photonic moire lattices. \begin{table} \begin{tabular}{l l l} \hline \hline \(c\) & \(\theta\) & \(T\) \\ \hline 65 & \(0.169500\pi\) & \(T_{1}=\sqrt{65/2}\,\pi\) \\ 241 & \(0.165903\pi\) & \(T_{2}=\sqrt{241}\,\pi\) \\ 901 & \(0.166858\pi\) & \(T_{3}=\sqrt{901/2}\,\pi\) \\ 725 & \(0.167431\pi\) & \(T_{4}=\sqrt{725}\,\pi\) \\ 2701 & \(0.166476\pi\) & \(T_{5}=\sqrt{2701/2}\,\pi\) \\ 3361 & \(0.166603\pi\) & \(T_{6}=\sqrt{3361}\,\pi\) \\ 4813 & \(0.167081\pi\) & \(T_{7}=\sqrt{4813}\,\pi\) \\ 7925 & \(0.163487\pi\) & \(T_{8}=\sqrt{7925}\,\pi\) \\ 10085 & \(0.166731\pi\) & \(T_{9}=\sqrt{10085}\,\pi\) \\ \hline \hline \end{tabular} \end{table} Table 1: Parameters of 9 periodic systems near \(\pi/6\) Figure 7: The first eigenstate of the 1D aperiodic system: (a,b,c) the contour plots in the incremental space for the rotation angles \(\pi/6,\pi/4\) and \(\pi/8\). The corresponding projection lines are \(q_{2}=\tan(\pi/12)q_{1},q_{2}=\tan(\pi/8)q_{1}\) and \(q_{2}=\tan(\pi/16)q_{1}\), respectively; (d) the projected eigenstate in the physical space, where the widths of A-C are all about 3.68; (e,f) the detailed contours corresponding to peaks A and B. The rotation angle of (d,e,f) is \(\pi/6\). The PI solves the problem directly without using periodic approximations, such that the simultaneous Diophantine approximation can be avoided. It allows us to accurately calculate the band structure of the eigenstates in moire lattices. In addition, we conduct an analysis of the connections between periodic and aperiodic systems in terms of their structures, numerical algorithms and eigenstate properties. We find that the localized eigenstates in the moire lattices are determined by the periodic lattices adjacent to them, leading to the Pythagoras superposition principle. This principle connects the aperiodic and periodic lattices and is promising for further exploring moire lattices and wavepacket localization in 2D and 3D systems. ###### Acknowledgements. Z. G. and Z. X. are supported by the National Natural Science Foundation of China (NNSFC) (No. 12071288) and the Science and Technology Commission of Shanghai Municipality (grant Nos. 20JC1414100 and 21JC1403700). Z. Y. is supported by the NNSFC (No. 
12101399) and the Shanghai Sailing Program (No. 21YF1421000). F.Y. is supported by the NNSFC (No. 91950120), Scientific funding of Shanghai (No. 9ZR1424400), and Shanghai Outstanding Academic Leaders Plan (No. 20XD1402000).
2306.15549
Quantum phenomena inside a black hole: quantization of the scalar field inside the horizon in Schwarzschild spacetime
We discuss the problem of the quantization and dynamic evolution of a scalar free field in the interior of a Schwarzschild black hole. A unitary approach to the dynamics of the quantized field is proposed: a time-dependent Hamiltonian governing the Heisenberg equations is derived. It is found that the system is represented by a set of harmonic oscillators coupled via terms corresponding to the creation and annihilation of pairs of particles, and that the symmetry properties of the spacetime, homogeneity and isotropy, are obeyed by the coupling terms in the Hamiltonian. It is shown that the Heisenberg equations for annihilation and creation operators are transformed into ordinary differential equations for appropriate Bogolyubov coefficients. Such a formulation leads to a general question concerning the possibility of gravitationally driven instability, which is, however, excluded in this case.
Pawel Gusin, Andrzej Radosz, Andy T. Augousti, Janos Polonyi, Oleg B. Zaslavskii, Romuald J. Ściborski
2023-06-27T15:20:33Z
http://arxiv.org/abs/2306.15549v1
# Quantum phenomena inside a black hole: quantization of the scalar field inside the horizon in Schwarzschild spacetime

###### Abstract

_We discuss the problem of the quantization and dynamic evolution of a scalar free field in the interior of a Schwarzschild black hole. A unitary approach to the dynamics of the quantized field is proposed: a time-dependent Hamiltonian governing the Heisenberg equations is derived. It is found that the system is represented by a set of harmonic oscillators coupled via terms corresponding to the creation and annihilation of pairs of particles, and that the symmetry properties of the spacetime, homogeneity and isotropy, are obeyed by the coupling terms in the Hamiltonian. It is shown that the Heisenberg equations for annihilation and creation operators are transformed into ordinary differential equations for appropriate Bogolyubov coefficients. Such a formulation leads to a general question concerning the possibility of gravitationally driven instability, which is, however, excluded in this case._

## 1 Introduction

The horizon of a black hole (BH) may be regarded as a geometrical singularity ("fake geometrical singularity"). Indeed, considering a Schwarzschild BH in Schwarzschild coordinates one finds a metric tensor exhibiting an on-horizon singularity that is absent in other, singularity-free coordinate systems. There are a variety of singularity-free coordinate systems in this case, e.g. Kruskal-Szekeres, Eddington-Finkelstein, Novikov and others [1-2]. Two interesting observations might be made here. The first is that the presence of the event horizon is manifested both in coordinates revealing the horizon's singularity and in the singularity-free systems. The second is the surprising similarities and/or analogies between phenomena taking place outside and inside black holes. A rather well-known example of such a property is the so-called BSW effect [3]. Two-particle collisions occurring in the vicinity of the black hole's horizon may lead to a high-energy outcome according to two scenarios [4-5]. These two scenarios turn out to be the same in the exterior as well as in the interior of the BH. A variety of other aspects of the Exterior vs Interior (a)symmetry have been discussed in Ref. [6]. It was shown by Doran et al. [7] that the interior of a Schwarzschild BH, which is a dynamically changing spacetime, may be regarded as a solution of Einstein's equations. This interior spacetime, also called the "T-sphere" (see [8]), which is globally hyperbolic, then gains the status of a cosmological model. Its 3D spatial-like section is a hypercylinder \({\bf R}^{1}\times S^{2}\), expanding longitudinally along the homogeneity direction \({\bf R}^{1}\) (see also [6-8]) and contracting transversally, perpendicular to this direction, in the angular coordinates of the sphere \(S^{2}\). However, as shown in Ref. [7], such a process may be preceded by a process of expansion of the sphere and collapse of the cylinder to its base sphere of radius \(r_{S}\). Such an expansion followed by a contraction constitutes the full cycle of the cosmological model introduced in [7]. Various phenomena and processes have been considered both in the interior of the Schwarzschild BH [8, 10-13] and in its extension [7], which we will hereafter refer to as the "T-model", an anisotropic cosmological model. In particular, the Yang-Mills and Higgs fields in the Kantowski-Sachs anisotropic, cigar-like cosmological model (referred to above as a hypercylinder) were discussed in [14] (see also [15]).
Canonical quantization of the scalar field inside a Schwarzschild BH was presented by Yajnik and Narayan [16], where a so-called tortoise coordinate was used, leading in consequence to a Hamiltonian of diagonal form and, as claimed by the authors, to a "QFT set up by the freely falling observer". Other studies of the quantum properties of the scalar field were given for instance in Refs. [17-18], and investigations of the interior of the Schwarzschild BH were presented in Refs. [19-20]. The most recent results have been given by Almeida and Rodrigues in Ref. [21], where the quantization of BH gravity was discussed, and by Giddings and Perkins in Ref. [22], in which the quantum evolution of the Hawking state in Schwarzschild spacetime was investigated. In this paper we will present a particular quantum aspect of the "T-model". Namely, the problem of dynamics, i.e. the temporal evolution of the quantized scalar field in such a cosmology, will be introduced and briefly discussed within a unitary approach. The Hamiltonian of the system, represented by a set of harmonic oscillators coupled via the creation and annihilation of pairs of particles and revealing interesting symmetry properties, will be derived. The Heisenberg equations of motion for the appropriate annihilation and creation operators will be converted into ordinary differential equations for Bogolyubov coefficients and will be shown to reveal the possibility of an instability referred to as a gravitationally driven instability. The paper is organized as follows. In Sec. 2 we discuss the properties of the Schwarzschild BH and formulate the T-model. In Sec. 3 the scalar field and its quantization are discussed. In Sec. 4 the Hamiltonian of the scalar field is derived, and a discussion is presented in the final section, Sec. 5; the Appendix is devoted to a derivation of the explicit form of the temporal part of the (factorized) Klein-Gordon equation.

## 2 "T-sphere" model - an anisotropic cosmological model

The metric \(g_{\mu\nu}\) for the exterior of the Schwarzschild black hole, diagonal in the Schwarzschild coordinates \(\left(t,r,\theta,\varphi\right)\), reveals the singularity on the horizon:

\[ds^{2}=g_{t}\left(r\right)dt^{2}-g_{r}\left(r\right)dr^{2}-g_{2}\left(r\right) d\Omega^{2}, \tag{2.1}\]

where

\[g_{t}=1-\frac{2M}{r}=g_{r}^{-1}, \tag{2.2}\]

\(g_{2}\left(r\right)=r^{2}\), and \(d\Omega^{2}\) denotes the metric on the two-dimensional unit sphere \(S^{2}\) with the coordinates \(\left(\theta,\phi\right)\):

\[d\Omega^{2}=d\theta^{2}+\sin^{2}\theta d\phi^{2}. \tag{2.3}\]

The geometrical singularity at the horizon, \(r_{S}=2M\), may be removed by a transformation to a singularity-free coordinate system, such as Kruskal-Szekeres, Eddington-Finkelstein, Novikov, Lemaitre or other systems [1-2]. The coordinate system (2.1), though ill-defined on the horizon, may be applied inside the horizon (see e.g. [6-7]). The interior of a BH, \(r<r_{S}\), possesses, apart from some well-known properties, some not so well-known ones too (see [9]). The Killing vector \(\partial_{t}\) becomes a spatial one, which results in momentum conservation instead of the energy conservation obeyed outside the BH (see below). This is accompanied by an interchange of the roles of the coordinates: \(t\) and \(r\) play the role of the spatial-like and temporal-like coordinates, respectively. An interesting feature of the interior of a Schwarzschild BH is that it may be regarded as a unique spacetime, a cosmological anisotropic model called the "T-sphere" model or simply the T-model [8].
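Since the statement that this interior spacetime solves the vacuum Einstein equations is central to what follows, a short symbolic check may be useful. The sympy sketch below computes the Ricci tensor of the T-model line element (2.4), written out just below, and verifies that it vanishes identically. We adopt the convention of the Appendix, \(T\in(0,r_{S})\), for which \(g_{z}=r_{S}/T-1>0\); the result is insensitive to this choice, since (2.4) is the Schwarzschild solution in relabeled coordinates.

```python
import sympy as sp

# A sketch checking that the T-model metric (2.4) is Ricci-flat.
T, z, th, ph = sp.symbols('T z theta phi')
rS = sp.symbols('r_S', positive=True)
x = [T, z, th, ph]
gz = rS / T - 1
g = sp.diag(1 / gz, -gz, -T**2, -T**2 * sp.sin(th)**2)  # ds^2 of Eq. (2.4)
ginv = g.inv()

# Christoffel symbols Gamma^l_{ij}
Gam = [[[sp.simplify(sum(ginv[l, s] * (sp.diff(g[s, i], x[j])
                                       + sp.diff(g[s, j], x[i])
                                       - sp.diff(g[i, j], x[s]))
                         for s in range(4)) / 2)
         for j in range(4)] for i in range(4)] for l in range(4)]

# Ricci tensor R_{ij} = d_l Gam^l_{ij} - d_j Gam^l_{il}
#                       + Gam^l_{ls} Gam^s_{ij} - Gam^l_{js} Gam^s_{il}
def ricci(i, j):
    return sp.simplify(sum(sp.diff(Gam[l][i][j], x[l])
                           - sp.diff(Gam[l][i][l], x[j])
                           + sum(Gam[l][l][s] * Gam[s][i][j]
                                 - Gam[l][j][s] * Gam[s][i][l]
                                 for s in range(4))
                           for l in range(4)))

assert all(ricci(i, j) == 0 for i in range(4) for j in range(4))
print("Ricci tensor of the T-model metric vanishes identically.")
```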
The T-model is described by the line element (2.1) for \(r<r_{S}\), but now expressed in terms of the \(T\left(=-r\right)\) (temporal) and \(z\) (spatial) coordinates instead of the \(r\) and \(t\) coordinates, respectively:

\[ds_{-}^{2}=g_{T}dT^{2}-g_{z}dz^{2}-g_{2}\left(T\right)\left(d\theta^{2}+\sin^ {2}\theta d\varphi^{2}\right), \tag{2.4}\]

where \(T\in\left\langle-r_{S},0\right\rangle\), \(z\in\left(-\infty,+\infty\right)\), \(g_{T}=\left(\frac{r_{S}}{T}-1\right)^{-1}=g_{z}^{-1}\). At each instant \(T_{0}\) the spatial slice is a hypercylinder \(\mathbf{R}^{1}\times S^{2}\), expanding longitudinally while its transversal section, a two-sphere of radius \(\left|T_{0}\right|\), contracts (see e.g. [6]). Along the cylinder axis \(z\) the system is homogeneous, and this homogeneity is reflected in the conservation of the \(z\)-component of momentum. Phenomena of a classical nature have been considered in the T-model both within a more traditional approach (see e.g. [10-13]) and from other specific perspectives (see [9], [23-25]). Here we will consider a special quantum problem, namely the dynamics of the quantized scalar field in the T-model, introduced and briefly discussed within a unitary approach.

## 3 Scalar free field in a T-model

A scalar free field \(\Phi\) in a space-time \(M\) with a metric \(g_{\mu\nu}\) is described in terms of the Lagrangian density \(\mathcal{L}\):

\[\mathcal{L}=\frac{1}{2}\sqrt{-g}\left[g^{\mu\nu}\partial_{\mu}\Phi\partial_{\nu} \Phi-\left(\mu^{2}+\xi R\right)\Phi^{2}\right], \tag{3.1}\]

where \(g=\det\left[g_{\alpha\beta}\right]\), the parameter \(\mu\) can be interpreted as the mass only in asymptotically flat space-time, \(R\) is the scalar curvature of \(M\) and \(\xi\) is the coupling of the field to the spacetime curvature. In the case of the spacetime (2.4) the coupling to the gravitational field vanishes (as \(R=0\)) and the action of the scalar free field (3.1) takes the form

\[S=\frac{1}{2}\int dT\int\limits_{\mathbf{\Sigma}}dzd\Omega T^{2}\left[\frac{1 }{g_{T}}\left(\partial_{T}\Phi\right)^{2}-\frac{1}{g_{z}}\left(\partial_{z} \Phi\right)^{2}+\frac{1}{T^{2}}\Phi\Delta_{S^{2}}\Phi-\mu^{2}\Phi^{2}\right], \tag{3.2}\]

where \(\Sigma=\mathbf{R}^{1}\times S^{2}\), \(d\Omega=\sin\theta d\varphi d\theta\), and we have integrated by parts in the \(S^{2}\) sector, which resulted in the Laplace operator \(\Delta_{S^{2}}\) on \(S^{2}\):

\[\Delta_{S^{2}}\Phi=\frac{1}{\sin\theta}\frac{\partial}{\partial\theta}\left( \sin\theta\frac{\partial\Phi}{\partial\theta}\right)+\frac{1}{\sin^{2}\theta} \frac{\partial^{2}\Phi}{\partial\varphi^{2}}. \tag{3.3}\]

The Klein-Gordon (or Euler-Lagrange) equation

\[\frac{1}{\sqrt{-g}}\partial_{\mu}\left(\sqrt{-g}g^{\mu\nu}\partial_{\nu}\Phi \right)+\mu^{2}\Phi=0 \tag{3.4}\]

takes in this case the following form:

\[\partial_{T}\left(T^{2}g_{z}\partial_{T}\Phi\right)-\frac{T^{2}}{g_{z}} \partial_{z}^{2}\Phi-\Delta_{S^{2}}\Phi+\mu^{2}T^{2}\Phi=0. \tag{3.5}\]

Taking the field \(\Phi\) in the form of a product,

\[\Phi\left(T,z,\theta,\phi\right)=R\left(T\right)u\left(z\right)Y\left(\theta,\phi\right), \tag{3.6}\]

it follows that the wave equation (3.5) separates into the following equations:

\[\Delta_{S^{2}}Y=-l\left(l+1\right)Y, \tag{3.7}\]

\[\frac{d^{2}u_{\varepsilon}}{dz^{2}}=-\varepsilon^{2}u_{\varepsilon}, \tag{3.8}\]

\[\frac{d}{dT}\left(T^{2}g_{z}\frac{dR_{\varepsilon l}}{dT}\right)+T^{2}\left( \frac{\varepsilon^{2}}{g_{z}}+\mu^{2}+\frac{l\left(l+1\right)}{T^{2}}\right)R _{\varepsilon l}=0, \tag{3.9}\]

where \(\varepsilon\) is a (separation) constant.
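The temporal equation (3.9) has no elementary closed-form solution (see the Appendix), but it is straightforward to integrate numerically. The following sketch, with purely illustrative parameter values \(r_{S}=1\), \(\varepsilon=2\), \(l=1\), \(\mu=0.5\) and the Appendix convention \(T\in(0,r_{S})\), checks that the Wronskian-type quantity \(T^{2}g_{z}(R^{*}\dot{R}-\dot{R}^{*}R)\), whose constancy underlies the normalization condition (3.17) derived below, is indeed conserved along the flow.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameter values (our own choice, not taken from the paper).
rS, eps, l, mu = 1.0, 2.0, 1, 0.5

def p(T):                      # p(T) = T^2 g_z = T (r_S - T)
    return T * (rS - T)

def q(T):                      # q(T) = eps^2 T^3/(r_S - T) + mu^2 T^2 + l(l+1)
    return eps**2 * T**3 / (rS - T) + mu**2 * T**2 + l * (l + 1)

def rhs(T, y):                 # Eq. (3.9) as a first-order system, y = [R, p R']
    R, S = y
    return [S / p(T), -q(T) * R]

T0, T1 = 0.3, 0.9
y0 = [1.0 + 0j, -0.5j]         # chosen so that W(T0) = -i, matching (3.17)
sol = solve_ivp(rhs, (T0, T1), y0, rtol=1e-10, atol=1e-12, dense_output=True)

for T in np.linspace(T0, T1, 5):
    R, S = sol.sol(T)
    W = np.conj(R) * S - R * np.conj(S)   # equals p (R* R' - R R'*)
    print(f"T = {T:.2f}   W = {W:.10f}")  # stays at -1j up to solver error
```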
The solutions of Eq. (3.7) are given by the spherical harmonics \(Y_{lm}\left(\theta,\phi\right)\),

\[\int\limits_{S^{2}}d\Omega Y_{lm}\left(\theta,\varphi\right)Y_{l^{\prime}m^{ \prime}}^{\ast}\left(\theta,\varphi\right)=\delta_{ll^{\prime}}\delta_{mm^{ \prime}}, \tag{3.10}\]

\[\int\limits_{S^{2}}d\Omega Y_{lm}\left(\theta,\varphi\right)Y_{l^{\prime}-m^{ \prime}}\left(\theta,\varphi\right)=\delta_{ll^{\prime}}\delta_{m,-m^{\prime}}, \tag{3.11}\]

where \(m=-l,-\left(l-1\right),\ldots,0,\ldots,l\). The solution of equation (3.8) is

\[u\left(z\right)=e^{\pm i\varepsilon z}. \tag{3.12}\]

One can decompose the field \(\Phi\) into this complete system of functions on \(\mathbf{R}^{1}\) and \(S^{2}\). Thus, the real field \(\Phi=\Phi^{\ast}\) is represented as

\[\Phi\left(T,z,\theta,\varphi\right)=\sum\limits_{\varepsilon,l,m}\left[R_{ \varepsilon l}\left(T\right)e^{i\varepsilon z}Y_{lm}\left(\theta,\varphi \right)A_{\varepsilon lm}+R_{\varepsilon l}^{\ast}\left(T\right)e^{-i \varepsilon z}Y_{lm}^{\ast}\left(\theta,\varphi\right)A_{\varepsilon lm}^{ \ast}\right], \tag{3.13}\]

where \(R_{\varepsilon l}\left(T\right)\) are functions of the temporal variable \(T\) satisfying the second order differential equation (3.9), and \(A_{\varepsilon lm}\) are Fourier-like coefficients. The (Klein-Gordon) scalar product \(\left(\cdot,\cdot\right)\) is in general defined as

\[\left(\Phi,\Psi\right)=i\int\limits_{\Sigma_{t}}\left(\Phi^{\ast}\partial_{ \mu}\Psi-\Psi\partial_{\mu}\Phi^{\ast}\right)n^{\mu}dvol\left(\Sigma_{t} \right), \tag{3.14}\]

where \(n=n^{\mu}\partial_{\mu}\) denotes the unit time-like vector field orthogonal to a space-like hypersurface (slice) \(\Sigma_{t}\), and \(\Phi,\Psi\) are solutions of the Klein-Gordon equation. In this case \(\Sigma_{t}\simeq A\times S^{2}\) and the scalar product takes the form (see [17], [26]):

\[\left(\Phi,\Psi\right)=iT^{2}g_{z}\int\limits_{S^{2}}\sin\theta d\theta d\phi \int\limits_{A}\left(\Phi^{\ast}\partial_{T}\Psi-\Psi\partial_{T}\Phi^{\ast} \right)dz. \tag{3.15}\]

There is the following normalization condition:

\[A_{\varepsilon lm}=\left(R_{\varepsilon l}\left(T\right)e^{i\varepsilon z}Y _{lm}\left(\theta,\varphi\right),\Phi\right), \tag{3.16}\]

where \(\Phi\) is given by (3.13); this is equivalent to requiring the canonical commutation relations (see also below). After some (lengthy but simple) algebra one finds that condition (3.16) is satisfied iff

\[T^{2}g_{z}\left[R_{\varepsilon l}^{*}\overset{\cdot}{R}_{\varepsilon l}-\overset{ \cdot*}{R}_{\varepsilon l}R_{\varepsilon l}\right]=-i, \tag{3.17}\]

\[R_{\varepsilon l}^{*}\overset{\cdot}{R}_{-\varepsilon l}^{*}-R_{- \varepsilon l}^{*}\overset{\cdot*}{R}_{\varepsilon l}=0. \tag{3.18}\]

The condition (3.17) is derived from the differential equation (3.9). First, one writes Eq. (3.9) for the complex conjugated function \(R_{\varepsilon l}^{*}\); then one multiplies it by \(R_{\varepsilon l}\) and Eq. (3.9) by \(R_{\varepsilon l}^{*}\); finally one subtracts the former from the latter, obtaining

\[d_{T}\left(T^{2}g_{z}\left[R_{\varepsilon l}^{*}\overset{\cdot}{R}_{ \varepsilon l}-\overset{\cdot*}{R}_{\varepsilon l}R_{\varepsilon l}\right] \right)=0. \tag{3.19}\]

Therefore, (3.17) turns out to be a normalization condition for \(R_{\varepsilon l}\), i.e. the Wronskian in this case, as it should be. On the other hand, Eq. (3.18) is satisfied identically, since Eq. (3.9) depends on \(\varepsilon\) only through \(\varepsilon^{2}\) and one may take \(R_{-\varepsilon l}=R_{\varepsilon l}\).

### Quantization

Quantization of the field (3.1)-(3.2) is performed in the canonical way.
Namely, one introduces the momentum field as the field canonically conjugate to \(\Phi\left(T,z,\theta,\varphi\right)\), i.e.

\[\pi=\frac{\partial\mathcal{L}}{\partial\left(\partial_{T}\Phi\right)}=\frac{ T^{2}}{g_{T}}\partial_{T}\Phi. \tag{3.20}\]

Then one imposes the canonical commutation relations

\[\left[\widehat{\Phi}\left(t,\mathbf{x}\right),\widehat{\pi}\left(t, \mathbf{y}\right)\right]=i\delta\left(\mathbf{x},\mathbf{y}\right),\qquad \left[\widehat{\Phi}\left(t,\mathbf{x}\right),\widehat{\Phi}\left(t,\mathbf{y} \right)\right]=\left[\widehat{\pi}\left(t,\mathbf{x}\right),\widehat{\pi} \left(t,\mathbf{y}\right)\right]=0, \tag{3.21}\]

where \(\mathbf{x},\mathbf{y}\in\Sigma_{t}\). In our case the slice \(\Sigma_{t}\) has the topology of the product of a set \(A\subset\mathbf{R}^{1}\) and the two-dimensional sphere \(S^{2}\). The momentum field, given in its Fourier decomposed form, is

\[\widehat{\pi}\left(t,r,\theta,\phi\right)=\frac{T^{2}}{g_{T}}\sum_{ \varepsilon,l,m}\left[\widehat{A}_{\varepsilon lm}\overset{\cdot}{R}_{ \varepsilon l}\left(T\right)e^{i\varepsilon z}Y_{lm}\left(\theta,\phi\right)+ \widehat{A}_{\varepsilon lm}^{\dagger}\overset{\cdot*}{R}_{\varepsilon l} \left(T\right)e^{-i\varepsilon z}Y_{lm}^{*}\left(\theta,\phi\right)\right]. \tag{3.22}\]

The canonical commutation relations (3.21) turn out to be satisfied under the following conditions:

a) \(\widehat{A}_{\varepsilon lm}\), \(\widehat{A}_{\varepsilon lm}^{\dagger}\) are the annihilation and creation operators, respectively, i.e. the only nonvanishing commutator is

\[\left[\widehat{A}_{\varepsilon lm},\widehat{A}_{\varepsilon^{\prime}l^{ \prime}m^{\prime}}^{\dagger}\right]=\delta_{\varepsilon\varepsilon^{\prime}} \delta_{ll^{\prime}}\delta_{mm^{\prime}}; \tag{3.23}\]

b) the Wronskian condition (3.17) must hold.

## 4 Hamiltonian of the scalar field in a T-model

The Hamiltonian of the field described by the Lagrangian density \(\mathcal{L}\) is determined as an integral over the spatial part \(\mathbf{\Sigma}\) of the spacetime,

\[H=\int\limits_{\mathbf{\Sigma}}d^{3}x\left[\pi\partial_{T}\Phi-\mathcal{L} \right], \tag{4.1}\]

and this expression is equivalent to the (integrated) \(T_{TT}\) element of the stress-energy tensor. Applying formula (4.1) to the case (2.4) and (3.1) one obtains

\[H=\frac{1}{2}\int\limits_{\mathbf{\Sigma}}dzd\theta d\varphi T^{2}\sin\theta \left[\frac{1}{g_{T}}\left(\partial_{T}\Phi\right)^{2}+\frac{1}{g_{z}}\left( \partial_{z}\Phi\right)^{2}-\Phi\Delta_{S^{2}}\Phi+\mu^{2}\Phi^{2}\right]. \tag{4.2}\]

Using the Fourier decomposition of the quantized field and momentum (see Eqs.
(3.13), (3.22)) one finds the Hamiltonian of the quantized scalar field expressed in terms of annihilation and creation operators:

\[H=\frac{1}{2}\sum\limits_{\varkappa}\left[\omega_{\varkappa}\widehat{A}_{ \varkappa}\widehat{A}_{\varkappa}^{\dagger}+\gamma_{\varkappa\varkappa^{ \prime}}\widehat{A}_{\varkappa}\widehat{A}_{\varkappa^{\prime}}+\left(c.c. \right)\right], \tag{4.3}\]

where the indices \(\varkappa,\varkappa^{\prime}\) stand for the three-index sets \(\varepsilon lm\). The parameters \(\omega_{\varkappa},\gamma_{\varkappa\varkappa^{\prime}}\) are given by

\[\gamma_{\varepsilon lm/\varepsilon^{\prime}lm^{\prime}}=\left[T^{2}g_{z} \overset{\cdot}{R}_{\varepsilon l}\overset{\cdot}{R}_{-\varepsilon l}+T^{2} \left\{\frac{\varepsilon^{2}}{g_{z}}+\frac{l\left(l+1\right)}{T^{2}}+\mu^{2} \right\}R_{\varepsilon l}\left(T\right)R_{-\varepsilon l}\left(T\right) \right]\delta_{\varepsilon,-\varepsilon^{\prime}}\delta_{m,-m^{\prime}}, \tag{4.4}\]

\[\omega_{\varepsilon lm}=\left[T^{2}g_{z}\overset{\cdot}{R}_{\varepsilon l} \overset{\cdot}{R}_{\varepsilon l}+T^{2}\left\{\frac{\varepsilon^{2}}{g_{z}} +\frac{l\left(l+1\right)}{T^{2}}+\mu^{2}\right\}R_{\varepsilon l}\left(T \right)R_{\varepsilon l}^{\ast}\left(T\right)\right]. \tag{4.5}\]

Therefore, the Hamiltonian of the scalar field in the T-model, i.e. the anisotropic cosmological model representing the interior of the Schwarzschild BH, turns out to be

\[H=\frac{1}{2}\sum\limits_{\varepsilon lm}\left[\omega_{\varepsilon lm}\left( \widehat{A}_{\varepsilon lm}\widehat{A}_{\varepsilon lm}^{\dagger}+ \widehat{A}_{\varepsilon lm}^{\dagger}\widehat{A}_{\varepsilon lm}\right)+ \gamma_{\varepsilon lm/-\varepsilon l-m}\widehat{A}_{\varepsilon lm}\widehat {A}_{-\varepsilon l-m}+\gamma_{\varepsilon lm/-\varepsilon l-m}^{\ast} \widehat{A}_{\varepsilon lm}^{\dagger}\widehat{A}_{-\varepsilon l-m}^{ \dagger}\right], \tag{4.6}\]

representing a set of interacting, time-dependent harmonic oscillators. On this basis one can study the dynamics of the quantized scalar field. The evolution of the system is described by the Heisenberg equations of motion for the operators \(\widehat{A}_{\varepsilon lm}\):

\[i\frac{d}{dt}\widehat{A}_{\varepsilon lm}=\left[\widehat{A}_{\varepsilon lm},\widehat{H}\right]=\omega_{\varepsilon lm}\left(t\right)\widehat{A}_{ \varepsilon lm}\left(t\right)+\gamma_{\varepsilon lm}^{\ast}\left(t\right) \widehat{A}_{-\varepsilon l-m}^{\dagger}\left(t\right), \tag{4.7}\]

where \(\gamma_{\varepsilon lm/-\varepsilon l-m}\equiv\gamma_{\varepsilon lm}\). One can search for solutions of the above equations by using the following ansatz:

\[\widehat{A}_{\varepsilon lm}\left(t\right)=\alpha_{\varepsilon lm}\left(t \right)\widehat{A}_{\varepsilon lm}+\beta_{\varepsilon lm}\left(t\right) \widehat{A}_{-\varepsilon l-m}^{\dagger}, \tag{4.8}\]

where \(\alpha_{\varepsilon lm}\left(t\right)\) and \(\beta_{\varepsilon lm}\left(t\right)\) are some unknown complex functions and \(\widehat{A}_{\varepsilon lm}\) and \(\widehat{A}_{-\varepsilon l-m}^{\dagger}\) are time-independent operators. By definition the relation (4.8) preserves the commutation relations (3.23), hence it is a Bogolyubov transformation,

\[\left|\alpha_{\varepsilon lm}\left(t\right)\right|^{2}-\left|\beta_{ \varepsilon lm}\left(t\right)\right|^{2}=1.
\tag{4.9}\]

Then, the Heisenberg equations (4.7) are converted into differential equations for the Bogolyubov coefficients:

\[i\frac{d}{dt}\alpha_{\varepsilon lm}\left(t\right)=\omega_{\varepsilon lm} \left(t\right)\alpha_{\varepsilon lm}\left(t\right)+\gamma_{\varepsilon lm}^ {\ast}\left(t\right)\beta_{\varepsilon lm}^{\ast}\left(t\right), \tag{4.10}\]

\[i\frac{d}{dt}\beta_{\varepsilon lm}\left(t\right)=\omega_{\varepsilon lm} \left(t\right)\beta_{\varepsilon lm}\left(t\right)+\gamma_{\varepsilon lm}^ {\ast}\left(t\right)\alpha_{\varepsilon lm}^{\ast}\left(t\right). \tag{4.11}\]

In general, one cannot expect exact solutions of Eqs. (4.10)-(4.11), and approximate schemes must therefore be proposed. Our forthcoming paper will be devoted to a comprehensive discussion of this problem.

## 5 Discussion

Considering the interior of a Schwarzschild BH as a unique spacetime, an anisotropic cosmological model, we have performed the quantization of the free (noninteracting) scalar field by imposing the canonical commutation relations. One decomposes the field and the momentum in terms of the complete set of solutions of the Klein-Gordon (in fact, Euler-Lagrange) equation, with the coefficients of the expansion being annihilation and creation operators. This procedure leads to a Hamiltonian of the quantized scalar field taking the form of a set of harmonic, time-dependent oscillators coupled in a special way: there are terms in the Hamiltonian corresponding to the annihilation, \(\gamma_{\varepsilon lm}\widehat{A}_{\varepsilon lm}\widehat{A}_{-\varepsilon l -m}\), and creation, \(\gamma_{\varepsilon lm}^{\ast}\widehat{A}_{\varepsilon lm}^{\dagger}\widehat{A }_{-\varepsilon l-m}^{\dagger}\), of particles in pairs. Such a picture, peculiar at first sight, has a deeper meaning. The spacetime considered is a dynamic one: there is no energy conservation, hence the Hamiltonian contains terms representing the spontaneous creation and annihilation of pairs of particles. Homogeneity of the spacetime along the \(z\)-direction results in the presence of a spatial-like Killing vector representing conservation of the \(z\)-component of momentum. The Hamiltonian (4.6) reflects this symmetry property: pairs of particles with opposite \(z\)-components of momentum may be created, \(\widehat{A}_{\varepsilon lm}^{\dagger}\widehat{A}_{-\varepsilon l-m}^{\dagger}\), and annihilated, \(\widehat{A}_{\varepsilon lm}\widehat{A}_{-\varepsilon l-m}\); the Hamiltonian of the system also obeys rotational invariance. The conservation of the \(z\)-component of momentum in the terms represented by \(\gamma_{\varepsilon lm}\) and \(\gamma_{\varepsilon lm}^{\ast}\) in the Hamiltonian is an analogue of energy conservation outside the BH, i.e. the particles in a pair carry positive/negative energy; the one with negative energy cannot survive outside the BH, but only within the horizon.
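To illustrate how Eqs. (4.10)-(4.11) behave in practice, the sketch below integrates them for a single mode with smooth toy profiles \(\omega(t)\) and \(\gamma(t)\) of our own choosing (they are not derived from Eqs. (4.4)-(4.5)). The run confirms that the Bogolyubov constraint (4.9) is preserved along the evolution and tracks \(|\beta(t)|^{2}\), which, as discussed next in Eq. (5.1), counts the created particles.

```python
import numpy as np
from scipy.integrate import solve_ivp

def omega(t):
    return 1.0 + 0.3 * np.tanh(t)   # real frequency, toy profile

def gamma(t):
    return 0.4 * np.exp(-t**2)      # pair-creation coupling, toy profile

def rhs(t, y):                      # Eqs. (4.10)-(4.11) for one mode
    a, b = y
    return [-1j * (omega(t) * a + np.conj(gamma(t)) * np.conj(b)),
            -1j * (omega(t) * b + np.conj(gamma(t)) * np.conj(a))]

y0 = [1.0 + 0j, 0.0 + 0j]           # vacuum at t = 0: alpha = 1, beta = 0
sol = solve_ivp(rhs, (0.0, 10.0), y0, rtol=1e-10, atol=1e-12,
                dense_output=True)

for t in np.linspace(0.0, 10.0, 6):
    a, b = sol.sol(t)
    # |alpha|^2 - |beta|^2 should stay equal to 1 (Eq. (4.9));
    # |beta|^2 is the particle number of Eq. (5.1).
    print(f"t={t:5.2f}  |alpha|^2-|beta|^2 = {abs(a)**2 - abs(b)**2:.8f}"
          f"  N(t) = {abs(b)**2:.6f}")
```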
There is a more or less obvious interpretation of the \(\beta_{\varepsilon lm}\left(t\right)\) coefficient of the Bogolyubov transformation (4.8): its modulus squared gives the number of particles created during the evolution of the system,

\[\left\langle 0\left(t\right)\right|\widehat{A}_{\varepsilon lm}^{\dagger} \widehat{A}_{\varepsilon lm}\left|0\left(t\right)\right\rangle=\left\langle 0 \right|\widehat{A}_{\varepsilon lm}^{\dagger}\left(t\right)\widehat{A}_{ \varepsilon lm}\left(t\right)\left|0\right\rangle=\left|\beta_{\varepsilon lm }\left(t\right)\right|^{2}, \tag{5.1}\]

where \(\left|0\right\rangle\) is the vacuum state for the fixed time \(t=0\) and the annihilation operators \(\widehat{A}_{\varepsilon lm}\), while \(\left|0\left(t\right)\right\rangle\) is the vacuum state for a later time \(t\) and the annihilation operators \(\widehat{A}_{\varepsilon lm}\left(t\right)\). Due to the violent dynamics of the background spacetime, one may expect the dynamics of the creation and annihilation of (pairs of) particles to be violent as well, and conventional adiabatic-like approaches (see e.g. [17-18]) can hardly be regarded as a working scheme. Therefore, attempts to find approximate solutions within the treatment proposed here, which might be called a "unitary approach" since it is based on the unitarity of the evolution of the system, will be discussed in our following paper. One interesting aspect of the dynamics of the model (3.1) will, however, be briefly discussed here: the question of the possible instability of the system of interacting harmonic oscillators (4.6) (see [27-28]). The oscillators interact in pairs, \(\left(\varepsilon lm\right)/\left(-\varepsilon l-m\right)\), and one can consider the diagonalization (at an arbitrary instant \(T^{\prime}\)) of the Hamiltonian corresponding to such a subsystem. The frequency in the diagonalized case is then given by

\[\Omega_{\varepsilon lm}^{2}=\omega_{\varepsilon lm}^{2}-\left|\gamma_{ \varepsilon lm/-\varepsilon l-m}\right|^{2}. \tag{5.2}\]

This expression should be positive; otherwise the system is unstable (see [27]; this problem will be discussed in detail in our following paper), which would be named a "gravitationally driven instability". One can check that in this case, by Eqs. (4.4)-(4.5), the right-hand side of Eq. (5.2),

\[\Omega_{\varepsilon lm}^{2}=\frac{1}{g_{z}}\left[\frac{\varepsilon^{2}}{g_{z }}+\frac{l\left(l+1\right)}{T^{2}}+\mu^{2}\right], \tag{5.3}\]

is positive: there is no gravitational instability for the scalar field quantized in the Doran et al. [7] spacetime. An interesting issue is that, apart from the possible instability of type (5.2), which might be referred to as "a restoring force instability", there is also another possible instability, namely "a friction driven instability", but the problem of its origin and character will be discussed elsewhere.

## Appendix

Let us briefly analyze the form of the temporal part of the Klein-Gordon equation in this case, i.e. Eq. (3.9):

\[\frac{1}{T^{2}}\frac{d}{dT}\left(T^{2}g_{z}\frac{dR}{dT}\right)+\left(\frac{ \varepsilon^{2}}{g_{z}}+\mu^{2}+\frac{l\left(l+1\right)}{T^{2}}\right)R=0,\] (A.1)

where \(g_{z}=\frac{r_{S}}{T}-1\), and the subscripts of \(R\) have been omitted here.
Making the substitution \(R=f\eta\), one finds

\[\frac{1}{T^{2}}\frac{d}{dT}\left(T^{2}g_{z}\frac{dR}{dT}\right)=\frac{1}{T^{2 }}\left[\left(r_{S}-2T\right)\left(f^{\prime}\eta+f\eta^{\prime}\right)+\left( r_{S}T-T^{2}\right)\left(f^{\prime\prime}\eta+f\eta^{\prime\prime}+2f^{\prime} \eta^{\prime}\right)\right],\] (A.2)

where the prime denotes differentiation with respect to \(T\). Requiring

\[\left(r_{S}-2T\right)f+2\left(r_{S}T-T^{2}\right)f^{\prime}=0,\] (A.3)

one gets \(R\left(T\right)\) in the form

\[R=\frac{\eta}{\sqrt{T\left(r_{S}-T\right)}},\] (A.4)

and \(\eta\left(T\right)\) satisfies the following confluent Heun equation:

\[\left[\frac{d^{2}}{dT^{2}}+\nu^{2}\left(T\right)\right]\eta=0,\] (A.5)

where

\[\nu^{2}\left(T\right)=A+\frac{B}{T}+\frac{C}{\left(r_{S}-T\right)}+\frac{D}{T ^{2}}+\frac{E}{\left(r_{S}-T\right)^{2}},\] (A.6)

and the five coefficients \(A,\ldots,E\) are equal to

\[A=\left(\varepsilon^{2}-\mu^{2}\right),\quad B=\frac{1}{2r_{S}}\left(2l\left( l+1\right)+1\right),\]

\[C=r_{S}\left(\mu^{2}+2\varepsilon^{2}\right)+B,\quad D=\frac{1}{4},\]

\[E=D-2\left(1+r_{S}^{2}\varepsilon^{2}\right).\]
2305.15019
A comparison of estimators of mean and its functions in finite populations
Several well known estimators of finite population mean and its functions are investigated under some standard sampling designs. Such functions of mean include the variance, the correlation coefficient and the regression coefficient in the population as special cases. We compare the performance of these estimators under different sampling designs based on their asymptotic distributions. Equivalence classes of estimators under different sampling designs are constructed so that estimators in the same class have equivalent performance in terms of asymptotic mean squared errors (MSEs). Estimators in different equivalence classes are then compared under some superpopulations satisfying linear models. It is shown that the pseudo empirical likelihood (PEML) estimator of the population mean under simple random sampling without replacement (SRSWOR) has the lowest asymptotic MSE among all the estimators under different sampling designs considered in this paper. It is also shown that for the variance, the correlation coefficient and the regression coefficient of the population, the plug-in estimators based on the PEML estimator have the lowest asymptotic MSEs among all the estimators considered in this paper under SRSWOR. On the other hand, for any high entropy $\pi$PS (HE$\pi$PS) sampling design, which uses the auxiliary information, the plug-in estimators of those parameters based on the Hájek estimator have the lowest asymptotic MSEs among all the estimators considered in this paper.
Anurag Dey, Probal Chaudhuri
2023-05-24T11:00:07Z
http://arxiv.org/abs/2305.15019v1
# A comparison of estimators of mean and its functions in finite populations ###### Abstract Several well known estimators of finite population mean and its functions are investigated under some standard sampling designs. Such functions of mean include the variance, the correlation coefficient and the regression coefficient in the population as special cases. We compare the performance of these estimators under different sampling designs based on their asymptotic distributions. Equivalence classes of estimators under different sampling designs are constructed so that estimators in the same class have equivalent performance in terms of asymptotic mean squared errors (MSEs). Estimators in different equivalence classes are then compared under some superpopulations satisfying linear models. It is shown that the pseudo empirical likelihood (PEML) estimator of the population mean under simple random sampling without replacement (SRSWOR) has the lowest asymptotic MSE among all the estimators under different sampling designs considered in this paper. It is also shown that for the variance, the correlation coefficient and the regression coefficient of the population, the plug-in estimators based on the PEML estimator have the lowest asymptotic MSEs among all the estimators considered in this paper under SRSWOR. On the other hand, for any high entropy \(\pi\)PS (HE\(\pi\)PS) sampling design, which uses the auxiliary information, the plug-in estimators of those parameters based on the Hajek estimator have the lowest asymptotic MSEs among all the estimators considered in this paper. **Keywords and phrases:** Asymptotic normality, Equivalence classes of estimators, High entropy sampling designs, Inclusion probability, Linear regression model, Rejective sampling design, Relative efficiency, Superpopulation models. ## 1 Introduction Suppose that \(\mathcal{P}\)=\(\{1,2,\ldots,N\}\) is a finite population of size \(N\), \(s\) is a sample of size \(n\) (\(<N\)) from \(\mathcal{P}\), and \(\mathcal{S}\) is the collection of all possible samples having size \(n\). Then, a sampling design \(P(s)\) is a probability distribution on \(\mathcal{S}\) such that \(0\leq P(s)\leq 1\) for all \(s\in\mathcal{S}\) and \(\sum_{s\in\mathcal{S}}P(s)\)=1. In this paper, simple random sampling without replacement (SRSWOR), the Lahiri-Midzuno-Sen (LMS) sampling design (see Lahiri (1951), Midzuno (1952) and Sen (1953)), the Rao-Hartley-Cochran (RHC) sampling design (see Rao et al. (1962)) and high entropy \(\pi\)PS (HE\(\pi\)PS) sampling designs (see Section 2) are considered. Note that all of the above sampling designs except SRSWOR are implemented using some auxiliary variable. Let \((Y_{i},X_{i})\) be the value of \((y,x)\) for the \(i^{th}\) population unit, \(i\)=\(1,\ldots,N\), where \(y\) is a univariate or multivariate study variable, and \(x\) is a positive real valued size/auxiliary variable. Suppose that \(\overline{Y}\)=\(\sum_{i=1}^{N}Y_{i}/N\) is the finite population mean of \(y\). The Horvitz-Thompson (HT) estimator (see Horvitz and Thompson (1952)) and the RHC estimator (see Rao et al. (1962)) are commonly used design unbiased estimators of \(\overline{Y}\). Other well known estimators of \(\overline{Y}\) are the Hajek estimator (see Hajek (1971), Sarndal et al.
(2003) and references therein), the ratio estimator (see Cochran (1977)), the product estimator (see Cochran (1977)), the generalized regression (GREG) estimator (see Chen and Sitter (1999)) and the pseudo empirical likelihood (PEML) estimator (see Chen and Sitter (1999)). However, these latter estimators are not always design unbiased. For the expressions of the above estimators, the reader is referred to the Appendix. Now, suppose that \(y\) is an \(\mathbb{R}^{d}\)-valued (\(d\geq 1\)) study variable, and \(g(\sum_{i=1}^{N}h(Y_{i})/N)\) is a population parameter. Here, \(h\): \(\mathbb{R}^{d}\rightarrow\mathbb{R}^{p}\) is a function with \(p\geq 1\) and \(g\): \(\mathbb{R}^{p}\rightarrow\mathbb{R}\) is a continuously differentiable function. All vectors in Euclidean spaces will be taken as row vectors, and the superscript \(T\) will be used to denote their transpose. Examples of such a parameter are the variance, the correlation coefficient, the regression coefficient, etc. associated with a finite population. For simplicity, we will often write \(h(Y_{i})\) as \(h_{i}\). Then, \(g(\overline{h})\)=\(g(\sum_{i=1}^{N}h_{i}/N)\) is estimated by plugging in the estimator \(\hat{\overline{h}}\) of \(\overline{h}\). In this article, our objective is to find asymptotically efficient (in terms of mean squared error (MSE)) estimators of \(g(\overline{h})\). In Section 2, based on the asymptotic distribution of the estimator of \(g(\overline{h})\) under the above sampling designs, we construct equivalence classes of estimators such that any two estimators in the same class have the same asymptotic MSE. We consider the special case when \(g(\overline{h})\)=\(\overline{Y}\), and compare equivalence classes of estimators under superpopulations satisfying linear models in Section 3. Among the different estimators under different sampling designs considered in this article, the PEML estimator of the population mean under SRSWOR turns out to be the estimator with the lowest asymptotic MSE. Also, the PEML estimator has the same asymptotic MSE under SRSWOR and LMS sampling design. Interestingly, we observe that the performance of the PEML estimator under RHC and any HE\(\pi\)PS sampling design, which use auxiliary information, is worse than its performance under SRSWOR. Earlier, it was shown that the GREG estimator is asymptotically at least as efficient as the HT, the ratio and the product estimators under SRSWOR (see Cochran (1977)). It will follow from our analysis that the PEML estimator is asymptotically equivalent to the GREG estimator under all the sampling designs considered in this paper. In Section 3, we also consider the cases when \(g(\overline{h})\) is the variance, the correlation coefficient or the regression coefficient in the population. Note that if the estimators of the population variance are constructed by plugging in the HT, the ratio, the product or the GREG estimators of the population means, then the estimators of the variance may become negative. For this reason, one also faces problems with the plug-in estimators of the correlation coefficient and the regression coefficient, as these estimators require estimators of population variances. On the other hand, if the estimators of the above-mentioned parameters are constructed by plugging in the Hajek or the PEML estimators of the population means, such a problem does not occur. Therefore, for these parameters, we compare only those equivalence classes which contain the plug-in estimators based on the Hajek and the PEML estimators.
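The nonnegativity point made above is easy to see in code. In the sketch below we draw a single unequal-probability sample (Poisson sampling is used purely for simplicity; it is not among the designs compared in this paper) and form the plug-in variance estimators with \(h(y)=(y,y^{2})\) and \(g(a,b)=b-a^{2}\), using the standard forms of the HT and Hajek mean estimators. The Hajek version is a weighted variance whose weights sum to one and is therefore always nonnegative, whereas the HT version carries weights \(1/(N\pi_{i})\) that need not sum to one and can come out negative.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic finite population with size variable x and study variable y.
N = 200
x = rng.gamma(2.0, 1.0, N) + 0.5
y = 1.0 + 2.0 * x + rng.normal(0.0, 0.5, N)
pi = np.minimum(40 * x / x.sum(), 0.9)   # inclusion probabilities, capped below 1

# One Poisson sample: each unit enters independently with probability pi_i.
s = rng.random(N) < pi
ys, w = y[s], 1.0 / pi[s]

# Plug-in variance estimators with h(y) = (y, y^2) and g(a, b) = b - a^2.
ht_a, ht_b = (w * ys).sum() / N, (w * ys**2).sum() / N
var_ht = ht_b - ht_a**2                  # weights w/N need not sum to 1,
                                         # so this can be negative
hj_a = (w * ys).sum() / w.sum()          # Hajek: normalized weights
hj_b = (w * ys**2).sum() / w.sum()
var_hajek = hj_b - hj_a**2               # a genuine weighted variance, >= 0

print(f"HT-based plug-in variance:    {var_ht:9.4f}")
print(f"Hajek-based plug-in variance: {var_hajek:9.4f}  (always nonnegative)")
```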
From this comparison under superpopulations satisfying linear models, we once again conclude that for any of these parameters, the plug-in estimator based on the PEML estimator has asymptotically the lowest MSE among all the estimators considered in this article under SRSWOR as well as LMS sampling design. Moreover, under any HE\(\pi\)PS sampling design, which uses the auxiliary information, the plug-in estimator based on the Hajek estimator has asymptotically the lowest MSE among all the estimators considered in this article. Scott and Wu (1981) proved that the ratio estimator has the same asymptotic distribution under SRSWOR and LMS sampling design. Chen and Sitter (1999) showed that the PEML estimator is asymptotically equivalent to the GREG estimator under some conditions on the sampling design, which are satisfied by SRSWOR and RHC sampling design. However, asymptotic equivalence classes as in Table 2 in Section 2, which consist of several estimators of a function of the population means under several sampling designs, were not constructed by any earlier author. Raj (1954) compared the sample mean under simple random sampling with replacement with the usual unbiased estimator of the population mean under probability proportional to size sampling with replacement, when the study variable and the size variable are exactly linearly related. Avadhani and Sukhatme (1970) compared the ratio estimator of the population mean under SRSWOR with the RHC estimator under RHC sampling design, when an approximate linear relationship holds between the study variable and the size variable. Avadhani and Srivastava (1972) carried out the comparison of the ratio estimator of the population mean under LMS sampling design and the RHC estimator under RHC sampling design, when the study variable and the size variable are approximately linearly related. It was shown that the GREG estimator of the population mean is asymptotically at least as efficient as the HT, the ratio and the product estimators under SRSWOR (see Cochran (1977)). However, the above comparisons included neither the PEML estimator nor HE\(\pi\)PS sampling designs. Some empirical studies carried out in Section 4 using synthetic and real data demonstrate that the numerical and the theoretical results corroborate each other. We make some remarks on our major findings in Section 5. Proofs of the results are given in the Appendix. ## 2 Comparison of different estimators of \(g(\overline{h})\) In this section, we compare the estimators of \(g(\overline{h})\), which are obtained by plugging in the estimators of \(\overline{h}\) mentioned in Table 1 below. First, we find equivalence classes of estimators of \(g(\overline{h})\) such that any two estimators in the same class are asymptotically normal with the same mean \(g(\overline{h})\) and the same variance. We define our asymptotic framework as follows. Let \(\{\mathcal{P}_{\nu}\}\) be a sequence of nested populations with \(N_{\nu}\), \(n_{\nu}\rightarrow\infty\) as \(\nu\rightarrow\infty\) (see Isaki and Fuller (1982), Wang and Opsomer (2011), Conti and Marella (2015), Boistard et al. (2017), Han and Wellner (2021) and references therein), where \(N_{\nu}\) and \(n_{\nu}\) are, respectively, the population size and the sample size corresponding to the \(\nu^{th}\) population. Henceforth, we shall suppress the subscript \(\nu\) for the sake of simplicity. Throughout this paper, we consider the following condition (cf.
Assumption 1 in Cardot and Josserand (2011), A4 in Conti (2014), A1 in Cardot et al. (2014), A4 in Conti and Marella (2015) and (HT3) in Boistard et al. (2017)) **C 0.**: \(n/N\rightarrow\lambda\) _as \(\nu\rightarrow\infty\), where \(0\leq\lambda<1\)._ Before we state the main results, let us discuss the HE\(\pi\)PS sampling design and some conditions on \(\{(X_{i},h_{i}):1\leq i\leq N\}\) (recall that \(h_{i}{=}h(Y_{i})\)). \begin{table} \begin{tabular}{|c|c|} \hline Sampling & Estimators \\ designs & \\ \hline SRSWOR & HT (which coincides with Hájek estimator), ratio, \\ & product, GREG and PEML estimators \\ \hline LMS & HT, Hájek, ratio, product, GREG and \\ & PEML estimators \\ \hline HE\(\pi\)PS & HT (which coincides with ratio and product \\ & estimators), Hájek, GREG and PEML estimators \\ \hline RHC & RHC, GREG and PEML estimators \\ \hline \end{tabular} \end{table} Table 1: Estimators of \(\overline{h}\) A sampling design \(P(s)\) satisfying the condition \(D(P||R)\)=\(\sum_{s\in\mathcal{S}}P(s)\log(P(s)/R(s))\to 0\) as \(\nu\rightarrow\infty\) for some rejective sampling design \(R(s)\) (see Hajek (1964)) is known as a high entropy sampling design (see Berger (1998), Conti (2014), Cardot et al. (2014), Boistard et al. (2017) and references therein). A sampling design \(P(s)\) is called a HE\(\pi\)PS sampling design if it is a high entropy sampling design and its inclusion probabilities satisfy the condition \(\pi_{i}\)=\(nX_{i}/\sum_{i=1}^{N}X_{i}\) for \(i\)=\(1,\ldots,N\). An example of a HE\(\pi\)PS sampling design is the Rao-Sampford (RS) sampling design (see Sampford (1967) and Berger (1998)). We now state the following conditions. **C 1.**\(\{\mathcal{P}_{\nu}\}\) _is such that \(\sum_{i=1}^{N}||h_{i}||^{4}/N\)=\(O(1)\) and \(\sum_{i=1}^{N}X_{i}^{4}/N\)=\(O(1)\) as \(\nu\rightarrow\infty\). Further, \(\lim_{\nu\rightarrow\infty}\overline{h}\) exists, and \(\overline{X}\)=\(\sum_{i=1}^{N}X_{i}/N\) and \(S_{x}^{2}\)=\(\sum_{i=1}^{N}(X_{i}-\overline{X})^{2}/N\) are bounded away from \(0\) as \(\nu\rightarrow\infty\). Moreover, \(\nabla g(\mu_{0})\neq 0\), where \(\mu_{0}\)=\(\lim_{\nu\rightarrow\infty}\overline{h}\) and \(\nabla g\) is the gradient of \(g\)._ **C 2.**\(\max_{1\leq i\leq N}X_{i}/\min_{1\leq i\leq N}X_{i}\)=\(O(1)\) as \(\nu\rightarrow\infty\). Let \(\textbf{V}_{i}\) be one of \(h_{i}\), \(h_{i}-\overline{h}\), \(h_{i}-\overline{h}X_{i}/\overline{X}\), \(h_{i}+\overline{h}X_{i}/\overline{X}\) and \(h_{i}-\overline{h}-S_{xh}(X_{i}-\overline{X})/S_{x}^{2}\) for \(i\)=\(1,\ldots,N\), where \(\overline{h}\)=\(\sum_{i=1}^{N}h_{i}/N\) and \(S_{xh}\)=\(\sum_{i=1}^{N}X_{i}h_{i}/N-\overline{h}\)\(\overline{X}\). Define \(\textbf{T}\)=\(\sum_{i=1}^{N}\textbf{V}_{i}(1-\pi_{i})/\sum_{i=1}^{N}\pi_{i}(1-\pi_{i})\), where \(\pi_{i}\) is the inclusion probability of the \(i^{th}\) population unit. Also, in the case of the RHC sampling design, define \(\overline{\textbf{V}}\)=\(\sum_{i=1}^{N}\textbf{V}_{i}/N\), \(\overline{X}\)=\(\sum_{i=1}^{N}X_{i}/N\) and \(\gamma\)=\(\sum_{i=1}^{n}N_{i}(N_{i}-1)/N(N-1)\), where \(N_{i}\) is the size of the \(i^{th}\) group formed randomly in the RHC sampling design (see Rao et al. (1962)), \(i\)=\(1,\ldots,n\). Now, we state the following conditions on the population values and the sampling designs. **C 3.**\(P(s)\) _is such that \(nN^{-2}\sum_{i=1}^{N}(\textbf{V}_{i}-\textbf{T}\pi_{i})^{T}(\textbf{V}_{i}- \textbf{T}\pi_{i})(\pi_{i}^{-1}-1)\) converges to some positive definite (p.d.)
matrix as \(\nu\rightarrow\infty\)._ **C 4.**\(n\gamma\overline{X}N^{-1}\sum_{i=1}^{N}(\textbf{V}_{i}-X_{i}\overline{\textbf{V}} /\overline{X})^{T}(\textbf{V}_{i}-X_{i}\overline{\textbf{V}}/\overline{X})/X_{i}\) converges to some p.d. matrix as \(\nu\rightarrow\infty\)._ Conditions similar to C1, C3 and C4 are often used in the sample survey literature (see Assumption 3 in Cardot and Josserand (2011), A3 and A6 in both Conti (2014) and Conti and Marella (2015), (HT2) in Boistard et al. (2017), and F2 and F3 in Han and Wellner (2021)). Conditions C1 and C4 hold (_almost surely_), whenever \(\{(X_{i},\,h_{i}):1\leq i\leq N\}\) are generated from a superpopulation model satisfying appropriate moment conditions (see Lemma S2 in the supplement). The condition \(\sum_{i=1}^{N}||h_{i}||^{4}/N\)=\(O(1)\) holds when \(h\) is a bounded function (e.g., \(h(y)\)=\(y\) and \(y\) is a binary study variable). Condition C2 implies that the variation in the population values \(X_{1},\ldots,X_{N}\) cannot be too large. Under any \(\pi\)PS sampling design, C2 is equivalent to the condition that \(L\leq N\pi_{i}/n\leq L^{\prime}\) for some constants \(L,L^{\prime}>0\), any \(i\)=\(1,\ldots,N\) and all sufficiently large \(\nu\geq 1\). This latter condition was considered earlier in the literature (see (C1) in Boistard et al. (2017) and Assumption 2-(i) in Wang and Opsomer (2011)). Condition C2 holds (_almost surely_), when \(\{X_{i}\}_{i=1}^{N}\) are generated from a superpopulation distribution, and the support of the distribution of \(X_{i}\) is bounded away from \(0\) and \(\infty\). Condition C3 holds (_almost surely_) for SRSWOR, LMS sampling design and any \(\pi\)PS sampling design under appropriate superpopulation models (see Lemma S2 in the supplement). For the RHC sampling design, we also assume that \(\{N_{i}\}_{i=1}^{n}\) are as follows: \[N_{i}=\begin{cases}N/n,\text{ for }i=1,\cdots,n,\text{ when }N/n\text{ is an integer},\\ \lfloor N/n\rfloor,\text{ for }i=1,\cdots,k,\text{ and}\\ \lfloor N/n\rfloor+1,\text{ for }i=k+1,\cdots,n,\text{ when }N/n\text{ is not an integer},\end{cases} \tag{1}\] where \(k\) is such that \(\sum_{i=1}^{n}N_{i}\)=\(N\). Here, \(\lfloor N/n\rfloor\) is the integer part of \(N/n\). Rao et al. (1962) showed that this choice of \(\{N_{i}\}_{i=1}^{n}\) minimizes the variance of the RHC estimator. Now, we state the following theorem. **Theorem 1**.: _Suppose that C0 through C3 hold. Then, classes \(1,2,3\) and \(4\) in Table 2 describe equivalence classes of estimators for \(g(\overline{h})\) under SRSWOR and LMS sampling design._ For the next two theorems, we assume that \(n\max_{1\leq i\leq N}X_{i}/\sum_{i=1}^{N}X_{i}<1\). Note that this condition is required to hold for any without-replacement \(\pi\)PS sampling design. **Theorem 2**.: \((i)\) _If C0 through C3 hold, then classes \(5,6\) and \(7\) in Table 2 describe equivalence classes of estimators for \(g(\overline{h})\) under any HE\(\pi\)PS sampling design. \((ii)\) Under the RHC sampling design, if C0 through C2 and C4 hold, then classes \(8\) and \(9\) in Table 2 describe equivalence classes of estimators for \(g(\overline{h})\)._ **Remark 1**.: _It is to be noted that if C1 through C3 hold, and C0 holds with \(\lambda\)=\(0\), then in Table 2, class \(8\) is merged with class \(5\), and class \(9\) is merged with class \(6\). 
For details, see Section \(S3\) in the supplement._ Next, suppose that \(W_{i}\)=\(\nabla g(\overline{h})h_{i}^{T}\) for \(i\)=\(1,\ldots,N\), \(\overline{W}\)=\(\sum_{i=1}^{N}W_{i}/N\), \(S_{xw}\)=\(\sum_{i=1}^{N}W_{i}X_{i}/N-\overline{W}\ \overline{X}\), \(S_{w}^{2}\)=\(\sum_{i=1}^{N}W_{i}^{2}/N-\overline{W}^{2}\), \(S_{x}^{2}\)=\(\sum_{i=1}^{N}X_{i}^{2}/N-\overline{X}^{2}\) and \(\phi\)=\(\overline{X}-(n/N)\sum_{i=1}^{N}X_{i}^{2}/N\overline{X}\). Now, we state the following theorem. **Theorem 3**.: _Suppose that the assumptions of Theorems 1 and 2 hold. Then, Table 3 gives the expressions of the asymptotic MSEs, \(\Delta_{1}^{2},\ldots,\Delta_{9}^{2}\), of the estimators in the equivalence classes \(1,\ldots,9\) in Table 2, respectively._ **Remark 2**.: _It can be shown in a straightforward way from Table 3 that \(\Delta_{1}^{2}\leq\Delta_{i}^{2}\) for \(i\)=\(2,3\) and \(4\). Thus, both the plug-in estimators of \(g(\overline{h})\) that are based on the GREG and the PEML estimators are asymptotically as good as, if not better than, the plug-in estimators based on the HT (which coincides with the Hajek estimator), the ratio and the product estimators under SRSWOR, and the plug-in estimators based on the HT, the Hajek, the ratio and the product estimators under LMS sampling design._ Let us now consider some examples of \(g(\overline{h})\) in Table 4 below. The conclusions of Theorems 1 through 3, and Remarks 1 and 2, hold for all the parameters in Table 4. Here, we recall from the introduction that for the variance, the correlation coefficient and the regression coefficient, we consider only the plug-in estimators that are based on the Hajek and the PEML estimators. \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline & \multicolumn{6}{|c|}{Estimators of \(\overline{h}\)} \\ \hline Sampling & GREG and & \multirow{2}{*}{HT} & \multirow{2}{*}{RHC} & \multirow{2}{*}{Hájek} & \multirow{2}{*}{Ratio} & \multirow{2}{*}{Product} \\ design & PEML & & & & & \\ \hline SRSWOR & class 1 & \({}^{1}\)class 2 & & \({}^{1}\)class 2 & class 3 & class 4 \\ \hline LMS & class 1 & class 2 & & class 2 & class 3 & class 4 \\ \hline HE\(\pi\)PS & class 5 & \({}^{2}\)class 6 & & class 7 & \({}^{2}\)class 6 & \({}^{2}\)class 6 \\ \hline RHC & class 8 & & class 9 & & & \\ \hline \end{tabular} \({}^{1}\) The HT and the Hájek estimators coincide under SRSWOR. \({}^{2}\) The HT, the ratio and the product estimators coincide under HE\(\pi\)PS sampling designs. \end{table} Table 2: Disjoint equivalence classes of estimators for \(g(\overline{h})\) ## 3 Comparison of estimators under superpopulation models In this section, we derive asymptotically efficient estimators for the mean, the variance, the correlation coefficient and the regression coefficient under superpopulations satisfying linear regression models. Earlier, Raj (1954), Murthy (1967), Avadhani and Sukhatme (1970), Avadhani and Srivastava (1972) and Cochran (1977) used the linear relationship between the \(Y_{i}\)'s and the \(X_{i}\)'s for comparing different estimators of the mean. However, they did not use any probability distribution for the \((Y_{i},X_{i})\)'s. Subsequently, Rao (2003), Fuller (2011), Chaudhuri (2014) (see chap. 5) and some other authors considered the linear relationship between the \(Y_{i}\)'s and the \(X_{i}\)'s and a probability distribution for the \((Y_{i},X_{i})\)'s for constructing different estimators and studying their behaviour.
However, the problem of finding the asymptotically most efficient estimator for the mean among a large class of estimators, as considered in this paper, was not addressed earlier in the literature. \begin{table} \begin{tabular}{|c|} \hline \(\Delta_{1}^{2}\)=\((1-\lambda)\lim\limits_{\nu\rightarrow\infty}\left(S_{w}^{2}-(S_{xw}/S_{x})^{2}\right)\) \\ \hline \(\Delta_{2}^{2}\)=\((1-\lambda)\lim\limits_{\nu\rightarrow\infty}S_{w}^{2}\) \\ \hline \(\Delta_{3}^{2}\)=\((1-\lambda)\lim\limits_{\nu\rightarrow\infty}\left(S_{w}^{2}-2\overline{W}S_{xw}/ \overline{X}+\left(\overline{W}/\overline{X}\right)^{2}S_{x}^{2}\right)\) \\ \hline \(\Delta_{4}^{2}\)=\((1-\lambda)\lim\limits_{\nu\rightarrow\infty}\left(S_{w}^{2}+2\overline{W}S_{xw}/ \overline{X}+\left(\overline{W}/\overline{X}\right)^{2}S_{x}^{2}\right)\) \\ \hline \(\Delta_{5}^{2}\)=\(\lim\limits_{\nu\rightarrow\infty}(1/N)\sum_{i=1}^{N}\left(W_{i}-\overline{W}-(S_{xw}/S_{x}^{2})(X_{i}-\overline{X})\right)^{2}\left((\overline{X}/X_{i})-(n/N)\right)\) \\ \hline \(\Delta_{6}^{2}\)=\(\lim\limits_{\nu\rightarrow\infty}(1/N)\sum_{i=1}^{N}\left\{W_{i}+\phi^{-1} \overline{X}^{-1}X_{i}\big((n/N)\sum_{i=1}^{N}W_{i}X_{i}/N-\overline{W}\ \overline{X}\big)\right\}^{2}\left\{(\overline{X}/X_{i})-(n/N)\right\}\) \\ \hline \(\Delta_{7}^{2}\)=\(\lim\limits_{\nu\rightarrow\infty}(1/N)\sum_{i=1}^{N}\left(W_{i}-\overline{W}+(n/N \phi\overline{X})X_{i}S_{xw}\right)^{2}\left((\overline{X}/X_{i})-(n/N)\right)\) \\ \hline \(\Delta_{8}^{2}\)=\(\lim\limits_{\nu\rightarrow\infty}n\gamma(\overline{X}/N)\sum_{i=1}^{N}\left(W_{i}- \overline{W}-(S_{xw}/S_{x}^{2})(X_{i}-\overline{X})\right)^{2}/X_{i}\) \\ \hline \(\Delta_{9}^{2}\)=\(\lim\limits_{\nu\rightarrow\infty}n\gamma\big((\overline{X}/N)\sum_{i=1}^{N}W_{i}^{2}/X_{i}-\overline{W}^{2}\big)\) \\ \hline \end{tabular} \end{table} Table 3: Asymptotic MSEs of estimators for \(g(\overline{h})\) (note that for simplicity of notation, the subscript \(\nu\) is dropped from the expressions on which limits are taken) Also, large sample comparisons of the plug-in estimators of the variance, the correlation coefficient and the regression coefficient considered in this paper were not carried out in the earlier literature. Suppose that \(\{(Y_{i},X_{i}):1\leq i\leq N\}\) are i.i.d. random vectors defined on a probability space \((\Omega,\mathbb{F},\mathbb{P})\). Without any loss of generality, for convenience, we take \(\sigma_{x}^{2}\)=\(E_{\mathbb{P}}(X_{i}-E_{\mathbb{P}}(X_{i}))^{2}\)=1. This might require rescaling the variable \(x\). Here, \(E_{\mathbb{P}}\) denotes the expectation with respect to the probability measure \(\mathbb{P}\). Recall that the population values \(X_{1},\ldots,X_{N}\) are used to implement some of the sampling designs. In such a case, we consider a function \(P(s,\omega)\) on \(\mathcal{S}\times\Omega\) so that \(P(s,\cdot)\) is a random variable on \(\Omega\) for each \(s\in\mathcal{S}\), and \(P(\cdot,\omega)\) is a probability distribution on \(\mathcal{S}\) for each \(\omega\in\Omega\) (see Boistard et al. (2017)). Note that \(P(s,\omega)\) is the sampling design for any fixed \(\omega\) in this case. Then, the \(\Delta_{j}^{2}\)'s in Table 3 can be expressed in terms of superpopulation moments of \((h(Y_{i}),X_{i})\) by the strong law of large numbers (SLLN). In that case, we can easily compare the different classes of estimators in Table 2 under linear models. Let us first state the following conditions on the superpopulation distribution \(\mathbb{P}\).
**C 5**.: \(X_{i}\leq b\) _a.s. \([\mathbb{P}]\) for some \(0<b<\infty\), \(E_{\mathbb{P}}(X_{i}^{-2})<\infty\), and \(\max_{1\leq i\leq N}X_{i}/\min_{1\leq i\leq N}X_{i}\)=\(O(1)\) as \(\nu\rightarrow\infty\) a.s. \([\mathbb{P}]\). Also, the support of the distribution of \((h(Y_{i}),X_{i})\) is not a subset of a hyperplane in \(\mathbb{R}^{p+1}\)._ The condition \(X_{i}\leq b\)_a.s. \([\mathbb{P}]\)_ for some \(0<b<\infty\) in C5, together with C0 and \(0\leq\lambda<E_{\mathbb{P}}(X_{i})/b\), ensures that \(n\max_{1\leq i\leq N}X_{i}/\sum_{i=1}^{N}X_{i}<1\) for all sufficiently large \(\nu\)_a.s. \([\mathbb{P}]\)_, which is required to hold for any without-replacement \(\pi\)PS sampling design. On the other hand, the condition \(\max_{1\leq i\leq N}X_{i}/\min_{1\leq i\leq N}X_{i}=\)\(O(1)\) as \(\nu\rightarrow\infty\)_a.s. \([\mathbb{P}]\)_ in C5 implies that C2 holds _a.s._\([\mathbb{P}]\). Further, C5 ensures that C4 holds _a.s._\([\mathbb{P}]\) (see Lemma S2 in the supplement). C5 also ensures that C3 holds under LMS and any \(\pi\)PS sampling designs _a.s._\([\mathbb{P}]\) (see Lemma S2 in the supplement). Let us first consider the case when \(g(\overline{h})\) is the mean of \(y\) (see the \(2^{nd}\) row in Table 4). Further, suppose that \(Y_{i}\)=\(\alpha+\beta X_{i}+\epsilon_{i}\) for \(\alpha,\beta\in\mathbb{R}\) and \(i\)=\(1,\ldots,N\), where \(\{\epsilon_{i}\}_{i=1}^{N}\) are i.i.d. random variables and are independent of \(\{X_{i}\}_{i=1}^{N}\) with \(E_{\mathbb{P}}(\epsilon_{i})\)=\(0\) and \(E_{\mathbb{P}}(\epsilon_{i}^{4})<\infty\). Then, we have the following theorem. **Theorem 4**.: _Suppose that C0 holds with \(0\leq\lambda<E_{\mathbb{P}}(X_{i})/b\), and C5 holds. Then, a.s. \([\mathbb{P}]\), the PEML estimator under SRSWOR as well as LMS sampling design has the lowest asymptotic MSE among all the estimators of the population mean under the different sampling designs considered in this paper._ **Remark 3**.: _Note that for SRSWOR, the PEML estimator of the population mean has the lowest asymptotic MSE among all the estimators considered in this paper a.s. \([\mathbb{P}]\), when C0 holds with \(0\leq\lambda<1\) and C5 holds (see the proof of Theorem 4)._ **Theorem 5**.: _Suppose that C0 holds with \(0\leq\lambda<E_{\mathbb{P}}(X_{i})/b\), and C5 holds. Then, a.s. \([\mathbb{P}]\), the performance of the PEML estimator of the population mean under RHC and any HE\(\pi\)PS sampling designs, which use auxiliary information, is worse than its performance under SRSWOR._ Recall from the introduction that for the variance, the correlation coefficient and the regression coefficient, we compare only those equivalence classes which contain the plug-in estimators based on the Hajek and the PEML estimators. We first state the following condition. **C 6**.: \(\xi>2\max\{\mu_{1},\mu_{-1}/(\mu_{1}\mu_{-1}-1)\}\)_, where \(\xi\)=\(\mu_{3}-\mu_{2}\mu_{1}\) is the covariance between \(X_{i}^{2}\) and \(X_{i}\), and \(\mu_{j}\)=\(E_{\mathbb{P}}(X_{i}^{j})\), \(j\)=\(-1,1,2,3\)._ The above condition is used to prove part \((ii)\) in each of Theorems 6 and 7.
This condition holds when the \(X_{i}\)'s follow well-known distributions like Gamma (with shape parameter value larger than 1 and any scale parameter value), Beta (with the second shape parameter value greater than the first shape parameter value and the first shape parameter value larger than 1), Pareto (with shape parameter value lying in the interval \((3,(5+\sqrt{17})/2)\) and any scale parameter value), Log-normal (with both the parameters taking any value) and Weibull (with shape parameter value lying in the interval \((1,3.6)\) and any scale parameter value). Now, consider the case, when \(g(\overline{h})\) is the variance of \(y\) (see the \(3^{rd}\) row in Table 4). Recall the linear model \(Y_{i}\)=\(\alpha+\beta X_{i}+\epsilon_{i}\) from above and assume that \(E_{\mathbb{P}}(\epsilon_{i})^{8}<\infty\). Then, we have the following theorem.

**Theorem 6**.: \((i)\) _Let us first consider SRSWOR and LMS sampling design and suppose that C0 and C5 hold. Then, a.s. \([\mathbb{P}]\), the plug-in estimator of the population variance based on the PEML estimator has the lowest asymptotic MSE among all the estimators considered in this paper. \((ii)\) Next, consider any HE\(\pi\)PS sampling design and suppose that C0 holds with \(0\leq\lambda<E_{\mathbb{P}}(X_{i})/b\), and C5 and C6 hold. Then, a.s. \([\mathbb{P}]\), the plug-in estimator of the population variance based on the Hajek estimator has the lowest asymptotic MSE among all the estimators considered in this paper._

Now, suppose that \(y\)=\((z_{1},z_{2})\in\mathbb{R}^{2}\) and consider the case, when \(g(\overline{h})\) is the correlation coefficient between \(z_{1}\) and \(z_{2}\) (see the \(4^{th}\) row in Table 4). Let us also consider the case, when \(g(\overline{h})\) is the regression coefficient of \(z_{1}\) on \(z_{2}\) (see the \(5^{th}\) row in Table 4). Further, suppose that \(Y_{i}\)=\(\alpha+\beta X_{i}+\epsilon_{i}\) for \(Y_{i}\)=\((Z_{1i},Z_{2i})\), \(\alpha,\beta\in\mathbb{R}^{2}\) and \(i\)=\(1,\ldots,N\), where \(\{\epsilon_{i}\}_{i=1}^{N}\) are i.i.d. random vectors in \(\mathbb{R}^{2}\) independent of \(\{X_{i}\}_{i=1}^{N}\) with \(E_{\mathbb{P}}(\epsilon_{i})\)=\(0\) and \(E_{\mathbb{P}}||\epsilon_{i}||^{8}<\infty\). Then, we have the following theorem.

**Theorem 7**.: \((i)\) _Let us first consider SRSWOR and LMS sampling design and suppose that C0 and C5 hold. Then, a.s. \([\mathbb{P}]\), the plug-in estimator of each of the correlation and the regression coefficients in the population based on the PEML estimator has the lowest asymptotic MSE among all the estimators considered in this paper. \((ii)\) Next, consider any HE\(\pi\)PS sampling design and suppose that C0 holds with \(0\leq\lambda<E_{\mathbb{P}}(X_{i})/b\), and C5 and C6 hold. Then, a.s. \([\mathbb{P}]\), the plug-in estimator of each of the above parameters based on the Hajek estimator has the lowest asymptotic MSE among all the estimators considered in this paper._

## Data analysis

In this section, we carry out an empirical comparison of the estimators of the mean, the variance, the correlation coefficient and the regression coefficient, which are discussed in this paper, based on both real and synthetic data. Recall that for the above parameters, we have considered several estimators and sampling designs, and conducted a theoretical comparison of those estimators in Sections 2 and 3.
For the empirical comparison, we exclude some of the estimators considered in the theoretical comparison so that the results of the comparison remain concise and comprehensible. The reasons for excluding those estimators are given below.

1. Since the GREG estimator is well known to be asymptotically better than the HT, the ratio and the product estimators under SRSWOR (see Cochran (1977)), we exclude these latter estimators under SRSWOR.
2. Since the MSEs of the estimators under LMS sampling design become very close to the MSEs of the same estimators under SRSWOR, as expected from Theorem 1, we do not report these results under LMS sampling design. Moreover, SRSWOR is a simpler and more commonly used sampling design than LMS sampling design.

Thus we consider the estimators mentioned in Table 5 below for the empirical comparison. Recall from Table 1 that the HT, the ratio and the product estimators of the mean coincide under any HE\(\pi\)PS sampling design. We draw \(I\)=1000 samples each of sizes \(n\)=75, 100 and 125 using the sampling designs mentioned in Table 5. We use the \(R\) software for drawing samples as well as computing different estimators. For RS sampling design, we use the 'pps' package in \(R\), and for the PEML estimator, we use the \(R\) codes in Wu (2005). Two estimators \(g(\hat{\overline{h}}_{1})\) and \(g(\hat{\overline{h}}_{2})\) of \(g(\overline{h})\) under sampling designs \(P_{1}(s)\) and \(P_{2}(s)\), respectively, are compared empirically by means of the relative efficiency defined as \[RE(g(\hat{\overline{h}}_{1}),P_{1}|g(\hat{\overline{h}}_{2}),P_{2})=MSE_{P_{2}}(g(\hat{\overline{h}}_{2}))/MSE_{P_{1}}(g(\hat{\overline{h}}_{1})),\] where \(MSE_{P_{j}}(g(\hat{\overline{h}}_{j}))\)=\(I^{-1}\sum_{l=1}^{I}(g(\hat{\overline{h}}_{jl})-g(\overline{h}_{0}))^{2}\) is the empirical mean squared error of \(g(\hat{\overline{h}}_{j})\) under \(P_{j}(s)\), \(j\)=\(1,2\). Here, \(\hat{\overline{h}}_{jl}\) is the estimate of \(\overline{h}\) based on the \(j^{th}\) estimator and the \(l^{th}\) sample, and \(g(\overline{h}_{0})\) is the true value of the parameter \(g(\overline{h})\), \(j\)=\(1,2\), \(l\)=\(1,\ldots,I\). \(g(\hat{\overline{h}}_{1})\) under \(P_{1}(s)\) will be more efficient than \(g(\hat{\overline{h}}_{2})\) under \(P_{2}(s)\) if \(RE(g(\hat{\overline{h}}_{1}),P_{1}|g(\hat{\overline{h}}_{2}),P_{2})>1\). Next, for each of the parameters considered in this section, we compare the average lengths of asymptotically 95% confidence intervals (CIs) constructed based on the several estimators used in this section. In order to construct asymptotically 95% CIs, we need an estimator of the asymptotic MSE of \(\sqrt{n}(g(\hat{\overline{h}})-g(\overline{h}))\), and we shall discuss it in detail now. If we consider SRSWOR or RS sampling design, it follows from the proofs of Theorems 1 and 2 that the asymptotic MSE of \(\sqrt{n}(g(\hat{\overline{h}})-g(\overline{h}))\) is \(\tilde{\Delta}_{1}^{2}\)=\(\lim_{\nu\rightarrow\infty}nN^{-2}\nabla g(\overline{h})\sum_{i=1}^{N}(\mathbf{V}_{i}-\mathbf{T}\pi_{i})^{T}(\mathbf{V}_{i}-\mathbf{T}\pi_{i})(\pi_{i}^{-1}-1)\nabla g(\overline{h})^{T}\), where \(\mathbf{T}\)=\(\sum_{i=1}^{N}\mathbf{V}_{i}(1-\pi_{i})/\sum_{i=1}^{N}\pi_{i}(1-\pi_{i})\).
Moreover, \(\mathbf{V}_{i}\) is \(h_{i}\) or \(h_{i}-\overline{h}\) or \(h_{i}-\overline{h}-S_{xh}(X_{i}-\overline{X})/S_{x}^{2}\) if \(\hat{\overline{h}}\) is \(\hat{\overline{h}}_{HT}\) or \(\hat{\overline{h}}_{H}\) or \(\hat{\overline{h}}_{PEML}\) (as well as \(\hat{\overline{h}}_{GREG}\)) with \(d(i,s)\)=\((N\pi_{i})^{-1}\), respectively. Recall from the paragraph following C2 that \(S_{xh}\)=\(\sum_{i=1}^{N}X_{i}h_{i}/N-\overline{X}\)\(\overline{h}\). Following the idea of Cardot et al. (2014), we estimate \(\tilde{\Delta}_{1}^{2}\) by \[\hat{\Delta}_{1}^{2}=nN^{-2}\nabla g(\hat{\overline{h}})\sum_{i\in s}(\hat{\mathbf{V}}_{i}-\hat{\mathbf{T}}\pi_{i})^{T}(\hat{\mathbf{V}}_{i}-\hat{\mathbf{T}}\pi_{i})(\pi_{i}^{-1}-1)\pi_{i}^{-1}\nabla g(\hat{\overline{h}})^{T}, \tag{2}\] where \(\hat{\mathbf{T}}\)=\(\sum_{i\in s}\hat{\mathbf{V}}_{i}(\pi_{i}^{-1}-1)/\sum_{i\in s}(1-\pi_{i})\), \(\hat{\overline{h}}\)=\(\hat{\overline{h}}_{HT}\) in the case of the mean, the variance and the regression coefficient, and \(\hat{\overline{h}}\)=\(\hat{\overline{h}}_{H}\) in the case of the correlation coefficient. Here, \(\hat{\mathbf{V}}_{i}\) is \(h_{i}\) or \(h_{i}-\hat{\overline{h}}_{HT}\) or \(h_{i}-\hat{\overline{h}}_{HT}-\hat{S}_{xh,1}(X_{i}-\hat{\overline{X}}_{HT})/\hat{S}_{x,1}^{2}\) if \(\hat{\overline{h}}\) is \(\hat{\overline{h}}_{HT}\) or \(\hat{\overline{h}}_{H}\) or \(\hat{\overline{h}}_{PEML}\) (as well as \(\hat{\overline{h}}_{GREG}\)) with \(d(i,s)\)=\((N\pi_{i})^{-1}\). Further, \(\hat{S}_{xh,1}\)=\(\sum_{i\in s}(N\pi_{i})^{-1}X_{i}h_{i}-\hat{\overline{X}}_{HT}\hat{\overline{h}}_{HT}\) and \(\hat{S}_{x,1}^{2}\)=\(\sum_{i\in s}(N\pi_{i})^{-1}X_{i}^{2}-\hat{\overline{X}}_{HT}^{2}\). We estimate \(\overline{h}\) in \(\nabla g(\overline{h})\) by \(\hat{\overline{h}}_{HT}\) in the case of the mean, the variance and the regression coefficient because \(\hat{\overline{h}}_{HT}\) is an unbiased estimator, and it is easier to compute than the other estimators of \(\overline{h}\) considered in this paper. On the other hand, different estimators of the correlation coefficient that are considered in this paper may become undefined if we estimate \(\overline{h}\) by any estimator other than \(\hat{\overline{h}}_{H}\) and \(\hat{\overline{h}}_{PEML}\) (see the 4\({}^{th}\) paragraph in the introduction). In this case, we choose \(\hat{\overline{h}}_{H}\) because it is easier to compute than \(\hat{\overline{h}}_{PEML}\).

\begin{table} \begin{tabular}{|c|c|} \hline Parameters & Estimators \\ \hline Mean & GREG and PEML estimators under SRSWOR; HT, Hajek, GREG and PEML estimators under RS sampling design\({}^{3}\); and RHC, GREG and PEML estimators under RHC sampling design \\ \hline Variance, correlation coefficient and regression coefficient & Obtained by plugging in Hajek and PEML estimators under SRSWOR and RS sampling design, and PEML estimator under RHC sampling design \\ \hline \end{tabular} \({}^{3}\) We consider RS sampling design since it is a HE\(\pi\)PS sampling design, and it is easier to implement than other HE\(\pi\)PS sampling designs. \end{table} Table 5: Estimators considered for the empirical comparison
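Returning to (2), for the population mean we have \(d\)=\(p\)=1, \(h(y)\)=\(y\) and \(g(s)\)=\(s\), so that \(\nabla g\equiv 1\) and \(\hat{\mathbf{V}}_{i}\)=\(Y_{i}\) for the HT estimator, and (2) reduces to a few lines of code. The following is a minimal R sketch (ours; it assumes that y and pii hold the sampled \(y\) values and the corresponding inclusion probabilities):

```r
# Sketch of the estimator (2) of the asymptotic MSE of
# sqrt(n) * (HT estimator of the mean - population mean).
mse_hat_mean_HT <- function(y, pii, N) {
  n <- length(y)
  That <- sum(y * (1 / pii - 1)) / sum(1 - pii)
  (n / N^2) * sum((y - That * pii)^2 * (1 / pii - 1) / pii)
}
# An asymptotically 95% CI for the mean is then
#   hbar_HT +/- 1.96 * sqrt(mse_hat_mean_HT(y, pii, N) / n).
```

Under SRSWOR (\(\pi_{i}\)=\(n/N\)), this expression collapses (up to the divisor \(n\) versus \(n-1\)) to \((1-n/N)\) times the sample variance, which is a quick sanity check on the sketch.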
Next, if we consider RHC sampling design, it follows from the proof of Theorem 2 that the asymptotic MSE of \(\sqrt{n}(g(\hat{\overline{h}})-g(\overline{h}))\) is \(\tilde{\Delta}_{2}^{2}\)=\(\lim_{\nu\rightarrow\infty}n\gamma\overline{X}N^{-1}\nabla g(\overline{h})\sum_{i=1}^{N}({\bf V}_{i}-X_{i}\overline{\bf V}/\overline{X})^{T}({\bf V}_{i}-X_{i}\overline{\bf V}/\overline{X})X_{i}^{-1}\nabla g(\overline{h})^{T}\), where \(\gamma\) and \(\overline{\bf V}\) are as in the paragraph following C2. Moreover, \({\bf V}_{i}\) is \(h_{i}\) or \(h_{i}-\overline{h}-S_{xh}(X_{i}-\overline{X})/S_{x}^{2}\) if \(\hat{\overline{h}}\) is \(\hat{\overline{h}}_{RHC}\) or \(\hat{\overline{h}}_{PEML}\) (as well as \(\hat{\overline{h}}_{GREG}\)) with \(d(i,s)\)=\(G_{i}/NX_{i}\), respectively. We estimate \(\tilde{\Delta}_{2}^{2}\) by \[\begin{array}{l}\hat{\Delta}_{2}^{2}=n\gamma\overline{X}N^{-1}\nabla g(\hat{\overline{h}})\sum_{i\in s}(\hat{\bf V}_{i}-X_{i}\hat{\overline{\bf V}}_{RHC}/\overline{X})^{T}\times\\ (\hat{\bf V}_{i}-X_{i}\hat{\overline{\bf V}}_{RHC}/\overline{X})(G_{i}X_{i}^{-2})\nabla g(\hat{\overline{h}})^{T},\end{array} \tag{3}\] where \(\hat{\overline{\bf V}}_{RHC}\)=\(\sum_{i\in s}\hat{\bf V}_{i}G_{i}/NX_{i}\), \(\hat{\overline{h}}\)=\(\hat{\overline{h}}_{RHC}\) in the case of the mean, the variance and the regression coefficient, and \(\hat{\overline{h}}\)=\(\hat{\overline{h}}_{PEML}\) in the case of the correlation coefficient. Here, \(\hat{\bf V}_{i}\) is \(h_{i}\) or \(h_{i}-\hat{\overline{h}}_{RHC}-\hat{S}_{xh,2}(X_{i}-\overline{X})/\hat{S}_{x,2}^{2}\) if \(\hat{\overline{h}}\) is \(\hat{\overline{h}}_{RHC}\) or \(\hat{\overline{h}}_{PEML}\) (as well as \(\hat{\overline{h}}_{GREG}\)) with \(d(i,s)\)=\(G_{i}/NX_{i}\). Further, \(\hat{S}_{xh,2}\)=\(\sum_{i\in s}h_{i}G_{i}/N-\overline{X}\)\(\hat{\overline{h}}_{RHC}\) and \(\hat{S}_{x,2}^{2}\)=\(\sum_{i\in s}X_{i}G_{i}/N-\overline{X}^{2}\). In the case of the mean, the variance and the regression coefficient, we estimate \(\overline{h}\) in \(\nabla g(\overline{h})\) by \(\hat{\overline{h}}_{RHC}\) for the same reason as discussed in the preceding paragraph, where we discuss the estimation of \(\overline{h}\) by \(\hat{\overline{h}}_{HT}\) under SRSWOR and RS sampling design. On the other hand, in the case of the correlation coefficient, we estimate \(\overline{h}\) in \(\nabla g(\overline{h})\) by \(\hat{\overline{h}}_{PEML}\) under RHC sampling design so that the estimator of the correlation coefficient appearing in the expression of \(\nabla g(\overline{h})\) in this case becomes well defined. We draw \(I\)=1000 samples each of sizes \(n\)=75, 100 and 125 using the sampling designs mentioned in Table 5. Then, for each of the parameters, the sampling designs and the estimators mentioned in Table 5, we construct \(I\) many asymptotically 95% CIs based on these samples and compute the average and the standard deviation of their lengths.

### Analysis based on synthetic data

In this section, we consider the population values \(\{(Y_{i},X_{i}):1\leq i\leq N\}\) on \((y,x)\) generated from a linear model as follows. We choose \(N\)=5000 and generate the \(X_{i}\)'s from a gamma distribution with mean 1000 and standard deviation (s.d.) 200. Then, \(Y_{i}\) is generated from the linear model \(Y_{i}\)=\(500+X_{i}+\epsilon_{i}\) for \(i\)=\(1,\ldots,N\), where \(\epsilon_{i}\) is generated independently of \(\{X_{i}\}_{i=1}^{N}\) from a normal distribution with mean 0 and s.d. 100.
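A hedged R sketch of this set-up is given below (ours, illustrative only: the seed and the sample size are our own choices, the PEML and GREG computations based on the codes in Wu (2005) are omitted, and the sample mean, which coincides with the HT and the Hajek estimators under SRSWOR, is used so that only base R is needed):

```r
# Synthetic population and SRSWOR simulation for the mean (illustrative).
set.seed(2023)
N <- 5000; n <- 100; I <- 1000
X <- rgamma(N, shape = 25, rate = 25 / 1000)   # mean 1000, s.d. 200
Y <- 500 + X + rnorm(N, sd = 100)              # the linear model above
est <- replicate(I, mean(Y[sample(N, n)]))     # sample mean over I SRSWOR draws
mse <- mean((est - mean(Y))^2)                 # empirical MSE
# The relative efficiency of a competing estimator with empirical MSE mse2
# is then mse2 / mse, as defined earlier in this section.
```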
We also generate the population values \(\{(Y_{i},X_{i}):1\leq i\leq N\}\) from a linear model, when \(y\)=\((z_{1},z_{2})\) is a bivariate study variable. The population values \(\{X_{i}\}_{i=1}^{N}\) are generated in the same way as in the earlier case. Then, \(Y_{i}\)=\((Z_{1i},Z_{2i})\) is generated from the linear model \(Z_{ji}\)=\(\alpha_{j}+X_{i}+\epsilon_{ji}\) for \(i\)=\(1,\ldots,N\), where \(\alpha_{1}\)=500 and \(\alpha_{2}\)=1000. The \(\epsilon_{1i}\)'s are generated independently of the \(X_{i}\)'s from a normal distribution with mean 0 and s.d. 100, and the \(\epsilon_{2i}\)'s are generated independently of the \(X_{i}\)'s and the \(\epsilon_{1i}\)'s from a normal distribution with mean 0 and s.d. 200. We consider the estimation of the mean and the variance of \(y\) for the first dataset and the correlation and the regression coefficients between \(z_{1}\) and \(z_{2}\) for the second dataset. The results of the empirical comparison based on synthetic data are summarized as follows. For each of the mean, the variance, the correlation coefficient and the regression coefficient, the plug-in estimator based on the PEML estimator under SRSWOR turns out to be more efficient than any other estimator under any other sampling design considered in Table 5 when compared in terms of relative efficiencies (see Tables 2 through 6 in the supplement). Also, for each of the above parameters, the asymptotically 95% CI based on the PEML estimator under SRSWOR has the least average length (see Tables 7 through 11 in the supplement). Thus the empirical results stated here corroborate the theoretical results stated in Theorems 4 through 7.

### Analysis based on real data

In this section, we consider a dataset on the village amenities in the state of West Bengal in India obtained from the Office of the Registrar General & Census Commissioner, India ([https://censusindia.gov.in/nada/index.php/catalog/1362](https://censusindia.gov.in/nada/index.php/catalog/1362)). Relevant study variables for this dataset are described in Table 6 below. We consider the following estimation problems for a population consisting of 37478 villages. For these estimation problems, we use the number of people living in a village as the size variable \(x\).

1. First, we consider the estimation of the mean and the variance of each of \(y_{1}\) and \(y_{2}\). It can be seen from the scatter plot and the least squares regression line in Figure 1 in the supplement that \(y_{1}\) and \(x\) have an approximate linear relationship. Also, the correlation coefficient between \(y_{1}\) and \(x\) is 0.72. On the other hand, \(y_{2}\) and \(x\) do not seem to have a linear relationship (see the scatter plot and the least squares regression line in Figure 2 in the supplement).
2. Next, we consider the estimation of the correlation and the regression coefficients of \(y_{1}\) and \(y_{3}\) as well as of \(y_{2}\) and \(y_{4}\). The scatter plot and the least squares regression line in Figure 3 in the supplement show that \(y_{3}\) does not seem to be dependent on \(x\). Further, we see from the scatter plot and the least squares regression line of \(y_{4}\) and \(x\) (see Figure 4 in the supplement) that \(y_{4}\) and \(x\) do not seem to have a linear relationship.

The results of the empirical comparison based on real data are summarized in Table 7 below. For further details, see Tables 12 through 31 in the supplement.
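Before turning to Table 7, note that the linearity diagnostics above are elementary to reproduce. A minimal R sketch (ours) is given below, assuming the village data have been read into a data frame named vil with columns x, y1, y2, y3 and y4 (the frame and column names are hypothetical):

```r
# Size-variable diagnostics for the village data (hypothetical data frame).
sapply(c("y1", "y2", "y3", "y4"), function(v) cor(vil[[v]], vil$x))
plot(vil$x, vil$y1)              # cf. the scatter plot in Figure 1
abline(lm(y1 ~ x, data = vil))   # least squares regression line
```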
\begin{table} \begin{tabular}{|c|c|} \hline \(y_{1}\) & Number of primary schools in village \\ \hline \(y_{2}\) & Scheduled castes population size in village \\ \hline \(y_{3}\) & Number of secondary schools in village \\ \hline \(y_{4}\) & Scheduled tribes population size in village \\ \hline \end{tabular} \end{table} Table 6: Description of study variables

The approximate linear relationship between \(y_{1}\) and \(x\) (see the scatter plot and the least squares regression line in Figure 1 in the supplement) could be a possible reason why the plug-in estimator based on the PEML estimator under SRSWOR becomes the most efficient for each of the mean and the variance of \(y_{1}\) among all the estimators under different sampling designs considered in this section. Also, possibly for the same reason, the plug-in estimators of the correlation and the regression coefficients between \(y_{1}\) and \(y_{3}\) based on the PEML estimator under SRSWOR become the most efficient among all the estimators under different sampling designs considered in this section. On the other hand, neither \(y_{2}\) nor \(y_{4}\) seems to have a linear relationship with \(x\) (see the scatter plots and the least squares regression lines in Figures 2 and 4 in the supplement). Possibly for this reason, the plug-in estimators of the parameters related to \(y_{2}\) and \(y_{4}\) based on the PEML estimator are not able to outperform the plug-in estimators of those parameters based on the HT and the Hajek estimators. Next, we observe that there is substantial correlation between \(y_{2}\) and \(x\) (correlation coefficient=0.67), and between \(y_{4}\) and \(x\) (correlation coefficient=0.25). Possibly because of this, under RS sampling design, which uses the auxiliary information, the plug-in estimators of the parameters related to \(y_{2}\) and \(y_{4}\) based on the HT and the Hajek estimators become the most efficient among all the estimators under different sampling designs considered in this section.

\begin{table} \begin{tabular}{|c|c|} \hline Parameters & Most efficient estimators \\ \hline Mean and variance of \(y_{1}\) & The plug-in estimator based on the PEML estimator under SRSWOR \\ \hline Mean of \(y_{2}\) & The HT estimator under RS sampling design \\ \hline Variance of \(y_{2}\) & The plug-in estimator based on the Hajek estimator under RS sampling design \\ \hline Correlation and regression coefficients of \(y_{1}\) and \(y_{3}\) & The plug-in estimator based on the PEML estimator under SRSWOR \\ \hline Correlation and regression coefficients of \(y_{2}\) and \(y_{4}\) & The plug-in estimator based on the Hajek estimator under RS sampling design \\ \hline \end{tabular} \end{table} Table 7: Most efficient estimators in terms of relative efficiencies (it follows from Tables 22 through 31 in the supplement that the asymptotically 95% CIs based on the most efficient estimators have the least average lengths.)

## Concluding discussion and remarks

It follows from Theorem 4 that the PEML estimator of the mean under SRSWOR becomes asymptotically either more efficient than or equivalent to any other estimator under any other sampling design considered in this paper. It also follows from Theorems 1 and 2 that the GREG estimator of the mean is asymptotically equivalent to the PEML estimator under different sampling designs considered in this paper.
However, our numerical studies (see Section 4) based on finite samples indicate that the PEML estimator of the mean performs slightly better than the GREG estimator under all the sampling designs considered in Section 4 (see Tables 2, 12 and 14 in the supplement). Moreover, as pointed out in the introduction, if the estimators of the variance, the correlation coefficient and the regression coefficient are constructed by plugging in the GREG estimator of the mean, then the estimators of the population variances involved in these parameters may become negative. On the other hand, if the estimators of these parameters are constructed by plugging in the PEML estimator of the mean, then such a problem does not occur. Further, for these parameters, depending on the sampling design, the plug-in estimator based on either the PEML or the Hajek estimator turns out to be asymptotically the best among the different estimators that we have considered (see Theorems 6 and 7). We see from Theorem 4 that for the population mean, the PEML estimator, which is not design unbiased, performs better than design unbiased estimators like the HT and the RHC estimators. Further, as pointed out in the introduction, the plug-in estimators of the population variance based on the HT and the RHC estimators may become negative. This affects the plug-in estimators of the correlation and the regression coefficients based on the HT and the RHC estimators. It follows from Table 2 that under LMS sampling design, the large sample performances of all the estimators of functions of means considered in this paper are the same as their large sample performances under SRSWOR. The LMS sampling design was introduced to make the ratio estimator of the mean unbiased. It follows from Remark 2 in Section 2 that the performance of the ratio estimator of the mean is worse than that of several other estimators that we have considered even under LMS sampling design. The coefficient of variation is another well-known finite population parameter, which can be expressed as a function of the population mean \(g(\overline{h})\). We have \(d\)=1, \(p\)=2, \(h(y)\)=\((y^{2},y)\) and \(g(s_{1},s_{2})\)=\(\sqrt{s_{1}-s_{2}^{2}}/s_{2}\) in this case. Among the estimators considered in this paper, the plug-in estimators of \(g(\overline{h})\) that are based on the PEML and the Hajek estimators of the mean can be used for estimating this parameter since it involves the finite population variance (see the 4\({}^{th}\) paragraph in the introduction). We have avoided reporting the comparison of the estimators of the coefficient of variation in this paper because of complex mathematical expressions. However, the asymptotic results stated in Theorems 6 and 7 also hold for this parameter. An empirical comparison of the biased estimators considered in this paper and their bias-corrected versions is carried out based on jackknifing in Section S4 in the supplement. It follows from this comparison that for all the parameters considered in this paper, the bias-corrected estimators become worse than the original biased estimators in the cases of both the synthetic and the real data. This is because, although bias-correction results in reduction of the biases of the original estimators, the variances of these estimators increase substantially after bias-correction.

## Supplementary material

In the supplement, we discuss some conditions from the main paper and demonstrate situations, where these conditions hold. Then, we state and prove some additional mathematical results.
We also give the proofs of Remark 1 and Theorems 2, 3, 6 and 7. The biased estimators considered in this paper are then compared empirically with their bias-corrected versions based on jackknifing in terms of MSE. Finally, we provide the numerical results related to the analysis based on both synthetic and real data (see Section 4).

## Acknowledgments

The authors gratefully acknowledge careful reading of an earlier version of the paper by an anonymous reviewer and an associate editor. Critical comments and constructive suggestions from the reviewer and the associate editor led to significant improvement of the paper. The authors would also like to thank Prof. Aloke Kar and Prof. Sandip Mitra for several discussions about Section 4.2 of the paper.

## Appendix

Let us begin by providing the expressions (see Table 8 below) of those estimators of \(\overline{Y}\) which are considered in this paper.

\begin{table} \begin{tabular}{|c|c|} \hline Estimator & Expression \\ \hline HT & \(\hat{\overline{Y}}_{HT}\)=\(\sum_{i\in s}(N\pi_{i})^{-1}Y_{i}\) \\ \hline RHC & \(\hat{\overline{Y}}_{RHC}\)=\(\sum_{i\in s}G_{i}Y_{i}/NX_{i}\) \\ \hline Hajek & \(\hat{\overline{Y}}_{H}\)=\(\hat{\overline{Y}}_{HT}/\sum_{i\in s}(N\pi_{i})^{-1}\) \\ \hline Ratio & \(\hat{\overline{Y}}_{RA}\)=\(\hat{\overline{Y}}_{HT}\overline{X}/\hat{\overline{X}}_{HT}\) \\ \hline Product & \(\hat{\overline{Y}}_{PR}\)=\(\hat{\overline{Y}}_{HT}\hat{\overline{X}}_{HT}/\overline{X}\) \\ \hline GREG & \(\hat{\overline{Y}}_{GREG}\)=\(\hat{\overline{Y}}_{*}+(\overline{X}-\hat{\overline{X}}_{*})\hat{\beta}\) \\ \hline PEML & \(\hat{\overline{Y}}_{PEML}\)=\(\sum_{i\in s}c_{i}Y_{i}\) \\ \hline \end{tabular} \end{table} Table 8: Estimators of \(\overline{Y}\)

In Table 8, \(\{\pi_{i}\}_{i=1}^{N}\) denote inclusion probabilities, \(\hat{\overline{X}}_{HT}\)=\(\sum_{i\in s}(N\pi_{i})^{-1}X_{i}\), and \(G_{i}\) is the total of the \(x\) values of that randomly formed group from which the \(i^{th}\) population unit is selected in the sample by RHC sampling design (cf. Chaudhuri et al. (2006)). In the case of the GREG estimator, \(\hat{\overline{Y}}_{*}\)=\(\sum_{i\in s}d(i,s)Y_{i}/\sum_{i\in s}d(i,s)\), \(\hat{\overline{X}}_{*}\)=\(\sum_{i\in s}d(i,s)X_{i}/\sum_{i\in s}d(i,s)\) and \(\hat{\beta}\)=\(\sum_{i\in s}d(i,s)(Y_{i}-\hat{\overline{Y}}_{*})(X_{i}-\hat{\overline{X}}_{*})/\sum_{i\in s}d(i,s)(X_{i}-\hat{\overline{X}}_{*})^{2}\), where \(\{d(i,s):i\in s\}\) are sampling design weights. Finally, the \(c_{i}\)'s (\(>0\)) in the PEML estimator are obtained by maximizing \(\sum_{i\in s}d(i,s)\log(c_{i})\) subject to \(\sum_{i\in s}c_{i}\)=1 and \(\sum_{i\in s}c_{i}(X_{i}-\overline{X})\)=0. Following Chen and Sitter (1999), we consider both the GREG and the PEML estimators with \(d(i,s)\)=\((N\pi_{i})^{-1}\) under SRSWOR, LMS sampling design and any HE\(\pi\)PS sampling design, and with \(d(i,s)\)=\(G_{i}/NX_{i}\) under RHC sampling design. Let us denote the HT, the RHC, the Hajek, the ratio, the product, the GREG and the PEML estimators of population means of \(h(y)\) by \(\hat{\overline{h}}_{HT}\), \(\hat{\overline{h}}_{RHC}\), \(\hat{\overline{h}}_{H}\), \(\hat{\overline{h}}_{RA}\), \(\hat{\overline{h}}_{PR}\), \(\hat{\overline{h}}_{GREG}\) and \(\hat{\overline{h}}_{PEML}\), respectively. Now, we give the proofs of Theorems 1, 4 and 5. The proofs of Remark 1 and Theorems 2, 3, 6 and 7 are given in Section S3 of the supplement.

Proof of Theorem 1.: Let us consider SRSWOR and LMS sampling design. It follows from \((i)\) in Lemma S6 in the supplement that \(\sqrt{n}(\hat{\overline{h}}-\overline{h})\xrightarrow{\mathcal{L}}N(0,\Gamma)\) as \(\nu\rightarrow\infty\) for some p.d. matrix \(\Gamma\), when \(\hat{\overline{h}}\) is one of \(\hat{\overline{h}}_{HT}\), \(\hat{\overline{h}}_{H}\), \(\hat{\overline{h}}_{RA}\), \(\hat{\overline{h}}_{PR}\), and \(\hat{\overline{h}}_{GREG}\) with \(d(i,s)\)=\((N\pi_{i})^{-1}\) under any of these sampling designs. Now, note that \(\max_{i\in s}|X_{i}-\overline{X}|\)=\(o_{p}(\sqrt{n})\), and \(\sum_{i\in s}\pi_{i}^{-1}(X_{i}-\overline{X})/\sum_{i\in s}\pi_{i}^{-1}(X_{i}-\overline{X})^{2}\)=\(O_{p}(1/\sqrt{n})\) as \(\nu\rightarrow\infty\) under the above sampling designs (see Lemma S8 in the supplement).
Then, by applying Theorem 1 of Chen and Sitter (1999) to each real-valued coordinate of \(\hat{\overline{h}}_{PEML}\) and \(\hat{\overline{h}}_{GREG}\), we get \(\sqrt{n}(\hat{\overline{h}}_{PEML}-\hat{\overline{h}}_{GREG})\)=\(o_{p}(1)\) as \(\nu\rightarrow\infty\) for \(d(i,s)\)=\((N\pi_{i})^{-1}\) under these sampling designs. This implies that \(\hat{\overline{h}}_{PEML}\) and \(\hat{\overline{h}}_{GREG}\) with \(d(i,s)\)=\((N\pi_{i})^{-1}\) have the same asymptotic distribution. Therefore, if \(\hat{\overline{h}}\) is one of \(\hat{\overline{h}}_{HT}\), \(\hat{\overline{h}}_{H}\), \(\hat{\overline{h}}_{RA}\), \(\hat{\overline{h}}_{PR}\), \(\hat{\overline{h}}_{GREG}\) and \(\hat{\overline{h}}_{PEML}\) with \(d(i,s)\)=\((N\pi_{i})^{-1}\), we have \[\sqrt{n}(g(\hat{\overline{h}})-g(\overline{h}))\xrightarrow{\mathcal{L}}N(0,\Delta^{2})\text{ as }\nu\rightarrow\infty \tag{4}\] under any of the above-mentioned sampling designs for some \(\Delta^{2}>0\) by the delta method and the condition \(\nabla g(\mu_{0})\neq 0\) at \(\mu_{0}\)=\(\lim_{\nu\rightarrow\infty}\overline{h}\). It can be shown from the proof of \((i)\) in Lemma S6 in the supplement that \(\Delta^{2}\)=\(\nabla g(\mu_{0})\Gamma_{1}(\nabla g(\mu_{0}))^{T}\), where \(\Gamma_{1}\)=\(\lim_{\nu\rightarrow\infty}nN^{-2}\sum_{i=1}^{N}(\mathbf{V}_{i}-\mathbf{T}\pi_{i})^{T}(\mathbf{V}_{i}-\mathbf{T}\pi_{i})(\pi_{i}^{-1}-1)\). It can also be shown from Table 1 in the supplement that under each of the above sampling designs, \(\mathbf{V}_{i}\) in \(\Gamma_{1}\) is \(h_{i}\) or \(h_{i}-\overline{h}\) or \(h_{i}-\overline{h}X_{i}/\overline{X}\) or \(h_{i}+\overline{h}X_{i}/\overline{X}\) or \(h_{i}-\overline{h}-S_{xh}(X_{i}-\overline{X})/S_{x}^{2}\) if \(\hat{\overline{h}}\) is \(\hat{\overline{h}}_{HT}\) or \(\hat{\overline{h}}_{H}\) or \(\hat{\overline{h}}_{RA}\) or \(\hat{\overline{h}}_{PR}\) or \(\hat{\overline{h}}_{GREG}\) with \(d(i,s)\)=\((N\pi_{i})^{-1}\), respectively. Now, by \((i)\) in Lemma S7 in the supplement, we have \[\sigma_{1}^{2}=\sigma_{2}^{2}=(1-\lambda)\lim_{\nu\rightarrow\infty}\sum_{i=1}^{N}(A_{i}-\bar{A})^{2}/N, \tag{5}\] where \(\sigma_{1}^{2}\) and \(\sigma_{2}^{2}\) are as defined in the statement of Lemma S7, and \(A_{i}\)=\(\nabla g(\mu_{0})\mathbf{V}_{i}^{T}\) for different choices of \(\mathbf{V}_{i}\) mentioned in the preceding paragraph. Note that \(g(\hat{\overline{h}}_{GREG})\) and \(g(\hat{\overline{h}}_{PEML})\) have the same asymptotic distribution under each of SRSWOR and LMS sampling design since \(\sqrt{n}(\hat{\overline{h}}_{PEML}-\hat{\overline{h}}_{GREG})\)=\(o_{p}(1)\) as \(\nu\rightarrow\infty\) under these sampling designs, as pointed out earlier in this proof. Further, (5) implies that \(g(\hat{\overline{h}}_{GREG})\) with \(d(i,s)\)=\((N\pi_{i})^{-1}\) has the same asymptotic MSE under SRSWOR and LMS sampling design. Thus \(g(\hat{\overline{h}}_{GREG})\) and \(g(\hat{\overline{h}}_{PEML})\) with \(d(i,s)\)=\((N\pi_{i})^{-1}\) under SRSWOR and LMS sampling design form class 1 in Table 2. Next, (5) yields that \(g(\hat{\overline{h}}_{HT})\) has the same asymptotic MSE under SRSWOR and LMS sampling design. It also follows from (5) that \(g(\hat{\overline{h}}_{H})\) has the same asymptotic MSE under SRSWOR and LMS sampling design. Now, note that \(g(\hat{\overline{h}}_{HT})\) and \(g(\hat{\overline{h}}_{H})\) coincide under SRSWOR.
Thus \(g(\hat{\overline{h}}_{HT})\) under SRSWOR, and \(g(\hat{\overline{h}}_{HT})\) and \(g(\hat{\overline{h}}_{H})\) under LMS sampling design form class 2 in Table 2. Next, (5) implies that \(g(\hat{\overline{h}}_{RA})\) has the same asymptotic MSE under SRSWOR and LMS sampling design. Further, (5) implies that \(g(\hat{\overline{h}}_{PR})\) has the same asymptotic MSE under SRSWOR and LMS sampling design. Thus \(g(\hat{\overline{h}}_{RA})\) under SRSWOR and LMS sampling design forms class 3 in Table 2, and \(g(\hat{\overline{h}}_{PR})\) under those sampling designs forms class 4 in Table 2. This completes the proof of Theorem 1.

Proof of Theorem 4.: Note that C1 and C2 hold _a.s._\([\mathbb{P}]\) since C5 holds and \(E_{\mathbb{P}}(\epsilon_{i})^{4}<\infty\). Also, note that C3 holds _a.s._\([\mathbb{P}]\) under SRSWOR and LMS sampling design (see Lemma S2 in the supplement). Then, under the above sampling designs, conclusions of Theorems 1 and 3 hold _a.s._\([\mathbb{P}]\) for \(d\)=\(p\)=1, \(h(y)\)=\(y\) and \(g(s)\)=\(s\). Note that \(W_{i}\)=\(\nabla g(\overline{h})h_{i}^{T}\)=\(Y_{i}\). Also, note that the \(\Delta_{i}^{2}\)'s in Table 3 can be expressed in terms of superpopulation moments of \((Y_{i},X_{i})\) _a.s._\([\mathbb{P}]\) by SLLN since \(E_{\mathbb{P}}(\epsilon_{i})^{4}<\infty\). Recall from the beginning of Section 3 that we have taken \(\sigma_{x}^{2}\)=1. Then, we have \(\Delta_{2}^{2}-\Delta_{1}^{2}\)=\((1-\lambda)\sigma_{xy}^{2}\), \(\Delta_{3}^{2}-\Delta_{1}^{2}\)=\((1-\lambda)(\sigma_{xy}-E_{\mathbb{P}}(Y_{i})/\mu_{1})^{2}\) and \(\Delta_{4}^{2}-\Delta_{1}^{2}\)=\((1-\lambda)(\sigma_{xy}+E_{\mathbb{P}}(Y_{i})/\mu_{1})^{2}\) _a.s._\([\mathbb{P}]\), where \(\mu_{1}\)=\(E_{\mathbb{P}}(X_{i})\) and \(\sigma_{xy}\)=\(cov_{\mathbb{P}}(X_{i},Y_{i})\). Hence, \(\Delta_{1}^{2}<\Delta_{i}^{2}\) _a.s._\([\mathbb{P}]\) for \(i\)=\(2,3,4\). Next, consider the case of \(0\leq\lambda<E_{\mathbb{P}}(X_{i})/b\). Note that \(n\gamma\to c\) as \(\nu\rightarrow\infty\) for some \(c\geq 1-\lambda\) by Lemma S1 in the supplement. Also, note that _a.s._\([\mathbb{P}]\), C4 holds in the case of RHC sampling design and C3 holds in the case of any HE\(\pi\)PS sampling design (see Lemma S2 in the supplement). Then, under RHC and any HE\(\pi\)PS sampling designs, conclusions of Theorems 2 and 3 hold _a.s._\([\mathbb{P}]\) for \(d\)=\(p\)=1, \(h(y)\)=\(y\) and \(g(s)\)=\(s\).
Further, we have \(\Delta_{5}^{2}-\Delta_{1}^{2}\)=\(\big{\{}E_{\mathbb{P}}\big{(}Y_{i}-E_{\mathbb{P}}(Y_{i})\big{)}^{2} \big{(}\mu_{1}/X_{i}-\lambda\big{)}-\mu_{1}^{2}\sigma_{xy}\big{(}\sigma_{xy}cov _{\mathbb{P}}(X_{i},1/X_{i})-2cov_{\mathbb{P}}(Y_{i},1/X_{i})\big{)}+\lambda \sigma_{xy}^{2}\big{\}}-(1-\lambda)\big{\{}\sigma_{y}^{2}-\sigma_{xy}^{2} \big{\}}\), \(\Delta_{6}^{2}-\Delta_{5}^{2}\)= \(E_{\mathbb{P}}\big{(}Y_{i}^{2}\big{(}\mu_{1}/X_{i}-\lambda\big{)}\big{)}-\big{\{} \lambda E_{\mathbb{P}}(Y_{i}X_{i})-E_{\mathbb{P}}(Y_{i})\mu_{1}\big{\}}^{2}/ \chi\mu_{1}-\big{\{}E_{\mathbb{P}}\big{(}Y_{i}-E_{\mathbb{P}}(Y_{i})-\sigma_{ xy}(X_{i}-\mu_{1})\big{)}^{2}\big{(}\mu_{1}/X_{i}-\lambda\big{)}\big{\}}\), \(\Delta_{7}^{2}-\Delta_{5}^{2}\)=\(\big{\{}\mu_{1}^{2}\sigma_{xy}\big{(}\sigma_{xy}cov_{ \mathbb{P}}(X_{i},1/X_{i})-2cov_{\mathbb{P}}(Y_{i},1/X_{i})\big{)}-\lambda \sigma_{xy}^{2}-\lambda^{2}\sigma_{xy}^{2}/\mu_{1}\chi\big{\}}\), \(\Delta_{8}^{2}-\Delta_{1}^{2}\)=\(c\big{\{}\mu_{1}E_{\mathbb{P}}(Y_{i}-E_{\mathbb{P}}(Y_{i}))^{2}/X_{i}-\mu_{1}^{2} \sigma_{xy}(\sigma_{xy}cov_{\mathbb{P}}(X_{i},\)\(1/X_{i})-2cov_{\mathbb{P}}(Y_{i},1/X_{i}))\big{\}}-(1- \lambda)\big{\{}\sigma_{y}^{2}-\sigma_{xy}^{2}\big{\}}\) and \(\Delta_{9}^{2}-\Delta_{1}^{2}\)=\(c\big{\{}\mu_{1}E_{\mathbb{P}}(Y_{i}^{2}/X_{i})-E_{\mathbb{P}}^{2}(Y_{i}) \big{\}}-(1-\lambda)\big{\{}\sigma_{y}^{2}-\sigma_{xy}^{2}\big{\}}\)_a.s._\([\mathbb{P}]\), where \(\sigma_{y}^{2}\)=\(var_{\mathbb{P}}(Y_{i})\), \(\chi\)=\(\mu_{1}-\lambda(\mu_{2}/\mu_{1})\) and \(\mu_{2}\)=\(E_{\mathbb{P}}(X_{i})^{2}\). Here, we note that \(\chi\)=\(E_{\mathbb{P}}\big{(}X_{i}^{2}(\mu_{1}/X_{i}-\lambda)\big{)}/\mu_{1}>0\) because C5 holds and C0 holds with \(0\leq\lambda<E_{\mathbb{P}}(X_{i})/b\). Moreover, from the linear model set up, we can show that \(\Delta_{5}^{2}-\Delta_{1}^{2}\)=\(\sigma^{2}(\mu_{1}\mu_{-1}-1)>0\), \(\Delta_{6}^{2}-\Delta_{5}^{2}\)=\(E_{\mathbb{P}}\big{\{}(\alpha+\beta X_{i})-\chi^{-1}X_{i}(\alpha+\beta\mu_{1}- \lambda\alpha-\lambda\beta\mu_{2}/\mu_{1})\big{\}}^{2}\big{\{}\mu_{1}/X_{i}- \lambda\big{\}}\geq 0\), \(\Delta_{7}^{2}-\Delta_{5}^{2}\)=\(\beta^{2}E_{\mathbb{P}}\big{\{}(X_{i}-\mu_{1})-\lambda\chi^{-1}X_{i}(\mu_{1}-\mu_{2}/ \mu_{1})\big{\}}^{2}\big{\{}\mu_{1}/X_{i}-\lambda\big{\}}\geq 0\), \(\Delta_{8}^{2}-\Delta_{1}^{2}\)=\(\sigma^{2}\big{(}c\mu_{1}\mu_{-1}-(1-\lambda)\big{)}\geq c\sigma^{2}(\mu_{1} \mu_{-1}-1)>0\) and \(\Delta_{9}^{2}-\Delta_{1}^{2}\)=\(\sigma^{2}\big{(}c\mu_{1}\mu_{-1}-(1-\lambda)\big{)}+c\alpha^{2}(\mu_{1} \mu_{-1}-1)>0\)_a.s._\([\mathbb{P}]\), where \(\sigma^{2}\)=\(E_{\mathbb{P}}(\epsilon_{i})^{2}\). Note that \(\Delta_{6}^{2}-\Delta_{5}^{2}\geq 0\) and \(\Delta_{7}^{2}-\Delta_{5}^{2}\geq 0\) because C5 holds and C0 holds with \(0\leq\lambda<E_{\mathbb{P}}(X_{i})/b\). Therefore, \(\Delta_{1}^{2}<\Delta_{i}^{2}\)_a.s._\([\mathbb{P}]\) for \(i\)=\(2,\ldots,9\). This completes the proof of Theorem 4. Proof of Theorem 5.: The proof follows in a straightforward way from Theorem 4. ## References * Avadhani and Sukhatme (1970) Avadhani, M. and Sukhatme, B. (1970). A comparison of two sampling procedures with an application to successive sampling. _J. R. Stat. Soc. Ser. C Appl. Stat._**19**, 251-259. * Avadhani and Srivastava (1972) Avadhani, M. and Srivastava, A. (1972). A comparison of midzuno-sen scheme with pps sampling without replacement and its application to successive sampling. _Ann. Inst. Stat. Math._**24**, 153-164. * Berger (1998) Berger, Y. G. (1998). 
Rate of convergence to normal distribution for the Horvitz-Thompson estimator. _J. Statist. Plann. Inference_**67**, 209-226. * Boistard et al. (2017) Boistard, H., Lopuhaa, H. P. and Ruiz-Gazen, A. (2017). Functional central limit theorems for single-stage sampling designs. _Ann. Statist._**45**, 1728-1758. * Cardot and Josserand (2011) Cardot, H. and Josserand, E. (2011). Horvitz-Thompson estimators for functional data: Asymptotic confidence bands and optimal allocation for stratified sampling. _Biometrika_**98**, 107-118. * Cardot et al. (2014) Cardot, H., Goga, C. and Lardin, P. (2014). Variance estimation and asymptotic confidence bands for the mean estimator of sampled functional data with high entropy unequal probability sampling designs. _Scand. J. Stat._**41**, 516-534. * Chaudhuri et al. (2006) Chaudhuri, A. et al. (2006). _Comm. Statist. Theory Methods_**35**, 2239-2244. * Chaudhuri (2014) Chaudhuri, A. (2014). _Modern survey sampling_. CRC Press, Boca Raton, FL. * Chen and Sitter (1999) Chen, J. and Sitter, R. R. (1999). A pseudo empirical likelihood approach to the effective use of auxiliary information in complex surveys. _Statist. Sinica_**9**, 385-406. * Cochran (1977) Cochran, W. G. (1977). _Sampling techniques_. 3rd edition. John Wiley & Sons, New York-London-Sydney. Wiley Series in Probability and Mathematical Statistics. * Conti (2014) Conti, P. L. (2014). On the estimation of the distribution function of a finite population under high entropy sampling designs, with applications. _Sankhya B_**76**, 234-259. * Conti (2015) Conti, P. L. (2015). Inference for quantiles of a finite population: asymptotic versus resampling results. _Scand. J. Stat._**42**, 545-561. * Fuller (2011) Fuller, W. A. (2011). _Sampling statistics_. John Wiley & Sons. * Hajek (1964) Hajek, J. (1964). Asymptotic theory of rejective sampling with varying probabilities from a finite population. _Ann. Math. Stat._**35**, 1491-1523. * Hajek (1971) Hajek, J. (1971). Comment on "An essay on the logical foundations of survey sampling, part one". In _Foundations of Statistical Inference_ (V. P. Godambe and D. A. Sprott, eds.) Holt, Rinehart and Winston, Toronto. * Horvitz and Thompson (1952) Horvitz, D. G. and Thompson, D. J. (1952). A generalization of sampling without replacement from a finite universe. _J. Amer. Statist. Assoc._**47**, 663-685. * Isaki and Fuller (1982) Isaki, C. T. and Fuller, W. A. (1982). Survey design under the regression superpopulation model. _J. Amer. Statist. Assoc._**77**, 89-96. * Lahiri (1951) Lahiri, D. B. (1951). A method of sample selection providing unbiased ratio estimates. _Bull. Int. Stat. Inst._**33**, 133-140. * Midzuno (1952) Midzuno, H. (1952). On the sampling system with probabilities proportionate to the sum of sizes. _Ann. Inst. Stat. Math._**3**, 99-107. * Murthy (1967) Murthy, M. N. (1967). _Sampling theory and methods_. Statistical Publishing Society, Calcutta. * Qiyang and Wellner (2021) Qiyang, H. and Wellner, J. A. (2021). Complex sampling designs: Uniform limit theorems and applications. _Ann. Statist._ **49**, 459-485. * Raj (1954) Raj, D. (1954). On sampling with probabilities proportional to size. _Ganita_**5**, 175-182. * Rao _et al._ (1962) Rao, J. N. K., Hartley, H. O. and Cochran, W. G. (1962). On a simple procedure of unequal probability sampling without replacement. _J. R. Stat. Soc. Ser. B Methodol._**24**, 482-491. * Rao (2003) Rao, J. N. K. (2003). _Small Area Estimation_. John Wiley & Sons, Inc., New Jersey. * Sampford (1967) Sampford, M. (1967).
On sampling without replacement with unequal probabilities of selection. _Biometrika_**54**, 499-513. * Sarndal _et al._ (2003) Sarndal, C. E., Swensson, B. and Wretman, J. (2003). _Model assisted survey sampling_. Springer Science & Business Media. * Scott and Wu (1981) Scott, A. and Wu, C. F. (1981). On the asymptotic distribution of ratio and regression estimators. _J. Amer. Statist. Assoc._**76**, 98-102. * Sen (1953) Sen, A. (1953). On the estimator of the variance in sampling with varying probabilities. _Jour. Ind. Soc. Ag. Statistics_**5**, 119-127. * Wu (2005) Wu, C. (2005). Algorithms and R codes for the pseudo empirical likelihood method in survey sampling. _Surv. Methodol._**31**, 239. * Wang and Opsomer (2011) Wang, J. C. and Opsomer, J. D. (2011). On asymptotic normality and variance estimation for nondifferentiable survey estimators. _Biometrika_**98**, 91-106.

* Anurag Dey, _Indian Statistical Institute, Kolkata_. E-mail: [email protected]
* Probal Chaudhuri, _Indian Statistical Institute, Kolkata_. E-mail: [email protected]

# Supplementary material for "A comparison of estimators of mean and its functions in finite populations"

Anurag Dey and Probal Chaudhuri

_Indian Statistical Institute, Kolkata_

###### Abstract

In this supplement, we discuss conditions C1 through C4 from the main paper and demonstrate situations, where these conditions hold. Then, we state and prove some additional mathematical results. We also give the proofs of Remark 1 and Theorems 2, 3, 6 and 7 of the main text. The biased estimators considered in the main paper are then compared empirically with their bias-corrected versions based on jackknifing in terms of MSE. Finally, we provide the numerical results related to the analysis based on both synthetic and real data.

**Keywords and phrases:** Asymptotic normality, Equivalence classes of estimators, High entropy sampling designs, Inclusion probability, Linear regression model, Rejective sampling design, Relative efficiency, Superpopulation models.

## 1 Discussion of conditions and related results

In this section, we demonstrate some situations, when conditions C1 through C4 in the main article hold. Before that, we state and prove the following lemma. Recall from the paragraph following C2 in the main text that \(\gamma\)=\(\sum_{i=1}^{n}N_{i}(N_{i}-1)/N(N-1)\) with \(N_{i}\) being the size of the \(i^{th}\) group formed randomly in RHC sampling design.

**Lemma S 1**.: _Suppose that \(C0\) holds. Then, \(n\gamma\to c\) for some \(c\geq 1-\lambda>0\) as \(\nu\to\infty\), where \(\lambda\) is as in \(C0\)._

Proof.: Let us first consider the case of \(\lambda\)=0. Note that \[\begin{array}{l}n(N/n-1)(N-n)/(N(N-1))\leq n\gamma\leq\\ n(N/n+1)(N-n)/(N(N-1))\end{array} \tag{1}\] by (1) in Section 2 of the main text. Moreover, \(n(N/n+1)(N-n)/(N(N-1))\)=\((1+n/N)(N-n)/(N-1)\to 1\) and \(n(N/n-1)(N-n)/(N(N-1))\)=\((1-n/N)(N-n)/(N-1)\to 1\) as \(\nu\to\infty\) because C0 holds and \(\lambda\)=0. Thus we have \(n\gamma\to 1\) as \(\nu\to\infty\) in this case. Next, consider the case, when \(\lambda>0\) and \(\lambda^{-1}\) is an integer. Here, we consider the following sub-cases. Let us first consider the sub-case, when \(N/n\) is an integer for all sufficiently large \(\nu\). Then, by (1), we have \(n\gamma\)=\((N-n)/(N-1)\) for all sufficiently large \(\nu\). Now, since C0 holds, we have \[(N-n)/(N-1)\to 1-\lambda\text{ as }\nu\to\infty. \tag{2}\]
Further, consider the sub-case, when \(N/n\) is a non-integer and \(N/n-\lambda^{-1}\geq 0\) for all sufficiently large \(\nu\). Then, by (1) in Section 2 of the main text, we have \[n\gamma=(N/(N-1))(n/N)\lfloor N/n\rfloor\big{(}2-\big{(}(n/N)\lfloor N/n\rfloor\big{)}-(n/N)\big{)} \tag{3}\] for all sufficiently large \(\nu\). Now, since C0 holds, we have \(0\leq N/n-\lambda^{-1}<1\) for all sufficiently large \(\nu\). Then, \(\lfloor N/n\rfloor\)=\(\lambda^{-1}\) for all sufficiently large \(\nu\), and hence \[(N/(N-1))(n/N)\lfloor N/n\rfloor\bigg{(}2-\big{(}(n/N)\lfloor N/n\rfloor\big{)}-(n/N)\bigg{)}\to 1-\lambda \tag{4}\] as \(\nu\to\infty\). Next, consider the sub-case, when \(N/n\) is a non-integer and \(N/n-\lambda^{-1}<0\) for all sufficiently large \(\nu\). Then, the result in (3) holds by (1), and \(-1\leq N/n-\lambda^{-1}<0\) for all sufficiently large \(\nu\) by C0. Therefore, \(\lfloor N/n\rfloor\)=\(\lambda^{-1}-1\) for all sufficiently large \(\nu\), and hence the result in (4) holds. Thus, in the case of \(\lambda>0\) and \(\lambda^{-1}\) being an integer, \(n\gamma\) converges to \(1-\lambda\) as \(\nu\to\infty\) through all the sub-sequences, and hence \(n\gamma\to 1-\lambda\) as \(\nu\to\infty\). Thus we have \(c\)=\(1-\lambda\) in this case. Finally, consider the case, when \(\lambda>0\), and \(\lambda^{-1}\) is a non-integer. Then, \(N/n\) must be a non-integer for all sufficiently large \(\nu\), and hence the result in (3) holds for all sufficiently large \(\nu\) by (1) in Section 2 of the main text. Note that in this case, \(N/n-\lfloor\lambda^{-1}\rfloor\to\lambda^{-1}-\lfloor\lambda^{-1}\rfloor\in(0,1)\) as \(\nu\to\infty\) by C0. Therefore, \(\lfloor\lambda^{-1}\rfloor<N/n<\lfloor\lambda^{-1}\rfloor+1\) for all sufficiently large \(\nu\), and hence \(\lfloor N/n\rfloor\)=\(\lfloor\lambda^{-1}\rfloor\) for all sufficiently large \(\nu\). Thus \(n\gamma\to\lambda\lfloor\lambda^{-1}\rfloor(2-\lambda\lfloor\lambda^{-1}\rfloor-\lambda)\) as \(\nu\to\infty\) by C0. Now, if \(m\)=\(\lfloor\lambda^{-1}\rfloor\) and \(\lambda^{-1}\) is a non-integer, then \((m+1)^{-1}<\lambda<m^{-1}\). Therefore, \(\lambda\lfloor\lambda^{-1}\rfloor(2-\lambda\lfloor\lambda^{-1}\rfloor-\lambda)-1+\lambda\)=\(-\big{(}1-(2m+1)\lambda+m(m+1)\lambda^{2}\big{)}\)=\(-(1-m\lambda)(1-(m+1)\lambda)>0\). Thus we have \(c\)=\(\lambda\lfloor\lambda^{-1}\rfloor(2-\lambda\lfloor\lambda^{-1}\rfloor-\lambda)>1-\lambda\) in this case. This completes the proof of the Lemma.

Next, recall \(\{{\bf V}_{i}\}_{i=1}^{N}\) from the paragraph preceding the condition C3 and \(b\) from the condition C5 in the main text. Let us define \(\Sigma_{1}\)=\(nN^{-2}\sum_{i=1}^{N}({\bf V}_{i}-{\bf T}\pi_{i})^{T}({\bf V}_{i}-{\bf T}\pi_{i})(\pi_{i}^{-1}-1)\) and \(\Sigma_{2}\)=\(n\gamma\overline{X}N^{-1}\sum_{i=1}^{N}({\bf V}_{i}-X_{i}\overline{{\bf V}}/\overline{X})^{T}({\bf V}_{i}-X_{i}\overline{{\bf V}}/\overline{X})/X_{i}\), where \({\bf T}\)=\(\sum_{i=1}^{N}{\bf V}_{i}(1-\pi_{i})/\sum_{i=1}^{N}\pi_{i}(1-\pi_{i})\), the \(\pi_{i}\)'s are inclusion probabilities and \(\overline{{\bf V}}\)=\(\sum_{i=1}^{N}{\bf V}_{i}/N\). Now, we state the following lemma.

**Lemma S 2.**\((i)\) _Suppose that \(C0\) and \(C5\) hold, and \(\{(h(Y_{i}),X_{i}):1\leq i\leq N\}\) are generated from a superpopulation distribution \(\mathbb{P}\) with \(E_{\mathbb{P}}||h(Y_{i})||^{4}<\infty\). Then, \(C1\), \(C2\) and \(C4\) hold a.s. \([\mathbb{P}]\)._
\((ii)\) Further, if \(C0\) and \(C5\) hold, and \(E_{\mathbb{P}}||h(Y_{i})||^{2}<\infty\), then \(C3\) holds a.s. \([\mathbb{P}]\) under SRSWOR and LMS sampling design. Moreover, if \(C0\) holds with \(0\leq\lambda<E_{\mathbb{P}}(X_{i})/b\), \(C5\) holds, and \(E_{\mathbb{P}}||h(Y_{i})||^{2}<\infty\), then \(C3\) holds a.s. \([\mathbb{P}]\) under any \(\pi\)PS sampling design._ Proof.: As before, for simplicity, let us write \(h(Y_{i})\) as \(h_{i}\). Under the conditions C5 and \(E_{\mathbb{P}}||h(Y_{i})||^{4}<\infty\), C1 holds _a.s._\([\mathbb{P}]\) by SLLN. Also, under C5, C2 holds _a.s._\([\mathbb{P}]\). Next, by SLLN, \(\lim_{\nu\to\infty}\Sigma_{2}\)=\(cE_{\mathbb{P}}(X_{i})E_{\mathbb{P}}[(h_{i}-(E_{\mathbb{P}}(X_{i}))^{-1}X_{i} \ E_{\mathbb{P}}(h_{i}))^{T}(h_{i}-(E_{\mathbb{P}}(X_{i}))^{-1}X_{i}E_{\mathbb{ P}}(h_{i}))\)\(X_{i}^{-1}]\)_a.s._\([\mathbb{P}]\) for \({\bf V}_{i}\)=\(h_{i}\), \(h_{i}-\overline{h}X_{i}/\overline{X}\) and \(h_{i}+\overline{h}X_{i}/\overline{X}\) because \(n\gamma\to c\) as \(\nu\to\infty\) by Lemma S1. Similarly, \(\lim_{\nu\to\infty}\Sigma_{2}\)=\(cE_{\mathbb{P}}(X_{i})E_{\mathbb{P}}[(h_{i}\ -E_{\mathbb{P}}(h_{i}))^{T}(h_{i}-E_{\mathbb{P}}(h_{i}))/X_{i}]\)_a.s._\([\mathbb{P}]\) for \({\bf V}_{i}\)=\(h_{i}-\overline{h}\), and \(\lim_{\nu\to\infty}\Sigma_{2}\)=\(cE_{\mathbb{P}}(X_{i})E_{\mathbb{P}}[\ (h_{i}-E_{\mathbb{P}}(h_{i})-C_{xh}(X_{i}-E_{\mathbb{P}}(X_{i})))^{T}(h_{i}-E_{ \mathbb{P}}(h_{i})-C_{xh}(X_{i}-E_{\mathbb{P}}(X_{i})))/X_{i}]\)_a.s._\([\mathbb{P}]\) for \({\bf V}_{i}\)=\(h_{i}-\overline{h}-S_{xh}(X_{i}-\overline{X})/S_{x}^{2}\). Here, \(C_{xh}\)=\((E_{\mathbb{P}}(h_{i}X_{i})-E_{\mathbb{P}}(h_{i})E_{\mathbb{P}}(X_{i}))/\ (E_{ \mathbb{P}}(X_{i})^{2}-(E_{\mathbb{P}}(X_{i}))^{2})\). Note that the above limits are p.d. matrices because C5 holds. Therefore, C4 holds _a.s._\([\mathbb{P}]\). This completes the proof of (i) in Lemma S2. Next, note that \(\Sigma_{1}\)=\((1-n/N)(\sum_{i=1}^{N}{\bf V}_{i}^{T}{\bf V}_{i}/N-\overline{{\bf V}}^{T} \overline{{\bf V}})\) under SRSWOR. Then, C3 holds _a.s._\([\mathbb{P}]\) by directly applying SLLN. Under LMS sampling design, C3 can be shown to hold _a.s._\([\mathbb{P}]\) in the same way as the proof of the result \(\sigma_{1}^{2}\)=\(\sigma_{2}^{2}\) in the proof of Lemma 2 in the Appendix. Next, we have \(\lim_{\nu\rightarrow\infty}\Sigma_{1}\)=\(E_{\mathbb{P}}\big{[}\big{\{}h_{i}+\chi^{-1}(E_{\mathbb{P}}(X_{i}))^{-1}X_{i} \big{(}\lambda E_{\mathbb{P}}(h_{i}X_{i})-E_{\mathbb{P}}(h_{i})E_{\mathbb{P}}(X _{i})\big{)}\big{\}}^{T}\big{\{}h_{i}+\chi^{-1}(E_{\mathbb{P}}(X_{i}))^{-1}X_{i }\big{(}\lambda E_{\mathbb{P}}(h_{i}X_{i})-E_{\mathbb{P}}(h_{i})E_{\mathbb{P}}( X_{i})\big{)}\big{\}}\big{\{}E_{\mathbb{P}}(X_{i})/X_{i}-\lambda\big{\}}\big{]}\)\(a.s.\)\([\mathbb{P}]\) for \(\mathbf{V}_{i}\)=\(h_{i}\), \(h_{i}-\overline{h}X_{i}/\overline{X}\) and \(h_{i}+\overline{h}X_{i}/\overline{X}\) under any \(\pi\)PS sampling design (i.e., a sampling design with \(\pi_{i}\)=\(nX_{i}/\sum_{i=1}^{N}X_{i}\)) by SLLN because C0 and C5 hold, and \(E_{\mathbb{P}}||h_{i}||^{2}<\infty\). Here, \(\chi\)=\(E_{\mathbb{P}}(X_{i})-\lambda(E_{\mathbb{P}}(X_{i})^{2}/E_{\mathbb{P}}(X_{i}))\). 
Moreover, under any \(\pi\)PS sampling design, we have \(\lim_{\nu\rightarrow\infty}\Sigma_{1}\)=\(E_{\mathbb{P}}\big{[}\big{\{}h_{i}-E_{\mathbb{P}}(h_{i})+\lambda\chi^{-1}(E_{ \mathbb{P}}(X_{i}))^{-1}X_{i}C_{xh}\big{\}}^{T}\big{\{}h_{i}-E_{\mathbb{P}}(h_ {i})+\lambda\chi^{-1}(E_{\mathbb{P}}(X_{i}))^{-1}X_{i}C_{xh}\big{\}}\times \big{\{}E_{\mathbb{P}}(X_{i})/X_{i}-\lambda\big{\}}\big{]}\)_a.s._\([\mathbb{P}]\) for \(\mathbf{V}_{i}\)=\(h_{i}-\overline{h}\) and \(\lim_{\nu\rightarrow\infty}\Sigma_{1}\)= \(E_{\mathbb{P}}\big{[}\big{\{}h_{i}-E_{\mathbb{P}}(h_{i})-C_{xh}(X_{i}-E_{ \mathbb{P}}(X_{i}))\big{\}}^{T}\big{\{}h_{i}-E_{\mathbb{P}}(h_{i})-C_{xh}(X_{i} -E_{\mathbb{P}}(X_{i}))\big{\}}\big{\{}E_{\mathbb{P}}(X_{i})/X_{i}-\lambda\big{\}} \big{]}\)_a.s._\([\mathbb{P}]\) for \(\mathbf{V}_{i}\)=\(h_{i}-\overline{h}-S_{xh}(X_{i}-\overline{X})/S_{x}^{2}\). Note that the above limits are p.d. matrices because C5 holds and C0 holds with \(0\leq\lambda<E_{\mathbb{P}}(X_{i})/b\). Therefore, C3 holds _a.s._\([\mathbb{P}]\) under any \(\pi\)PS sampling design. This completes the proof of (ii) in Lemma S2. ## 2 Additional mathematical details In this section, we state and prove some technical results, which will be required to prove the theorems stated in the main text. **Lemma S 3**.: _Suppose that C2 holds. Then, LMS sampling design is a high entropy sampling design. Moreover, under LMS sampling design, there exist constants \(L,L^{\prime}>0\) such that_ \[L\leq\min_{1\leq i\leq N}(N\pi_{i}/n)\leq\max_{1\leq i\leq N}(N\pi_{i}/n)\leq L ^{\prime} \tag{5}\] _for all sufficiently large \(\nu\)._ The condition (5) was considered earlier in Wang and Opsomer (2011), Boistard et al. (2017), etc. However, the above authors did not discuss whether LMS sampling design satisfies (5) or not. Proof.: Suppose that \(P(s)\) and \(R(s)\) denote LMS sampling design and SRSWOR, respectively. Note that SRSWOR is a rejective sampling design. Then, \(P(s)\)=\((\overline{x}/\overline{X})/^{N}C_{n}\) and \(R(s)\)=\((^{N}C_{n})^{-1}\), where \(\overline{x}\)=\(\sum_{i\in s}X_{i}/n\) and \(s\in\mathcal{S}\). By Cauchy-Schwarz inequality, we have \(D(P||R)\)=\(E_{R}((\overline{x}/\overline{X})\log(\overline{x}/\ \overline{X}))\leq K_{1}E_{R}|\overline{x}/\overline{X}-1|\leq K_{1}E_{R}( \overline{x}/\overline{X}-1)^{2}\) for some \(K_{1}>0\) since C2 holds, and \(\log(x)\leq|x-1|\) for \(x>0\). Here \(E_{R}\) denotes the expectation with respect to \(R(s)\). Therefore, \(nD(P||R)\leq K_{1}(1-f)(N/(N-1))(S_{x}^{2}/\overline{X}^{2})\leq 2K_{1}(\sum_{i=1}^{N}X _{i}^{2}/N\overline{X}^{2})\leq 2K_{1}(\max_{1\leq i\leq N}X_{i}/\min_{1\leq i \leq N}X_{i})^{2}\)=\(O(1)\) as \(\nu\rightarrow\infty\), where \(f\)=\(n/N\). Hence, \(D(P||R)\to 0\) as \(\nu\rightarrow\infty\). Thus LMS sampling design is a high entropy sampling design. Next, suppose that \(\{\pi_{i}\}_{i=1}^{N}\) denote inclusion probabilities of \(P(s)\). Then, we have \(\pi_{i}\)=\((n-1)/(N-1)+(X_{i}/\sum_{i=1}^{N}X_{i})((N-n)/(N-1))\) and \(\pi_{i}-n/N\)=\(-(N-n)(N(N-1))^{-1}(X_{i}/\overline{X}-1)\). Further, \[\frac{|\pi_{i}-n/N|}{n/N}=\frac{N-n}{n(N-1)}\left|\frac{X_{i}}{\overline{X}}-1 \right|\leq\frac{N-n}{n(N-1)}\left(\frac{\max_{1\leq i\leq N}X_{i}}{\min_{1 \leq i\leq N}X_{i}}+1\right).\] Therefore, \(\max_{1\leq i\leq N}|N\pi_{i}/n-1|\to 0\) as \(\nu\rightarrow\infty\) by C2. 
Hence, \(K_{2}\leq\min_{1\leq i\leq N}(N\pi_{i}/n)\)\(\leq\)\(\max_{1\leq i\leq N}(N\pi_{i}/n)\)\(\leq\)\(K_{3}\) for all sufficiently large \(\nu\) and some constants \(K_{2}>0\) and \(K_{3}>0\). Thus (5) holds under LMS sampling design. Next, suppose that \(\{\mathbf{V}_{i}\}_{i=1}^{N}\), \(\overline{\mathbf{V}}\), \(\Sigma_{1}\) and \(\Sigma_{2}\) are as in the previous Section 1. Let us define \(\hat{\overline{V}}_{1}\)= \(\sum_{i\in s}(N\pi_{i})^{-1}V_{i}\) and \(\hat{\overline{V}}_{2}\)=\(\sum_{i\in s}G_{i}V_{i}/NX_{i}\), where \(G_{i}\)'s are as in the paragraph containing Table 8 in the main article. Now, we state the following lemma. **Lemma S 4**.: _Suppose that \(C0\) through C3 hold. Then, under SRSWOR, LMS sampling design and any HE\(\pi\)PS sampling design, we have \(\sqrt{n}(\hat{\overline{\mathbf{V}}}_{1}-\overline{\mathbf{V}})\xrightarrow{ \mathcal{L}}N(0,\Gamma_{1})\) as \(\nu\rightarrow\infty\), where \(\Gamma_{1}\)=\(\lim_{\nu\rightarrow\infty}\Sigma_{1}\). Further, suppose that C0 through C2 and C4 hold. Then, we have \(\sqrt{n}(\hat{\overline{\mathbf{V}}}_{2}-\overline{\mathbf{V}})\xrightarrow{ \mathcal{L}}N(0,\Gamma_{2})\) as \(\nu\rightarrow\infty\) under RHC sampling, where \(\Gamma_{2}\)=\(\lim_{\nu\rightarrow\infty}\Sigma_{2}\)._ Proof.: Note that SRSWOR is a high entropy sampling design since it is a rejective sampling design. Also, (5) in Lemma S3 holds trivially under SRSWOR. It follows from Lemma S3 that LMS sampling design is a high entropy sampling design, and (5) holds under this sampling design. Further, any HE\(\pi\)PS sampling design satisfies (5) since C2 holds. Now, fix \(\epsilon>0\) and \(\mathbf{m}\in\mathbb{R}^{p}\). Suppose that \(L(\epsilon,\mathbf{m})\)=\((n^{-1}N^{2}\mathbf{m}\Sigma_{1}\mathbf{m}^{T})^{-1}\sum_{i\in G( \epsilon,\mathbf{m})}(\mathbf{m}\;(\mathbf{V}_{i}-\mathbf{T}\pi_{i})^{T})^{2}( \pi_{i}^{-1}-1)\) for \(G(\epsilon,\mathbf{m})\)=\(\{1\leq i\leq N:|\mathbf{m}(\mathbf{V}_{i}-\mathbf{T}\pi_{i})^{T}|> \epsilon\pi_{i}N(n^{-1}\;\mathbf{m}\Sigma_{1}\mathbf{m}^{T})^{1/2}\}\), \(\mathbf{T}\)=\(\sum_{i=1}^{N}\mathbf{V}_{i}(1-\pi_{i})/\sum_{i=1}^{N}\pi_{i}(1-\pi_{i})\) and \(\mathbf{Z}_{i}\)=\((n/N\pi_{i})\mathbf{V}_{i}\)\(-(n/N)\mathbf{T}\), \(i\)=\(1,\ldots,N\). Then, given any \(\eta>0\), \(L(\epsilon,\mathbf{m})\leq(\mathbf{m}\Sigma_{1}\mathbf{m}^{T})^{-(1+\eta/2)}\)\(n^{-\eta/2}\epsilon^{-\eta}N^{-1}\)\(\sum_{i=1}^{N}(||\mathbf{m}||||\mathbf{Z}_{i}||)^{2+\eta}(N\pi_{i}/n)\) since \(|\mathbf{m}\mathbf{Z}_{i}^{T}|/(\sqrt{n}\epsilon(\mathbf{m}\Sigma_{1}\mathbf{ m}^{T})^{1/2})>1\) for any \(i\in G(\epsilon,\mathbf{m})\). It follows from Jensen's inequality that \(N^{-1}\sum_{i=1}^{N}||\mathbf{Z}_{i}||^{2+\eta}\)\((N\pi_{i}/n)\)\(\leq\)\(2^{1+\eta}(N^{-1}\sum_{i=1}^{N}||\mathbf{V}_{i}(n/N\pi_{i})||^{2+\eta}\)\((N\pi_{i}/n)+\) \(||(n/N)\mathbf{T}||^{2+\eta}\)) since \(\sum_{i=1}^{N}\pi_{i}\)=\(n\). It also follows from C1, C2 and Jensen's inequality that \(\sum_{i=1}^{N}||\mathbf{V}_{i}||^{2+\eta}\)\(/N\)=\(O(1)\) as \(\nu\rightarrow\infty\) for any \(0<\eta\leq 2\). Further, \(\sum_{i=1}^{N}\pi_{i}(1-\pi_{i})/n\) is bounded away from 0 as \(\nu\rightarrow\infty\) under SRSWOR, LMS sampling design and any HE\(\pi\)PS sampling design because (5) holds under these sampling designs, and C0 holds. 
Therefore, \(N^{-1}\sum_{i=1}^{N}||\mathbf{V}_{i}(n/N\pi_{i})||^{2+\eta}(N\pi_{i}/n)\)=\(O(1)\) and \(||(n/N)\mathbf{T}||^{2+\eta}\)=\(O(1)\), and hence \(N^{-1}\sum_{i=1}^{N}||\mathbf{Z}_{i}||^{2+\eta}(N\pi_{i}/n)\)=\(O(1)\) as \(\nu\rightarrow\infty\) under the above sampling designs. Then, \(L(\epsilon,\mathbf{m})\to 0\) as \(\nu\rightarrow\infty\) for any \(\epsilon>0\) under all of these sampling designs since C3 holds. Therefore, \(\inf\{\epsilon>0:L(\epsilon,\mathbf{m})\leq\epsilon\}\to 0\) as \(\nu\rightarrow\infty\), and consequently the Hajek-Lindeberg condition holds for \(\{\mathbf{m}\mathbf{V}_{i}^{T}\}_{i=1}^{N}\) under each of the above sampling designs. Also, \(\sum_{i=1}^{N}\pi_{i}(1-\pi_{i})\rightarrow\infty\) as \(\nu\rightarrow\infty\) under these sampling designs. Then, from Theorem 5 in Berger (1998), \(\sqrt{n}\mathbf{m}(\hat{\overline{\mathbf{V}}}_{1}-\overline{\mathbf{V}})^{T}\xrightarrow{\mathcal{L}}N(0,\mathbf{m}\Gamma_{1}\mathbf{m}^{T})\) as \(\nu\rightarrow\infty\) under each of the above sampling designs for any \(\mathbf{m}\in\mathbb{R}^{p}\) and \(\Gamma_{1}\)=\(\lim_{\nu\rightarrow\infty}\Sigma_{1}\). Hence, \(\sqrt{n}(\hat{\overline{\mathbf{V}}}_{1}-\overline{\mathbf{V}})\xrightarrow{\mathcal{L}}N(0,\Gamma_{1})\) as \(\nu\rightarrow\infty\) under the above-mentioned sampling designs. Next, define \(L(\mathbf{m})\)=\(n\gamma(\max_{1\leq i\leq N}X_{i})(N^{-1}\sum_{i=1}^{n}N_{i}^{3}(N_{i}-1)\sum_{i=1}^{N}(\mathbf{m}(\mathbf{V}_{i}\overline{X}/X_{i}-\overline{\mathbf{V}})^{T})^{4}X_{i})^{1/2}\times(\overline{X}^{3/2}\sum_{i=1}^{n}N_{i}(N_{i}-1)\mathbf{m}\Sigma_{2}\mathbf{m}^{T})^{-1}\), where \(\gamma\)=\(\sum_{i=1}^{n}N_{i}(N_{i}-1)/N(N-1)\) as before. Note that as \(\nu\rightarrow\infty\), \((N^{-1}\sum_{i=1}^{N}(\mathbf{m}(\mathbf{V}_{i}\overline{X}/X_{i}-\overline{\mathbf{V}})^{T})^{4}(X_{i}/\overline{X}))^{1/2}\)=\(O(1)\) and \((\max_{1\leq i\leq N}X_{i})/\overline{X}\)=\(O(1)\) since C1 and C2 hold. Now, recall from Section 2 in the main text that the \(N_{i}\)'s are considered as in (1). Then, under C0, we have \((\sum_{i=1}^{n}N_{i}^{3}(N_{i}-1))^{1/2}(\sum_{i=1}^{n}N_{i}(N_{i}-1))^{-1}\)=\(O(1/\sqrt{n})\) and \(n\gamma\)=\(O(1)\) as \(\nu\rightarrow\infty\). Therefore, \(L(\mathbf{m})\to 0\) as \(\nu\rightarrow\infty\) since C4 holds. This implies that condition C1 in Ohlsson (1986) holds for \(\{\mathbf{m}\mathbf{V}_{i}^{T}\}_{i=1}^{N}\). Therefore, by Theorem 2.1 in Ohlsson (1986), \(\sqrt{n}\mathbf{m}(\hat{\overline{\mathbf{V}}}_{2}-\overline{\mathbf{V}})^{T}\xrightarrow{\mathcal{L}}N(0,\mathbf{m}\Gamma_{2}\mathbf{m}^{T})\) as \(\nu\rightarrow\infty\) under RHC sampling design for any \(\mathbf{m}\in\mathbb{R}^{p}\) and \(\Gamma_{2}\)=\(\lim_{\nu\rightarrow\infty}\Sigma_{2}\). Hence, \(\sqrt{n}(\hat{\overline{\mathbf{V}}}_{2}-\overline{\mathbf{V}})\xrightarrow{\mathcal{L}}N(0,\Gamma_{2})\) as \(\nu\rightarrow\infty\) under RHC sampling design.

Next, suppose that \(\overline{\mathbf{W}}\)=\(\sum_{i=1}^{N}\mathbf{W}_{i}/N\), \(\hat{\overline{\mathbf{W}}}_{1}\)=\(\sum_{i\in s}(N\pi_{i})^{-1}\mathbf{W}_{i}\) and \(\hat{\overline{\mathbf{W}}}_{2}\)=\(\sum_{i\in s}G_{i}\mathbf{W}_{i}/NX_{i}\) for \(\mathbf{W}_{i}\)=\((h_{i},X_{i}h_{i},X_{i}^{2})\), \(i\)=\(1,\ldots,N\). Let us also define \(\hat{\overline{X}}_{1}\)=\(\sum_{i\in s}(N\pi_{i})^{-1}X_{i}\). Now, we state the following lemma.

**Lemma S 5**.: _Suppose that \(C0\) through \(C2\) hold.
Then, under SRSWOR, LMS sampling design and any HE\(\pi\)PS sampling design, we have \(\hat{\overline{\mathbf{W}}}_{1}-\overline{\mathbf{W}}\)=\(o_{p}(1)\), \(\sqrt{n}(\hat{\overline{X}}_{1}-\overline{X})\)=\(O_{p}(1)\) and \(\sqrt{n}(\sum_{i\in s}(N\pi_{i})^{-1}-1)\)=\(O_{p}(1)\) as \(\nu\rightarrow\infty\). Moreover, under RHC sampling design, we have \(\hat{\overline{\mathbf{W}}}_{2}-\overline{\mathbf{W}}\)=\(o_{p}(1)\) and \(\sqrt{n}(\sum_{i\in s}G_{i}/NX_{i}-1)\)=\(O_{p}(1)\) as \(\nu\rightarrow\infty\)._

Proof.: We first show that as \(\nu\rightarrow\infty\), \(\hat{\overline{\mathbf{W}}}_{1}-\overline{\mathbf{W}}\)=\(o_{p}(1)\), \(\sqrt{n}(\hat{\overline{X}}_{1}-\overline{X})\)=\(O_{p}(1)\) and \(\sqrt{n}(\sum_{i\in s}(N\pi_{i})^{-1}-1)\)=\(O_{p}(1)\) under a high entropy sampling design \(P(s)\) satisfying (5) in Lemma S3. Fix \(\mathbf{m}\in\mathbb{R}^{2p+1}\). Suppose that \(\tilde{R}(s)\) is a rejective sampling design with inclusion probabilities equal to those of \(P(s)\) (cf. Berger (1998)). Under \(\tilde{R}(s)\), \(var(\mathbf{m}(\sqrt{n}(\hat{\overline{\mathbf{W}}}_{1}-\overline{\mathbf{W}})^{T}))\)=\(\mathbf{m}(nN^{-2}\sum_{i=1}^{N}(\mathbf{W}_{i}-\mathbf{T}\pi_{i})^{T}(\mathbf{W}_{i}-\mathbf{T}\pi_{i})(\pi_{i}^{-1}-1))\mathbf{m}^{T}(1+e)\) (see Theorem 6.1 in Hajek (1964)), where \(\mathbf{T}\)=\(\sum_{i=1}^{N}\mathbf{W}_{i}(1-\pi_{i})/\sum_{i=1}^{N}\pi_{i}(1-\pi_{i})\), and \(e\to 0\) as \(\nu\rightarrow\infty\) whenever \(\sum_{i=1}^{N}\pi_{i}(1-\pi_{i})\rightarrow\infty\) as \(\nu\rightarrow\infty\). Since \(\tilde{R}(s)\) has the same inclusion probabilities as \(P(s)\), (5) holds under \(\tilde{R}(s)\), and hence \(\sum_{i=1}^{N}\pi_{i}(1-\pi_{i})\rightarrow\infty\) as \(\nu\rightarrow\infty\) under \(\tilde{R}(s)\) because C0 holds. Then, \(\mathbf{m}(nN^{-2}\sum_{i=1}^{N}(\mathbf{W}_{i}-\mathbf{T}\pi_{i})^{T}(\mathbf{W}_{i}-\mathbf{T}\pi_{i})(\pi_{i}^{-1}-1))\mathbf{m}^{T}\leq nN^{-2}\sum_{i=1}^{N}(\mathbf{m}\mathbf{W}_{i}^{T})^{2}/\pi_{i}\)=\(O(1)\) under \(\tilde{R}(s)\) since C1 and (5) hold. Therefore, \(\sqrt{n}(\hat{\overline{\mathbf{W}}}_{1}-\overline{\mathbf{W}})\)=\(O_{p}(1)\) as \(\nu\rightarrow\infty\) under \(\tilde{R}(s)\) since \(var(\mathbf{m}(\sqrt{n}(\hat{\overline{\mathbf{W}}}_{1}-\overline{\mathbf{W}})^{T}))\)=\(O(1)\) as \(\nu\rightarrow\infty\) for any \(\mathbf{m}\in\mathbb{R}^{2p+1}\) under \(\tilde{R}(s)\). Now, \(\sum_{s\in E}P(s)\leq\sum_{s\in E}\tilde{R}(s)+(2D(P||\tilde{R}))^{1/2}\leq\sum_{s\in E}\tilde{R}(s)+(2D(P||R))^{1/2}\) (see Lemmas 2 and 3 in Berger (1998)), where \(E\)=\(\{s\in\mathcal{S}:||\sqrt{n}(\hat{\overline{\mathbf{W}}}_{1}-\overline{\mathbf{W}})||>\delta\}\) for \(\delta>0\) and \(R(s)\) is any other rejective sampling design. Let us consider a rejective sampling design \(R(s)\) such that \(D(P||R)\to 0\) as \(\nu\rightarrow\infty\); such an \(R(s)\) exists because \(P(s)\) is a high entropy sampling design. Therefore, given any \(\epsilon>0\), there exists a \(\delta>0\) such that \(\sum_{s\in E}P(s)\leq\epsilon\) for all sufficiently large \(\nu\). Hence, as \(\nu\rightarrow\infty\), \(\sqrt{n}(\hat{\overline{\mathbf{W}}}_{1}-\overline{\mathbf{W}})\)=\(O_{p}(1)\) and \(\hat{\overline{\mathbf{W}}}_{1}-\overline{\mathbf{W}}\)=\(o_{p}(1)\) under \(P(s)\). Similarly, we can show that as \(\nu\rightarrow\infty\), \(\sqrt{n}(\hat{\overline{X}}_{1}-\overline{X})\)=\(O_{p}(1)\) and \(\sqrt{n}(\sum_{i\in s}(N\pi_{i})^{-1}-1)\)=\(O_{p}(1)\) under \(P(s)\). Now, recall from the proof of Lemma S4 that SRSWOR and LMS sampling design are high entropy sampling designs, and they satisfy (5).
Also, any HE\(\pi\)PS sampling design satisfies (5). Therefore, as \(\nu\rightarrow\infty\), \(\hat{\overline{\mathbf{W}}}_{1}-\overline{\mathbf{W}}\)=\(o_{p}(1)\), \(\sqrt{n}(\hat{\overline{X}}_{1}-\overline{X})\)=\(O_{p}(1)\) and \(\sqrt{n}(\sum_{i\in s}(N\pi_{i})^{-1}-1)\)=\(O_{p}(1)\) under the above-mentioned sampling designs. Under RHC sampling design, \(var(\mathbf{m}(\sqrt{n}(\hat{\overline{\mathbf{W}}}_{2}-\overline{\mathbf{W}})^{T}))\)=\(\mathbf{m}(n\gamma\overline{X}N^{-1}\sum_{i=1}^{N}(\mathbf{W}_{i}-X_{i}\overline{\mathbf{W}}/\overline{X})^{T}(\mathbf{W}_{i}-X_{i}\overline{\mathbf{W}}/\overline{X})/X_{i})\mathbf{m}^{T}\) (see Ohlsson (1986)). Recall from the proof of Lemma S4 that \(n\gamma\)=\(O(1)\) as \(\nu\rightarrow\infty\). Then, \(var(\mathbf{m}(\sqrt{n}(\hat{\overline{\mathbf{W}}}_{2}-\overline{\mathbf{W}})^{T}))\leq n\gamma(\overline{X}/N)\sum_{i=1}^{N}(\mathbf{m}\mathbf{W}_{i}^{T})^{2}/X_{i}\)=\(O(1)\) as \(\nu\rightarrow\infty\) since C1 and C2 hold. Hence, as \(\nu\rightarrow\infty\), \(\sqrt{n}(\hat{\overline{\mathbf{W}}}_{2}-\overline{\mathbf{W}})\)=\(O_{p}(1)\) and \(\hat{\overline{\mathbf{W}}}_{2}-\overline{\mathbf{W}}\)=\(o_{p}(1)\) under RHC sampling design. Similarly, we can show that as \(\nu\rightarrow\infty\), \(\sqrt{n}(\sum_{i\in s}G_{i}/NX_{i}-1)\)=\(O_{p}(1)\) under RHC sampling design.

Recall from the \(2^{nd}\) paragraph in the Appendix that we denote the HT, the RHC, the Hajek, the ratio, the product, the GREG and the PEML estimators of population means of \(h(y)\) by \(\hat{\overline{h}}_{HT}\), \(\hat{\overline{h}}_{RHC}\), \(\hat{\overline{h}}_{H}\), \(\hat{\overline{h}}_{RA}\), \(\hat{\overline{h}}_{PR}\), \(\hat{\overline{h}}_{GREG}\) and \(\hat{\overline{h}}_{PEML}\), respectively. Suppose that \(\hat{\overline{h}}\) denotes one of \(\hat{\overline{h}}_{HT}\), \(\hat{\overline{h}}_{H}\), \(\hat{\overline{h}}_{RA}\), \(\hat{\overline{h}}_{PR}\), and \(\hat{\overline{h}}_{GREG}\) with \(d(i,s)\)=\((N\pi_{i})^{-1}\). Then, a Taylor-type expansion of \(\hat{\overline{h}}-\overline{h}\) can be obtained as \(\hat{\overline{h}}-\overline{h}\)=\(\Theta(\hat{\overline{\mathbf{V}}}_{1}-\overline{\mathbf{V}})+\mathbf{Z}\), where \(\hat{\overline{\mathbf{V}}}_{1}\)=\(\sum_{i\in s}(N\pi_{i})^{-1}\mathbf{V}_{i}\), and the \(\mathbf{V}_{i}\)'s, \(\Theta\) and \(\mathbf{Z}\) are as described in Table 1 below. On the other hand, if \(\hat{\overline{h}}\) is either \(\hat{\overline{h}}_{RHC}\) or \(\hat{\overline{h}}_{GREG}\) with \(d(i,s)\)=\(G_{i}/NX_{i}\), a Taylor-type expansion of \(\hat{\overline{h}}-\overline{h}\) can be obtained as \(\hat{\overline{h}}-\overline{h}\)=\(\Theta(\hat{\overline{\mathbf{V}}}_{2}-\overline{\mathbf{V}})+\mathbf{Z}\). Here, \(\hat{\overline{\mathbf{V}}}_{2}\)=\(\sum_{i\in s}G_{i}\mathbf{V}_{i}/NX_{i}\), the \(G_{i}\)'s are as in the paragraph containing Table 8 in the main text, and the \(\mathbf{V}_{i}\)'s, \(\Theta\) and \(\mathbf{Z}\) are once again described in Table 1. In Table 1, \(\hat{\overline{X}}_{1}\)=\(\sum_{i\in s}(N\pi_{i})^{-1}X_{i}\), \(\hat{\overline{X}}_{2}\)=\(\hat{\overline{X}}_{1}/\sum_{i\in s}(N\pi_{i})^{-1}\), \(\hat{\beta}_{1}\)=\((\sum_{i\in s}(N\pi_{i})^{-1}\sum_{i\in s}(N\pi_{i})^{-1}h_{i}X_{i}-\hat{\overline{h}}_{HT}\hat{\overline{X}}_{1})/(\sum_{i\in s}(N\pi_{i})^{-1}\sum_{i\in s}(N\pi_{i})^{-1}X_{i}^{2}-(\hat{\overline{X}}_{1})^{2})\) and \(\hat{\beta}_{2}\)=\((\sum_{i\in s}(G_{i}/NX_{i})\sum_{i\in s}(G_{i}h_{i}/N)-\hat{\overline{h}}_{RHC}\overline{X})/(\sum_{i\in s}(G_{i}/NX_{i})\sum_{i\in s}(G_{i}X_{i}/N)-\overline{X}^{2})\).
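As a concrete illustration of the estimators entering these expansions, the following minimal Python sketch computes \(\hat{\overline{h}}_{HT}\), \(\hat{\overline{h}}_{H}\), \(\hat{\overline{h}}_{RA}\), \(\hat{\overline{h}}_{PR}\) and \(\hat{\overline{h}}_{GREG}\) with \(d(i,s)\)=\((N\pi_{i})^{-1}\) for a scalar \(h\). The slope is exactly \(\hat{\beta}_{1}\) displayed above; the Hajek, ratio, product and GREG formulas are the standard textbook forms and are stated here as assumptions, since their precise definitions are given in the main text. This is an illustration, not the authors' code.

```python
import numpy as np

def design_weighted_estimators(h, x, pi, N, xbar_pop):
    """h, x, pi: values of h(y), the size variable x and the inclusion
    probabilities for the sampled units only; N: population size;
    xbar_pop: the known population mean of x."""
    d = 1.0 / (N * pi)                  # weights d(i, s) = (N * pi_i)^(-1)
    ht = np.sum(d * h)                  # Horvitz-Thompson estimator
    hajek = ht / np.sum(d)              # Hajek estimator (assumed standard form)
    xht = np.sum(d * x)                 # HT estimator of the mean of x
    ratio = ht * xbar_pop / xht         # ratio estimator (assumed standard form)
    product = ht * xht / xbar_pop       # product estimator (assumed standard form)
    # GREG estimator: HT estimator plus a regression adjustment; the slope
    # below is beta_hat_1 from the paragraph above.
    beta1 = (np.sum(d) * np.sum(d * h * x) - ht * xht) / \
            (np.sum(d) * np.sum(d * x ** 2) - xht ** 2)
    greg = ht + beta1 * (xbar_pop - xht)
    return ht, hajek, ratio, product, greg
```

Under SRSWOR, for instance, \(\pi_{i}\)=\(n/N\), so the weights reduce to \(1/n\) and the HT and Hajek estimators above both reduce to the sample mean of \(h\).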
Now, we state the following lemma.

**Lemma S 6**.: \((i)\) _Suppose that C0 through C3 hold. Further, suppose that \(\hat{\overline{h}}\) is one of \(\hat{\overline{h}}_{HT}\), \(\hat{\overline{h}}_{H}\), \(\hat{\overline{h}}_{RA}\), \(\hat{\overline{h}}_{PR}\), and \(\hat{\overline{h}}_{GREG}\) with \(d(i,s)\)=\((N\pi_{i})^{-1}\). Then, under SRSWOR, LMS sampling design and any HE\(\pi\)PS sampling design,_

\[\sqrt{n}(\hat{\overline{h}}-\overline{h})\xrightarrow{\mathcal{L}}N(0,\Gamma)\text{ as }\nu\rightarrow\infty \tag{6}\]

_for some p.d. matrix \(\Gamma\)._

\((ii)\) _Next, suppose that C0 through C2 and C4 hold, and \(\hat{\overline{h}}\) is \(\hat{\overline{h}}_{RHC}\) or \(\hat{\overline{h}}_{GREG}\) with \(d(i,s)\)=\(G_{i}/NX_{i}\). Then, (6) holds under RHC sampling design._

Proof.: It can be shown from Lemma S4 that \(\sqrt{n}(\hat{\overline{\mathbf{V}}}_{1}-\overline{\mathbf{V}})\xrightarrow{\mathcal{L}}N(0,\Gamma_{1})\) as \(\nu\rightarrow\infty\) under SRSWOR, LMS sampling design and any HE\(\pi\)PS sampling design, where \(\Gamma_{1}\)=\(\lim_{\nu\rightarrow\infty}nN^{-2}\sum_{i=1}^{N}(\mathbf{V}_{i}-\mathbf{T}\pi_{i})^{T}(\mathbf{V}_{i}-\mathbf{T}\pi_{i})(\pi_{i}^{-1}-1)\) with \(\mathbf{T}\)=\(\sum_{i=1}^{N}\mathbf{V}_{i}(1-\pi_{i})/\sum_{i=1}^{N}\pi_{i}(1-\pi_{i})\). Note that \(\Gamma_{1}\) is a p.d. matrix under each of the above sampling designs as C3 holds under these sampling designs. Let us now consider from Table 1 various choices of \(\Theta\) and \(\mathbf{Z}\) corresponding to \(\hat{\overline{h}}_{HT}\), \(\hat{\overline{h}}_{H}\), \(\hat{\overline{h}}_{RA}\), \(\hat{\overline{h}}_{PR}\), and \(\hat{\overline{h}}_{GREG}\) with \(d(i,s)\)=\((N\pi_{i})^{-1}\). Then, it can be shown from Lemma S5 that for each of these choices, \(\sqrt{n}\mathbf{Z}\)=\(o_{p}(1)\) and \(\Theta-1\)=\(o_{p}(1)\) as \(\nu\rightarrow\infty\) under the above-mentioned sampling designs. Therefore, (6) holds under those sampling designs with \(\Gamma\)=\(\Gamma_{1}\). This completes the proof of \((i)\) in Lemma S6. We can show from Lemma S4 that \(\sqrt{n}(\hat{\overline{\mathbf{V}}}_{2}-\overline{\mathbf{V}})\xrightarrow{\mathcal{L}}N(0,\Gamma_{2})\) as \(\nu\rightarrow\infty\) under RHC sampling design, where \(\Gamma_{2}\)=\(\lim_{\nu\rightarrow\infty}n\gamma\overline{X}N^{-1}\sum_{i=1}^{N}(\mathbf{V}_{i}-X_{i}\overline{\mathbf{V}}/\overline{X})^{T}(\mathbf{V}_{i}-X_{i}\overline{\mathbf{V}}/\overline{X})/X_{i}\) with \(\gamma\)=\(\sum_{i=1}^{n}N_{i}(N_{i}-1)/N(N-1)\). Note that \(\Gamma_{2}\) is a p.d. matrix since C4 holds. Let us now consider from Table 1 different choices of \(\Theta\) and \(\mathbf{Z}\) corresponding to \(\hat{\overline{h}}_{RHC}\), and \(\hat{\overline{h}}_{GREG}\) with \(d(i,s)\)=\(G_{i}/NX_{i}\). Then, it follows from Lemma S5 that for each of these choices, \(\sqrt{n}\mathbf{Z}\)=\(o_{p}(1)\) and \(\Theta-1\)=\(o_{p}(1)\) as \(\nu\rightarrow\infty\) under RHC sampling design. Therefore, (6) holds under RHC sampling design with \(\Gamma\)=\(\Gamma_{2}\). This completes the proof of \((ii)\) in Lemma S6.

Let \(\{\mathbf{V}_{i}\}_{i=1}^{N}\) be as described in Table 1. Recall \(\Sigma_{1}\) and \(\Sigma_{2}\) from the paragraph preceding Lemma S2 in this supplement. Note that the expression of \(\Sigma_{1}\) remains the same for different HE\(\pi\)PS sampling designs. Also, recall from the paragraph preceding Theorem 3 in the main text that \(\phi\)=\(\overline{X}-(n/N)\sum_{i=1}^{N}X_{i}^{2}/N\overline{X}\). Now, we state the following lemma.
**Lemma S 7**.: \((i)\) _Suppose that C0 through C3 hold. Further, suppose that \(\sigma_{1}^{2}\) and \(\sigma_{2}^{2}\) denote \(\lim_{\nu\rightarrow\infty}\nabla g(\mu_{0})\Sigma_{1}\nabla g(\mu_{0})^{T}\) under SRSWOR and LMS sampling design, respectively, where \(\mu_{0}\)=\(\lim_{\nu\rightarrow\infty}\overline{h}\). Then, we have \(\sigma_{1}^{2}\)=\(\sigma_{2}^{2}\)=\((1-\lambda)\lim_{\nu\rightarrow\infty}\sum_{i=1}^{N}(A_{i}-\bar{A})^{2}/N\) for \(A_{i}\)=\(\nabla g(\mu_{0})\mathbf{V}_{i}^{T}\), \(i\)=\(1,\ldots,N\)._

\((ii)\) _Next, suppose that C4 holds, and \(\sigma_{3}^{2}\)=\(\lim_{\nu\rightarrow\infty}\nabla g(\mu_{0})\Sigma_{2}\nabla g(\mu_{0})^{T}\) in the case of RHC sampling design. Then, we have \(\sigma_{3}^{2}\)=\(\lim_{\nu\rightarrow\infty}n\gamma((\overline{X}/N)\sum_{i=1}^{N}A_{i}^{2}/X_{i}-\bar{A}^{2})\). On the other hand, if C0 through C3 hold, and \(\sigma_{4}^{2}\)=\(\lim_{\nu\rightarrow\infty}\nabla g(\mu_{0})\Sigma_{1}\nabla g(\mu_{0})^{T}\) under any HE\(\pi\)PS sampling design, then we have \(\sigma_{4}^{2}\)=\(\lim_{\nu\rightarrow\infty}\big\{(1/N)\sum_{i=1}^{N}A_{i}^{2}\big((\overline{X}/X_{i})-(n/N)\big)-\phi^{-1}\overline{X}^{-1}\big((n/N)\sum_{i=1}^{N}A_{i}X_{i}/N-\bar{A}\,\overline{X}\big)^{2}\big\}\). Further, if C0 holds with \(\lambda\)=\(0\) and C1 through C3 hold, then we have \(\sigma_{4}^{2}\)=\(\sigma_{3}^{2}\)=\(\lim_{\nu\to\infty}((\overline{X}/N)\sum_{i=1}^{N}A_{i}^{2}/X_{i}-\bar{A}^{2})\)._

Proof.: Let us first note that the limits in the expressions of \(\sigma_{1}^{2}\) and \(\sigma_{2}^{2}\) exist in view of C3. Also, note that \(\nabla g(\mu_{0})\Sigma_{1}\nabla g(\mu_{0})^{T}\)=\(nN^{-2}\sum_{i=1}^{N}(A_{i}-T_{A}\pi_{i})^{2}(\pi_{i}^{-1}-1)\)=\(nN^{-2}[\sum_{i=1}^{N}A_{i}^{2}(\pi_{i}^{-1}-1)-(\sum_{i=1}^{N}A_{i}(1-\pi_{i}))^{2}/\sum_{i=1}^{N}\pi_{i}(1-\pi_{i})]\), where \(T_{A}\)=\(\sum_{i=1}^{N}A_{i}(1-\pi_{i})/\sum_{i=1}^{N}\pi_{i}(1-\pi_{i})\) and \(A_{i}\)=\(\nabla g(\mu_{0})\mathbf{V}_{i}^{T}\). Now, substituting \(\pi_{i}\)=\(n/N\) in the above expression for SRSWOR, we get \(\sigma_{1}^{2}\)=\(\lim_{\nu\to\infty}nN^{-2}[\sum_{i=1}^{N}A_{i}^{2}(N/n-1)-(\sum_{i=1}^{N}A_{i}(1-n/N))^{2}/n(1-n/N)]\)=\(\lim_{\nu\to\infty}(1-n/N)\sum_{i=1}^{N}(A_{i}-\bar{A})^{2}/N\). Since C0 holds, we have \(\sigma_{1}^{2}\)=\((1-\lambda)\lim_{\nu\to\infty}\sum_{i=1}^{N}(A_{i}-\bar{A})^{2}/N\). Let \(\{\pi_{i}\}_{i=1}^{N}\) be the inclusion probabilities of LMS sampling design. Then, \(\sigma_{2}^{2}-\sigma_{1}^{2}\)=\(\lim_{\nu\to\infty}nN^{-2}[\sum_{i=1}^{N}A_{i}^{2}(\pi_{i}^{-1}-N/n)-((\sum_{i=1}^{N}A_{i}(1-\pi_{i}))^{2}/\sum_{i=1}^{N}\pi_{i}(1-\pi_{i})-(\sum_{i=1}^{N}A_{i}(1-n/N))^{2}/n(1-n/N))]\). Now, it can be shown from the proof of Lemma S3 that \(\max_{1\leq i\leq N}|N\pi_{i}/n-1|\to 0\) as \(\nu\to\infty\). Therefore, using C1, we can show that \(\lim_{\nu\to\infty}nN^{-2}\sum_{i=1}^{N}A_{i}^{2}(\pi_{i}^{-1}-N/n)\)=\(0\) and \(\lim_{\nu\to\infty}nN^{-2}[(\sum_{i=1}^{N}A_{i}(1-\pi_{i}))^{2}/\sum_{i=1}^{N}\pi_{i}(1-\pi_{i})-(\sum_{i=1}^{N}A_{i}(1-n/N))^{2}/n(1-n/N)]\)=\(0\), and consequently \(\sigma_{1}^{2}\)=\(\sigma_{2}^{2}\). This completes the proof of \((i)\) in Lemma S7. Next, consider the case of RHC sampling design and note that the limit in the expression of \(\sigma_{3}^{2}\) exists in view of C4.
Also, note that \(\nabla g(\mu_{0})\Sigma_{2}\nabla g(\mu_{0})^{T}\)=\(n\gamma(\overline{X}/N)\sum_{i=1}^{N}(A_{i}-\bar{A}X_{i}/\overline{X})^{2}/X_{i}\)=\(n\gamma((\overline{X}/N)\sum_{i=1}^{N}A_{i}^{2}/X_{i}-\bar{A}^{2})\), where \(\bar{A}\)=\(\sum_{i=1}^{N}A_{i}/N\) and \(\gamma\)=\(\sum_{i=1}^{n}N_{i}(N_{i}-1)/N(N-1)\). Thus we have \(\sigma_{3}^{2}\)=\(\lim_{\nu\to\infty}n\gamma((\overline{X}/N)\sum_{i=1}^{N}A_{i}^{2}/X_{i}-\bar{A}^{2})\). Next, note that the limit in the expression of \(\sigma_{4}^{2}\) exists in view of C3. Substituting \(\pi_{i}\)=\(nX_{i}/\sum_{i=1}^{N}X_{i}\) in \(\nabla g(\mu_{0})\Sigma_{1}\nabla g(\mu_{0})^{T}\) for any HE\(\pi\)PS sampling design, we get \(\sigma_{4}^{2}\)=\(\lim_{\nu\to\infty}nN^{-2}[\sum_{i=1}^{N}A_{i}^{2}(\sum_{i=1}^{N}X_{i}/nX_{i}-1)-(\sum_{i=1}^{N}A_{i}(1-nX_{i}/\sum_{i=1}^{N}X_{i}))^{2}/\sum_{i=1}^{N}(nX_{i}/\sum_{i=1}^{N}X_{i})(1-nX_{i}/\sum_{i=1}^{N}X_{i})]\)=\(\lim_{\nu\to\infty}\big\{(1/N)\sum_{i=1}^{N}A_{i}^{2}\big((\overline{X}/X_{i})-(n/N)\big)-\phi^{-1}\overline{X}^{-1}\big((n/N)\sum_{i=1}^{N}A_{i}X_{i}/N-\bar{A}\,\overline{X}\big)^{2}\big\}\). Further, we can show that \(\sigma_{4}^{2}\)=\(\lim_{\nu\to\infty}((\overline{X}/N)\sum_{i=1}^{N}A_{i}^{2}/X_{i}-\bar{A}^{2})\) when C1 and C2 hold, and C0 holds with \(\lambda\)=\(0\). It also follows from Lemma S1 that \(n\gamma\to 1\) as \(\nu\to\infty\) when C0 holds with \(\lambda\)=\(0\). Thus we have \(\sigma_{3}^{2}\)=\(\sigma_{4}^{2}\)=\(\lim_{\nu\to\infty}((\overline{X}/N)\sum_{i=1}^{N}A_{i}^{2}/X_{i}-\bar{A}^{2})\). This completes the proof of \((ii)\) in Lemma S7.

**Lemma S 8**.: _Suppose that C0 through C2 hold. Then, under SRSWOR, LMS sampling design and any HE\(\pi\)PS sampling design, we have_

\[(i)\quad u^{*}=\max_{i\in s}|Z_{i}|=o_{p}(\sqrt{n}),\text{ and }\quad(ii)\quad\sum_{i\in s}\pi_{i}^{-1}Z_{i}/\sum_{i\in s}\pi_{i}^{-1}Z_{i}^{2}=O_{p}(1/\sqrt{n})\]

_as \(\nu\to\infty\), where \(Z_{i}\)=\(X_{i}-\overline{X}\) for \(i\)=\(1,\ldots,N\)._

Proof.: Let \(P(s)\) be any sampling design and \(E_{P}\) be the expectation with respect to \(P(s)\). Then, \(E_{P}(u^{*}/\sqrt{n})\leq(\max_{1\leq i\leq N}X_{i}+\overline{X})/\sqrt{n}\leq\overline{X}(\max_{1\leq i\leq N}X_{i}/\min_{1\leq i\leq N}X_{i}+1)/\sqrt{n}\)=\(o(1)\) as \(\nu\rightarrow\infty\) since C1 and C2 hold. Therefore, \((i)\) holds under \(P(s)\) by Markov's inequality. Thus \((i)\) holds under SRSWOR, LMS sampling design and any HE\(\pi\)PS sampling design. Using similar arguments as in the first paragraph of the proof of Lemma S5, it can be shown that \(\sqrt{n}(\sum_{i\in s}Z_{i}/N\pi_{i}-\overline{Z})\)=\(\sqrt{n}\sum_{i\in s}Z_{i}/N\pi_{i}\)=\(O_{p}(1)\) (note that \(\overline{Z}\)=\(0\)) and \(\sum_{i\in s}Z_{i}^{2}/N\pi_{i}-\sum_{i=1}^{N}Z_{i}^{2}/N\)=\(o_{p}(1)\) as \(\nu\rightarrow\infty\) under a high entropy sampling design \(P(s)\) satisfying (5) in Lemma S3. Therefore, \(1/(\sum_{i\in s}Z_{i}^{2}/N\pi_{i})\)=\(O_{p}(1)\) as \(\nu\rightarrow\infty\) under \(P(s)\) since \(\sum_{i=1}^{N}Z_{i}^{2}/N\) is bounded away from 0 as \(\nu\rightarrow\infty\) by C1. Thus under \(P(s)\), \(\sum_{i\in s}\pi_{i}^{-1}Z_{i}/\sum_{i\in s}\pi_{i}^{-1}Z_{i}^{2}\)=\(O_{p}(1/\sqrt{n})\) as \(\nu\rightarrow\infty\). It follows from Lemma S3 that SRSWOR and LMS sampling design are high entropy sampling designs and satisfy (5). Also, any HE\(\pi\)PS sampling design satisfies (5) since C2 holds. Therefore, the result in \((ii)\) holds under the above-mentioned sampling designs.
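Before turning to the proofs of the main results, the following minimal sketch shows how the RHC quantities used throughout these lemmas can be simulated. It assumes the standard Rao-Hartley-Cochran construction (the population is split at random into \(n\) groups of sizes \(N_{1},\ldots,N_{n}\) as in (1) of the main text, one unit is drawn from each group with probability proportional to the positive size variable \(x\) within that group, and \(G_{i}\) is the \(x\)-total of the group containing the sampled unit \(i\)); the exact construction is given in the main text, so treat this as an illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def rhc_sample(x, group_sizes):
    """One Rao-Hartley-Cochran draw (assumed standard scheme): randomly split
    the N units into groups of the given sizes (which must sum to N), then
    select one unit per group with probability proportional to x within the
    group.  Returns the sampled indices and the group x-totals G_i."""
    perm = rng.permutation(len(x))
    sample, G, start = [], [], 0
    for n_g in group_sizes:
        grp = perm[start:start + n_g]
        start += n_g
        j = rng.choice(grp, p=x[grp] / x[grp].sum())
        sample.append(j)
        G.append(x[grp].sum())          # G_i: x-total of the group of unit i
    return np.array(sample), np.array(G)

def rhc_mean(v, x, sample, G, N):
    """The RHC estimator of the mean of v: sum over s of G_i * V_i / (N * X_i)."""
    return np.sum(G * v[sample] / x[sample]) / N
```

With near-equal group sizes, \(\gamma\)=\(\sum_{i=1}^{n}N_{i}(N_{i}-1)/N(N-1)\) satisfies \(n\gamma\to 1\) when \(\lambda\)=\(0\), which is precisely what makes \(\sigma_{3}^{2}\) and \(\sigma_{4}^{2}\) agree in Lemma S7 above.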
## 3 Proofs of Remark 1 and Theorems 2, 3, 6 and 7

In this section, we give the proofs of Remark 1 and Theorems 2, 3, 6 and 7 of the main text.

Proof of Theorem 2.: Let us first consider a HE\(\pi\)PS sampling design. Then, it can be shown in the same way as in the \(1^{st}\) paragraph of the proof of Theorem 1 that \(\sqrt{n}(\hat{\overline{h}}_{PEML}-\hat{\overline{h}}_{GREG})\)=\(o_{p}(1)\) for \(d(i,s)\)=\((N\pi_{i})^{-1}\) under this sampling design. It can also be shown in the same way as in the \(1^{st}\) paragraph of the proof of Theorem 1 that if \(\hat{\overline{h}}\) is one of \(\hat{\overline{h}}_{HT}\), \(\hat{\overline{h}}_{H}\), and \(\hat{\overline{h}}_{GREG}\) and \(\hat{\overline{h}}_{PEML}\) with \(d(i,s)\)=\((N\pi_{i})^{-1}\), then (4) in the proof of Theorem 1 holds under the above-mentioned sampling design. Here, we recall from Table 2 in the main text that the HT, the ratio and the product estimators coincide under any HE\(\pi\)PS sampling design. Further, the asymptotic MSE of \(\sqrt{n}(g(\hat{\overline{h}})-g(\overline{h}))\) is \(\nabla g(\mu_{0})\Gamma_{1}(\nabla g(\mu_{0}))^{T}\), where \(\mu_{0}\)=\(\lim_{\nu\rightarrow\infty}\overline{h}\), \(\Gamma_{1}\)=\(\lim_{\nu\rightarrow\infty}nN^{-2}\sum_{i=1}^{N}(\mathbf{V}_{i}-\mathbf{T}\pi_{i})^{T}(\mathbf{V}_{i}-\mathbf{T}\pi_{i})(\pi_{i}^{-1}-1)\), and \(\mathbf{V}_{i}\) in \(\Gamma_{1}\) is \(h_{i}\) or \(h_{i}-\overline{h}\) or \(h_{i}-\overline{h}-S_{xh}(X_{i}-\overline{X})/S_{x}^{2}\) if \(\hat{\overline{h}}\) is \(\hat{\overline{h}}_{HT}\) or \(\hat{\overline{h}}_{H}\), or \(\hat{\overline{h}}_{GREG}\) with \(d(i,s)\)=\((N\pi_{i})^{-1}\), respectively. Now, since \(\sqrt{n}(\hat{\overline{h}}_{PEML}-\hat{\overline{h}}_{GREG})\)=\(o_{p}(1)\) as \(\nu\rightarrow\infty\) under any HE\(\pi\)PS sampling design, \(g(\hat{\overline{h}}_{GREG})\) and \(g(\hat{\overline{h}}_{PEML})\) have the same asymptotic distribution under this sampling design. Thus under any HE\(\pi\)PS sampling design, \(g(\hat{\overline{h}}_{GREG})\) and \(g(\hat{\overline{h}}_{PEML})\) with \(d(i,s)\)=\((N\pi_{i})^{-1}\) form class 5, \(g(\hat{\overline{h}}_{HT})\) forms class 6, and \(g(\hat{\overline{h}}_{H})\) forms class 7 in Table 2 of the main text. This completes the proof of \((i)\) in Theorem 2. Let us now consider the RHC sampling design. We can show from \((ii)\) in Lemma S6 that \(\sqrt{n}(\hat{\overline{h}}-\overline{h})\xrightarrow{\mathcal{L}}N(0,\Gamma)\) as \(\nu\rightarrow\infty\) for some p.d. matrix \(\Gamma\), when \(\hat{\overline{h}}\) is either \(\hat{\overline{h}}_{RHC}\) or \(\hat{\overline{h}}_{GREG}\) with \(d(i,s)\)=\(G_{i}/NX_{i}\) under RHC sampling design. Further, \(\sqrt{n}(\hat{\overline{h}}_{PEML}-\hat{\overline{h}}_{GREG})\)=\(o_{p}(1)\) as \(\nu\rightarrow\infty\) for \(d(i,s)\)=\(G_{i}/NX_{i}\) under RHC sampling design since C2 holds, and \(S_{x}^{2}\) is bounded away from 0 as \(\nu\rightarrow\infty\) (see A2.2 of Appendix 2 in Chen and Sitter (1999)). Therefore, if \(\hat{\overline{h}}\) is one of \(\hat{\overline{h}}_{RHC}\), and \(\hat{\overline{h}}_{GREG}\) and \(\hat{\overline{h}}_{PEML}\) with \(d(i,s)\)=\(G_{i}/NX_{i}\), then we have

\[\sqrt{n}(g(\hat{\overline{h}})-g(\overline{h}))\xrightarrow{\mathcal{L}}N(0,\Delta^{2})\text{ as }\nu\rightarrow\infty \tag{7}\]

for some \(\Delta^{2}>0\) by the delta method and the condition \(\nabla g(\mu_{0})\neq 0\) at \(\mu_{0}\)=\(\lim_{\nu\rightarrow\infty}\overline{h}\).
Moreover, it follows from the proof of \((ii)\) in Lemma S6 that \(\Delta^{2}\)=\(\nabla g(\mu_{0})\Gamma_{2}(\nabla g(\mu_{0}))^{T}\), where \(\Gamma_{2}\)=\(\lim_{\nu\rightarrow\infty}n\gamma\overline{X}N^{-1}\sum_{i=1}^{N}(\mathbf{V}_{i}-X_{i}\overline{\mathbf{V}}/\overline{X})^{T}(\mathbf{V}_{i}-X_{i}\overline{\mathbf{V}}/\overline{X})/X_{i}\). It further follows from Table 1 in this supplement that \(\mathbf{V}_{i}\) in \(\Gamma_{2}\) is \(h_{i}\) if \(\hat{\overline{h}}\) is \(\hat{\overline{h}}_{RHC}\). Also, \(\mathbf{V}_{i}\) in \(\Gamma_{2}\) is \(h_{i}-\overline{h}-S_{xh}(X_{i}-\overline{X})/S_{x}^{2}\) if \(\hat{\overline{h}}\) is \(\hat{\overline{h}}_{GREG}\) with \(d(i,s)\)=\(G_{i}/NX_{i}\). Now, \(g(\hat{\overline{h}}_{GREG})\) and \(g(\hat{\overline{h}}_{PEML})\) have the same asymptotic distribution under RHC sampling design since \(\sqrt{n}(\hat{\overline{h}}_{PEML}-\hat{\overline{h}}_{GREG})\)=\(o_{p}(1)\) as \(\nu\rightarrow\infty\) under this sampling design, as pointed out earlier in this paragraph. Thus \(g(\hat{\overline{h}}_{GREG})\) and \(g(\hat{\overline{h}}_{PEML})\) with \(d(i,s)\)=\(G_{i}/NX_{i}\) under RHC sampling design form class 8, and \(g(\hat{\overline{h}}_{RHC})\) forms class 9 in Table 2 of the main article. This completes the proof of \((ii)\) in Theorem 2.

Proof of Remark 1.: It follows from \((ii)\) in Lemma S7 that in the case of \(\lambda\)=0,

\[\sigma_{3}^{2}=\sigma_{4}^{2}=\lim_{\nu\rightarrow\infty}((\overline{X}/N)\sum_{i=1}^{N}A_{i}^{2}/X_{i}-\bar{A}^{2}), \tag{8}\]

where \(\sigma_{3}^{2}\) and \(\sigma_{4}^{2}\) are as defined in the statement of Lemma S7, and \(A_{i}\)=\(\nabla g(\mu_{0})\mathbf{V}_{i}^{T}\) for different choices of \(\mathbf{V}_{i}\) mentioned in the proof of Theorem 2 above. Thus \(g(\hat{\overline{h}}_{GREG})\) with \(d(i,s)\)=\((N\pi_{i})^{-1}\) under any HE\(\pi\)PS sampling design, and with \(d(i,s)\)=\(G_{i}/NX_{i}\) under RHC sampling design have the same asymptotic MSE. Therefore, class 8 is merged with class 5 in Table 2 of the main text. Further, (8) implies that \(g(\hat{\overline{h}}_{HT})\) under any HE\(\pi\)PS sampling design and \(g(\hat{\overline{h}}_{RHC})\) have the same asymptotic MSE. Therefore, class 9 is merged with class 6 in Table 2 of the main text. This completes the proof of Remark 1.

Proof of Theorem 3.: Recall the expression of the \(A_{i}\)'s from the proofs of Theorem 1 and Remark 1. Note that \(\lim_{\nu\rightarrow\infty}\sum_{i=1}^{N}(A_{i}-\bar{A})^{2}/N\)=\(\lim_{\nu\rightarrow\infty}\sum_{i=1}^{N}(B_{i}-\bar{B})^{2}/N\), \(\lim_{\nu\rightarrow\infty}n\gamma\big((\overline{X}/N)\sum_{i=1}^{N}A_{i}^{2}/X_{i}-\bar{A}^{2}\big)\)=\(\lim_{\nu\rightarrow\infty}n\gamma\big((\overline{X}/N)\sum_{i=1}^{N}B_{i}^{2}/X_{i}-\bar{B}^{2}\big)\) and \(\lim_{\nu\rightarrow\infty}\big\{(1/N)\sum_{i=1}^{N}A_{i}^{2}\big((\overline{X}/X_{i})-(n/N)\big)-\phi^{-1}\overline{X}^{-1}\big((n/N)\sum_{i=1}^{N}A_{i}X_{i}/N-\bar{A}\,\overline{X}\big)^{2}\big\}\)=\(\lim_{\nu\rightarrow\infty}\big\{(1/N)\sum_{i=1}^{N}B_{i}^{2}\big((\overline{X}/X_{i})-(n/N)\big)-\phi^{-1}\overline{X}^{-1}\big((n/N)\sum_{i=1}^{N}B_{i}X_{i}/N-\bar{B}\,\overline{X}\big)^{2}\big\}\) for \(B_{i}\)=\(\nabla g(\overline{h})\mathbf{V}_{i}^{T}\) and \(\mathbf{V}_{i}\) as in Table 1 in this supplement since \(\nabla g(\overline{h})\rightarrow\nabla g(\mu_{0})\) as \(\nu\rightarrow\infty\). Here, \(\phi\)=\(\overline{X}-(n/N)\sum_{i=1}^{N}X_{i}^{2}/N\overline{X}\).
Then, from Lemma S7 and the expressions of asymptotic MSEs of \(\sqrt{n}(g(\hat{\overline{h}})-g(\overline{h}))\) discussed in the proofs of Theorems 1 and 2, the results in Table 3 of the main text follow. This completes the proof of Theorem 3.

Proof of Theorem 6.: Using similar arguments as in the \(1^{st}\) paragraph of the proof of Theorem 4, we can say that under SRSWOR and LMS sampling design, conclusions of Theorems 1 and 3 hold _a.s._\([\mathbb{P}]\) for \(d\)=1, \(p\)=2, \(h(y)\)=\((y,y^{2})\) and \(g(s_{1},s_{2})\)=\(s_{2}-s_{1}^{2}\) in the same way as conclusions of Theorems 1 and 3 hold _a.s._\([\mathbb{P}]\) for \(d\)=\(p\)=1, \(h(y)\)=\(y\) and \(g(s)\)=\(s\) in the \(1^{st}\) paragraph of the proof of Theorem 4. Note that \(W_{i}\)=\(Y_{i}^{2}-2Y_{i}\overline{Y}\) for the above choices of \(h\) and \(g\). Further, it follows from SLLN and the condition \(E_{\mathbb{P}}(\epsilon_{i}^{8})<\infty\) that the \(\Delta_{i}^{2}\)'s in Table 3 in the main text can be expressed in terms of superpopulation moments of \((Y_{i},X_{i})\)_a.s._\([\mathbb{P}]\). Note that \(\Delta_{2}^{2}-\Delta_{1}^{2}\)=\(cov_{\mathbb{P}}^{2}(\tilde{W}_{i},X_{i})\)_a.s._\([\mathbb{P}]\), where \(\tilde{W}_{i}\)=\(Y_{i}^{2}-2Y_{i}E_{\mathbb{P}}(Y_{i})\). Then, \(\Delta_{1}^{2}<\Delta_{2}^{2}\)_a.s._\([\mathbb{P}]\). This completes the proof of \((i)\) in Theorem 6. Next, consider the case of \(0\leq\lambda<E_{\mathbb{P}}(X_{i})/b\). Using the same line of arguments as in the \(2^{nd}\) paragraph of the proof of Theorem 4, it can be shown that under RHC and any HE\(\pi\)PS sampling designs, conclusions of Theorems 2 and 3 hold _a.s._\([\mathbb{P}]\) for \(d\)=1, \(p\)=2, \(h(y)\)=\((y,y^{2})\) and \(g(s_{1},s_{2})\)=\(s_{2}-s_{1}^{2}\) in the same way as conclusions of Theorems 2 and 3 hold _a.s._\([\mathbb{P}]\) for \(d\)=\(p\)=1, \(h(y)\)=\(y\) and \(g(s)\)=\(s\) in the \(2^{nd}\) paragraph of the proof of Theorem 4. Note that \(\Delta_{7}^{2}-\Delta_{5}^{2}\)=\(\big\{\mu_{1}^{2}cov_{\mathbb{P}}(\tilde{W}_{i},X_{i})\big(cov_{\mathbb{P}}(\tilde{W}_{i},X_{i})cov_{\mathbb{P}}(X_{i},1/X_{i})-2cov_{\mathbb{P}}(\tilde{W}_{i},1/X_{i})\big)\big\}-\lambda^{2}cov_{\mathbb{P}}^{2}(\tilde{W}_{i},X_{i})/\chi\mu_{1}-\lambda cov_{\mathbb{P}}^{2}(\tilde{W}_{i},X_{i})\leq\big\{\mu_{1}^{2}cov_{\mathbb{P}}(\tilde{W}_{i},X_{i})\big(cov_{\mathbb{P}}(\tilde{W}_{i},X_{i})cov_{\mathbb{P}}(X_{i},1/X_{i})-2cov_{\mathbb{P}}(\tilde{W}_{i},1/X_{i})\big)\big\}\)_a.s._\([\mathbb{P}]\) because \(\chi>0\). Recall from C6 that \(\xi\)=\(\mu_{3}-\mu_{2}\mu_{1}\) and \(\mu_{j}\)=\(E_{\mathbb{P}}(X_{i}^{j})\) for \(j\)=\(-1,1,2,3\). Then, from the linear model set up, we have \(\big\{\mu_{1}^{2}cov_{\mathbb{P}}(\tilde{W}_{i},X_{i})\big(cov_{\mathbb{P}}(\tilde{W}_{i},X_{i})cov_{\mathbb{P}}(X_{i},1/X_{i})-2cov_{\mathbb{P}}(\tilde{W}_{i},1/X_{i})\big)\big\}\)=\((\beta^{2}\mu_{1})^{2}(\xi-2\mu_{1})((\xi+2\mu_{1})\zeta_{1}-2\zeta_{2})\). Here, \(\zeta_{1}\)=\(1-\mu_{1}\mu_{-1}\) and \(\zeta_{2}\)=\(\mu_{1}-\mu_{2}\mu_{-1}\). Note that \((\xi+2\mu_{1})\zeta_{1}-2\zeta_{2}\)=\(\xi\zeta_{1}+2\mu_{-1}\) and \(\zeta_{1}<0\). Therefore, \(\big\{\mu_{1}^{2}cov_{\mathbb{P}}(\tilde{W}_{i},X_{i})\big(cov_{\mathbb{P}}(\tilde{W}_{i},X_{i})cov_{\mathbb{P}}(X_{i},1/X_{i})-2cov_{\mathbb{P}}(\tilde{W}_{i},1/X_{i})\big)\big\}<0\) if \(\xi>2\max\{\mu_{1},\mu_{-1}/(\mu_{1}\mu_{-1}-1)\}\). Hence, \(\Delta_{7}^{2}-\Delta_{5}^{2}<0\)_a.s._\([\mathbb{P}]\).
This completes the proof of \((ii)\) in Theorem 6.

Proof of Theorem 7.: Using the same line of arguments as in the \(1^{st}\) paragraph of the proof of Theorem 4, it can be shown that under SRSWOR and LMS sampling design, conclusions of Theorems 1 and 3 hold _a.s._\([\mathbb{P}]\) for \(d\)=2, \(p\)=5, \(h(z_{1},z_{2})\)=\((z_{1},z_{2},z_{1}^{2},z_{2}^{2},z_{1}z_{2})\) and \(g(s_{1},s_{2},s_{3},s_{4},s_{5})\)=\((s_{5}-s_{1}s_{2})/((s_{3}-s_{1}^{2})(s_{4}-s_{2}^{2}))^{1/2}\) in the case of the correlation coefficient between \(z_{1}\) and \(z_{2}\), and for \(d\)=2, \(p\)=4, \(h(z_{1},z_{2})\)=\((z_{1},z_{2},z_{2}^{2},z_{1}z_{2})\) and \(g(s_{1},s_{2},s_{3},s_{4})\)=\((s_{4}-s_{1}s_{2})/(s_{3}-s_{2}^{2})\) in the case of the regression coefficient of \(z_{1}\) on \(z_{2}\) in the same way as conclusions of Theorems 1 and 3 hold _a.s._\([\mathbb{P}]\) for \(d\)=\(p\)=1, \(h(y)\)=\(y\) and \(g(s)\)=\(s\) in the case of the mean of \(y\) in the \(1^{st}\) paragraph of the proof of Theorem 4. Further, if C0 holds with \(0\leq\lambda<E_{\mathbb{P}}(X_{i})/b\), then using similar arguments as in the \(2^{nd}\) paragraph of the proof of Theorem 4, it can also be shown that under RHC and any HE\(\pi\)PS sampling designs, conclusions of Theorems 2 and 3 hold _a.s._\([\mathbb{P}]\) for \(d\)=2, \(p\)=5, \(h(z_{1},z_{2})\)=\((z_{1},z_{2},z_{1}^{2},z_{2}^{2},z_{1}z_{2})\) and \(g(s_{1},s_{2},s_{3},s_{4},s_{5})\)=\((s_{5}-s_{1}s_{2})/((s_{3}-s_{1}^{2})(s_{4}-s_{2}^{2}))^{1/2}\) in the case of the correlation coefficient between \(z_{1}\) and \(z_{2}\), and for \(d\)=2, \(p\)=4, \(h(z_{1},z_{2})\)=\((z_{1},z_{2},z_{2}^{2},z_{1}z_{2})\) and \(g(s_{1},s_{2},s_{3},s_{4})\)=\((s_{4}-s_{1}s_{2})/(s_{3}-s_{2}^{2})\) in the case of the regression coefficient of \(z_{1}\) on \(z_{2}\) in the same way as conclusions of Theorems 2 and 3 hold _a.s._\([\mathbb{P}]\) for \(d\)=\(p\)=1, \(h(y)\)=\(y\) and \(g(s)\)=\(s\) in the case of the mean of \(y\) in the \(2^{nd}\) paragraph of the proof of Theorem 4. Note that \(W_{i}\)=\(R_{12}[(\overline{Z}_{1}/S_{1}^{2}-\overline{Z}_{2}/S_{12})Z_{1i}+(\overline{Z}_{2}/S_{2}^{2}-\overline{Z}_{1}/S_{12})Z_{2i}-Z_{1i}^{2}/2S_{1}^{2}-Z_{2i}^{2}/2S_{2}^{2}+Z_{1i}Z_{2i}/S_{12}]\) for the correlation coefficient, and \(W_{i}\)=\((1/S_{2}^{2})[-\overline{Z}_{2}Z_{1i}-(\overline{Z}_{1}-2S_{12}\overline{Z}_{2}/S_{2}^{2})Z_{2i}-S_{12}Z_{2i}^{2}/S_{2}^{2}+Z_{1i}Z_{2i}]\) for the regression coefficient. Here, \(\overline{Z}_{1}\)=\(\sum_{i=1}^{N}Z_{1i}/N\), \(\overline{Z}_{2}\)=\(\sum_{i=1}^{N}Z_{2i}/N\), \(S_{1}^{2}\)=\(\sum_{i=1}^{N}Z_{1i}^{2}/N-\overline{Z}_{1}^{2}\), \(S_{2}^{2}\)=\(\sum_{i=1}^{N}Z_{2i}^{2}/N-\overline{Z}_{2}^{2}\), \(S_{12}\)=\(\sum_{i=1}^{N}Z_{1i}Z_{2i}/N-\overline{Z}_{1}\overline{Z}_{2}\) and \(R_{12}\)=\(S_{12}/S_{1}S_{2}\). Also, note that since \(E_{\mathbb{P}}||\epsilon_{i}||^{8}<\infty\), the \(\Delta_{i}^{2}\)'s in Table 3 in the main text can be expressed in terms of superpopulation moments of \((h(Z_{1i},Z_{2i}),X_{i})\)_a.s._\([\mathbb{P}]\) for both the parameters by SLLN.
Further, for the above parameters, we have \(\Delta_{2}^{2}-\Delta_{1}^{2}\)=\(cov_{\mathbb{P}}^{2}(\tilde{W}_{i},X_{i})>0\) and \(\Delta_{7}^{2}-\Delta_{5}^{2}\)=\(\big{\{}\mu_{1}^{2}cov_{\mathbb{P}}(\tilde{W}_{i},X_{i})\big{(}cov_{ \mathbb{P}}(\tilde{W}_{i},X_{i})cov_{\mathbb{P}}(X_{i},\,1/X_{i})-2cov_{\mathbb{ P}}(\tilde{W}_{i},1/X_{i})\big{)}\big{\}}-\lambda^{2}cov_{\mathbb{P}}^{2}(\tilde{W}_{i},X_{i})/ \chi\mu_{1}\)\(-\lambda cov_{\mathbb{P}}^{2}(\tilde{W}_{i},X_{i})\leq\big{\{}\mu_{1}^{2}cov_{ \mathbb{P}}(\tilde{W}_{i},X_{i})\big{(}cov_{\mathbb{P}}(\tilde{W}_{i},X_{i})\times cov _{\mathbb{P}}(X_{i},1/X_{i})-2cov_{\mathbb{P}}(\tilde{W}_{i},1/X_{i})\big{)}\big{\}}\)_a.s._\([\mathbb{P}]\), where \(\tilde{W}_{i}\) is the same as \(W_{i}\) with all finite population moments in the expression of \(W_{i}\) replaced by their corresponding superpopulation moments. Also, from the linear model set up, we have \(\big{\{}\mu_{1}^{2}cov_{\mathbb{P}}(\tilde{W_{i}},X_{i})\big{(}cov_{\mathbb{P}}( \tilde{W_{i}},X_{i})cov_{\mathbb{P}}(X_{i},\)\(1/X_{i})-2cov_{\mathbb{P}}(\tilde{W_{i}},1/X_{i})\big{)}\big{\}}\)=\(K(\xi-2\mu_{1})((\xi+2\mu_{1})\zeta_{1}-2\zeta_{2})\) for some constant \(K>0\) in the case of the correlation coefficient, and \(\big{\{}\mu_{1}^{2}cov_{\mathbb{P}}(\tilde{W_{i}},X_{i})\times\big{(}cov_{ \mathbb{P}}(\tilde{W_{i}},X_{i})cov_{\mathbb{P}}(X_{i},1/X_{i})-2cov_{\mathbb{P }}(\tilde{W_{i}},1/X_{i})\big{)}\big{\}}=\)\(K^{\prime}(\xi-2\mu_{1})((\xi+2\mu_{1})\zeta_{1}-2\zeta_{2})\) for some constant \(K^{\prime}>0\) in the case of the regression coefficient. Thus proofs of both the parts of the theorem follow in the same way as the proof of Theorem 6. ## 4 Comparison of estimators with their bias-corrected versions In this section, we empirically compare the biased estimators considered in Table 5 in Section 4 of the main text with their bias-corrected versions based on both synthetic and real data used in Section 4. Following the idea in Stefan and Hidiroglou (2022), we consider the bias-corrected jackknife estimator corresponding to each of the biased estimators considered in Table 5 of the main article. For the mean, we consider the bias-corrected jackknife estimators corresponding to the GREG and the PEML estimators under each of SRSWOR, RS and RHC sampling designs, and the Hajek estimator under RS sampling design. On the other hand, for each of the variance, the correlation coefficient and the regression coefficient, we consider the bias-corrected jackknife estimators corresponding to the estimators that are obtained by plugging in the Hajek and the PEML estimators under each of SRSWOR and RS sampling design, and the PEML estimator under RHC sampling design. Suppose that \(s\) is a sample of size \(n\) drawn using one of the sampling designs given in Table 5 of the main text. Further, suppose that \(s_{-i}\) is the subset of \(s\), which excludes the \(i^{th}\) unit for any given \(i\in s\). Now, for any \(i\in s\), let us denote the estimator \(g(\hat{\overline{h}})\) constructed based on \(s_{-i}\) by \(g(\hat{\overline{h}}_{-i})\). Then, we compute the bias-corrected jackknife estimator of \(g(\overline{h})\) corresponding to \(g(\hat{\overline{h}})\) as \(ng(\hat{\overline{h}})-(n-1)\sum_{i\in s}g(\hat{\overline{h}}_{-i})/n\) (cf. Stefan and Hidiroglou (2022)). 
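In code, the bias-corrected jackknife estimator above takes the following form. The sketch applies it to the plug-in variance estimator with \(h(y)\)=\((y^{2},y)\) and \(g(s_{1},s_{2})\)=\(s_{1}-s_{2}^{2}\) built from Hajek-weighted means; simply dropping the deleted unit and reusing the remaining weights is a simplifying assumption here (the precise leave-one-out recomputation follows Stefan and Hidiroglou (2022)), so this is an illustration rather than the exact procedure.

```python
import numpy as np

def bias_corrected_jackknife(g_full, g_loo):
    """n * g(full sample) - (n - 1) * average of the leave-one-out estimates."""
    n = len(g_loo)
    return n * g_full - (n - 1) * np.mean(g_loo)

def hajek_variance(y, w):
    """Plug-in estimator of the variance: g(s1, s2) = s1 - s2^2, where s1 and
    s2 are Hajek-weighted sample means of y^2 and y, respectively."""
    s1 = np.sum(w * y ** 2) / np.sum(w)
    s2 = np.sum(w * y) / np.sum(w)
    return s1 - s2 ** 2

def bc_jackknife_variance(y, w):
    """Bias-corrected jackknife version of the plug-in variance estimator;
    deleting unit i simply drops (y_i, w_i), a simplifying assumption."""
    g_full = hajek_variance(y, w)
    g_loo = np.array([hajek_variance(np.delete(y, i), np.delete(w, i))
                      for i in range(len(y))])
    return bias_corrected_jackknife(g_full, g_loo)
```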
Recall from Section 4 in the main article that we draw \(I\)=1000 samples each of sizes \(n\)=75, 100 and 125 from some synthetic as well as real datasets using sampling designs mentioned in Table 5 and compute MSEs of the estimators considered in Table 5 based on these samples. Here, we compute MSEs of the above-mentioned bias-corrected jackknife estimators using the same procedure and compare them with the original biased estimators in terms of their MSEs. We observe from the above analyses that for all the parameters considered in Section 4 of the main text, the bias-corrected jackknife estimators become worse than the original biased estimators in the cases of both the synthetic and the real data (see Tables 2 through 6 and 12 through 21 in Sections 5 and 6 below). Despite reducing the biases of the original biased estimators, bias-correction increases the variances of these estimators significantly. This is the reason why the bias-corrected jackknife estimators have larger MSEs than the original biased estimators in the cases of both the synthetic and the real data. ## 5 Analysis based on synthetic data The results obtained from the analysis carried out in Section 4.1 of the main paper and Section 4 in this supplement are summarized in these sections. Here, we provide some tables that were mentioned in these sections. Tables 2 through 6 contain relative efficiencies of estimators for the mean, the variance, the correlation coefficient and the regression coefficient in the population. Tables 7 through 11 contain the average and the standard deviation of lengths of asymptotically 95% CIs of the above parameters. \begin{table} \begin{tabular}{|c|c|c|c|} \hline \multicolumn{1}{|c|}{ \begin{tabular}{c} Relative efficiency \\ \end{tabular} } & \multicolumn{1}{|c|}{\(n\)=75} & \multicolumn{1}{|c|}{\(n\)=100} & \multicolumn{1}{|c|}{\(n\)=125} \\ \hline \hline RE(\(\widehat{\overline{Y}}_{PEML}\), SRSWOR \(\mid\widehat{\overline{Y}}_{GREG}\), SRSWOR) & 1.049985 & 1.020252 & 1.035038 \\ RE(\(\widehat{\overline{Y}}_{PEML}\), SRSWOR \(\mid\hat{\overline{Y}}_{H}\), RS) & 4.870516 & 5.370899 & 4.987635 \\ RE(\(\widehat{\overline{Y}}_{PEML}\), SRSWOR \(\mid\hat{\overline{Y}}_{HT}\), RS ) & 2.026734 & 2.061607 & 2.027386 \\ RE(\(\widehat{\overline{Y}}_{PEML}\), SRSWOR \(\mid\hat{\overline{Y}}_{PEML}\), RS) & 1.144439 & 1.124697 & 1.170224 \\ RE(\(\widehat{\overline{Y}}_{PEML}\), SRSWOR \(\mid\hat{\overline{Y}}_{GREG}\), RS) & 1.144455 & 1.124975 & 1.170267 \\ RE(\(\widehat{\overline{Y}}_{PEML}\), SRSWOR\(\mid\hat{\overline{Y}}_{RHC}\), RHC ) & 2.022378 & 1.978623 & 2.143015 \\ RE(\(\widehat{\overline{Y}}_{PEML}\), SRSWOR \(\mid\hat{\overline{Y}}_{PEML}\), RHC) & 1.089837 & 1.030332 & 1.094067 \\ RE(\(\widehat{\overline{Y}}_{PEML}\), SRSWOR \(\mid\hat{\overline{Y}}_{GREG}\), RHC) & 1.089853 & 1.030587 & 1.094108 \\ \hline RE(\(\widehat{\overline{Y}}_{PEML}\), SRSWOR \(\mid\)\(\widehat{\overline{Y}}_{BCPEML}\), SRSWOR) & 1.050461 & 1.021275 & 1.038282 \\ RE(\(\widehat{\overline{Y}}_{GREG}\), SRSWOR \(\mid\)\(\widehat{\overline{Y}}_{BCGREG}\), SRSWOR) & 1.002649 & 1.003156 & 1.005397 \\ RE(\(\widehat{\overline{Y}}_{H}\), RS \(\mid\)\(\widehat{\overline{Y}}_{BCH}\), RS) & 1.036379 & 1.006945 & 1.12841 \\ RE(\(\widehat{\overline{Y}}_{PEML}\), RS \(\mid\)\(\widehat{\overline{Y}}_{BCPEML}\), RS) & 1.016953 & 1.013402 & 1.011762 \\ RE(\(\widehat{\overline{Y}}_{GREG}\), RS \(\mid\)\(\widehat{\overline{Y}}_{BCGREG}\), RS) & 1.016692 & 1.011597 & 1.011493 \\ RE(\(\widehat{\overline{Y}}_{PEML}\), RHC 
\(\mid\)\(\widehat{\overline{Y}}_{BCPEML}\), RHC) & 1.01914 & 1.02292 & 1.024689 \\ RE(\(\widehat{\overline{Y}}_{GREG}\), RHC \(\mid\)\(\widehat{\overline{Y}}_{BCGREG}\), RHC) & 1.011583 & 1.052311 & 1.023058 \\ \hline \end{tabular} * BCPEML=Bias-corrected PEML estimator, BCH=Bias-corrected Hajek estimator, and BCGREG=Bias-corrected GREG estimator. \end{table} Table 2: Relative efficiencies of estimators for mean of \(y\). \begin{table} \begin{tabular}{|c|c|c|c|} \hline \(h(z_{1},z_{2})\)=\((z_{1},z_{2},z_{1}^{2},z_{2}^{2},z_{1}z_{2})\) and \(g(s_{1},s_{2},s_{3},s_{4},s_{5})\)=\((s_{5}-s_{1}s_{2})/((s_{3}-s_{1}^{2})(s_{4}-s_{2}^{2}))^{1/2}\). \\ \hline \multicolumn{3}{|c|}{Sample size} & \(n\)=75 & \(n\)=100 & \(n\)=125 \\ Relative efficiency & & & \\ \hline \(\text{RE}(g(\widehat{\overline{h}}_{PEML})\), SRSWOR \(\mid g(\widehat{\overline{h}}_{H})\), SRSWOR) & 1.0304 & 1.0274 & 1.0385 \\ \(\text{RE}(g(\widehat{\overline{h}}_{PEML})\), SRSWOR \(\mid g(\widehat{\overline{h}}_{H})\), RS) & 1.0307 & 1.0838 & 1.0515 \\ \(\text{RE}(g(\widehat{\overline{h}}_{PEML})\), SRSWOR \(\mid g(\widehat{\overline{h}}_{PEML})\), RS) & 1.0573 & 1.1862 & 1.1081 \\ \(\text{RE}(g(\widehat{\overline{h}}_{PEML})\), SRSWOR \(\mid g(\widehat{\overline{h}}_{PEML})\), RHC) & 1.0847 & 1.1459 & 1.0911 \\ \hline \(\text{RE}(g(\widehat{\overline{h}}_{PEML})\), SRSWOR \(\mid{}^{2}\) BC \(g(\widehat{\overline{h}}_{PEML})\), SRSWOR) & 89.989 & 95.299 & 123.89 \\ \(\text{RE}(g(\widehat{\overline{h}}_{H})\), SRSWOR \(\mid{}^{2}\) BC \(g(\widehat{\overline{h}}_{H})\), SRSWOR) & 90.407 & 96.79 & 141.989 \\ \(\text{RE}(g(\widehat{\overline{h}}_{H})\), RS \(\mid{}^{2}\) BC \(g(\widehat{\overline{h}}_{H})\), RS) & 90.037 & 102.914 & 152.993 \\ \(\text{RE}(g(\widehat{\overline{h}}_{PEML})\), RS \(\mid{}^{2}\) BC \(g(\widehat{\overline{h}}_{PEML})\), RS) & 95.68 & 98.758 & 158.832 \\ \(\text{RE}(g(\widehat{\overline{h}}_{PEML})\), RHC \(\mid{}^{2}\) BC \(g(\widehat{\overline{h}}_{PEML})\), RHC) & 86.27 & 120.582 & 125.374 \\ \hline \end{tabular} \end{table} Table 4: Relative efficiencies of estimators for correlation coefficient between \(z_{1}\) and \(z_{2}\). Recall from Table 4 in Section 2 that for correlation coefficient between \(z_{1}\) and \(z_{2}\), \(h(z_{1},z_{2})\)=\((z_{1},z_{2},z_{1}^{2},z_{2}^{2},z_{1}z_{2})\) and \(g(s_{1},s_{2},s_{3},s_{4},s_{5})\)=\((s_{5}-s_{1}s_{2})/((s_{3}-s_{1}^{2})(s_{4}-s_{2}^{2}))^{1/2}\). 
\begin{table} \begin{tabular}{|c|c|c|c|} \hline \(\text{Relative efficiency}\) & \(n\)=75 & \(n\)=100 & \(n\)=125 \\ \hline \(\text{RE}(g(\widehat{\overline{h}}_{PEML})\), SRSWOR \(\mid g(\widehat{\overline{h}}_{H})\), SRSWOR) & 1.0926 & 1.0848 & 1.0419 \\ \(\text{RE}(g(\widehat{\overline{h}}_{PEML})\), SRSWOR \(\mid g(\widehat{\overline{h}}_{H})\), RS) & 1.0367 & 1.0435 & 1.0226 \\ \(\text{RE}(g(\widehat{\overline{h}}_{PEML})\), SRSWOR \(\mid g(\widehat{\overline{h}}_{PEML})\), RS) & 1.15067 & 1.136 & 1.1635 \\ \(\text{RE}(g(\widehat{\overline{h}}_{PEML})\), SRSWOR \(\mid g(\widehat{\overline{h}}_{PEML})\), RHC) & 1.141 & 1.1849 & 1.1631 \\ \hline \(\text{RE}(g(\widehat{\overline{h}}_{PEML})\), SRSWOR \(\mid{}^{2}\) BC \(g(\widehat{\overline{h}}_{PEML})\), SRSWOR) & 1.0208 & 1.01 & 1.0669 \\ \(\text{RE}(g(\widehat{\overline{h}}_{H})\), SRSWOR \(\mid{}^{2}\) BC \(g(\widehat{\overline{h}}_{H})\), SRSWOR) & 38.642 & 50.009 & 65.398 \\ \(\text{RE}(g(\widehat{\overline{h}}_{H})\), RS \(\mid{}^{2}\) BC \(g(\widehat{\overline{h}}_{H})\), RS) & 1.0029 & 1.0117 & 1.074 \\ \(\text{RE}(g(\widehat{\overline{h}}_{PEML})\), RS \(\mid{}^{2}\) BC \(g(\widehat{\overline{h}}_{PEML})\), RS) & 1.0112 & 1.023 & 1.0377 \\ \(\text{RE}(g(\widehat{\overline{h}}_{PEML})\), RHC \(\mid{}^{2}\) BC \(g(\widehat{\overline{h}}_{PEML})\), RHC) & 1.0141 & 1.015 & 1.0126 \\ \hline \end{tabular} \end{table} Table 3: Relative efficiencies of estimators for variance of \(y\). Recall from Table 4 in Section 2 that for variance of \(y\), \(h(y)\)=\((y^{2},y)\) and \(g(s_{1},s_{2})\)=\(s_{1}-s_{2}^{2}\). \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline \multicolumn{2}{|c|}{Table 4 in Section 2 that for regression coefficient of \(z_{2}\) on \(z_{1}\), \(h(z_{1},z_{2})\)=\((z_{2},z_{1},z_{1}^{2},z_{1}z_{2})\) and \(g(s_{1},s_{2},s_{3},s_{4})\)=\((s_{4}-s_{1}s_{2})/(s_{3}-s_{2}^{2})\).} \\ \hline \multicolumn{2}{|c|}{ \begin{tabular}{c} Relative efficiency \\ \end{tabular} } & \multicolumn{1}{c|}{\(n\)=75} & \multicolumn{1}{c|}{\(n\)=100} & \multicolumn{1}{c|}{\(n\)=125} \\ \hline \(\text{RE}(g(\widehat{\overline{h}}_{PEML})\), SRSWOR \(\mid g(\widehat{\overline{h}}_{H})\), SRSWOR) & 1.0498 & 1.04 & 1.0301 \\ \(\text{RE}(g(\widehat{\overline{h}}_{PEML})\), SRSWOR \(\mid g(\widehat{\overline{h}}_{H})\), RS) & 1.0655 & 1.0652 & 1.0548 \\ \(\text{RE}(g(\widehat{\overline{h}}_{PEML})\), SRSWOR \(\mid g(\widehat{\overline{h}}_{PEML})\), RS) & 1.1073 & 1.1153 & 1.1135 \\ \(\text{RE}(g(\widehat{\overline{h}}_{PEML})\), SRSWOR \(\mid g(\widehat{\overline{h}}_{PEML})\), RHC) & 1.0762 & 1.0905 & 1.1108 \\ \hline \(\text{RE}(g(\widehat{\overline{h}}_{PEML})\), SRSWOR \(\mid{}^{2}\) BC \(g(\widehat{\overline{h}}_{PEML})\), SRSWOR) & 72.061 & 105.389 & 111.124 \\ \(\text{RE}(g(\widehat{\overline{h}}_{H})\), SRSWOR \(\mid{}^{2}\) BC \(g(\widehat{\overline{h}}_{H})\), SRSWOR) & 69.114 & 108.837 & 118.675 \\ \(\text{RE}(g(\widehat{\overline{h}}_{H})\), RS \(\mid{}^{2}\) BC \(g(\widehat{\overline{h}}_{H})\), RS) & 69.16 & 115.113 & 144.811 \\ \(\text{RE}(g(\widehat{\overline{h}}_{PEML})\), RS \(\mid{}^{2}\) BC \(g(\widehat{\overline{h}}_{PEML})\), RS) & 72.448 & 127.387 & 131.558 \\ \(\text{RE}(g(\widehat{\overline{h}}_{PEML})\), RHC \(\mid{}^{2}\) BC \(g(\widehat{\overline{h}}_{PEML})\), RHC) & 90.132 & 104.121 & 148.139 \\ \hline \end{tabular} \end{table} Table 6: Relative efficiencies of estimators for regression coefficient of \(z_{2}\) on \(z_{1}\). 
Recall from Table 4 in Section 2 that for regression coefficient of \(z_{2}\) on \(z_{1}\), \(h(z_{1},z_{2})\)=\((z_{2},z_{1},z_{1}^{2},z_{1}z_{2})\) and \(g(s_{1},s_{2},s_{3},s_{4})\)=\((s_{4}-s_{1}s_{2})/(s_{3}-s_{2}^{2})\). \begin{table} \begin{tabular}{|c|c|c|c|} \hline \multicolumn{2}{|c|}{ \begin{tabular}{c} Relative efficiency \\ \end{tabular} } & \multicolumn{1}{c|}{\(n\)=75} & \multicolumn{1}{c|}{\(n\)=100} & \multicolumn{1}{c|}{\(n\)=125} \\ \hline \(\text{RE}(g(\widehat{\overline{h}}_{PEML})\), SRSWOR \(\mid g(\widehat{\overline{h}}_{H})\), SRSWOR) & 1.0389 & 1.0473 & 1.0218 \\ \(\text{RE}(g(\widehat{\overline{h}}_{PEML})\), SRSWOR \(\mid g(\widehat{\overline{h}}_{H})\), RS) & 1.0589 & 1.0829 & 1.0827 \\ \(\text{RE}(g(\widehat{\overline{h}}_{PEML})\), SRSWOR \(\mid g(\widehat{\overline{h}}_{PEML})\), RS) & 1.1219 & 1.1334 & 1.2137 \\ \(\text{RE}(g(\widehat{\overline{h}}_{PEML})\), SRSWOR \(\mid g(\widehat{\overline{h}}_{PEML})\), RHC) & 1.2037 & 1.1307 & 1.1399 \\ \hline \(\text{RE}(g(\widehat{\overline{h}}_{PEML})\), SRSWOR \(\mid{}^{2}\) BC \(g(\widehat{\overline{h}}_{PEML})\), SRSWOR) & 80.64 & 91.707 & 124.476 \\ \(\text{RE}(g(\widehat{\overline{h}}_{H})\), SRSWOR \(\mid{}^{2}\) BC \(g(\widehat{\overline{h}}_{H})\), SRSWOR) & 79.298 & 89.105 & 123.042 \\ \(\text{RE}(g(\widehat{\overline{h}}_{RS})\), RS \(\mid{}^{2}\) BC \(g(\widehat{\overline{h}}_{H})\), RS) & 85.97 & 96.22 & 135.449 \\ \(\text{RE}(g(\widehat{\overline{h}}_{PEML})\), RS \(\mid{}^{2}\) BC \(g(\widehat{\overline{h}}_{PEML})\), RS) & 83.331 & 97.583 & 125.657 \\ \(\text{RE}(g(\widehat{\overline{h}}_{PEML})\), RHC \(\mid{}^{2}\) BC \(g(\widehat{\overline{h}}_{PEML})\), RHC) & 75.343 & 112.619 & 115.594 \\ \hline \end{tabular} \end{table} Table 5: Relative efficiencies of estimators for regression coefficient of \(z_{1}\) on \(z_{2}\). Recall from Table 4 in Section 2 that for regression coefficient of \(z_{1}\) on \(z_{2}\), \(h(z_{1},z_{2})\)=\((z_{1},z_{2},z_{2}^{2},z_{1}z_{2})\) and \(g(s_{1},s_{2},s_{3},s_{4})\)=\((s_{4}-s_{1}s_{2})/(s_{3}-s_{2}^{2})\). \begin{table} \begin{tabular}{|l|c|c|c|} \hline & & \multicolumn{2}{c|}{Average length} \\ & & \multicolumn{2}{c|}{(Standard deviation)} \\ \hline Estimator and & Sample size & & \\ sampling design & & \(n\)=75 & \(n\)=100 & \(n\)=125 \\ based on which CI is constructed & & & \\ \hline \(\hat{\overline{Y}}_{H}\), SRSWOR & 536.821 & 538.177 & 539.218 \\ & (11.357) & (9.0784) & (6.8211) \\ & 44.824 & 38.81 & 34.648 \\ \({}^{3}\,\hat{\overline{Y}}_{PEML}\), SRSWOR & (3.7002) & (2.7727) & (2.2055) \\ & 689.123 & 597.999 & 535.951 \\ \(\hat{\overline{Y}}_{HT}\), RS & (7.8452) & (5.7176) & (4.8422) \\ & 102.611 & 87.915 & 59.98307 \\ \(\hat{\overline{Y}}_{H}\), RS & (10.969) & (8.453) & (6.5828) \\ & 345.956 & 115.944 & 78.711 \\ \({}^{3}\,\hat{\overline{Y}}_{PEML}\), RS & (654.77) & (265.93) & (1041.2) \\ & 848.033 & 624.881 & 541.421 \\ \(\hat{\overline{Y}}_{RHC}\), RHC & (6.8489) & (4.9609) & (4.0927) \\ & 64.573 & 56.531 & 50.601 \\ \({}^{3}\,\hat{\overline{Y}}_{PEML}\), RHC & (715.16) & (275.11) & (651.31) \\ \hline \end{tabular} \end{table} Table 7: Average and standard deviation of lengths of asymptotically 95% CIs for mean of \(y\). 
\begin{table} \begin{tabular}{|c|c|c|c|} \hline & \multicolumn{3}{|c|}{Average length} \\ & \multicolumn{3}{|c|}{(Standard deviation)} \\ \hline Estimator and sampling design & Sample size & & & \\ sampling design & \(n\)=75 & \(n\)=100 & \(n\)=125 \\ based on which CI & & & \\ is constructed & & & \\ \hline \(g(\widehat{\overline{h}}_{H})\), SRSWOR & 1010775 & 878689.4 & 786228 \\ & (34245.5) & (26373.9) & (20414.5) \\ \(g(\widehat{\overline{h}}_{PEML})\), SRSWOR & 29432.4 & 25929 & 23422 \\ & (6076.97) & (4441.2) & (3526.8) \\ & 444594.4 & 434160.7 & 239065 \\ \(g(\widehat{\overline{h}}_{H})\), RS & (44701.7) & (31965.2) & (26739.6) \\ & 1152403 & 1290084 & 235909.1 \\ \(g(\widehat{\overline{h}}_{PEML})\), RS & (9083944) & (869339.1) & (1183961) \\ & 1031407 & 895639 & 801178.9 \\ \(g(\widehat{\overline{h}}_{PEML})\), RHC & (7311193) & (1530759) & (417582.9) \\ \hline \end{tabular} \end{table} Table 8: Average and standard deviation of lengths of asymptotically 95% CIs for variance of \(y\). Recall from Table 4 in Section 2 that for variance of \(y\), \(h(y_{1})\)=\((y^{2},y)\) and \(g(s_{1},s_{2})\)=\(s_{1}-s_{2}^{2}\). \begin{table} \begin{tabular}{|c|c|c|c|} \hline & \multicolumn{3}{|c|}{Average length} \\ & \multicolumn{3}{|c|}{(Standard deviation)} \\ \hline Estimator and sampling design & Sample size & \(n\)=75 & \(n\)=100 & \(n\)=125 \\ based on which CI is constructed & & & \\ \hline \(g(\hat{\overline{h}}_{H})\), SRSWOR & 8.2191 & 8.0909 & 8.0897 \\ & (2.429) & (1.889) & (1.449) \\ \(g(\hat{\overline{h}}_{PEML})\), SRSWOR & 0.2542 & 0.2575 & 0.2583 \\ & (0.0467) & (0.0365) & (0.0294) \\ & 4.6847 & 3.3135 & 1.3942 \\ \(g(\hat{\overline{h}}_{H})\), RS & (2.555) & (1.884) & (1.421) \\ & 5.0473 & 4.3229 & 3.1306 \\ \(g(\hat{\overline{h}}_{PEML})\), RS & (162.9) & (17.19) & (21.04) \\ & 8.3174 & 8.3898 & 8.3514 \\ \(g(\hat{\overline{h}}_{PEML})\), RHC & (15.82) & (41.88) & (19.62) \\ \hline \end{tabular} \end{table} Table 9: Average and standard deviation of lengths of asymptotically 95% CIs for correlation coefficient between \(z_{1}\) and \(z_{2}\). Recall from Table 4 in Section 2 that for correlation coefficient between \(z_{1}\) and \(z_{2}\), \(h(z_{1},z_{2})\)=\((z_{1},z_{2},z_{1}^{2},z_{2}^{2},z_{1}z_{2})\) and \(g(s_{1},s_{2},s_{3},s_{4},s_{5})\)=\((s_{5}-s_{1}s_{2})/((s_{3}-s_{1}^{2})(s_{4}-s_{2}^{2}))^{1/2}\). \begin{table} \begin{tabular}{|c|c|c|c|} \hline & \multicolumn{3}{|c|}{Average length} \\ & \multicolumn{3}{|c|}{(Standard deviation)} \\ \hline \multicolumn{3}{|c|}{Sample size} \\ Estimator and sampling design & \(n\)=75 & \(n\)=100 & \(n\)=125 \\ based on which CI is constructed & & & \\ \hline \(g(\hat{\overline{h}}_{H})\), SRSWOR & 5.9565 & 5.068 & 4.4818 \\ & (2.013) & (1.514) & (1.135) \\ \(g(\hat{\overline{h}}_{PEML})\), SRSWOR & 0.2596 & 0.2251 & 0.2032 \\ & (0.0429) & (0.0324) & (0.025) \\ \(g(\hat{\overline{h}}_{H})\), RS & 3.0488 & 1.469 & 1.1532 \\ & (2.178) & (1.517) & (1.171) \\ \(g(\hat{\overline{h}}_{PEML})\), RS & 3.6477 & 1.8558 & 1.4023 \\ & (19.09) & (4.697) & (4.672) \\ \(g(\hat{\overline{h}}_{PEML})\), RHC & 6.111 & 5.1324 & 4.6658 \\ & (25.16) & (38.36) & (11.17) \\ \hline \end{tabular} \end{table} Table 10: Average and standard deviation of lengths of asymptotically 95% CIs for regression coefficient of \(z_{1}\) on \(z_{2}\). 
Recall from Table 4 in Section 2 that for regression coefficient of \(z_{1}\) on \(z_{2}\), \(h(z_{1},z_{2})\)=\((z_{1},z_{2},z_{2}^{2},z_{1}z_{2})\) and \(g(s_{1},s_{2},s_{3},s_{4})\)=\((s_{4}-s_{1}s_{2})/(s_{3}-s_{2}^{2})\). \begin{table} \begin{tabular}{|c|c|c|c|} \hline & \multicolumn{3}{|c|}{Average length} \\ & \multicolumn{3}{|c|}{(Standard deviation)} \\ \hline Estimator and sampling design based on which CI is constructed & \(n\)=75 & \(n\)=100 & \(n\)=125 \\ \hline \(g(\hat{\overline{h}}_{H})\), SRSWOR & 11.2173 & 9.6463 & 8.5885 \\ & (3.238) & (2.418) & (1.877) \\ \(g(\hat{\overline{h}}_{PEML})\), SRSWOR & 0.4198 & 0.3652 & 0.3307 \\ & (0.0661) & (0.0531) & (0.0405) \\ \(g(\hat{\overline{h}}_{H})\), RS & 6.7247 & 3.3547 & 1.7421 \\ & (3.546) & (2.539) & (1.921) \\ \(g(\hat{\overline{h}}_{PEML})\), RS & 11.3373 & 9.988 & 8.7889 \\ & (151.9) & (31.83) & (7.405) \\ \(g(\hat{\overline{h}}_{PEML})\), RHC & 19.9049 & 3.5595 & 1.8327 \\ & (28.77) & (321.7) & (8.164) \\ \hline \end{tabular} \end{table} Table 11: Average and standard deviation of lengths of asymptotically 95% CIs for regression coefficient of \(z_{2}\) on \(z_{1}\). Recall from Table 4 in Section 2 that for regression coefficient of \(z_{2}\) on \(z_{1}\), \(h(z_{1},z_{2})\)=\((z_{2},z_{1},z_{1}^{2},z_{1}z_{2})\) and \(g(s_{1},s_{2},s_{3},s_{4})\)=\((s_{4}-s_{1}s_{2})/(s_{3}-s_{2}^{2})\).

## 6 Analysis based on real data

The results obtained from the analyses carried out in Section 4.2 of the main paper and Section 4 in this supplement are summarized in this section. Here, we provide some scatter plots and tables that were mentioned in these sections. Figures 1 through 4 present scatter plots and least square regression lines between different study and size variables drawn based on all the population values. Tables 12 through 21 contain relative efficiencies of estimators for the mean, the variance, the correlation coefficient and the regression coefficient in the population. Tables 22 through 31 contain the average and the standard deviation of lengths of asymptotically 95% CIs of the above parameters.
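For reference, the relative-efficiency entries in the tables of this section (and of the previous one) can be reproduced with a short Monte Carlo loop of the kind sketched below, using the \(I\)=1000 simulated samples described at the beginning of Section 5. The orientation of the ratio is an assumption on our part (values above 1 favouring the first estimator); the exact definition of RE is given in the main text, so this is an illustration only.

```python
import numpy as np

def empirical_mse(estimates, true_value):
    """MSE of an estimator over the I simulated samples (I = 1000 here)."""
    e = np.asarray(estimates)
    return np.mean((e - true_value) ** 2)

def relative_efficiency(estimates_1, estimates_2, true_value):
    """RE(estimator 1 | estimator 2): assumed here to be the ratio
    MSE(estimator 2) / MSE(estimator 1)."""
    return empirical_mse(estimates_2, true_value) / \
           empirical_mse(estimates_1, true_value)
```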
Figure 1: Scatter plot and least square regression line for variables \(y_{1}\) and \(x\) Figure 2: Scatter plot and least square regression line for variables \(y_{2}\) and \(x\) Figure 3: Scatter plot and least square regression line for variables \(y_{3}\) and \(x\) Figure 4: Scatter plot and least square regression line for variables \(y_{4}\) and \(x\) \begin{table} \begin{tabular}{|c|c|c|c|} \hline Relative efficiency & \(n\)=75 & \(n\)=100 & \(n\)=125 \\ \hline RE(\(g(\hat{\overline{h}}_{PEML})\), SRSWOR \(\mid g(\hat{\overline{h}}_{H})\), SRSWOR) & 1.3294 & 1.2413 & 1.1476 \\ RE(\(g(\hat{\overline{h}}_{PEML})\), SRSWOR \(\mid g(\hat{\overline{h}}_{H})\), RS) & 2.5303 & 1.6656 & 1.5374 \\ RE(\(g(\hat{\overline{h}}_{PEML})\), SRSWOR \(\mid g(\hat{\overline{h}}_{PEML})\), RS) & 3.1642 & 2.4051 & 2.5831 \\ RE(\(g(\hat{\overline{h}}_{PEML})\), SRSWOR \(\mid g(\hat{\overline{h}}_{PEML})\), RHC) & 2.5499 & 4.7704 & 3.0985 \\ \hline RE(\(g(\hat{\overline{h}}_{PEML})\), SRSWOR \(\mid{}^{2}\) BC \(g(\hat{\overline{h}}_{PEML})\), SRSWOR) & 1.1812 & 1.2736 & 1.8669 \\ RE(\(g(\hat{\overline{h}}_{H})\), SRSWOR \(\mid{}^{2}\) BC \(g(\hat{\overline{h}}_{H})\), SRSWOR) & 4.3526 & 4.8948 & 6.0349 \\ RE(\(g(\hat{\overline{h}}_{H})\), RS \(\mid{}^{2}\) BC \(g(\hat{\overline{h}}_{H})\), RS) & 1.115 & 1.1239 & 1.2269 \\ RE(\(g(\hat{\overline{h}}_{PEML})\), RS \(\mid{}^{2}\) BC \(g(\hat{\overline{h}}_{PEML})\), RS) & 1.4373 & 1.1739 & 1.6481 \\ RE(\(g(\hat{\overline{h}}_{PEML})\), RHC \(\mid{}^{2}\) BC \(g(\hat{\overline{h}}_{PEML})\), RHC) & 1.8502 & 1.0186 & 1.0384 \\ \hline \end{tabular} \end{table} Table 13: Relative efficiencies of estimators for variance of \(y_{1}\). Recall from Table 4 in Section 2 that for variance of \(y_{1}\), \(h(y_{1})\)=\((y_{1}^{2},y_{1})\) and \(g(s_{1},s_{2})\)=\(s_{1}-s_{2}^{2}\). 
\begin{table} \begin{tabular}{|c|c|c|c|} \hline Relative efficiency & Sample size & \(n\)=75 & \(n\)=100 & \(n\)=125 \\ \hline RE(\(\widehat{\hat{Y}}_{HT}\), RS \(|\ \widehat{\hat{Y}}_{H}\), RS) & 4.367712 & 4.008655 & 4.463214 \\ RE(\(\widehat{\hat{Y}}_{HT}\), RS \(|\ \widehat{\hat{Y}}_{PEML}\), RS) & 1.148074 & 1.082488 & 1.088804 \\ RE(\(\widehat{\hat{Y}}_{HT}\), RS \(|\ \widehat{\hat{Y}}_{GREG}\), RS) & 1.216958 & 1.115967 & 1.154132 \\ RE(\(\widehat{\hat{Y}}_{HT}\), RS \(|\ \widehat{\hat{Y}}_{RHC}\), RHC) & 1.073138 & 1.03213 & 1.07484 \\ RE(\(\widehat{\hat{Y}}_{HT}\), RS \(|\ \widehat{\hat{Y}}_{PEML}\), RHC) & 1.230884 & 1.0937 & 1.207308 \\ RE(\(\widehat{\hat{Y}}_{HT}\), RS \(|\ \widehat{\hat{Y}}_{GREG}\), RHC) & 1.304737 & 1.127526 & 1.279746 \\ RE(\(\widehat{\hat{Y}}_{HT}\), RS \(|\ \widehat{\hat{Y}}_{PEML}\), SRSWOR) & 2.440441 & 2.305339 & 2.350916 \\ RE(\(\widehat{\hat{Y}}_{HT}\), RS \(|\ \widehat{\hat{Y}}_{GREG}\), SRSWOR) & 2.58687 & 2.376638 & 2.49197 \\ \hline RE(\(\widehat{\hat{Y}}_{H}\), RS \(|\ ^{1}\widehat{\hat{Y}}_{BCH}\), RS) & 1.252123 & 1.325047 & 1.241809 \\ RE(\(\widehat{\hat{Y}}_{PEML}\), RS \(|\ ^{1}\widehat{\hat{Y}}_{BCPEML}\), RS) & 1.988105 & 2.146357 & 2.260343 \\ RE(\(\widehat{\hat{Y}}_{GREG}\), RS \(|\ ^{1}\widehat{\hat{Y}}_{BCGREG}\), RS) & 2.055588 & 2.018015 & 2.287817 \\ RE(\(\widehat{\hat{Y}}_{PEML}\), RHC \(|\ ^{1}\widehat{\hat{Y}}_{BCPEML}\), RHC) & 1.831377 & 2.083210 & 2.006134 \\ RE(\(\widehat{\hat{Y}}_{GREG}\), RHC \(|\ ^{1}\widehat{\hat{Y}}_{BCGREG}\), RHC) & 1.925938 & 1.983984 & 2.091003 \\ RE(\(\widehat{\hat{Y}}_{PEML}\), SRSWOR \(|\ ^{1}\widehat{\hat{Y}}_{BCPEML}\), SRSWOR) & 1.001786 & 1.004973 & 1.060588 \\ RE(\(\widehat{\hat{Y}}_{GREG}\), SRSWOR \(|\ ^{1}\widehat{\hat{Y}}_{BCGREG}\), SRSWOR) & 1.021103 & 1.008525 & 1.003390 \\ \hline \end{tabular} \end{table} Table 14: Relative efficiencies of estimators for mean of \(y_{2}\). \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline \multicolumn{4}{|c|}{Table 16: Relative efficiencies of estimators for correlation coefficient between \(y_{1}\) and \(y_{3}\). 
Recall from Table 4 in Section 2 that for correlation coefficient between \(y_{1}\) and \(y_{3}\), \(h(y_{1},y_{3})\)=\((y_{1},y_{3},y_{1}^{2},y_{3}^{2},y_{1}y_{3})\) and \(g(s_{1},s_{2},s_{3},s_{4},s_{5})\)=\((s_{5}-s_{1}s_{2})/((s_{3}-s_{1}^{2})(s_{4}-s_{2}^{2}))^{1/2}\).} \\ \hline Relative efficiency & \(n\)=75 & \(n\)=100 & \(n\)=125 \\ \hline RE(\(g(\widehat{\overline{h}}_{PEML})\), SRSWOR \(\mid g(\widehat{\overline{h}}_{H})\), SRSWOR) & 1.0967 & 1.0369 & 1.0374 \\ RE(\(g(\widehat{\overline{h}}_{PEML})\), SRSWOR \(\mid g(\widehat{\overline{h}}_{H})\), RS) & 1.317 & 1.4831 & 1.2561 \\ RE(\(g(\widehat{\overline{h}}_{PEML})\), SRSWOR \(\mid g(\widehat{\overline{h}}_{PEML})\), RS) & 1.9803 & 1.9874 & 1.8441 \\ RE(\(g(\widehat{\overline{h}}_{PEML})\), SRSWOR \(\mid g(\widehat{\overline{h}}_{PEML})\), RHC) & 2.0562 & 1.9651 & 1.8541 \\ \hline RE(\(g(\widehat{\overline{h}}_{PEML})\), SRSWOR \(\mid 2\) BC \(g(\widehat{\overline{h}}_{PEML})\), SRSWOR) & 23.149 & 51.887 & 45.976 \\ RE(\(g(\widehat{\overline{h}}_{H})\), SRSWOR \(\mid 2\) BC \(g(\widehat{\overline{h}}_{H})\), SRSWOR) & 90.769 & 163.74 & 154.97 \\ RE(\(g(\widehat{\overline{h}}_{H})\), RS \(\mid 2\) BC \(g(\widehat{\overline{h}}_{H})\), RS) & 72.604 & 79.355 & 163.03 \\ RE(\(g(\widehat{\overline{h}}_{PEML})\), RS \(\mid 2\) BC \(g(\widehat{\overline{h}}_{PEML})\), RS) & 24.483 & 35.874 & 43.164 \\ RE(\(g(\widehat{\overline{h}}_{PEML})\), RHC \(\mid 2\) BC \(g(\widehat{\overline{h}}_{PEML})\), RHC) & 29.189 & 65.949 & 43.13 \\ \hline \end{tabular} \end{table} Table 16: Relative efficiencies of estimators for correlation coefficient between \(y_{1}\) and \(y_{3}\). Recall from Table 4 in Section 2 that for correlation coefficient between \(y_{1}\) and \(y_{3}\), \(h(y_{1},y_{3})\)=\((y_{1},y_{3},y_{1}^{2},y_{3}^{2},y_{1}y_{3})\) and \(g(s_{1},s_{2},s_{3},s_{4},s_{5})\)=\((s_{5}-s_{1}s_{2})/((s_{3}-s_{1}^{2})(s_{4}-s_{2}^{2}))^{1/2}\). \begin{table} \begin{tabular}{|c|c|c|c|} \hline \multicolumn{4}{|c|}{Sample size} & \(n\)=75 & \(n\)=100 & \(n\)=125 \\ \hline RE(\(g(\widehat{\overline{h}}_{H})\), RS \(\mid g(\widehat{\overline{h}}_{PEML})\), RS) & 11.893 & 6.967 & 34.691 \\ RE(\(g(\widehat{\overline{h}}_{H})\), RS \(\mid g(\widehat{\overline{h}}_{PEML})\), RHC) & 5.0093 & 19.456 & 21.919 \\ RE(\(g(\widehat{\overline{h}}_{H})\), RS \(\mid g(\widehat{\overline{h}}_{H})\), SRSWOR) & 9.8232 & 10.27 & 16.763 \\ RE(\(g(\widehat{\overline{h}}_{H})\), RS \(\mid g(\widehat{\overline{h}}_{PEML})\), SRSWOR) & 2.4768 & 4.8093 & 6.2264 \\ \hline RE(\(g(\widehat{\overline{h}}_{H})\), RS \(\mid 2\) BC \(g(\widehat{\overline{h}}_{H})\), RS) & 13.301 & 6.3589 & 33.579 \\ RE(\(g(\widehat{\overline{h}}_{PEML})\), RS \(\mid 2\) BC \(g(\hat{\overline{h}}_{PEML})\), RS) & 4.448 & 7.4621 & 7.989 \\ RE(\(g(\widehat{\overline{h}}_{PEML})\), RHC \(\mid 2\) BC \(g(\hat{\overline{h}}_{PEML})\), RHC) & 21.855 & 3.0076 & 11.368 \\ RE(\(g(\widehat{\overline{h}}_{H})\), SRSWOR \(\mid 2\) BC \(g(\hat{\overline{h}}_{H})\), SRSWOR) & 8.7641 & 5.6119 & 13.7 \\ RE(\(g(\widehat{\overline{h}}_{PEML})\), SRSWOR \(\mid 2\) BC \(g(\hat{\overline{h}}_{PEML})\), SRSWOR) & 6.2655 & 2.0015 & 6.959 \\ \hline \end{tabular} \end{table} Table 15: Relative efficiencies of estimators for variance of \(y_{2}\). Recall from Table 4 in Section 2 that for variance of \(y_{2}\), \(h(y_{2})\)=\((y_{2}^{2},y_{2})\) and \(g(s_{1},s_{2})\)=\(s_{1}-s_{2}^{2}\). 
Table 18: Relative efficiencies of estimators for regression coefficient of \(y_{3}\) on \(y_{1}\). Recall from Table 4 in Section 2 that for regression coefficient of \(y_{3}\) on \(y_{1}\), \(h(y_{1},y_{3})=(y_{3},y_{1},y_{1}^{2},y_{1}y_{3})\) and \(g(s_{1},s_{2},s_{3},s_{4})=(s_{4}-s_{1}s_{2})/(s_{3}-s_{2}^{2})\).
Table 19: Relative efficiencies of estimators for correlation coefficient between \(y_{2}\) and \(y_{4}\). Recall from Table 4 in Section 2 that for correlation coefficient between \(y_{2}\) and \(y_{4}\), \(h(y_{2},y_{4})=(y_{2},y_{4},y_{2}^{2},y_{4}^{2},y_{2}y_{4})\) and \(g(s_{1},s_{2},s_{3},s_{4},s_{5})=(s_{5}-s_{1}s_{2})/((s_{3}-s_{1}^{2})(s_{4}-s_{2}^{2}))^{1/2}\).

| Relative efficiency | \(n\)=75 | \(n\)=100 | \(n\)=125 |
|---|---|---|---|
| RE(\(g(\hat{\overline{h}}_{H})\), RS \(\mid\) \(g(\hat{\overline{h}}_{PEML})\), RS) | 1.8158 | 2.3771 | 3.2021 |
| RE(\(g(\hat{\overline{h}}_{H})\), RS \(\mid\) \(g(\hat{\overline{h}}_{PEML})\), RHC) | 2.5985 | 2.6002 | 3.4744 |
| RE(\(g(\hat{\overline{h}}_{H})\), RS \(\mid\) \(g(\hat{\overline{h}}_{H})\), SRSWOR) | 3.3278 | 4.5041 | 6.312 |
| RE(\(g(\hat{\overline{h}}_{H})\), RS \(\mid\) \(g(\hat{\overline{h}}_{PEML})\), SRSWOR) | 2.9788 | 3.9417 | 6.0391 |
| RE(\(g(\hat{\overline{h}}_{H})\), RS \(\mid\) \({}^{2}\)BC \(g(\hat{\overline{h}}_{H})\), RS) | 125.17 | 256.45 | 260.15 |
| RE(\(g(\hat{\overline{h}}_{PEML})\), RS \(\mid\) \({}^{2}\)BC \(g(\hat{\overline{h}}_{PEML})\), RS) | 145.1 | 333.5 | 135.65 |
| RE(\(g(\hat{\overline{h}}_{PEML})\), RHC \(\mid\) \({}^{2}\)BC \(g(\hat{\overline{h}}_{PEML})\), RHC) | 86.93 | 238.32 | 292.89 |
| RE(\(g(\hat{\overline{h}}_{PEML})\), SRSWOR \(\mid\) \({}^{2}\)BC \(g(\hat{\overline{h}}_{PEML})\), SRSWOR) | 93.707 | 101.93 | 121.44 |
| RE(\(g(\hat{\overline{h}}_{H})\), SRSWOR \(\mid\) \({}^{2}\)BC \(g(\hat{\overline{h}}_{H})\), SRSWOR) | 115.85 | 146.16 | 104.66 |

Table 20: Relative efficiencies of estimators for regression coefficient of \(y_{2}\) on \(y_{4}\). Recall from Table 4 in Section 2 that for regression coefficient of \(y_{2}\) on \(y_{4}\), \(h(y_{2},y_{4})=(y_{2},y_{4},y_{4}^{2},y_{2}y_{4})\) and \(g(s_{1},s_{2},s_{3},s_{4})=(s_{4}-s_{1}s_{2})/(s_{3}-s_{2}^{2})\).

| Relative efficiency | \(n\)=75 | \(n\)=100 | \(n\)=125 |
|---|---|---|---|
| RE(\(g(\hat{\overline{h}}_{H})\), RS \(\mid\) \(g(\hat{\overline{h}}_{PEML})\), RS) | 1.8158 | 2.3771 | 3.2021 |
| RE(\(g(\hat{\overline{h}}_{H})\), RS \(\mid\) \(g(\hat{\overline{h}}_{PEML})\), RHC) | 2.5985 | 2.6002 | 3.4744 |
| RE(\(g(\hat{\overline{h}}_{H})\), RS \(\mid\) \(g(\hat{\overline{h}}_{H})\), SRSWOR) | 3.3278 | 4.5041 | 6.312 |
| RE(\(g(\hat{\overline{h}}_{H})\), RS \(\mid\) \(g(\hat{\overline{h}}_{PEML})\), SRSWOR) | 2.9788 | 3.9417 | 6.0391 |
| RE(\(g(\hat{\overline{h}}_{H})\), RS \(\mid\) \({}^{2}\)BC \(g(\hat{\overline{h}}_{H})\), RS) | 125.17 | 256.45 | 260.15 |
| RE(\(g(\hat{\overline{h}}_{PEML})\), RS \(\mid\) \({}^{2}\)BC \(g(\hat{\overline{h}}_{PEML})\), RS) | 145.1 | 333.5 | 135.65 |
| RE(\(g(\hat{\overline{h}}_{PEML})\), RHC \(\mid\) \({}^{2}\)BC \(g(\hat{\overline{h}}_{PEML})\), RHC) | 86.93 | 238.32 | 292.89 |
| RE(\(g(\hat{\overline{h}}_{PEML})\), SRSWOR \(\mid\) \({}^{2}\)BC \(g(\hat{\overline{h}}_{PEML})\), SRSWOR) | 93.707 | 101.93 | 121.44 |
| RE(\(g(\hat{\overline{h}}_{H})\), SRSWOR \(\mid\) \({}^{2}\)BC \(g(\hat{\overline{h}}_{H})\), SRSWOR) | 115.85 | 146.16 | 104.66 |
Table 22: Average and standard deviation of lengths of asymptotically 95% CIs for mean of \(y_{1}\). Each cell shows the average length with the standard deviation in parentheses.

| Estimator and sampling design | \(n\)=75 | \(n\)=100 | \(n\)=125 |
|---|---|---|---|
| \(\hat{\overline{Y}}_{H}\), SRSWOR | 0.7233 (0.2304) | 0.7303 (0.1885) | 0.7333 (0.1431) |
| \({}^{3}\hat{\overline{Y}}_{PEML}\), SRSWOR | 0.3703 (0.1608) | 0.3734 (0.1534) | 0.3847 (0.1074) |
| \(\hat{\overline{Y}}_{HT}\), RS | 0.7738 (0.2724) | 0.7735 (1.071) | 0.8271 (0.2001) |
| \(\hat{\overline{Y}}_{H}\), RS | 0.4345 (0.8312) | 0.455 (8.807) | 0.5414 (0.5479) |
| \({}^{3}\hat{\overline{Y}}_{PEML}\), RS | 0.6784 (0.3945) | 0.7207 (12.176) | 0.7896 (0.2694) |
| \(\hat{\overline{Y}}_{RHC}\), RHC | 0.7415 (0.4007) | 0.7716 (0.6359) | 0.8014 (0.2931) |
| \({}^{3}\hat{\overline{Y}}_{PEML}\), RHC | 0.4911 (0.9865) | 0.5078 (0.4992) | 0.5289 (0.3594) |

Table 23: Average and standard deviation of lengths of asymptotically 95% CIs for variance of \(y_{1}\). Recall from Table 4 in Section 2 that for variance of \(y_{1}\), \(h(y_{1})=(y_{1}^{2},y_{1})\) and \(g(s_{1},s_{2})=s_{1}-s_{2}^{2}\). Each cell shows the average length with the standard deviation in parentheses.

| Estimator and sampling design | \(n\)=75 | \(n\)=100 | \(n\)=125 |
|---|---|---|---|
| \(g(\hat{\overline{h}}_{H})\), SRSWOR | 5.2879 (8.762) | 4.2111 (9.309) | 4.4304 (6.856) |
| \(g(\hat{\overline{h}}_{PEML})\), SRSWOR | 2.7519 (7.181) | 2.9935 (8.622) | 3.0013 (5.952) |
| \(g(\hat{\overline{h}}_{H})\), RS | 3.5121 (1.345) | 3.1177 (11.37) | 3.1095 (10.88) |
| \(g(\hat{\overline{h}}_{PEML})\), RS | 3.7475 (4.041) | 3.939 (16.14) | 3.792 (11.08) |
| \(g(\hat{\overline{h}}_{PEML})\), RHC | 3.6365 (14.99) | 3.4972 (8.278) | 3.4158 (10.95) |
Table 24: Average and standard deviation of lengths of asymptotically 95% CIs for mean of \(y_{2}\). Each cell shows the average length with the standard deviation in parentheses.

| Estimator and sampling design | \(n\)=75 | \(n\)=100 | \(n\)=125 |
|---|---|---|---|
| \(\hat{\overline{Y}}_{H}\), SRSWOR | 312.1 (150.08) | 322.48 (121.86) | 326.36 (93.707) |
| \({}^{3}\hat{\overline{Y}}_{PEML}\), SRSWOR | 243.23 (65.059) | 216.42 (55.256) | 198.11 (44.972) |
| \(\hat{\overline{Y}}_{HT}\), RS | 184.98 (24.336) | 160.79 (17.942) | 144.43 (13.89) |
| \(\hat{\overline{Y}}_{H}\), RS | 189.49 (314.18) | 163.19 (209.6) | 145.82 (164.32) |
| \({}^{3}\hat{\overline{Y}}_{PEML}\), RS | 343.6 (60.804) | 300.14 (20.411) | 272.63 (21.998) |
| \(\hat{\overline{Y}}_{RHC}\), RHC | 277.91 (16.039) | 240.09 (12.042) | 214.78 (9.2784) |
| \({}^{3}\hat{\overline{Y}}_{PEML}\), RHC | 279.97 (52.788) | 242.43 (58.394) | 217.09 (21.356) |

Table 25: Average and standard deviation of lengths of asymptotically 95% CIs for variance of \(y_{2}\). Recall from Table 4 in Section 2 that for variance of \(y_{2}\), \(h(y_{2})=(y_{2}^{2},y_{2})\) and \(g(s_{1},s_{2})=s_{1}-s_{2}^{2}\). Each cell shows the average length with the standard deviation in parentheses.

| Estimator and sampling design | \(n\)=75 | \(n\)=100 | \(n\)=125 |
|---|---|---|---|
| \(g(\hat{\overline{h}}_{H})\), SRSWOR | 1498664 (3236118) | 1588740 (2694726) | 2418155 (3205532) |
| \(g(\hat{\overline{h}}_{PEML})\), SRSWOR | 1035032 (1472036) | 1077345 (1376947) | 1002397 (1573834) |
| \(g(\hat{\overline{h}}_{H})\), RS | 887813.9 (464853) | 764055.6 (377760) | 684218.5 (298552) |
| \(g(\hat{\overline{h}}_{PEML})\), RS | 1385778 (1584677) | 1168689 (1339377) | 1055339 (1177054) |
| \(g(\hat{\overline{h}}_{PEML})\), RHC | 1319413 (1473379) | 1134532 (1384754) | 1072290 (1472584) |

Table 26: Average and standard deviation of lengths of asymptotically 95% CIs for correlation coefficient between \(y_{1}\) and \(y_{3}\). Each cell shows the average length with the standard deviation in parentheses.

| Estimator and sampling design | \(n\)=75 | \(n\)=100 | \(n\)=125 |
|---|---|---|---|
| \(g(\hat{\overline{h}}_{H})\), SRSWOR | 0.3682 (0.1138) | 0.3753 (0.1039) | 0.3893 (0.0936) |
| \(g(\hat{\overline{h}}_{PEML})\), SRSWOR | 0.2747 (0.1095) | 0.2881 (0.1008) | 0.2884 (0.0879) |
| \(g(\hat{\overline{h}}_{H})\), RS | 0.3351 (0.1652) | 0.3453 (0.0938) | 0.3587 (0.1034) |
| \(g(\hat{\overline{h}}_{PEML})\), RS | 592.48 (0.2859) | 260.44 (0.3441) | 469.36 (2.738) |
| \(g(\hat{\overline{h}}_{PEML})\), RHC | 3838.4 (1.2271) | 2740.5 (0.1467) | 2238.3 (0.1104) |
Recall from Table 4 in Section 2 that for correlation coefficient between \(y_{1}\) and \(y_{3}\), \(h(y_{1},y_{3})=(y_{1},y_{3},y_{1}^{2},y_{3}^{2},y_{1}y_{3})\) and \(g(s_{1},s_{2},s_{3},s_{4},s_{5})=(s_{5}-s_{1}s_{2})/((s_{3}-s_{1}^{2})(s_{4}-s_{2}^{2}))^{1/2}\).

Table 27: Average and standard deviation of lengths of asymptotically 95% CIs for regression coefficient of \(y_{1}\) on \(y_{3}\). Recall from Table 4 in Section 2 that for regression coefficient of \(y_{1}\) on \(y_{3}\), \(h(y_{1},y_{3})=(y_{1},y_{3},y_{3}^{2},y_{1}y_{3})\) and \(g(s_{1},s_{2},s_{3},s_{4})=(s_{4}-s_{1}s_{2})/(s_{3}-s_{2}^{2})\). Each cell shows the average length with the standard deviation in parentheses.

| Estimator and sampling design | \(n\)=75 | \(n\)=100 | \(n\)=125 |
|---|---|---|---|
| \(g(\hat{\overline{h}}_{H})\), SRSWOR | 1.6443 (1.223) | 1.781 (1.127) | 1.8077 (0.8849) |
| \(g(\hat{\overline{h}}_{PEML})\), SRSWOR | 1.3984 (0.8867) | 1.4239 (0.7898) | 1.491 (0.6645) |
| \(g(\hat{\overline{h}}_{H})\), RS | 1.4072 (0.6463) | 1.5299 (0.4833) | 1.5449 (0.4883) |
| \(g(\hat{\overline{h}}_{PEML})\), RS | 3240.4 (4.3202) | 4938.4 (1.659) | 1705.3 (2.017) |
| \(g(\hat{\overline{h}}_{PEML})\), RHC | 50701.7 (2.659) | 17291.2 (3.93) | 22245.7 (1.51) |

Table 28: Average and standard deviation of lengths of asymptotically 95% CIs for regression coefficient of \(y_{3}\) on \(y_{1}\). Recall from Table 4 in Section 2 that for regression coefficient of \(y_{3}\) on \(y_{1}\), \(h(y_{1},y_{3})=(y_{3},y_{1},y_{1}^{2},y_{1}y_{3})\) and \(g(s_{1},s_{2},s_{3},s_{4})=(s_{4}-s_{1}s_{2})/(s_{3}-s_{2}^{2})\). Each cell shows the average length with the standard deviation in parentheses.

| Estimator and sampling design | \(n\)=75 | \(n\)=100 | \(n\)=125 |
|---|---|---|---|
| \(g(\hat{\overline{h}}_{H})\), SRSWOR | 0.1387 (0.091) | 0.1449 (0.072) | 0.1508 (0.0616) |
| \(g(\hat{\overline{h}}_{PEML})\), SRSWOR | 0.1015 (0.0868) | 0.0994 (0.0692) | 0.1002 (0.0593) |
| \(g(\hat{\overline{h}}_{H})\), RS | 0.1305 (0.0919) | 0.1379 (0.0438) | 0.1447 (0.0357) |
| \(g(\hat{\overline{h}}_{PEML})\), RS | 113.4 (0.1712) | 263.23 (0.0725) | 78.782 (0.0545) |
| \(g(\hat{\overline{h}}_{PEML})\), RHC | 798.95 (0.6227) | 490.91 (0.0862) | 286.92 (0.1107) |
Table 29: Average and standard deviation of lengths of asymptotically 95% CIs for correlation coefficient between \(y_{2}\) and \(y_{4}\). Recall from Table 4 in Section 2 that for correlation coefficient between \(y_{2}\) and \(y_{4}\), \(h(y_{2},y_{4})=(y_{2},y_{4},y_{2}^{2},y_{4}^{2},y_{2}y_{4})\) and \(g(s_{1},s_{2},s_{3},s_{4},s_{5})=(s_{5}-s_{1}s_{2})/((s_{3}-s_{1}^{2})(s_{4}-s_{2}^{2}))^{1/2}\). Each cell shows the average length with the standard deviation in parentheses.

| Estimator and sampling design | \(n\)=75 | \(n\)=100 | \(n\)=125 |
|---|---|---|---|
| \(g(\hat{\overline{h}}_{H})\), SRSWOR | 0.3428 (0.191) | 0.359 (0.1783) | 0.3821 (0.1844) |
| \(g(\hat{\overline{h}}_{PEML})\), SRSWOR | 0.3088 (0.1886) | 0.3279 (0.171) | 0.3537 (0.1773) |
| \(g(\hat{\overline{h}}_{H})\), RS | 0.2924 (0.1561) | 0.2926 (0.1491) | 0.298 (0.1568) |
| \(g(\hat{\overline{h}}_{PEML})\), RS | 833.87 (0.5226) | 300.13 (0.4406) | 242.51 (0.8658) |
| \(g(\hat{\overline{h}}_{PEML})\), RHC | 7593.1 (0.4385) | 3526.1 (0.4869) | 2390.9 (0.2661) |

Table 30: Average and standard deviation of lengths of asymptotically 95% CIs for regression coefficient of \(y_{2}\) on \(y_{4}\). Recall from Table 4 in Section 2 that for regression coefficient of \(y_{2}\) on \(y_{4}\), \(h(y_{2},y_{4})=(y_{2},y_{4},y_{4}^{2},y_{2}y_{4})\) and \(g(s_{1},s_{2},s_{3},s_{4})=(s_{4}-s_{1}s_{2})/(s_{3}-s_{2}^{2})\). Each cell shows the average length with the standard deviation in parentheses.

| Estimator and sampling design | \(n\)=75 | \(n\)=100 | \(n\)=125 |
|---|---|---|---|
| \(g(\hat{\overline{h}}_{H})\), SRSWOR | 1.1188 (1.251) | 1.1117 (1.061) | 1.1566 (1.171) |
| \(g(\hat{\overline{h}}_{PEML})\), SRSWOR | 0.9865 (0.9935) | 1.0005 (0.8784) | 1.0534 (0.8758) |
| \(g(\hat{\overline{h}}_{H})\), RS | 0.8575 (0.6472) | 0.847 (0.5219) | 0.8427 (0.4524) |
| \(g(\hat{\overline{h}}_{PEML})\), RS | 1583.8 (1.733) | 1647.2 (1.822) | 1533.9 (1.302) |
| \(g(\hat{\overline{h}}_{PEML})\), RHC | 24127.4 (2.05) | 10798.8 (1.468) | 5076.1 (2.385) |
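The interval lengths reported in Tables 22–30 come from normal-approximation CIs of the form estimate \(\pm\) 1.96 \(\times\) standard error, so each length is \(2\times 1.96\times\) SE. A hedged sketch, assuming a delta-method standard error for the variance functional \(g(s_{1},s_{2})=s_{1}-s_{2}^{2}\) and i.i.d. toy data rather than the survey designs studied above:

```python
import numpy as np

rng = np.random.default_rng(2)

def ci_length_variance(y, z=1.96):
    """Length of an asymptotic 95% CI for g(s1, s2) = s1 - s2^2 (the
    variance), with the standard error obtained via the delta method."""
    n = len(y)
    h = np.stack([y ** 2, y])            # h(y) = (y^2, y), shape (2, n)
    s = h.mean(axis=1)                   # estimated moment vector
    grad = np.array([1.0, -2.0 * s[1]])  # gradient of g at s
    se = np.sqrt(grad @ np.cov(h) @ grad / n)
    return 2 * z * se

y = rng.lognormal(mean=0.0, sigma=0.6, size=100)
print("variance estimate:", np.mean(y ** 2) - np.mean(y) ** 2)
print("95% CI length    :", ci_length_variance(y))
```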
2306.13138
Spectral Form Factors of Topological Phases
Signatures of dynamical quantum phase transitions and chaos can be found in the time evolution of generalized partition functions such as spectral form factors (SFF) and Loschmidt echoes. While a lot of work has focused on the nature of such systems in a variety of strongly interacting quantum theories, in this work, we study their behavior in short-range entangled topological phases, particularly focusing on the role of symmetry-protected topological zero modes. We show, using both analytical and numerical methods, how the existence of such zero modes in any representative system can mask the SFF with large period (akin to generalized Rabi) oscillations, hiding any behavior arising from the bulk of the spectrum. Moreover, in a quenched disordered system, these zero modes fundamentally change the late-time universal behavior reflecting the chaotic signatures of the zero-energy manifold. Our study uncovers the rich physics underlying the interplay of chaotic signatures and topological characteristics in a quantum system.
Anurag Sarkar, Subrata Pachhal, Adhip Agarwala, Diptarka Das
2023-06-22T18:00:11Z
http://arxiv.org/abs/2306.13138v2
# Spectral Form Factors of Topological Phases

###### Abstract

Signatures of dynamical quantum phase transitions and chaos can be found in the time evolution of generalized partition functions such as spectral form factors (SFF) and Loschmidt echoes. While a lot of work has focused on the nature of such systems in a variety of strongly interacting quantum theories, in this work, we study their behavior in short-range entangled topological phases, particularly focusing on the role of symmetry-protected topological zero modes. We show, using both analytical and numerical methods, how the existence of such zero modes in any representative system can mask the SFF with large-period (akin to generalized Rabi) oscillations, hiding any behavior arising from the bulk of the spectrum. Moreover, in a quenched disordered system these zero modes fundamentally change the late-time universal behavior, reflecting the chaotic signatures of the zero-energy manifold. Our study uncovers the rich physics underlying the interplay of chaotic signatures and topological characteristics in a quantum system.

_Introduction:_ Ideas of thermalization and chaos have become all-pervasive in multiple sub-disciplines of physics, including classical and quantum many-body systems, quantum field theory, gravity and fluids [1; 2; 3; 4; 5; 6]. These questions become crucial for understanding the phases of interacting quantum systems when they are either driven or coupled to baths; however, methods of characterizing chaos are few and limited. The behavior of the spectral form factor (SFF) in such systems has been particularly illuminating. Most interestingly, one finds that SFFs follow universal features independent of underlying microscopic details. SFFs and their variants such as the Loschmidt echo [7], Fisher zeros [8; 9], etc., which are generalizations of the equilibrium thermal partition function, have been vigorously investigated for various quantum many-body systems ranging from lattice models to black holes [10; 11; 12]. It is known that interacting many-body chaotic systems show a dip-linear-ramp structure in the SFF, which saturates at late times [13; 14]. This behavior however has intriguing micro-structures which depend on the underlying symmetries, nature of interactions, dimensionality and localization properties [7]. However, the behavior of the SFF vis-à-vis the topological properties of a quantum Hamiltonian has been little explored. In this paper, we investigate the behavior of the SFF in systems where the Hamiltonians describe a symmetry-protected topological phase. Topological phases of matter, unlike symmetry-broken systems, identify different thermodynamic phases with different topological invariants which are often protected by some discrete symmetries [15; 16; 17; 18]. A paradigmatic model for investigating topological systems is the Su-Schrieffer-Heeger (SSH) model [19], where the system hosts chiral-symmetry-protected edge modes in the topological phase. In this work we show that, in general, the existence of such zero-dimensional topologically protected eigenstates can fundamentally transform the SFF, showing large-time oscillations. We show, both numerically and analytically, that these are generalizations of Rabi oscillations, where the SFF locks between a few states of the complete many-body spectrum. We further show that this is in fact a property of even higher-order topological phases [20], where a \(d\)-dimensional topological phase hosts \((d-2)\)-dimensional boundary modes, with characteristic features in the SFF.
Figure 1: (a) A topological phase achieved in a Hamiltonian system (see eqn. (3) for the SSH model) is tuned by a parameter \(v/w\). This sets an energy scale \(T^{*}=\frac{1}{\beta^{*}}\) (shown by a dashed line) below which the system shows prominent oscillations (see (b), with \(L=60,v/w=0.5,\beta=20\)) in its SFF. The same system in the trivial region (\(v/w=2,\beta=10\)) shows no signs of such oscillations (see (c)).

Given the quadratic nature of the Hamiltonians, our study also adds to the emerging area of identifying signatures of one-body chaos. This has recently been explored in SYK-2 [21; 22] as well as in strongly coupled free gauge theories [23]. To find such signatures of one-body chaos and their interplay with topological order, we study a variant of the SSH model where a subregion of the bulk is randomized with all-to-all random hoppings while keeping the edge-state structure protected. We find that the SFF shows a dip followed by an early-time oscillating exponential ramp reminiscent of one-body chaotic signatures in SYK-2 [21]. This develops into an intermediate linear ramp that plateaus at late times in the trivial phase. In the topological phase a different picture emerges. While any representative configuration shows Rabi oscillations at late times, under ensemble averaging these oscillations get destroyed, given the random phase lags between the various Rabi modes. This reflects the non-self-averaging characteristic of the SFF [24] in topological, yet disordered, systems. Interestingly, this averaged asymptotic value in the topological phase is different from that in the trivial phase. We show that it depends on the random matrix properties of the zero-energy manifold. We end the paper with a perspective on how this physics of the interplay between topological features and chaotic signatures may be a rich playground to uncover a host of new phenomena in both lattice and field-theoretic quantum many-body systems.

_Model and SFF:_ Given a set of single-particle eigenvalues \(\epsilon_{n}\) of a non-interacting fermionic quantum system, the generalized partition function is given by \[Z(\beta+it)=\prod_{n}\left(1+\exp(-(\beta+it)\epsilon_{n})\right) \tag{1}\] where \(n\in\{0,\ldots,L-1\}\), \(L\) is the system size, \(\beta\) is the inverse temperature and \(t\) is the real time. The SFF is defined as \[\text{SFF}(\beta,t)\equiv\mathcal{Z}_{2}(\beta,t)=\frac{Z(\beta+it)Z(\beta-it)}{Z(\beta)^{2}} \tag{2}\] Therefore for any non-interacting system the SFF can be straightforwardly evaluated once the single-particle energies are known. In particular, for the SSH model we study, the Hamiltonian is given by: \[H=\sum_{i}-v(c_{iA}^{\dagger}c_{iB}+\text{h.c.})-w(c_{iB}^{\dagger}c_{i+1,A}+\text{h.c.}) \tag{3}\] where \(c_{i\alpha}^{\dagger},c_{i\alpha}\) are the fermion creation and annihilation operators at site \(i\) for the orbitals \(\alpha\equiv A,B\), and \(v\) and \(w\) are the intra- and inter-unit-cell hopping strengths. The system hosts a topological phase when \(|v/w|<1\), characterized by a non-trivial winding number, and has two close-to-zero-energy modes in an open system, with an edge localization length \(\xi=[\log(|w/v|)]^{-1}\) which reflects the extent of the edge states. The system has time-reversal and sub-lattice symmetry, restricting it to the BDI symmetry class, which realizes general off-diagonal real matrices of the free-fermion ten-fold classification [25; 26], such that the spectrum is always symmetric about energy \(E=0\).
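Eqns. (1)–(3) translate directly into a few lines of code. The sketch below, with illustrative parameters of our own choosing, diagonalizes an open SSH chain and evaluates the SFF from its single-particle energies, computing the product in eqns. (1)–(2) factor by factor to avoid numerical overflow:

```python
import numpy as np

def ssh_spectrum(L, v, w):
    """Single-particle energies of an open SSH chain with L unit cells."""
    H = np.zeros((2 * L, 2 * L))
    for i in range(L):
        H[2*i, 2*i + 1] = H[2*i + 1, 2*i] = -v              # intra-cell hopping
        if i < L - 1:
            H[2*i + 1, 2*i + 2] = H[2*i + 2, 2*i + 1] = -w  # inter-cell hopping
    return np.linalg.eigvalsh(H)

def sff(eps, beta, t):
    """Eqns. (1)-(2), evaluated as a product of O(1) single-level factors."""
    f = (1.0 + np.exp(-(beta + 1j * t) * eps)) / (1.0 + np.exp(-beta * eps))
    return np.prod(np.abs(f) ** 2)

eps = ssh_spectrum(L=12, v=0.5, w=1.0)        # topological for |v/w| < 1
eps1 = np.min(np.abs(eps))                    # near-zero edge-mode energy
print("smallest |energy|:", eps1)
ts = np.linspace(0.0, 2 * np.pi / eps1, 2000)
vals = [sff(eps, beta=20.0, t=t) for t in ts]
print("SFF dips to:", min(vals))              # ~0 at the first Rabi zero
```

For \(|v/w|<1\) the smallest \(|\epsilon_{n}|\) is exponentially small in \(L\), and the computed SFF indeed dips to (numerically) zero on the long time scale \(\pi/\epsilon_{1}\) discussed next.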
For an eigenspectrum symmetric about \(E=0\), the SFF (see eqn. (2)) can be written as \[\text{SFF}=\prod_{\epsilon_{n}>0}Z_{2}^{\epsilon_{n}}(\beta,t)=\prod_{\epsilon_{n}>0}\frac{(\cosh(\beta\epsilon_{n})+\cos(\epsilon_{n}t))^{2}}{(1+\cosh(\beta\epsilon_{n}))^{2}} \tag{4}\] which implies that the real-time behavior of the system takes a highly convoluted form dependent on the frequencies of the single-particle energies. This is rather uninteresting, as can be seen from the behavior of the SFF for \(v/w=2\) (trivial region) (see Fig. 1(c)). However, an interesting behavior emerges when boundary modes dominate, whose energy \(\epsilon_{1}\to 0\) exponentially with system size. This results in a time scale \(\sim\pi/\epsilon_{1}\) where the SFF first goes to zero and then oscillates with the same frequency, drowning out all noisy behavior due to higher-energy modes, which are exponentially suppressed at finite \(\beta\) (see the SFF in Fig. 1(b) for \(v/w=0.5\) in the topological regime).

_Rabi Oscillations:_ We now show that the long-time oscillations of the SFF are in fact generalized Rabi oscillations, where the system locks between the boundary zero modes. To see this, consider a general many-body wavefunction which is an equal superposition of _all_ the many-body basis states \[|\Psi\rangle=\frac{1}{2^{L/2}}\sum_{\{N_{n}\}}|\{N_{n}\}\rangle \tag{5}\] where each \(|\{N_{n}\}\rangle\) specifies a Fock state labelled by the occupancies (\(N_{n}=0,1\)) of the single-particle states \(|\psi_{n}\rangle\) with single-particle eigenenergies \(\epsilon_{n}\). When this state is quenched with the Hamiltonian and compared with the initial state, this gives a fidelity of \[\langle\Psi|\Psi(t)\rangle\propto Z(it) \tag{6}\] Thus instances where \(Z(it)\to 0\) are Rabi oscillations of a pure state under time evolution. At \(\beta=0\) the fidelity behavior will be uncharacteristic because all the \(\epsilon_{n}\) contribute convoluted oscillations. However, the interpretation is distinct at finite temperature, where the initial state (see eqn. (5)) can be generalized to \(|\Psi(\beta)\rangle\propto\sum_{\{N_{n}\}}\exp(-\beta E_{\{N_{n}\}}/2)|\{N_{n}\}\rangle\). When evolved in time, the corresponding fidelity is \(Z(\beta+it)/Z(\beta)\), and therefore the SFF is just its absolute value squared. Hence the zeros of the SFF are the Rabi oscillations of the state \(|\Psi(\beta)\rangle\) under time evolution. It is insightful to note the behaviour of this state at finite temperature. Given that the states are weighted by thermal occupancy factors, the many-body states with the largest weights are the ones where \(N_{n}=1\) \(\forall\) \(\epsilon_{n}<0\). In a periodic system this isolates a single state; however, in an open one-dimensional SSH chain in the topological phase, this results in _four_ states which are exponentially close in their many-body energies. These essentially correspond to the different ways of occupying the (right and left) boundary modes (\(\equiv|R\rangle,|L\rangle\)).
Thus \[|\Psi(\beta,t)\rangle \sim \frac{1}{2}\Big{(}|\circ\circ\rangle+e^{-(\frac{\beta}{2}+it)\epsilon_{a}}|*\circ\rangle+e^{-(\frac{\beta}{2}+it)\epsilon_{b}}|\circ*\rangle+|**\rangle\Big{)}\otimes|N_{n}=1\ \forall\ \epsilon_{n}<0\rangle \tag{7}\] where \(\epsilon_{a},\epsilon_{b}\) (\(\epsilon_{b}=-\epsilon_{a}\)) represent the anti-bonding and bonding orbitals formed out of the left and right edge states (\(\{|b\rangle,|a\rangle\}=\frac{1}{\sqrt{2}}(|R\rangle\pm|L\rangle)\)), and \(*\) (\(\circ\)) represents their occupancies (vacancies) in the \(|n_{a},n_{b}\rangle\) basis. Other states get damped by the bulk gap between the valence and conduction bands, \(\sim\exp(-\beta\Delta_{g})\), where \(\Delta_{g}\sim|w-v|\). This introduces a temperature scale \(T^{*}\sim\frac{1}{\beta^{*}}=\Delta_{g}\), below which the Rabi oscillations are strong, while for \(T>\Delta_{g}\) these Rabi oscillations dissolve into the bulk signatures. As is clear, these Rabi oscillations have a period set by the edge-mode energy scale, \(\sim\frac{\pi}{\epsilon_{a}}\). Unlike a single-qubit Rabi oscillation, here the SFF behaves as \(\sim|(1+\cos(\epsilon_{a}t))|^{2}\); thus the rise from the minima is \(\propto t^{4}\) (see Fig. 1). In Fig. 2 we show the behavior of the SFF in the trivial phase, in the topological phase, and at the critical point, showing that the topological phase (b) has dominant oscillations. The behavior in the \(\{v/w,t\}\) plane shows that the time period of the Rabi oscillations is in fact dependent on \(\epsilon_{a}\), as it matches the dashed analytical curves expressed as \(t_{\rm Rabi}=2.653(2k+1)\pi\exp(-0.483L\log(v/w))\), where \(k\) are positive integers and \(L\) denotes the system size (see Fig. 2(d)).

_Higher Order Topological Phases:_ In order to further study this physics and its generalizations, we study the SFF in a higher-order topological phase where a two-dimensional system hosts four topologically protected corner modes on a square lattice [20]. The Hamiltonian is given by \(H(k)=[\gamma+\lambda\cos(k_{x})]\Gamma_{4}+\lambda\sin(k_{x})\Gamma_{3}+[\gamma+\lambda\cos(k_{y})]\Gamma_{2}+\lambda\sin(k_{y})\Gamma_{1}+\delta\Gamma_{0}\), where \(\Gamma_{0}=\tau_{3}\sigma_{0}\), \(\Gamma_{k}=-\tau_{2}\sigma_{k}\) and \(\Gamma_{4}=\tau_{1}\sigma_{0}\), with \(\tau,\sigma\) being Pauli matrices. The system hosts four corner modes when \(|\gamma/\lambda|<1\). Evaluating the SFF, we again find long-time oscillations (see Fig. 3), where now the effective state spans \(2^{4}\) states, which can be counted as occupancies of the four boundary modes. The oscillations follow \(\sim|(3+4\cos(\epsilon_{a}t)+\cos(2\epsilon_{a}t))|^{2}\), where \(\epsilon_{a}\) is the exponentially small energy scale close to zero. Interestingly, the rise from zero is now \(\sim t^{8}\), reflecting the higher number of zero modes in the system (see Fig. 3). In fact, for a multipole topological insulator [20] with a general number \(2p\) of zero modes, the SFF would scale \(\propto t^{4p}\) in the topological phase for \(T<\Delta_{g}\). This is one of the key results of this work. While the discussion above has been in terms of single-particle eigenstates, in general, SFFs in many-body settings are easily decomposed into the form \[{\rm SFF}=\frac{1}{Z^{2}}\sum_{m,n}e^{-\beta(E_{m}+E_{n})}e^{it(E_{m}-E_{n})} \tag{8}\] where \(E_{m},E_{n}\) are many-body eigenvalues; it may then appear that the time-averaged behavior must approach \(Z(2\beta)/Z^{2}\) on general grounds.
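A quick numerical check makes this point concrete. In the free-fermion toy below (the spectra and parameters are ours, chosen for illustration), a gapped bulk alone gives a featureless late-time SFF near its naive asymptote, while adding a single pair of exponentially small levels replaces that plateau with large-period oscillations whose long-time average sits near \(3/8\), the time average of \(((1+\cos x)/2)^{2}\):

```python
import numpy as np

rng = np.random.default_rng(3)

def sff(eps, beta, t):
    # free-fermion SFF as a product over single-particle levels, cf. eqn (4)
    f = (1.0 + np.exp(-(beta + 1j * t) * eps)) / (1.0 + np.exp(-beta * eps))
    return np.prod(np.abs(f) ** 2)

beta = 20.0
bulk = np.concatenate([-1.0 - rng.random(20), 1.0 + rng.random(20)])  # gapped
zero_pair = np.array([-1e-3, 1e-3])       # stand-in for the edge-mode doublet

ts = np.linspace(5e3, 2e4, 4000)          # late times, t >> 1 / bulk gap
for label, eps in [("gapped bulk only ", bulk),
                   ("bulk + zero modes", np.concatenate([bulk, zero_pair]))]:
    v = np.array([sff(eps, beta, t) for t in ts])
    print(label, "mean %.3f  min %.3f  max %.3f" % (v.mean(), v.min(), v.max()))
```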
Our results point out that even in an otherwise dense spectrum, the existence of topologically protected zero modes may render this asymptotic value insignificant and mask it with generalized Rabi oscillations. It is natural to ask what the fate of such Rabi oscillations is in the presence of disorder, and that is the question we turn to next.

_Random Bulk Hamiltonians and topological zero modes:_ Motivated by random matrix theory (RMT), where the symmetry properties of random dense Hamiltonian matrices determine their chaotic signatures [27; 28; 29], we mark a finite central region (excluding boundaries) in the bulk of the SSH system (see eqn. (3)) as \(\equiv\mathcal{R}\), where all-to-all hopping disorder of the form \(\sum_{i,j}w_{ij}\,c_{iA}^{\dagger}c_{jB}+{\rm h.c.}\) is introduced. Here the \(w_{ij}\) are chosen from a Gaussian orthogonal ensemble with a scale parameter \(\frac{\sigma}{\sqrt{N_{\mathcal{R}}}}\), where \(N_{\mathcal{R}}\) is the number of sites in the region \(\mathcal{R}\) and \(\{i,j\}\in\mathcal{R}\). Since the disorder respects the sublattice structure, \(\mathcal{R}\) excludes the edges, and \(\sigma<|w-v|\), every disorder configuration retains topologically protected zero modes (for more details on stability to disorder, see the Supplemental Material (SM)). We find that the disorder-averaged SFF, irrespective of the topological character of the phase, has the following behavior as a function of time: (i) a dip, (ii) an exponential oscillation followed by a linear ramp, and (iii) a late-time saturation. The three regions are shown in Fig. 4(a). At short times, the _exponentially_ oscillating behavior is characteristic of signatures of single-particle chaos in systems such as SYK-2 [21; 22] or in free strongly coupled gauge/string theories [23]. This is in contrast to many-body chaos, which has a distinct linear ramp right after the dip [13; 14].

Figure 2: Contour plot of the SFF in the \(\beta-t\) plane for an open-boundary SSH model (\(L=60\)) in the trivial regime (a) with \(v=1,w=0.5\), in the topological regime (b) with \(v=0.5,w=1\), and at the critical point (c) with \(v=0.5,w=0.5\). (d) Behavior of the SFF in the \(v/w-t\) plane for \(\beta=30\). The dashed lines are analytical curves denoting the zeros of the SFF (see text).

To understand the underlying physics, it is instructive to consider a further simplified two-band model where the single-particle density of states comprises two semicircular lobes (\(\equiv\rho_{\pm}(E)\)), each with a bandwidth of \(2a\) but with a band gap of \(\Delta_{g}\) (details in SM). Assuming that the eigenvalue distributions contributing to any lobe (say \(\rho_{+}(E)\)) are governed by an \(N\times N\) random matrix drawn from the Gaussian unitary ensemble, the SFF is given by (see eqn. (4)) \[\left\langle\mathcal{Z}_{2}^{R}(\beta,t)\right\rangle=\left\langle\,\exp\left(N\int dE\,\rho_{+}(E)\log Z_{2}^{E}(\beta,t)\right)\,\right\rangle \tag{9}\] The angular brackets indicate the RMT averaging, where all moments of the density of states \(\rho_{+}(E)\) contribute. At early times, the disconnected piece dominates, leading to \(\sim\exp\left[\,\left(N\int dE\,\langle\rho_{+}(E)\rangle\log Z_{2}^{E}(\beta,t)\right)\,\right]\). Under a high-temperature expansion, the system exhibits the exponential ramp imbued with short-time oscillations.
The \(\beta\to 0\) limit is given by \[\left\langle\mathcal{Z}_{2}^{R}(0,t)\right\rangle\approx\frac{1}{16^{N}}\exp\left(8N\sum_{k=1}^{\infty}\frac{(-1)^{k+1}J_{1}(kat)\cos(ka_{\text{max}}t)}{k^{2}at}\right) \tag{10}\] where \(a_{\text{max}}=\Delta+a\), which in a microscopic model is decided by the energy scale where the bulk DOS peaks. This result points out that the location of the dip is set by the high-energy scale \(\pi/a_{\text{max}}\), which is both system-size independent and unaffected by the topological properties (see Fig. 4). The initial exponential oscillations next lead to a linear ramp, which occurs at \(t\sim L^{2/3}\). This linear ramp is a result of the leading connected piece, i.e., \(\rho_{c}^{(2)}(E_{1},E_{2})\), just as in the case of the many-body SFF [30]. The linear ramp further leads to a saturation, which can be remarkably different depending on whether the underlying system is trivial or topological. More interestingly, the saturation physics is temperature-dependent, as we now discuss. At high temperatures (\(T>|w-v|\)), irrespective of the topological features, the SFF saturates to a value dominated by the Kubo gap of the bulk spectrum. This is essentially where all connected spectral correlations are featureless. For the minimal model, this predicts the emergence of a plateau at a time \(t\sim 4N\pi\) (see SM), and at \(t\sim L\) for the bulk-disordered SSH system. However, as \(T\) is reduced below the bulk gap scale (\(T<|w-v|\)) (see Fig. 1), _another_ saturation plateau appears, depending on whether we are in the trivial or the topological phase.

Figure 3: (a) One-particle energy spectrum of the 2D HOTI model for parameter values \(\delta=0\), \(\gamma=0.5\) and \(\lambda=1\) with open boundary conditions on a \(12\times 12\) lattice. In the inset the four topological zero-energy modes are shown. (b) Local probability density of the zero modes, showing that they are localized at the corners of the lattice. (c) Plot of the SFF in the trivial regime (\(\delta=0,\gamma=1,\lambda=0.5\)) of the HOTI model, showing no distinct Rabi oscillations. (d) Generalized Rabi oscillations in the topological regime for the HOTI system (blue) (\(\delta=0\), \(\gamma=0.5,\lambda=1\)) and the SSH system (red) (\(v=0.5154,w=1,L=24\)). The rise of the SFF in the former is \(\sim t^{8}\) compared to the latter.

Figure 4: (a) The SFF in the random SSH model (\(L=60,\beta=0,\sigma=0.01\), 30 random sites in the bulk) shows the ramp structure at early times in both the topological (\(v=0.5,w=1\)) and trivial (\(v=1,w=0.5\)) regimes. The green and orange dashed vertical lines denote the dip-time and plateau-time calculated from the toy model. (b) The same system at a higher \(\beta\) (\(=10\)) shows a step-like plateau at late times in the topological regime, while the trivial SFF shows no such feature. (c) At very high \(\beta\) (\(=50\)), the early-time ramp disappears; the late-time plateau in the topological SFF saturates around \(3/8\). Here, \(\Delta_{g}^{-1}\sim 2\) and \(\epsilon_{0}^{-1}\sim 10^{7}\). All the SFF values are averaged over 2000 random configurations. For further details, see text.

The difference between the trivial and topological phases of the disordered model at low temperatures is due to the presence of exponentially-close-to-zero energy modes (\(\sim\pm\epsilon_{0}\)) which, as we know from the clean limit, lead to large-time Rabi oscillations (see Fig. 1). The effective SFF can be captured in the form \(\sim Z_{2}^{\epsilon_{0}}(\beta,t)\mathcal{Z}_{2}^{R}(\beta,t)\)
(see eqn. (4)), where at low temperatures \(Z_{2}^{\epsilon_{0}}\) dominates, and thus any single configuration, even with bulk disorder, will lead to large-time oscillations after a time scale \(t^{*}\sim\frac{1}{\epsilon_{0}}\). However, when configuration-averaged, the SFF shows an intriguing behavior, as is shown in Fig. 4(b) and (c). At low temperatures the system reaches a _different_ saturation value at later times. The central result of our next discussion is to show that this new saturation value is _not_ governed by large-\(N\) RMT results, as is common in chaotic systems, but is rather governed by small-\(N\) RMT chaos, as we discuss below. Every disorder realization in the region \(\mathcal{R}\) perturbs the edge modes, which are in turn protected by the \((w,v)\) scale of the underlying SSH Hamiltonian. Therefore at low temperatures, i.e., \(\beta>\Delta_{g}^{-1}\), the SFF gets dominated by RMTs reflecting the couplings within the zero-energy manifold itself. For instance, the effective Hamiltonian in the context of our example becomes \(H_{\text{eff}}=\epsilon_{0}\sigma_{z}+\lambda\sigma_{x}\), where \(\lambda\) can effectively be considered to be drawn from a probability distribution \(P(\lambda)\). Thus the configuration-averaged \(Z_{2}^{\epsilon_{0}}(\beta,t)\) becomes \[\langle\langle Z_{2}^{\epsilon_{0}}(\beta,t)\rangle\rangle=\int_{-\infty}^{\infty}d\lambda\,P(\lambda)\frac{\left(\cosh(\beta\epsilon_{\lambda})+\cos(\epsilon_{\lambda}t)\right)^{2}}{\left(1+\cosh(\beta\epsilon_{\lambda})\right)^{2}} \tag{11}\] where \(\epsilon_{\lambda}=\sqrt{\epsilon_{0}^{2}+\lambda^{2}}\). For a Gaussian distribution of \(\epsilon_{\lambda}\) around \(\epsilon_{0}\) with variance \(\sigma\), \(\lim_{t\to\infty}\langle\langle Z_{2}^{\epsilon_{0}}(\beta,t)\rangle\rangle=\alpha_{\text{topo}}\sim 3/8\) when \(\epsilon_{0}\ll T<|w-v|\). Thus this late-time behavior in the topological phase is fundamentally distinct from the trivial regime, where no such zero-energy manifold exists and the SFF simply saturates to 1 (see SM). The analytical predictions exactly match our numerical results, as shown in Fig. 4(b) and (c). Thus the SFF, and its utility in identifying chaotic signatures, is fundamentally dependent on the topological properties of a quantum system.

_Outlook:_ In this work we have shown that topological order has important implications both for dynamics and for the emergence of chaos in quantum systems. Our analysis establishes that the emergence of zero-energy boundary modes fundamentally changes the late-time behavior of the SFF, a versatile tool that has been critical for diagnosing chaos and thermalization in a host of systems. In particular, we have shown that systems with close-to-zero-energy boundary modes can exhibit generalized Rabi oscillations which are characterized by topological properties. Our mapping of the SFF to the fidelity of a generalized state makes this particularly transparent. We further analysed the role of disorder in the system and showed, using toy models, that the late-time behavior in fact holds crucial information about the topological order. While our work has investigated the role of non-interacting topology on the SFF, the natural question to pose concerns the role of both interactions and long-range topological order. Given the existence of a gap, we expect our results to hold perturbatively under interactions, a systematic study of which is an exciting future prospect.

_Acknowledgement._ A.S. and S.P. acknowledge funding from IIT Kanpur. A.A.
acknowledges support from the IIT Kanpur Initiation Grant IITK/PHY/2022010. D.D. and A.S. would like to acknowledge the support provided by the Max Planck Partner Group grant MAXPLA/PHY/2018577. D.D. would also like to acknowledge the support provided by the MATRICS grant SERB/PHY/2020334.
2310.19495
Deep Learning for Visual Navigation of Underwater Robots
This paper aims to briefly survey deep learning methods for visual navigation of underwater robotics. The scope of this paper includes the visual perception of underwater robotics with deep learning methods, the available visual underwater datasets, imitation learning, and reinforcement learning methods for navigation. Additionally, relevant works will be categorized under the imitation learning or reinforcement learning paradigm for underwater robots for clarity of the training methodologies in the current landscape. Literature that uses deep learning algorithms to process non-visual data for underwater navigation will not be considered, except as contrasting examples.
M. Sunbeam
2023-10-30T12:37:49Z
http://arxiv.org/abs/2310.19495v1
# Deep Learning for Visual Navigation of Underwater Robots

###### Abstract

This paper aims to briefly survey deep learning methods for visual navigation of underwater robotics. The scope of this paper includes the visual perception of underwater robotics with deep learning methods, the available visual underwater datasets, imitation learning, and reinforcement learning methods for navigation. Additionally, relevant works will be categorized under the imitation learning or reinforcement learning paradigm for underwater robots for clarity of the training methodologies in the current landscape. Literature that uses deep learning algorithms to process non-visual data for underwater navigation will not be considered, except as contrasting examples.

deep learning, underwater, imitation learning, AUV, visual navigation, reinforcement learning

## I Introduction

This paper covers deep learning for underwater robotics. It is divided into the perception of underwater robotics, underwater visual datasets, imitation learning, and reinforcement learning for underwater environments. Visual navigation in underwater robotics is more challenging than its land or air counterparts because of the limitations of perception, particularly computer vision. When light travels through water, it is absorbed and scattered, resulting in a wavelength-dependent attenuation that disturbs the standard ways of handling vision [1]. An example of this problem is capturing images of an object whose apparent brightness and colors vary with distance. Additionally, environmental changes are more subtle underwater. Features may change from drifting sand and turbulent currents. Detecting and modeling these changes is difficult, especially because of their stochastic nature and the complex fluid equations that govern the dynamics. The hydrodynamics involved is nonlinear and time-varying, and the robot is usually underactuated, with not all of the states reachable [2]. Deep learning methods for visual navigation are explored because they offer a learning-based, data-driven way of handling these challenges.

Because autonomous underwater vehicles (AUVs) are less common due to greater hardware considerations, underwater datasets are rare and more difficult to collect. Since deep learning is a data-driven method, it is imperative to have public datasets to benchmark models.

The two main deep learning paradigms for control of an AUV through visual navigation are imitation learning and reinforcement learning. Imitation learning is used in underwater robotics for visual navigation, often mapping RGB images to control commands like roll, pitch, yaw, and throttle. The problem formulation is a supervised learning task where the objective is to learn a policy that imitates the trajectories of a human or robot demonstrator. Two major limitations of imitation learning algorithms are the compounding error arising from differences between the expert and agent trajectories, and the fact that a policy derived from such a method will not outperform the expert demonstrator [4]. Reinforcement learning is the other deep learning paradigm, using a non-differentiable, reward-based objective function to train neural networks for a desired task, in this case underwater visual navigation. Reinforcement learning differs from imitation learning in that an initial dataset is not needed, and an environment must be explored to generate the data to be trained on.
Though reinforcement learning is less sample-efficient than imitation learning, the lack of dependence on an expert demonstrator allows it to yield policies that are potentially superhuman [15]. The paper's contributions are as follows:

* Identify the works that focus on utilizing deep learning for perception in underwater visual navigation algorithms.
* Discuss the publicly available underwater datasets.
* Categorize the deep learning methods used in AUVs under imitation learning or reinforcement learning.

## II Perception

Due to the challenges of visual perception in underwater robotics, [10] offers a deep learning method to improve loop closure and detection for a visual graph SLAM algorithm for underwater navigation. Siamese networks, a neural network architecture that has enjoyed great success in similarity learning, were leveraged to detect similar visual features and places. Another work on underwater loop detection for use in a visual SLAM algorithm is [11], but it sets itself apart from [10] by training a network through an unsupervised learning method. Whereas Siamese networks are trained in a supervised manner on image pairs through a contrastive loss, the unsupervised learning method detects similar images and loop closures through the clustering of image descriptors. Finally, [9] also deals with loop closure, but the role of the neural network is more limited in scope: the neural network only selects loop candidates, which are sent to an image matcher and a geometric consistency verifier to output the final loop-detected images.

## III Underwater Datasets

There are far fewer underwater robot datasets compared with autonomous ground vehicle or autonomous aerial vehicle datasets, since collecting data underwater remains difficult due to hardware constraints and the need for a relatively uncommon environment. For deep learning methods, having a wide variety of datasets is paramount because it allows the research community to tackle the same problems, often on a vetted or curated dataset. In computer vision, deep learning innovation was primarily accelerated by the existence of ImageNet [19]. In natural language processing, the large corpus of textual data on the internet has fueled the advances of large language models. For robotics, ground vehicle datasets like CARLA [20] or KITTI [21] and aerial vehicle datasets like Mid-Air [22] have paved the way for the development of many deep learning based visual navigation algorithms. As such, [6] offers a visual dataset for AUVs navigating near the seabed. The images were collected at three different depths: the first at a few meters, the second at 270 meters, and the last at 380 meters. The purpose of this data is to provide image and inertia-pressure samples for use in simultaneous localization and mapping algorithms. A common theme is leveraging generative adversarial networks and other generative models to augment real underwater datasets with generated synthetic images. This is done in [7, 17], and [18]. A publicly available dataset, called EUVP, of images that were taken during ocean experiments for exploration and human-robot collaboration is also presented in [18].

## IV Imitation Learning

For [3], a domain-expert diver collected data to generate "good" and "bad" navigation scenarios, which were later annotated with labels of yaw and pitch to accomplish the task of exploring a coral reef while avoiding obstacles. A convolutional neural network was trained to map the images to the control commands.
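As a rough illustration of this kind of pipeline, such an image-to-command policy can be sketched in a few lines of PyTorch. The architecture, input resolution, and data below are hypothetical placeholders, not the networks of [3] or [4]:

```python
import torch
import torch.nn as nn

class BCPolicy(nn.Module):
    """Hypothetical behavior-cloning policy: RGB frame -> (yaw, pitch)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(64, 2)     # two control commands

    def forward(self, img):
        return self.head(self.features(img))

policy = BCPolicy()
frames = torch.randn(8, 3, 120, 160)     # dummy batch of camera frames
expert = torch.randn(8, 2)               # dummy expert yaw/pitch labels
loss = nn.functional.mse_loss(policy(frames), expert)
loss.backward()                          # one supervised imitation step
print("imitation loss:", float(loss))
```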
They evaluated their models through the percentage of coral covered during the navigation task in a given reef area. Similar to [3], [4] also uses a behavior cloning model to map RGB images to yaw and pitch commands. In this case, the task was to explore a shipwreck, and a convolutional neural network was used. For this work, the neural network was trained on a mixture of simulation and real-world data. The model was evaluated through a training-validation-test split, with separate test accuracies given for the real-world-only dataset, the simulation-only dataset, and the mixed dataset. The primary differences between the two papers were the task and the nature of the demonstration data. One work that stands apart is [5], which uses goal-conditioned imitation learning for underwater visual navigation. The behavior cloning model learns safe, reactive behavior for difficult terrain and is conditioned to navigate based on waypoints. The neural network architecture is a convolutional neural network, much like those in [3] and [4]. Similarly, the model was evaluated on a test set and assessed qualitatively in real life.

## V Reinforcement Learning

In [8], a soft actor-critic deep reinforcement learning algorithm was used to train a neural network. The AUV was a soft robot, and the task was to swim in a straight line in disturbed water. A camera was used to collect RGB images, which served as the observation space for the neural network. Before the deep reinforcement learning algorithm was rolled out underwater, a model was trained in a MuJoCo simulation.

Fig. 1: Redrawn network architecture diagram of the Siamese network used to detect loop closure between two underwater images in [10].

Fig. 2: Some sample training images selected from the EUVP dataset in [18].

For [15], a combination of imitation learning and reinforcement learning is used. First, a generative adversarial imitation learning algorithm is used to overcome the cold-start problem of the initial neural network training to learn a policy. Then, a reward function is designed and trained with proximal policy optimization and soft actor-critic. The results are compared in a Unity simulation. This work uses a light sensor, which differs from the other imitation learning and reinforcement learning works, as most use an RGB camera as their visual sensor.

## VI Imitation and Reinforcement Learning Categorization

The works where deep learning methods were used for only visual navigation are considered and categorized. Works like [12, 14], and [16] use deep or reinforcement learning methodologies but do not incorporate a visual sensor. While there is much more work on using neural networks on inertial, pressure, and position data from non-visual sensors, that is outside the scope of this survey. Furthermore, the specific neural network architectures used in underwater robot navigation will not be discussed in depth as in [13], but they are useful for understanding their applications in perception and control.

There are far more works using imitation learning algorithms than reinforcement learning. This can be explained by the fact that the simplest class of imitation learning algorithms, behavior cloning, is good at dealing with high-dimensional inputs, because convolutional layers can be used as a feature extractor to reduce the dimension of the images. The behavior cloning problem formulation is also easier, where the objective is imitating an expert trajectory through a supervised learning setup.
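That supervised setup amounts to minimizing \(\mathbb{E}[\|\pi_{\theta}(o)-a^{*}\|^{2}]\) over demonstration pairs \((o,a^{*})\). A hedged sketch of such a training loop, with a deliberately tiny stand-in network and random tensors in place of a real demonstration dataset:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Stand-in demonstration set: frames paired with expert commands; real data
# would come from a diver or a teleoperated AUV.
frames = torch.randn(256, 3, 120, 160)
commands = torch.randn(256, 2)
loader = DataLoader(TensorDataset(frames, commands), batch_size=32, shuffle=True)

# Deliberately tiny placeholder policy network.
policy = torch.nn.Sequential(
    torch.nn.Conv2d(3, 16, kernel_size=5, stride=4), torch.nn.ReLU(),
    torch.nn.AdaptiveAvgPool2d(1), torch.nn.Flatten(),
    torch.nn.Linear(16, 2),
)
opt = torch.optim.Adam(policy.parameters(), lr=1e-4)

for epoch in range(3):                   # minimize E[(pi_theta(o) - a*)^2]
    for obs, act in loader:
        opt.zero_grad()
        loss = torch.nn.functional.mse_loss(policy(obs), act)
        loss.backward()
        opt.step()
```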
The lack of many deep reinforcement learning works for visual navigation of AUVs can be explained by the added difficulty of devising a proper reward function on top of dealing with a noisy observation space. Moreover, the reinforcement learning agent must explore before it can exploit through an adequate policy, which introduces the overhead of creating or using a simulation for the underwater robot, as real-life exploration can be expensive and dangerous. ## VII Conclusion In this paper, we have covered the deep learning methods used to improve perception underwater, which go into improving the visual navigation algorithms that use those modules. Most of the work centers on detecting loop closures, either through similarity learning of images with Siamese networks or by using other neural network architectures to detect and select loop closure candidates, as in [9, 10], and [11]. The improvement of loop closure detection in visual perception through deep learning improves the visual SLAM algorithms that leverage it for navigation. Next, we discussed the publicly available underwater datasets, like the ones in [6] and [7]. While there are some standard public datasets, there are far too few, and even fewer for robotics applications. This can be attributed to the difficulty of creating such datasets due to hardware and environmental constraints. A common trend is to leverage deep learning methods to augment the available datasets, either by improving the quality of the images or by generating synthetic images, as in [17] and [18]. However, this type of augmentation seems more a stopgap than a permanent solution to the scarcity of underwater robotics data. Finally, we categorized the works that use deep learning methods to control an AUV under the imitation (as in [3, 4], and [5]) or reinforcement learning (as in [8] and [15]) paradigm. A pattern that becomes immediately obvious is that there exist far more imitation learning works than deep reinforcement learning works in the domain of visual navigation in underwater robotics. Another trend is that relatively few works use visual sensors in the context of imitation or reinforcement learning. This can perhaps be explained by the hardware constraints of AUVs, where limited onboard computation and battery capacity may make RGB cameras too energy-intensive. From this survey, some gaps in the field of deep learning for visual navigation of underwater robots become apparent. More underwater datasets need to be collected and made publicly available, especially for robotics applications. Even though deep learning methods are used to augment existing underwater datasets, the best way to accelerate the pace and scalability of neural networks is through more data. As in computer vision and natural language processing, curated datasets provide a standard for research and competition, both of which push innovation within the field. Another gap is the lack of reinforcement learning work for visual navigation. The preference for imitation learning is reasonable given the harder formulation of the reinforcement learning problem, but focus in this direction may address the limitations of imitation learning, such as compounding errors and the learned policy never being better than the demonstrator. Because simulation is an important component of the exploration stage of reinforcement learning, attention in this area may lead to better simulators that can address the lack of underwater datasets through synthetic data. Fig. 3: Redrawn system diagram for the deep reinforcement learning controller in [8].
Overall, the field of deep learning for visual navigation of underwater robots provides natural challenges crucial for the larger study of AI robotics. One such challenge is the fundamental problem of acting under noisy or misleading visual perception data. This offers opportunities for planning or navigation under uncertainty. Moreover, breakthroughs in the domain of deep learning for underwater visual navigation may be applied to the broader field of learning from sparse environments. Underwater environments are often characterized by sparsely distributed features. In imitation learning and deep reinforcement learning, environments with few features may make it difficult for the neural network to learn a useful policy for the desired task. For these reasons, it is crucial to place more research emphasis on the problem of visual navigation in underwater environments.
2307.13211
Magnetic Resonance Parameter Mapping using Self-supervised Deep Learning with Model Reinforcement
This paper proposes a novel self-supervised learning method, RELAX-MORE, for quantitative MRI (qMRI) reconstruction. The proposed method uses an optimization algorithm to unroll a model-based qMRI reconstruction into a deep learning framework, enabling the generation of highly accurate and robust MR parameter maps at imaging acceleration. Unlike conventional deep learning methods requiring a large amount of training data, RELAX-MORE is a subject-specific method that can be trained on single-subject data through self-supervised learning, making it accessible and practically applicable to many qMRI studies. Using quantitative $T_1$ mapping as an example in different brain, knee and phantom experiments, the proposed method demonstrates excellent performance in reconstructing MR parameters, correcting imaging artifacts, removing noise, and recovering image features under imperfect imaging conditions. Compared with other state-of-the-art conventional and deep learning methods, RELAX-MORE significantly improves efficiency, accuracy, robustness, and generalizability for rapid MR parameter mapping. This work demonstrates the feasibility of a new self-supervised learning method for rapid MR parameter mapping, with great potential to enhance the clinical translation of qMRI.
Wanyu Bian, Albert Jang, Fang Liu
2023-07-25T02:41:44Z
http://arxiv.org/abs/2307.13211v1
# Magnetic Resonance Parameter Mapping using Self-supervised Deep Learning with Model Reinforcement ###### Abstract This paper proposes a novel self-supervised learning method, RELAX-MORE, for quantitative MRI (qMRI) reconstruction. The proposed method uses an optimization algorithm to unroll a model-based qMRI reconstruction into a deep learning framework, enabling the generation of highly accurate and robust MR parameter maps at imaging acceleration. Unlike conventional deep learning methods requiring a large amount of training data, RELAX-MORE is a subject-specific method that can be trained on single-subject data through self-supervised learning, making it accessible and practically applicable to many qMRI studies. Using quantitative T\({}_{1}\) mapping as an example in different brain, knee and phantom experiments, the proposed method demonstrates excellent performance in reconstructing MR parameters, correcting imaging artifacts, removing noise, and recovering image features under imperfect imaging conditions. Compared with other state-of-the-art conventional and deep learning methods, RELAX-MORE significantly improves efficiency, accuracy, robustness, and generalizability for rapid MR parameter mapping. This work demonstrates the feasibility of a new self-supervised learning method for rapid MR parameter mapping, with great potential to enhance the clinical translation of qMRI. Quantitative MRI, Self-supervised learning, Model reinforcement, Optimization ## I Introduction Deep learning has had a profound impact on medical imaging, including MRI. The breadth of its impact includes disease diagnosis, prognosis, image analysis, and processing [1, 2]. In particular, deep learning methods for MRI reconstruction are becoming increasingly popular due to their capability to learn important image features directly from large datasets [3]. One popular method is estimating unacquired k-space data through training end-to-end convolutional neural networks (CNNs), using undersampled data as input and corresponding fully sampled data as a reference [4, 5, 6, 7, 8, 9, 10]. This enables the networks to learn the mapping function between the undersampled and fully sampled k-space data pairs. A well-trained network can then reconstruct undersampled k-space data for accelerated MRI. Despite recent advances in deep learning MRI reconstruction [3, 4, 5, 6, 7, 8, 9, 10, 11, 12], several challenges remain, including limited training data access. Acquiring fully sampled k-space data can be time-consuming and expensive, and the data collection challenge for quantitative MRI is even more pronounced. Quantitative MRI is an advanced method to quantify tissue MR parameters by modeling the acquired MR signal. For example, a popular qMRI method called variable Flip Angle (vFA) [13] for quantifying the tissue spin-lattice relaxation time (\(T_{1}\)) requires the acquisition of multiple repeated scans at different flip angles. Since each acquisition can take several minutes, the repeated scan time can be non-trivial. There has been some recent progress using deep learning-based methods to accelerate qMRI [14, 15, 16, 17, 18, 19, 20, 21, 22, 23], including MANTIS by Liu et al. [18, 19], DeepDTI by Tian et al. [20], MoDL-QSM by Feng et al. [21], and DOPAMINE by Jun et al. [22], all of which use supervised learning to enable rapid MR parameter mapping from undersampled k-space data. Unsupervised or self-supervised learning methods have recently gained increasing attention due to their reduced data demand.
In unsupervised learning methods, the algorithm can identify patterns and structures of the input data without explicit reference pairs. Self-supervised learning methods involve training a model to learn the properties and features of the input data through self-instruction. Liu et al. [23] recently proposed a self-supervised learning framework that incorporates an end-to-end CNN mapping and an MRI physics model to guide the generation of MR parameter maps. This method, referred to as REference-free LAtent map eXtraction (RELAX), has shown excellent performance in reconstructing undersampled qMRI data for \(T_{1}\) and spin-spin relaxation time (\(T_{2}\)) mapping in the brain and knee. Notably, RELAX only trains on undersampled k-space datasets without the need for fully sampled k-space for reference, greatly alleviating the data limitation. The end-to-end CNN mapping in RELAX is critical in converting the undersampled k-space data into MR parameter maps through cross-domain learning, although the end-to-end mapping (e.g., a deep U-Net) requires high computing resources such as GPU memory, needs diverse training datasets to provide sufficient image features, and lacks convergence guarantees due to its nonlinear network operations and deep structures. Optimization algorithms can be used to unroll deep neural networks and have been applied for MRI reconstruction [24, 25, 26]. Those algorithms stem from classic mathematical methods to solve constrained optimization problems and often have well-defined solutions. For example, the Variational Network (VN) [4] unrolls a variational model by iteratively updating along the gradient descent direction. ADMM-net [27] learns to unroll the alternating direction method of multipliers (ADMM). ISTA-net [28] unrolls the iterative shrinkage thresholding algorithm (ISTA) to solve the reconstruction problem with \(\ell_{1}\) regularizations. PD-net [29] iterates the learnable primal-dual hybrid gradient algorithm (PDHG) and learns the primal and dual variables in an alternating fashion. This paper proposes a new deep learning method for qMRI reconstruction by marrying self-supervised learning and an optimization algorithm to unroll the learning process. This proposed method is an extension of RELAX [23], thus referred to as RELAX with MOdel REinforcement (RELAX-MORE), which jointly enforces the optimization and MR physics models for rapid MR parameter mapping. RELAX-MORE aims to enable efficient, accurate, and robust MR parameter estimation while achieving subject-specific learning, featuring training on single-subject data to further address the data limitation. The rest of the paper is organized as follows: Section II introduces the theory and a bi-level optimization scheme of RELAX-MORE. Section III details the implementation of the lower-level and upper-level optimization and describes the experiment setup. Section IV provides the results and discussion. Section V concludes the paper. ## II Theory ### Supervised Learning and Self-supervised Learning for Accelerated MRI Reconstruction Let \(\mathrm{x}\in\mathbb{C}^{N_{x}\times N_{y}\times N_{z}\times N_{c}}\) be the measured MR image with a resolution \(N_{x}\times N_{y}\times N_{z}\), where \(x,y,z\) are spatial dimensions and \(N_{c}\) denotes the number of coil elements in a typical multi-coil MRI reconstruction scenario.
The corresponding undersampled k-space measurement \(\mathrm{f}\in\mathbb{C}^{N_{kx}\times N_{ky}\times N_{kz}\times N_{c}}\) is given by \(\mathrm{f}=\mathbf{U}\mathcal{F}\mathbf{C}\mathrm{x}+n\), which incorporates the MRI encoding process, where \(\mathbf{U}\) is an undersampling mask for accelerated acquisition, \(\mathcal{F}\) is the discrete Fourier transform, \(\mathbf{C}\) denotes the coil sensitivity map, \(n\in\mathbb{C}^{N_{kx}\times N_{ky}\times N_{kz}\times N_{c}}\) is the signal noise during data acquisition, and \(kx,ky,kz\) are k-space dimensions. Supervised learning uses the fully sampled image \(\mathrm{x}^{*}\in\mathbb{C}^{N_{x}\times N_{y}\times N_{z}\times N_{c}}\) as reference, and the goal is to restore the image content from undersampled k-space data by learning network parameters \(\Theta\) through minimizing the following loss function: \[\min_{\Theta}\ell\left(\mathrm{x}\left(\mathrm{f}|\Theta\right),\mathrm{x}^{*}\right). \tag{1}\] The loss function \(\ell\) assesses the difference between the fully sampled \(\mathrm{x}^{*}\) and the network output, i.e., the reconstructed image \(\mathrm{x}\left(\mathrm{f}|\Theta\right)\). Once trained with a large number of data pairs \(\left(\mathrm{f},\mathrm{x}^{*}\right)\), the network learns the mapping between undersampled measurements \(\mathrm{f}\) and fully sampled data \(\mathrm{x}^{*}\) for further inference. In self-supervised learning, the network is trained to learn the mapping from undersampled k-space data \(\mathrm{f}\) to itself. However, the MR physics model for image formation must be incorporated into the learning pipeline to guide the training process. The loss function in Equation (1) can be reformulated as: \[\min_{\Theta}\ell\left(\mathbf{U}\mathcal{F}\mathbf{C}\mathrm{x}\left(\mathrm{f}|\Theta\right),\mathrm{f}\right). \tag{2}\] Notably, minimizing Equation (2) involves no fully sampled data and relies solely on learning the intrinsic structure and properties of the undersampled data itself, making it a powerful concept for deep learning methods with limited data [30; 31; 11]. ### Self-supervised Learning for Accelerated qMRI Reconstruction To extend from MRI reconstruction to qMRI reconstruction, the MR parameter maps can be denoted as \(\Delta=\left\{\delta_{i}\right\}_{i=1}^{N}\), where \(\delta_{i}\) represents each MR parameter and \(N\) is the total number of MR parameters to be estimated. Given the MR signal model \(\mathbf{M}\), which is a function of \(\Delta\), and the fully sampled image \(\mathrm{x}^{*}\), the MR parameters \(\Delta\) can be estimated by minimizing the following problem: \[\min_{\Delta}\ell\left(\mathbf{M}(\Delta),\mathrm{x}^{*}\right). \tag{3}\] This is also referred to as model-based qMRI reconstruction. Here, the MR image \(\mathrm{x}\in\mathbb{C}^{N_{x}\times N_{y}\times N_{z}\times N_{c}\times N_{k}}\) and its undersampled k-space data \(\mathrm{f}\in\mathbb{C}^{N_{kx}\times N_{ky}\times N_{kz}\times N_{c}\times N_{k}}\) need to include \(N_{k}\) measurements at varying imaging parameters dependent on the individual qMRI method. Equation (3) can then be formulated as: \[\min_{\Delta}\ell\left(\mathbf{U}\mathcal{F}\mathbf{C}\mathbf{M}\left(\Delta\right),\mathrm{f}\right), \tag{4}\] which incorporates the MRI encoding operators.
The above minimization problem (4) can be solved in self-supervised learning by minimizing the following loss function: \[\min_{\Theta}\ell\left(\mathbf{U}\mathcal{F}\mathbf{C}\mathbf{M}\left(\Delta\left(\mathrm{f}|\Theta\right)\right),\mathrm{f}\right), \tag{5}\] where the quantitative MR parameters \(\Delta\left(\mathrm{f}|\Theta\right)\) are parametrized by deep neural networks with learnable network parameters. In RELAX, minimizing Equation (5) with respect to the trainable network parameters \(\Theta\) amounts to training a deep neural network, via network backpropagation, as an end-to-end mapping network. ### The RELAX-MORE Mechanism The proposed RELAX-MORE extends the RELAX framework by further unrolling the mapping network using an optimization algorithm. This new method is designed to solve a bi-level optimization problem, where the mapping network learning and quantitative MR parameter estimation can be jointly reformulated as \[\min_{\Theta} \ell(\mathbf{U}\mathcal{F}\mathbf{C}\mathbf{M}(\Delta\left(\mathrm{f}|\Theta\right)),\mathrm{f})\quad\text{s.t.} \tag{6a}\] \[\Delta(\mathrm{f}|\Theta)=\operatorname*{arg\,min}_{\Delta}\phi(\Delta),\] (6b) \[\text{where}\quad\phi\left(\Delta\right):=\tfrac{1}{2}\parallel\mathbf{U}\mathcal{F}\mathbf{C}\mathbf{M}\left(\Delta\right)-\mathrm{f}\parallel_{2}^{2}+\lambda\mathcal{R}\left(\Delta\right). \tag{6c}\] Like RELAX, the upper level (6a) is a backward training process that optimizes the network parameters \(\Theta\) by minimizing the loss function \(\ell\) via normal network backpropagation. However, unlike RELAX, the lower level (6b) can be implemented as a forward process that solves for the optimal solution of \(\min\limits_{\Delta}\phi\left(\Delta\right)\) via a gradient descent based method. The objective function \(\phi\) is a regularized least squares model for quantitative MR parameter optimization. The weighted regularization \(\lambda\mathcal{R}\) provides prior information pertinent to the desired MR parameter maps. Hand-crafted regularizations can be employed, such as total variation (TV) (\(\|\cdot\|_{TV}\)) or the \(\ell_{1}\) norm (\(\|\cdot\|_{1}\)), to promote different aspects of image features. Recent studies show that regularization \(\mathcal{R}_{\Theta}\) using CNNs can improve feature characterization, surpassing the hand-crafted priors [32, 33]. The weight \(\lambda\) can be learned and integrated into \(\mathcal{R}_{\Theta}\). The objective function \(\phi_{\Theta}\) depends on the network parameters \(\Theta\) learned from the regularizers, which can be parametrized as the summation of CNNs with the \(\ell_{2,1}\) norm: \[\mathcal{R}_{\Theta}(\Delta):=\sum_{i=1}^{N}\|\mathcal{D}_{\Theta_{i}}(\delta_{i})\|_{2,1}. \tag{7}\] Each \(\mathcal{D}_{\Theta_{i}}\) learns to extract the prior features of a different MR parameter \(\delta_{i}\), for each \(i=1,\ldots,N\). Therefore, RELAX-MORE for qMRI reconstruction can be mathematically described as solving a CNN-regularized bi-level optimization problem: \[\min\limits_{\Theta} \ell(\mathbf{U}\mathcal{F}\mathbf{CM}(\Delta\left(\mathrm{f}|\Theta\right)),\mathrm{f})\quad\text{s.t.} \tag{8a}\] \[\Delta(\mathrm{f}|\Theta)=\operatorname*{arg\,min}\limits_{\Delta}\phi_{\Theta}(\Delta)=\operatorname*{arg\,min}\limits_{\Delta}\mathcal{H}(\Delta)+\mathcal{R}_{\Theta}(\Delta), \tag{8b}\] where the data fidelity term is \(\mathcal{H}(\Delta)=\frac{1}{2}\|\mathbf{U}\mathcal{F}\mathbf{CM}\left(\Delta\right)-\mathrm{f}\|_{2}^{2}\).
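To make the data-fidelity term \(\mathcal{H}(\Delta)\) concrete, the sketch below composes the signal model \(\mathbf{M}\), coil sensitivities \(\mathbf{C}\), Fourier transform \(\mathcal{F}\) and undersampling mask \(\mathbf{U}\) into the self-supervised loss of Equation (8a). This is a minimal PyTorch illustration, not the authors' released code; the tensor layouts and the generic `signal_model` callable are assumptions.

```python
import torch

def encode(x, csm, mask):
    """MRI encoding UFC: coil sensitivities, 2D Fourier transform, undersampling."""
    coil_imgs = csm * x.unsqueeze(0)                    # C x: (n_coils, H, W)
    kspace = torch.fft.fftn(coil_imgs, dim=(-2, -1))    # F: per-coil 2D FFT
    return mask * kspace                                # U: keep only sampled locations

def self_supervised_loss(params, f_measured, csm, mask, signal_model):
    """l(UFCM(Delta), f): compare the re-encoded model output with measured k-space."""
    x = signal_model(params)                            # M(Delta): image from MR parameters
    residual = encode(x, csm, mask) - f_measured
    return residual.abs().pow(2).mean()
```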
## III Methods ### The Lower Level Optimization: Forward Unrolling of a Learnable Descent Algorithm Proximal gradient descent, a widely used iterative algorithm, is implemented to unroll the lower level optimization (8b) in RELAX-MORE. For each unrolled iterative phase \(t=1,\ldots,T\), the following steps are used to update the MR parameter \(\delta_{i}\) for each \(i=1,\ldots,N\): \[\bar{\delta}_{i}^{(t)} =\delta_{i}^{(t-1)}-\alpha_{i}^{(t)}\nabla\tfrac{1}{2}\ \|\ \mathbf{U}\mathcal{F}\mathbf{CM}(\{\delta_{i}^{(t-1)}\}_{i=1}^{N})-\mathrm{f}\ \|_{2}^{2} \tag{9a}\] \[\delta_{i}^{(t)} =\mathrm{prox}_{\mathcal{R}_{\Theta},\rho_{t}}(\bar{\delta}_{i}^{(t)}), \tag{9b}\] where the proximal operator for the regularization \(\lambda\mathcal{R}(z)\) is defined as: \[\mathrm{prox}_{\lambda\mathcal{R},\rho}(y):=\operatorname*{arg\,min}\limits_{z}\tfrac{\rho}{2}\ \|\ z-y\|_{2}^{2}+\lambda\mathcal{R}(z). \tag{10}\] This step involves implementing the proximal operation for the regularization \(\mathcal{R}\), which is equivalent to finding the maximum-a-posteriori solution for the Gaussian denoising problem at a noise level \(\sqrt{\lambda/\rho}\) [34, 35]; thus the proximal operator can be interpreted as a Gaussian denoiser. However, because the proximal operator \(\mathrm{prox}_{\mathcal{R}_{\Theta},\rho_{t}}\) in the objective function (8b) does not admit a closed-form solution, a CNN is used to substitute \(\mathrm{prox}_{\mathcal{R}_{\Theta},\rho_{t}}\), where the network is constructed as a residual learning network \(\mathcal{G}_{\Theta}\). The network \(\mathcal{G}_{\Theta}\) is composed of the CNN \(\mathcal{D}_{\Theta}\), which reflects Equation (7), and its adjoint \(\widetilde{\mathcal{D}}_{\Theta}\) with a symmetric but reversed architecture to increase the network capability, with a soft shrinkage operator \(\mathcal{S}_{\beta}\) applied in between. All the substitutions have been mathematically shown to be effective in [28, 36]. Then step (9b) becomes \[\delta_{i}^{(t)} =\mathcal{G}_{\Theta_{i}}(\bar{\delta}_{i}^{(t)})+\bar{\delta}_{i}^{(t)}, \tag{11a}\] \[=\widetilde{\mathcal{D}}_{\Theta_{i}}\circ\mathcal{S}_{\beta_{i}^{(t)}}\circ\mathcal{D}_{\Theta_{i}}(\bar{\delta}_{i}^{(t)})+\bar{\delta}_{i}^{(t)},\ \forall i=1,\ldots,N, \tag{11b}\] where the soft thresholding operator is \(\mathcal{S}_{\beta^{(t)}}(y)=\mathrm{prox}_{\beta^{(t)}\|\cdot\|_{2,1}}(y)=[\mathrm{sign}(y_{i})\max(|y_{i}|-\beta^{(t)},0)]\) for a vector \(y=(y_{1},\ldots,y_{n})\in\mathbb{R}^{n}\), and \(\beta^{(t)}\) is a soft thresholding parameter that is updated at each phase \(t=1,\ldots,T\). In summary, the forward learnable proximal gradient descent algorithm that achieves the unrolled lower level optimization in RELAX-MORE is detailed in the Algorithm: ```
1  Input: \(\delta_{i}^{(0)},\alpha_{i}^{(0)},\beta_{i}^{(0)}\), \(i=1,\cdots,N\)
2  for \(t=1\) to \(T\) do
3    for \(i=1\) to \(N\) do
4      \(\bar{\delta}_{i}^{(t)}=\delta_{i}^{(t-1)}-\alpha_{i}^{(t)}\nabla\tfrac{1}{2}\|\mathbf{U}\mathcal{F}\mathbf{CM}(\{\delta_{i}^{(t-1)}\}_{i=1}^{N})-\mathrm{f}\|_{2}^{2}\)
5      \(\delta_{i}^{(t)}=\widetilde{\mathcal{D}}_{\Theta_{i}}\circ\mathcal{S}_{\beta_{i}^{(t)}}\circ\mathcal{D}_{\Theta_{i}}(\bar{\delta}_{i}^{(t)})+\bar{\delta}_{i}^{(t)}\)
6    \(\mathrm{x}^{(t)}=\mathbf{M}(\{\delta_{i}^{(t)}\}_{i=1}^{N})\)
7  output \(\{\delta_{i}^{(T)}\}_{i=1}^{N}\) and \(\mathrm{x}^{(t)},\forall t\in\{1,...,T\}\)
``` **Algorithm: Forward Learnable Descent Algorithm for solving (8b)** The soft thresholding parameter \(\beta^{(t)}\) and step size \(\alpha^{(t)}\) are learnable and updated during each phase.
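As a minimal sketch of one unrolled phase of this Algorithm, the following PyTorch-style code performs the gradient step (9a) followed by the learned residual proximal step (11b); `grad_data_fidelity`, `D` and `D_adj` are stand-ins for the data-fidelity gradient and the learned transform pair \(\mathcal{D}_{\Theta_i}\), \(\widetilde{\mathcal{D}}_{\Theta_i}\), not code from the paper.

```python
import torch

def soft_threshold(z, beta):
    """S_beta: elementwise soft shrinkage applied between D and its adjoint."""
    return torch.sign(z) * torch.clamp(z.abs() - beta, min=0.0)

def unrolled_phase(delta, alpha, beta, grad_data_fidelity, D, D_adj):
    """One phase t: gradient step (9a), then the learned residual proximal step (11b)."""
    delta_bar = delta - alpha * grad_data_fidelity(delta)          # descend on the data-fidelity term
    return D_adj(soft_threshold(D(delta_bar), beta)) + delta_bar   # residual Gaussian-denoiser proxy
```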
The final outputs of the algorithm are the estimated intermediate MR images \(\mathrm{x}^{(t)},\forall t=1,\ldots,T\) for all iterative phases and the reconstructed MR parameter maps \(\delta_{i}^{(T)},\forall i=1,\ldots,N\) of the last phase. ### Initial Input of the Algorithm Using an initial MR parameter input \(\delta_{i}^{(0)}\) that is closer to the optimal solution of the Algorithm can lead to better results and eliminate the need for excessive iterations in gradient descent. To achieve a favorable input, initialization networks are used to learn \(\delta_{i}^{(0)},\forall i=1,\ldots,N\) from the undersampled k-space data. First, the undersampled k-space data is converted into the image domain, followed by a Coil Combination Network to combine all coil-wise images. \(N_{k}\) coil-combined images are produced to represent the \(N_{k}\) measurements during qMRI acquisition. Next, all images are concatenated together as input into the \(\delta\)-initialization Network to obtain \(\delta_{i}^{(0)},\forall i=1,\ldots,N\). The initial MR images can also be obtained from the signal model using \(\mathrm{x}^{(0)}=\mathbf{M}(\{\delta_{i}^{(0)}\}_{i=1}^{N})\), which will subsequently be used in the loss function \(\ell\) for training the overall network framework in the upper level optimization in Equation (8a). ### The Upper Level Optimization: Backward Training of the Overall Network The loss function (8a) for RELAX-MORE is a summation of the weighted mean squared error (MSE) between the undersampled k-space measurement \(\mathrm{f}\) and the k-space representation of the estimated MR images \(\text{x}^{(t)},\forall t=0,\ldots,T\) for all unrolled phases. Therefore, the overall network training objective in the upper level optimization is to minimize the loss function \(\ell\), explicitly expressed as \[\ell(\mathbf{U}\mathcal{F}\text{Cx}^{(t)},\text{f})=\sum_{t=0}^{T}\gamma_{t}\parallel\mathbf{U}\mathcal{F}\text{Cx}^{(t)}-\text{f}\parallel_{2}^{2}, \tag{12}\] where \(\gamma_{t}\) are the weighting factors for the MSE, ordered by the importance of the output from each phase and the initial \(\text{x}^{(0)}\). The detailed framework of the proposed RELAX-MORE is illustrated in Fig. 1. The Coil Combination Network and \(\delta\)-initialization Network apply 4 repeats of Conv(\(3\times 3\) kernel size, 32 kernels), each followed by a ReLU activation function. The last convolution layer has a single-kernel Conv(\(3\times 3\), 1). The CNN \(\mathcal{D}\) applies 8 Conv(\(3\times 3\), 64) layers with ReLU, and its last convolution layer also has a single-kernel Conv(\(3\times 3\), 1). The CNN \(\widetilde{\mathcal{D}}\) applies a reversed network structure. ### Experiment Setup To investigate the feasibility of the RELAX-MORE method for reconstructing accelerated qMRI, we used the widely studied \(T_{1}\) mapping through the vFA method [13] as an example. Herein, the MR signal model \(\mathbf{M}\) can be expressed as: \[\mathbf{M}_{k}(\mathbf{T}_{1},\mathbf{I}_{0})=\mathbf{I}_{0}\cdot\frac{(1-e^{-TR/\mathbf{T}_{1}})\sin\eta_{k}}{1-e^{-TR/\mathbf{T}_{1}}\cos\eta_{k}}, \tag{13}\] where several MR images are acquired at multiple flip angles \(\eta_{k}\) for \(k=1,\ldots,N_{k}\). \(\mathbf{T}_{1}\in\mathbb{R}^{N_{x}\times N_{y}\times N_{z}}\) and \(\mathbf{I}_{0}\in\mathbb{C}^{N_{x}\times N_{y}\times N_{z}}\) are the spin-lattice relaxation time map and proton density map, respectively, reflecting the imaged tissue relaxation properties, with the \(T_{1}\) value sensitive to many brain and knee joint diseases.
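As an illustration, the vFA signal model of Equation (13) translates directly into code; the sketch below assumes flip angles supplied in radians and 2D parameter maps, which are illustrative choices rather than details from the paper.

```python
import torch

def vfa_signal(T1, I0, flip_angles, TR):
    """Spoiled gradient-echo vFA signal M_k(T1, I0), one image per flip angle eta_k."""
    E1 = torch.exp(-TR / T1)                     # e^{-TR/T1}, same shape as T1
    signals = [I0 * (1 - E1) * torch.sin(eta) / (1 - E1 * torch.cos(eta))
               for eta in flip_angles]           # flip angles in radians
    return torch.stack(signals, dim=0)           # shape (N_k, *T1.shape)

# Example with illustrative values: 176x176 maps, FA = 5/10/20/40 degrees, TR = 40 ms
T1 = torch.full((176, 176), 1000.0)              # ms
I0 = torch.ones(176, 176)
etas = torch.deg2rad(torch.tensor([5.0, 10.0, 20.0, 40.0]))
images = vfa_signal(T1, I0, etas, TR=40.0)
```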
The set of MR parameters estimated in this model is \(\Delta=\{\mathbf{T}_{1},\mathbf{I}_{0}\}\). The flip angles and other imaging parameters, such as the repetition time TR, are pre-determined. The experiments include in-vivo studies on the brain and knee of healthy volunteers and ex-vivo phantom studies, all of which were carried out on a Siemens 3T Prisma scanner. For the brain study, the purpose is to investigate the efficiency and performance of RELAX-MORE and compare it with other state-of-the-art reconstruction methods. The vFA on the brain of five subjects was performed in the sagittal plane using a spoiled gradient echo sequence at imaging parameters TE/TR = \(12/40\) ms, FA = \(5^{\circ},10^{\circ},20^{\circ},40^{\circ}\), FOV = \(230\times 230\times 160\) mm and matrix size = \(176\times 176\times 48\) with a dedicated 20-channel head coil. For the knee study, the purpose is to investigate the robustness of RELAX-MORE against severe noise contamination and image imperfection. Therefore, the vFA on one knee was performed using a 4-channel receiving-only flex coil at parameters TE/TR = \(12/70\) ms, FA = \(10^{\circ},20^{\circ},40^{\circ}\), FOV = \(160\times 137\times 108\) mm and matrix size \(224\times 192\times 36\). For the phantom study, the purpose is to investigate the reconstruction accuracy of RELAX-MORE. The vFA phantom data was acquired along the coronal plane with sequence parameters TE/TR = \(12/80\) ms, FA = \(5^{\circ},10^{\circ},20^{\circ},40^{\circ},60^{\circ}\), FOV = \(170\times 170\times 60\) mm and matrix size \(128\times 128\times 12\) using the 20-channel head coil. In all 3 studies, to minimize bias on \(T_{1}\) estimation due to \(B_{1}^{+}\) inhomogeneity and imperfect spoiling, \(B_{1}^{+}\) maps were separately acquired for compensation [37], and \(169^{\circ}\) linear RF phase increments between subsequent repetition cycles and strong gradient spoilers were applied to minimize the impact of imperfect spoiling [38]. Figure 1: Schematic framework for implementing RELAX-MORE. The fully acquired MRI k-space data were undersampled retrospectively using two undersampling schemes: 1) 1D Cartesian variable density undersampling at acceleration factor AF = \(3\times\) with the 16 central k-space lines fully sampled (Fig. 2(a)) and 2) 2D Poisson disk undersampling at AF = \(4\times\) with the central \(51\times 51\) k-space portion fully sampled (Fig. 2(b)). The undersampling patterns were varied for each flip angle, as in previous studies [18, 23]. The coil sensitivity maps were estimated using ESPIRiT [39]. The selection of network hyperparameters was non-trivial; they were optimized by empirically searching over a reasonably large range. More specifically, we used initial \(\alpha_{i}^{(0)}=0.1,\beta_{i}^{(0)}=1e^{-5}\) for brain images; initial \(\alpha_{i}^{(0)}=0.5,\beta_{i}^{(0)}=1e^{-7}\) for knee images; and \(\alpha_{i}^{(0)}=0.9,\beta_{i}^{(0)}=1e^{-3}\) for phantom images. The loss function applied \(\gamma_{t}=1\) at \(t=T\) and \(\gamma_{t}=1e^{-4}\) for the other phase steps. The network parameters \(\Theta\) were initialized using Xavier initialization [40]. Because RELAX-MORE trains on single-subject data, the batch size was set to include the entire image volume. The network was trained for \(10,000\) epochs using the Adam optimizer [41]. The learning rate, which controls the step size taken in each iteration of optimizing the loss function, was set to \(1e^{-4}\) for brain and knee images, and \(5e^{-4}\) for the phantom image.
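For illustration, the 1D Cartesian variable-density undersampling described above (16 fully sampled central lines at AF = 3x) might be generated as follows; the Gaussian density profile is an assumption, since the paper does not specify its exact sampling density.

```python
import numpy as np

def cartesian_vd_mask(n_lines, accel=3, n_center=16, seed=0):
    """1D variable-density Cartesian mask: fully sampled center, random outer lines."""
    rng = np.random.default_rng(seed)
    mask = np.zeros(n_lines, dtype=bool)
    center = n_lines // 2
    mask[center - n_center // 2:center + n_center // 2] = True   # 16 central k-space lines
    k = np.arange(n_lines)
    prob = np.exp(-0.5 * ((k - center) / (n_lines / 4)) ** 2)    # assumed Gaussian density
    prob[mask] = 0.0                                             # do not resample the center
    n_extra = max(n_lines // accel - int(mask.sum()), 0)
    extra = rng.choice(n_lines, size=n_extra, replace=False, p=prob / prob.sum())
    mask[extra] = True
    return mask
```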
It should be noted that, unlike conventional deep learning reconstruction methods where training and testing are two separate steps, in RELAX-MORE with subject-specific self-supervised learning, the reconstruction is readily concluded once the training has converged for one subject. All the programming in this study was implemented using the Python language and the PyTorch package, and experiments were conducted on one NVIDIA A100 80GB GPU and an Intel Xeon 6338 CPU on a CentOS Linux system. ## IV Results ### Impact of Unrolling the Gradient Descent Algorithm The reconstruction performance of RELAX-MORE is affected by the degree of unrolling of the gradient descent algorithm. Fig. 3 demonstrates the evolution of the estimated \(T_{1}\) maps from fully sampled data at different total unrolling phase numbers, from \(T=3\) to \(T=9\), of the gradient descent algorithm. A larger \(T\) reflects a deeper unrolling operation and thus requires more computing resources. Referring to the zoom-in views of Fig. 3, one can observe that with increasing total phase number, the algorithm starts to better distinguish the \(T_{1}\) values of the white matter (WM), grey matter (GM), and cerebrospinal fluid (CSF) regions. Initially blurry at lower \(T\), the tissue details start to sharpen with increasing \(T\), better resembling the fully sampled \(T_{1}\) map obtained using the standard pixel-wise fitting. However, the differences between phases \(T=7,8\) and \(9\) are negligible, indicating that the performance gain reaches a plateau with a sufficient depth of unrolling steps. Therefore, to ensure a consistent experiment setup and balance the trade-off between algorithm complexity and reconstruction performance, \(T\) was set to \(8\) for all the experiments thereafter. This result illustrates the effectiveness of unrolling the gradient descent algorithm in the RELAX-MORE framework. ### Comparison with State-of-the-Art Methods RELAX-MORE was compared with two state-of-the-art non-deep-learning qMRI reconstruction methods and one self-supervised deep learning method. These methods include 1) Locally Low Rank (LLR) [42], where image reconstruction was first performed, followed by pixel-wise parameter fitting. LLR exploits the rank-deficiency of local image regions along the acquisition parameter dimension to accelerate parameter mapping; 2) Model-TGV [43], a model-based qMRI reconstruction method that improves the piece-wise constant region restriction of total variation through a generalization of the total variation theory; and 3) RELAX [23], an end-to-end self-supervised deep learning method for rapid MR parameter mapping. LLR and Model-TGV were implemented using the recommended optimization parameters in their original papers and code. Contrary to the original RELAX implementation, which was trained using many subjects, in the interest of a fair comparison, our implementation of RELAX was carried out with single-subject training using an Adam optimizer with \(10,000\) epochs. \(T_{1}\) maps estimated from \(3\times\) 1D Cartesian variable density undersampling using the different methods are presented in Fig. 4. As can be observed in Fig. 4(a), the zero-filled \(T_{1}\) map obtained by pixel-wise fitting the undersampled images is noisy, displaying ripple artifacts due to aliasing. Although LLR can partially remove these artifacts, it retains the noisy signature. Model-TGV averages out the noise, producing a cleaner tissue appearance and better \(T_{1}\) contrast, but it is over-smoothed, resulting in blurry maps.
On the other hand, RELAX removes noise and artifacts and produces sharpened maps, but the results remain somewhat blocky. This is hypothesized to be due to the difficulty of converging the end-to-end network using single-subject data. RELAX-MORE produces artifact-free \(T_{1}\) maps with excellent noise removal, resulting in good performance in appearance and contrast. This is further witnessed in the zoom-in view of Fig. 4(b), where the zero-filled, LLR, and even fully sampled images cannot reliably estimate \(T_{1}\) in the cervical spine, where the signal is low due to insufficient head coil coverage. Figure 2: Exemplified masks: (a) \(3\times\) 1D Cartesian variable density undersampling mask and (b) \(4\times\) 2D Poisson disk undersampling mask. Those pixel-wise fitting methods are typically prone to image noise contamination. Although Model-TGV and RELAX can better estimate \(T_{1}\) in these low-SNR regions, the blurry Model-TGV results make it difficult to distinguish between the spinal cord and subarachnoid space, whereas RELAX shows a disconnect in the subarachnoid space which may be due to over-sharpening. However, RELAX-MORE shows remarkably good performance in maintaining contrast and tissue details in those regions, enabling a clear distinction between the spinal cord and subarachnoid space regions. In Fig. 4(c) of the cerebrum zoom-in view, RELAX-MORE consistently exhibits the best reconstruction performance in correcting artifacts, removing noise, and preserving tissue contrast and details. Referring to the error maps (Fig. 4(d)), which are the absolute difference between the estimated \(T_{1}\) maps in Fig. 4(c) and the fully sampled \(T_{1}\) map, overall, zero-filled shows the most significant error, followed by LLR. Figure 3: \(T_{1}\) maps generated from RELAX-MORE at different phases of the forward learnable descent algorithm (\(3\leq T\leq 9\)) and the \(T_{1}\) map obtained from fully sampled data using the standard pixel-wise fitting. While the performance gain is significant with increasing phases at lower phase numbers, only incremental performance gain is observed after phase 7. Figure 4: (a) Qualitative comparison among different methods using the \(3\times\) 1D Cartesian variable density undersampling mask; (b) Zoom-in view with details of the region outside the brain; (c) Zoom-in view with in-brain detail; (d) pixel-wise error maps of the in-brain region details. Model-TGV and RELAX exhibit similar error maps, whereas RELAX-MORE produces the least error. \(T_{1}\) maps estimated from \(4\times\) 2D Poisson disk undersampling are shown in Fig. 5. Comparing the images in Fig. 5(a), the overall signature differences are similar to the \(3\times\) 1D undersampling case, except that the undersampling artifacts are noise-like due to the 2D undersampling. Referring to the zoom-in view and comparing the estimated \(T_{1}\) maps again shows similar differences to the \(3\times\) 1D undersampling case. However, compared with other methods, RELAX-MORE can clearly distinguish between the WM and GM \(T_{1}\) of the cerebellum in Fig. 5(b), particularly in the posterior part. The absolute difference maps taken between the estimated \(T_{1}\) maps in Fig. 5(c) and the fully sampled map, shown in Fig. 5(d), exhibit similar results to the \(3\times\) 1D undersampling case, with RELAX-MORE showing the least error.
This is likely achieved by integrating deep learning capability with the unrolled gradient descent algorithm, whose proximal operator prioritizes noise suppression without compromising the fidelity and clarity of the underlying tissue structure. Further performance comparison was carried out using the peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), and normalized mean squared error (NMSE) as evaluation metrics for the \(T_{1}\) maps. PSNR is a measure of the quality of the reconstruction, while SSIM is a measure of the similarity between two maps. PSNR, SSIM, and NMSE are defined in the following: \[PSNR(\rm{v},\rm{v}^{*})=20\log_{10}\Big(\max(|\rm{v}^{*}|)\Big/\sqrt{\tfrac{1}{N}\parallel\rm{v}-\rm{v}^{*}\parallel_{2}^{2}}\Big), \tag{14}\] \[SSIM(\rm{v},\rm{v}^{*})=\frac{(2\mu_{\rm{v}}\mu_{\rm{v}^{*}}+c_{1})(2\sigma_{\rm{vv}^{*}}+c_{2})}{(\mu_{\rm{v}}^{2}+\mu_{\rm{v}^{*}}^{2}+c_{1})(\sigma_{\rm{v}}^{2}+\sigma_{\rm{v}^{*}}^{2}+c_{2})},\] (15) \[NMSE(\rm{v},\rm{v}^{*})=\parallel\rm{v}^{*}-\rm{v}\parallel_{2}^{2}/\parallel\rm{v}^{*}\parallel_{2}^{2}, \tag{16}\] where \(\rm{v},\rm{v}^{*}\) represent the estimated map and reference map, \(\mu_{\rm{v}},\mu_{\rm{v}^{*}}\) are local means of pixel intensity, \(\sigma_{\rm{v}},\sigma_{\rm{v}^{*}}\) are the standard deviations, and \(\sigma_{\rm{vv}^{*}}\) is the covariance between \(\rm{v}\) and \(\rm{v}^{*}\); \(c_{1}=(K_{1}L)^{2},c_{2}=(K_{2}L)^{2}\) are two constants that prevent the denominator from being zero, with \(K_{1}=0.01,K_{2}=0.03\), and \(L\) is the largest pixel value of the image magnitude. \(T_{1}\) maps obtained from fully sampled data were used as \(\rm{v}^{*}\), the reference to compare the performance among the different methods. The metric calculations were carried out on results from \(3\times\) 1D Cartesian variable density undersampling, using the brain regions from all five subjects. The analysis results are presented in Fig. 6, where the average values of PSNR, SSIM and NMSE, along with the corresponding standard deviations among the five subjects, are presented using box-whisker plots. Figure 5: (a) Qualitative comparison among different methods using the \(4\times\) 2D Poisson disk undersampling mask; (b) Zoom-in view with details of the cerebellum; (c) Zoom-in view with in-brain detail; (d) pixel-wise error maps of the in-brain region details. For both PSNR and SSIM, RELAX-MORE produces the highest mean value, followed by RELAX, Model-TGV and LLR. Both the PSNR and SSIM of RELAX-MORE show the highest consistency, as evidenced by its small standard deviation compared to the other methods. Zero-filled shows the lowest mean values for PSNR and SSIM with the largest standard deviation, which can be attributed to aliasing artifacts. RELAX-MORE shows the lowest mean NMSE, followed by RELAX, Model-TGV and LLR. Zero-filled shows the highest mean NMSE. Overall, in agreement with the qualitative observations in Figs. 4 and 5, RELAX-MORE performs superiorly in terms of reconstruction fidelity, structure and texture preservation, and noise suppression, outperforming the other state-of-the-art methods. ### Ablation Study: Accuracy and Robustness An ablation study was further carried out to evaluate the parameter estimation accuracy of the proposed method. To analyze RELAX-MORE in a discrete tissue environment, it was applied to undersampled data from a phantom composed of five 20 mL vials consisting of peanut oil, \(2\%,4\%,8\%\) Agar dissolved in DI water, and boiled egg white, all immersed in a 200 \(\mu\)M \(\mathrm{MnCl_{2}}\) water bath (Fig. 7(a)).
From left to right, Fig. 7(b) shows the \(T_{1}\) maps obtained from zero-filled data, RELAX-MORE, and fully sampled data. The zero-filled \(T_{1}\) map exhibits ripple artifacts due to the undersampling, whereas these artifacts are eliminated in RELAX-MORE. Comparing the line profile indicated by the line in Fig. 7(b) and shown in Fig. 7(c), the zero-filled \(T_{1}\) profile resembles amplitude modulation, where the periodic amplitude fluctuation stems from aliasing effects and the higher-frequency fluctuation reflects noise. In good agreement with the fully sampled \(T_{1}\) profile, RELAX-MORE's \(T_{1}\) profile not only removes the periodic amplitude fluctuation but also smooths the oscillations, demonstrating its ability to average out noise and remove aliasing artifacts while maintaining high accuracy for parameter estimation over a wide range. The ablation study was also carried out to evaluate the parameter estimation robustness of RELAX-MORE against severe imaging imperfection and the generalizability of RELAX-MORE on different anatomies such as knee joints. Fig. 8 shows the \(T_{1}\) maps from a representative sagittal slice of the knee obtained from zero-filled, RELAX-MORE, and fully sampled data. The MRI data was purposely acquired using a 4-channel receiving-only flex coil, which provides insufficient coil coverage and a poor signal-to-noise ratio for knee MRI. Figure 6: Quantitative comparison between the estimated \(T_{1}\) maps with the \(3\times\) 1D Cartesian mask and the reference \(T_{1}\) maps estimated from fully sampled data. The comparison was made based on three different measures, namely PSNR, SSIM, and NMSE, with median line and mean markers \(\times\). Figure 7: (a) Phantom and (b) corresponding \(T_{1}\) maps generated from zero-filled, RELAX-MORE, and fully sampled phantom data. Zero-filled and RELAX-MORE \(T_{1}\) maps are estimated from data undersampled using a \(3\times\) 1D Cartesian undersampling mask. (c) Line profile, indicated in (b), along with zoom-in views showing the difference in \(T_{1}\) values. Figure 8: Estimated \(T_{1}\) maps for knee images using the \(3\times\) 1D Cartesian undersampling mask. As evident in Fig. 8, the fully sampled \(T_{1}\) map presents substantial noise in bone, cartilage, ligament, meniscus, and muscle, and fine structures, such as the epiphyseal plate (highlighted by a white arrow), are contaminated by noise. The zero-filled \(T_{1}\) map from \(3\times\) 1D Cartesian acceleration also presents severe undersampling artifacts combined with the noise, rendering erroneous \(T_{1}\) quantification. However, RELAX-MORE successfully removes all the artifacts, suppresses the unwanted image noise in the \(T_{1}\) quantification, and provides a surprisingly favorable quantification of the different knee joint structures. RELAX-MORE demonstrates high robustness and generalizability for reconstructing the in-vivo brain and in-vivo knee joint, making it a widely applicable method for different anatomies. ### Transfer Learning: Computing Time Efficiency RELAX-MORE is a subject-specific method that utilizes self-supervised learning for efficient qMRI reconstruction. While RELAX-MORE can perform well using single-subject data, as shown in all the other experiments, the reconstruction process (e.g., network training) needs to be conducted for each subject's data. In this experiment, we investigated transfer learning to improve the computing time efficiency of the training/reconstruction of RELAX-MORE.
Using transfer learning, the network weights obtained after training on one subject's brain data were applied as the starting point for training on another subject's brain data. In Fig. 9, the performance of RELAX-MORE with and without transfer learning is presented by plotting the SSIMs of the \(T_{1}\) estimations as a function of epoch number at \(3\times\) 1D Cartesian acceleration. Comparing the two SSIM curves in Fig. 9(a), it is observed that transfer learning starts at a higher initial SSIM value during the initial training phase and increases faster than the case without transfer learning. This implies that transfer learning helps the model converge more efficiently, since training commences with pre-trained weights. This is further confirmed by comparing the \(T_{1}\) maps generated with (Fig. 9(b), top row) and without (Fig. 9(b), bottom row) transfer learning, where at low epoch numbers (500, 1000), the transfer-learning-generated \(T_{1}\) maps show better \(T_{1}\) clarity and contrast between different tissue types compared to their non-transfer-learning counterparts. As the training epoch increases, both SSIM curves flatten, indicating that the model has reached stable performance. The results suggest that transfer learning can improve the computing time efficiency of RELAX-MORE with reduced training/reconstruction time. At approximately 0.5 seconds per epoch on our GPU device for training the 3D brain data, transfer learning can reduce the reconstruction time to less than 10 min while reaching good performance. With the advance of new techniques [44], transfer learning can be a practical approach to further improve the reconstruction time efficiency of RELAX-MORE. ## V Conclusion This paper proposes a novel self-supervised learning method, RELAX-MORE, for qMRI reconstruction. The proposed method uses an optimization algorithm to unroll a model-based qMRI reconstruction into a deep learning framework, enabling the generation of highly accurate and robust MR parameter maps at imaging acceleration. Unlike conventional deep learning methods requiring a large amount of training data, RELAX-MORE is a subject-specific method that can be trained on single-subject data through self-supervised learning. Furthermore, in our experiments, RELAX-MORE outperforms several state-of-the-art conventional and deep learning methods for accelerated \(T_{1}\) mapping. This work also demonstrates several superior aspects of RELAX-MORE in overall performance, accuracy, robustness, generalizability, and computing time efficiency, making this method a promising candidate for advancing accelerated qMRI for many clinical applications.
2301.09879
Data Augmentation Alone Can Improve Adversarial Training
Adversarial training suffers from the issue of robust overfitting, which seriously impairs its generalization performance. Data augmentation, which is effective at preventing overfitting in standard training, has been observed by many previous works to be ineffective in mitigating overfitting in adversarial training. This work proves that, contrary to previous findings, data augmentation alone can significantly boost accuracy and robustness in adversarial training. We find that the hardness and the diversity of data augmentation are important factors in combating robust overfitting. In general, diversity can improve both accuracy and robustness, while hardness can boost robustness at the cost of accuracy within a certain limit and degrade them both over that limit. To mitigate robust overfitting, we first propose a new crop transformation, Cropshift, which has improved diversity compared to the conventional one (Padcrop). We then propose a new data augmentation scheme, based on Cropshift, with much improved diversity and well-balanced hardness. Empirically, our augmentation method achieves the state-of-the-art accuracy and robustness for data augmentations in adversarial training. Furthermore, when combined with weight averaging it matches, or even exceeds, the performance of the best contemporary regularization methods for alleviating robust overfitting. Code is available at: https://github.com/TreeLLi/DA-Alone-Improves-AT.
Lin Li, Michael Spratling
2023-01-24T09:36:39Z
http://arxiv.org/abs/2301.09879v1
# Data augmentation alone can improve adversarial training ###### Abstract Adversarial training suffers from the issue of robust overfitting, which seriously impairs its generalization performance. Data augmentation, which is effective at preventing overfitting in standard training, has been observed by many previous works to be ineffective in mitigating overfitting in adversarial training. This work proves that, contrary to previous findings, data augmentation alone can significantly boost accuracy and robustness in adversarial training. We find that the hardness and the diversity of data augmentation are important factors in combating robust overfitting. In general, diversity can improve both accuracy and robustness, while hardness can boost robustness at the cost of accuracy within a certain limit and degrade them both over that limit. To mitigate robust overfitting, we first propose a new crop transformation, Cropshift, which has improved diversity compared to the conventional one (Padcrop). We then propose a new data augmentation scheme, based on Cropshift, with much improved diversity and well-balanced hardness. Empirically, our augmentation method achieves the state-of-the-art accuracy and robustness for data augmentations in adversarial training. Furthermore, when combined with weight averaging it matches, or even exceeds, the performance of the best contemporary regularization methods for alleviating robust overfitting. Code is available at: [https://github.com/TreeLLi/DA-Alone-Improves-AT](https://github.com/TreeLLi/DA-Alone-Improves-AT). ## 1 Introduction Adversarial training, despite its effectiveness in defending against adversarial attacks, is prone to overfitting. Specifically, while performance on classifying training adversarial examples improves during the later stages of training, test adversarial robustness degenerates. This phenomenon is called _robust overfitting_ (Rice et al., 2020). To alleviate overfitting, Rice et al. (2020) propose to track the model's robustness on reserved validation data and select the checkpoint with the best validation robustness instead of the one at the end of training. This simple technique, named early-stopping (ES), matches the performance of contemporary state-of-the-art methods, suggesting that overfitting in adversarial training impairs its performance significantly. Preventing robust overfitting is, therefore, important for improving adversarial training. Data augmentation is an effective technique to alleviate overfitting in standard training, but it seems not to work well in adversarial training. Almost all previous attempts (Rice et al., 2020; Wu et al., 2020; Gowal et al., 2021; Rebuffi et al., 2021; Carmon et al., 2019) to prevent robust overfitting by data augmentation have failed. Specifically, this previous work found that several advanced data augmentation methods like Cutout (DeVries and Taylor, 2017), Mixup (Zhang et al., 2018) and Cutmix (Yun et al., 2019) failed to improve the robustness of adversarially-trained models to match that of the simple augmentation Flip-Padcrop with ES, as shown in Fig. 1. Thus the method of using ES with Flip-Padcrop has been widely accepted as the "baseline" for combating robust overfitting. Even with ES, Cutout still fails to improve the robustness over the baseline, while Mixup boosts the robustness marginally (\(<0.4\%\)) (Rice et al., 2020; Wu et al., 2020). This contrasts with their excellent performance in standard training. Recently, Tack et al.
(2022) observed that AutoAugment (Cubuk et al., 2019) can eliminate robust overfitting and boost robustness greatly. This, however, contradicts the result of Gowal et al. (2021); Carmon et al. (2019), where the baseline was found to outperform AutoAugment in terms of robustness. Overall, to date, there has been no uncontroversial evidence showing that robust generalization can be further improved over the baseline by data augmentation alone, and no convincing explanation of this ineffectiveness. This work focuses on improving the robust generalization ability of adversarial training by data augmentation. We first demonstrate that the superior robustness of AutoAugment claimed by Tack et al. (2022) is actually a false sense of security, since its robustness against the more reliable AutoAttack (AA) (Croce and Hein, 2020) (48.71%) is only slightly higher than the baseline's (48.21%), as shown in Fig. 1 (see Appendix A for a detailed discussion). We then investigate the impact of the hardness and diversity of data augmentation on the performance of adversarial training. It is found that, in general, hard augmentation can alleviate robust overfitting and improve robustness, but at the expense of clean accuracy, within a certain limit of hardness. Over that limit, both robustness and accuracy decline, even though robust overfitting is mitigated more with the increase in hardness. On the other hand, diverse augmentation can generally alleviate robust overfitting and boost both accuracy and robustness. These results give us the insight that the optimal data augmentation for adversarial training should have as much diversity as possible and well-balanced hardness. To improve robust generalization, we propose a new image transformation, Cropshift, a more diverse replacement for the conventional crop operation, Padcrop. Cropshift is used as a component in a new data augmentation scheme that we call Improved Diversity and Balanced Hardness (IDBH). Empirically, IDBH achieves the state-of-the-art robustness and accuracy among data augmentation methods in adversarial training. It improves the end robustness to be significantly higher than the robustness of the baseline augmentation with early-stopping (Fig. 1), which all previous attempts failed to achieve. Furthermore, it matches the performance of the state-of-the-art regularization methods for improving adversarial training and, when combined with weight averaging, considerably outperforms almost all of them in terms of robustness. ## 2 Related Works Robust overfitting can be successfully mitigated by smoothing labels, using Knowledge Distillation (KD) (Chen et al., 2021) and Temporal Ensembling (TE) (Dong et al., 2022), and/or smoothing weights using Stochastic Weight Averaging (SWA) (Chen et al., 2021) and Adversarial Weight Perturbation (AWP) (Wu et al., 2020). Moreover, Singla et al. (2021) found that using activation functions with low curvature improved the generalization of both accuracy and robustness. Alternatively, Yu et al. (2022) attributed robust overfitting to the training examples with small loss value, and showed that enlarging the loss of those examples during training, called Minimum Loss Constrained Adversarial Training (MLCAT), can alleviate robust overfitting. Our work prevents robust overfitting by data augmentation, and hence complements the above methods. To date, it is still unclear if more training data benefits generalization in adversarial training. Schmidt et al.
(2018) showed that adversarial training requires more data, compared to its standard training counterpart, to achieve the same level of generalization. In contrast, Min et al. (2021); Chen et al. (2020) proved that more training data can hurt generalization in some particular adversarial training regimes on some simplified models and tasks. Empirically, a considerable improvement has been observed in both clean and robust accuracy when the training set is dramatically expanded, in a semi-supervised way, with unlabeled data (Carmon et al., 2019; Alayrac et al., 2019), e.g., using Robust Self-Training (RST) (Carmon et al., 2019) or with synthetic data generated by a generative model (Gowal et al., 2021b). Figure 1: Our method is the only one that significantly improves both accuracy and robustness over the baseline (Flip-Padcrop with early-stopping). Cutout and Cutmix fail to beat the baseline regarding robustness. AutoAugment achieves only a small improvement on robustness over the baseline. Robustness is evaluated against AutoAttack. See Section 5 for details of training and evaluation settings. Although data augmentation alone doesn't work well, it was observed to improve robustness to a large degree when combined with SWA (Rebuffi et al., 2021) or Consistency (CONS) regularization (Tack et al., 2022). In contrast, our work doesn't require any additional data or regularization: it improves robust generalization by data augmentation alone. Common augmentations (He et al., 2016) used in image classification tasks include Padcrop (padding the image at each edge and then cropping back to the original size) and Horizontal Flip. Many more complicated augmentations have been proposed to further boost generalization. Cutout (DeVries and Taylor, 2017) and Random Erasing (Zhong et al., 2020) randomly drop a region in the input space. Mixup (Zhang et al., 2018) and Cutmix (Yun et al., 2019) randomly interpolate two images, as well as their labels, into a new one. AutoAugment (Cubuk et al., 2019) employs a combination of multiple basic image transformations like Color and Rotation and automatically searches for the optimal composition of them. TrivialAugment (Muller and Hutter, 2021) matches the performance of AutoAugment with a similar schedule yet without any explicit search, suggesting that this computationally expensive process may be unnecessary. The method proposed here improves on the above methods by specifically considering the diversity and hardness of the augmentations. The difference between data augmentation in standard and adversarial training is discussed in Appendix B. ## 3 How Data Augmentation Alleviates Robust Overfitting This section describes an investigation into how the hardness and the diversity of data augmentation affect overfitting in adversarial training. During training, the model's robustness was tracked at each epoch using PGD10 applied to the test set. The checkpoint with the highest robustness was selected as the "best" checkpoint. Best (end) robustness/accuracy refers to the robustness/accuracy of the best (last) checkpoint. In this section, the terms accuracy and robustness refer to the end accuracy and robustness unless specified otherwise. The severity of robust overfitting was measured using the best robustness minus the end robustness. Hence, the more positive this gap in robustness, the more severe the robust overfitting. The training setup is described in Appendix C.
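To make this evaluation protocol concrete, the sketch below shows how best-checkpoint tracking and the robust-overfitting gap might be implemented; `train_epoch_fn` and `eval_pgd10_fn` are placeholders for the training step and the PGD10 test-set evaluation, not code from the paper.

```python
import copy

def train_with_best_tracking(model, train_epoch_fn, eval_pgd10_fn, epochs):
    """Track per-epoch PGD10 test robustness; report the best-vs-end robustness gap."""
    best_rob, best_state = 0.0, None
    for _ in range(epochs):
        train_epoch_fn(model)
        rob = eval_pgd10_fn(model)               # robust test accuracy under PGD10
        if rob > best_rob:
            best_rob = rob
            best_state = copy.deepcopy(model.state_dict())
    end_rob = eval_pgd10_fn(model)
    gap = best_rob - end_rob                     # more positive => more robust overfitting
    return gap, best_state
```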
### Hardness

Hardness was measured by the Affinity metric (Gontijo-Lopes et al., 2021) adapted from standard training:

\[hardness=\frac{Robustness(M,D_{test})}{Robustness(M,D^{\prime}_{test})} \tag{1}\]

where \(M\) is an arbitrary model adversarially trained on the unaugmented training data, \(D_{test}\) refers to the original test data set, and \(D^{\prime}_{test}\) is \(D_{test}\) with the augmentation (to be evaluated) applied. \(Robustness(M,D)\) is the robust accuracy of \(M\) evaluated using PGD50 on \(D\). Hardness is a model-specific measure. It increases as the augmentation causes the data to become easier to attack, i.e., as the perturbed, augmented data becomes more difficult to classify correctly.

We found that moderate levels of hardness can alleviate robust overfitting and improve robustness, but at the price of accuracy. Further increasing hardness causes both accuracy and robustness to decline, even though robust overfitting is alleviated further. The value of hardness where this occurs is very sensitive to the capacity of the model. Therefore, to maximize robustness, hardness should be carefully balanced, for each model, between alleviating robust overfitting and impairing accuracy.

Experimental design. We investigated the effects of hardness in adversarial training for individual and composed augmentations. For the individual augmentations, the following 12 image transformations were chosen: ShearX, ShearY, TranslateX, TranslateY, Rotate, Color, Sharpness, Brightness, Contrast, Solarize, Cutout and Cropshift (a variant of Padcrop introduced in Section 4). For each, Eq. (1) was used to calibrate the strength of the augmentation (e.g., angle for Rotation) onto one of 7 levels of hardness (see Appendix C.2 for specific values), except for Color and Sharpness, which were applied at 3 strengths. For simplicity, the integers 1 to 7 are used to represent these 7 degrees of hardness. Standard Cutout is allowed to cut partially outside the image, and thus the hardness is not always directly related to the nominal size of the cut-out region (strength). To ensure alignment between strength and hardness, we force Cutout to cut only inside the image and refer to this variant as Cutout-i. To control for diversity, each augmentation was applied with one degree of diversity. As a result, the effect of applying Cutout and Cropshift with a certain hardness is deterministic throughout training. Specifically, Cutout always cuts at a fixed location (sampled once at the beginning of training). Similarly, Cropshift crops a fixed region and shifts it to a fixed location (both sampled once at the beginning of training). We name these two variants Cutout-i-1 and Cropshift-1, respectively. Models were trained with each transformation at each hardness. To investigate composed augmentations, models were trained with various multi-layered data augmentations: Flip-Padcrop (FP), FP-Cutout[Weak] (CW), FP-Cutout[Strong] (CS), FP-AutoAugment-Cutout (AuA) and FP-TrivialAugment-Cutout (TA). All of them shared the same parameters for Flip and Padcrop. CW and CS used 8x8 and 10x10 Cutout, respectively. AuA and TA used 16x16 Cutout as in their original settings (Cubuk et al., 2019; Muller and Hutter, 2021). Different from the default experimental setting, augmentations here were always applied during training, and robustness was evaluated against AA since AuA was observed to fool the PGD attack (Appendix A).
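To make Eq. (1) concrete, here is a minimal sketch of how the hardness of a candidate augmentation could be estimated; the attack callable, loaders, and helper names are hypothetical stand-ins, with PGD50 as the evaluation attack named in the text.

```python
import torch

def robust_accuracy(model, loader, attack, augment=None):
    """Robust accuracy of `model`, optionally applying the augmentation
    under evaluation to the test data before attacking it."""
    model.eval()
    correct = total = 0
    for x, y in loader:
        if augment is not None:
            x = augment(x)
        x_adv = attack(model, x, y)  # e.g. a PGD50 attack callable (assumed)
        with torch.no_grad():
            correct += (model(x_adv).argmax(dim=1) == y).sum().item()
        total += y.numel()
    return correct / total

def hardness(model, loader, attack, augment):
    # Eq. (1): robustness on the original test set divided by robustness on
    # the augmented test set; larger values indicate a harder augmentation.
    return robust_accuracy(model, loader, attack) / \
           robust_accuracy(model, loader, attack, augment)
```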
Hardness increases from FP to TA as more layers stack up and/or the strength of individual components increases. Hence this experiment can, as for the experiments with individual augmentations, be seen as an investigation into the effects of increasing hardness. Here, we are not controlling for diversity, which also roughly increases from FP to TA. However, this does not affect our conclusions, as diversity boosts accuracy and robustness (see Section 3.2), and hence the decrease in these values that we observe with increased hardness cannot be explained by the effects of increased diversity.

Observations. It can be seen in Figs. 2(a) and 2(d) that the gap between best and end robustness drops, i.e., robust overfitting becomes milder, with the increase in hardness. The gap in robustness for AuA is negative in Fig. 2(d) because the PGD10 attack was fooled into selecting a vulnerable checkpoint as the best: see Appendix A for more discussion. For accuracy and robustness, there are roughly three phases. First, both accuracy (Fig. 2(b)) and robustness (Fig. 2(c)) increase with hardness. This is only observed for some transformations, like Cropshift at hardness 1 and 2. In this stage, the underlying model has sufficient capacity to fit the augmented data, so it benefits from the growth in data complexity. Second, accuracy (Fig. 2(b)) starts to drop while robustness (Fig. 2(c)) continues to increase. As the intensity of the transformation increases, the distribution of the transformed data generally deviates more from the distribution of the original data, causing the mixture of them to be harder to fit for standard training. Adversarial training can be considered as standard training plus gradient regularization (Li & Spratling, 2022). Roughly speaking, accuracy drops in this stage because the model's capacity is insufficient to fit the increasingly hard examples for the optimization of the clean loss (standard training) under the constraint of gradient regularization. Nevertheless, robustness can still increase due to the benefit of increasing robust generalization, i.e., smaller adversarial vulnerability. Third, accuracy (Fig. 2(e)) and robustness (Fig. 2(f)) drop together. Accuracy continues to decline due to the more severe (standard) underfitting. Meanwhile, the harm of decreasing accuracy now outweighs the benefit of reduced robust overfitting, which results in the degradation of robustness.

Figure 2: The performance of models trained with different individual transformations (top row) and composed augmentations (bottom row). None refers to no data augmentation applied. Robustness is evaluated against PGD50.

The graphs of Color, TranslateX and TranslateY are omitted from Figs. 2(a), 2(b) and 2(c) because they exhibit exceptional behavior at some values of hardness. Nevertheless, these results generally show that robust overfitting is reduced and robustness is improved. These results are presented and discussed in Appendix D.1. Appendix D.2 provides a figure showing best accuracy as a function of hardness for individual augmentations. A more obvious downward trend with increasing hardness can be seen in this graph compared to the graph for end accuracy shown in Fig. 2(b).

Figure 3: Performance of the models trained using augmentations with different type diversity (top row), spatial diversity (middle row) and strength diversity (bottom row). Spatial diversity rises from the restricted variant (solid lines) to the unrestricted variant (dashed lines) for the same transformation (line color) in Figs.
3(d), 3(e) and 3(f). Robustness is evaluated against PGD50.

### Diversity

To investigate the effects of augmentation diversity, variation in the augmentations was produced in three ways: (1) using varied types of transformations ("type diversity"); (2) varying the spatial area to augment ("spatial diversity"); and (3) increasing the variance of the strength while keeping the mean strength constant ("strength diversity").

Type diversity. We uniformly drew \(M\) transformations, with fixed and roughly equivalent hardness, from the same pool of transformations as the individual experiment in Section 3.1, but excluding Cutout and Cropshift. During training, one of these \(M\) transformations was randomly sampled, with uniform probability, to be applied to each sample separately in a batch. Diversity increases as \(M\) increases from 0 (no augmentation applied) to 10 (one augmentation from a pool of 10 applied). The experiments were repeated for three levels of hardness \(\{1,2,3\}\). For all levels of hardness, the gap in robustness (Fig. 3(a)) shrinks, and the end robustness (Fig. 3(b)) and accuracy (Fig. 3(c)) increase, as \(M\) increases. These trends are more pronounced for higher hardness levels.

Spatial diversity. Transformations like Cutout and Cropshift (described in Section 4) have large inherent diversity due to the large number of possible crop and/or shift operations that can be performed on the same image. For example, there are 28x28 possible 4x4 pixel cutouts that could be taken from a 32x32 pixel image, all with the same strength of 4x4. In contrast, transformations like Shear and Rotation have only one, or if sign is counted, two variations at the same strength, and hence have much lower inherent diversity. To investigate the impact of this rich spatial diversity, we ran experiments to compare the performance of Cutout-i-1 with Cutout-i, and of Cropshift-1 with Cropshift, at various levels of hardness. In both cases the former variant of the augmentation method is less diverse than the latter. We observed that the rich spatial diversity in Cutout-i and Cropshift helps dramatically shrink the gap between the best and end robustness (Fig. 3(d)), and boosts the end robustness (Fig. 3(e)) and accuracy (Fig. 3(f)) at virtually all levels of hardness.

Strength diversity. Diversity in the strength was generated by defining four ranges of hardness: \(\{4\}\), \(\{3,4,5\}\), \(\{2,3,4,5,6\}\) and \(\{1,2,3,4,5,6,7\}\). During training, each image was augmented using a strength uniformly sampled at random from the given range. Hence, for each range the hardness of the augmentation was, on average, the same, but the diversity of the augmentations increased with the length of the hardness range. Models were trained in this way with each of the individual transformations defined in Section 3.1, excluding Color and Sharpness. Strength diversity for Cutout-i-1, Cropshift-1 and Rotate can be seen to significantly mitigate robust overfitting (Fig. 3(g)) and boost robustness (Fig. 3(h)), whereas for the other transformations it seems to have no significant impact on these two metrics. Nevertheless, a clear increase in accuracy (Fig. 3(i)) is observed when increasing strength diversity for almost all transformations.

Figure 4: Illustration of Cropshift (top) and Padcrop (bottom) with equivalent hardness.

## 4 Diverse and Hardness-Balanced Data Augmentation

This section first describes Cropshift, our proposed version of Padcrop with enhanced diversity and disentangled hardness.
Cropshift (Fig. 4; Algorithm 1) first randomly crops a region in the image and then shifts it to a random location in the input space. The cropped region can be either square or rectangular. The strength of Cropshift is parameterized by the total number, \(N\), of cropped rows and columns. For example, with strength 8, Cropshift removes \(l,r,t,b\) lines from the left, right, top and bottom borders respectively, such that \(l+r+t+b=8\). Cropshift significantly diversifies the augmented data in terms of both the content being cropped and the location of the cropped content in the final input space. Furthermore, Cropshift offers more fine-grained control over hardness. In contrast, for Padcrop hardness is not directly related to the size of the padding since, for example, using 4-pixel padding can result in cropped images with a variety of total image content trimmed (from 4 rows and 4 columns trimmed, to no rows and no columns trimmed).

To mitigate robust overfitting, we propose a new data augmentation scheme with Improved Diversity and Balanced Hardness (IDBH). Inspired by Muller & Hutter (2021), we design the high-level framework of our augmentation as a 4-layer sequence: flip, crop, color/shape and dropout. Each layer has a distinct semantic meaning and is applied with its own probability. Specifically, we implement flip using Horizontal Flip, crop using Cropshift, dropout using Random Erasing, and color/shape using a set of Color, Sharpness, Brightness, Contrast, Autocontrast, Equalize, Shear (X and Y) and Rotate. The color/shape layer, when applied to augment an image, first samples a transformation according to a probability distribution and then samples a strength from the transformation's strength range. This distribution and the strength range of each component transformation are all, in principle, available to be optimized. Pseudo-code for the proposed augmentation procedure can be found in Appendix E.

The probability and the strength of each layer were jointly optimized by a heuristic search to maximize robustness. It is important to optimize all layers together, rather than individually. First, this enables a more extensive and more fine-grained search for hardness so that a potentially better hardness balance can be attained. Moreover, it allows a certain hardness to be achieved with greater diversity. For example, raising hardness through Cropshift also improves diversity, while doing so through the color/shape layer hardly increases diversity. However, optimizing the parameters of all layers jointly adds significantly to the computational burden. To tackle this issue, the search space was reduced based on insights gained from the preliminary experiments and other work, and grid search was performed only over this smaller search space. Full details are given in Appendix E. A better augmentation schedule might be possible if, as in AuA, a more advanced automatic search were applied. However, automatically searching for data augmentations in adversarial training is extremely expensive, and was beyond the resources available to us.

IDBH improves diversity through its probabilistic multi-layered structure, which results in a very diverse mixture of augmentations including individual transformations and their compositions. We further diversify our augmentation by replacing the conventional crop and dropout methods, Padcrop and Cutout, in AuA and TA with their diversity-enhanced variants Cropshift and Random Erasing, respectively.
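For concreteness, below is a minimal sketch of the Cropshift operation described at the start of this section, assuming CHW image tensors; the zero padding of the vacated area and the particular way of splitting the strength budget across the four borders are our assumptions (the paper's exact procedure is given in Algorithm 1).

```python
import random
import torch

def cropshift(img: torch.Tensor, strength: int) -> torch.Tensor:
    """Crop a random region and shift it to a random location (a sketch).
    `strength` is N, the total number of removed rows and columns."""
    _, h, w = img.shape
    # Split N = l + r + t + b across the four borders (assumed sampling scheme).
    l = random.randint(0, strength)
    r = random.randint(0, strength - l)
    t = random.randint(0, strength - l - r)
    b = strength - l - r - t
    crop = img[:, t:h - b, l:w - r]          # randomly positioned region to keep
    ch, cw = crop.shape[1], crop.shape[2]
    out = torch.zeros_like(img)              # assumed zero padding
    y, x = random.randint(0, h - ch), random.randint(0, w - cw)
    out[:, y:y + ch, x:x + cw] = crop        # shift to a random location
    return out
```

Sampling `strength` from a range, rather than fixing it, additionally yields the strength diversity discussed in Section 3.2.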
IDBH enables balanced hardness, as the structural design and optimization strategy produce a much larger search space of hardness, so that we are able to find an augmentation that achieves a better trade-off between accuracy and robustness.

## 5 Results

We adopt the following setup for training and evaluation (fuller details in Appendix C). The model architectures used were Wide ResNet34 with a widening factor of 1 (WRN34-1), its 10x widened version WRN34-10 (Zagoruyko & Komodakis, 2016) and PreAct ResNet18 (PRN18) (He et al., 2016). The training method was PGD10 adversarial training (Madry et al., 2018). SWA was implemented as in Rebuffi et al. (2021). We re-optimized the strength of Cutout and Cutmix per model architecture. AuA was parameterized as in Cubuk et al. (2019), since we did not have sufficient resources to optimize it. TA is parameter-free, so no tuning was needed. For PreAct ResNet18, we report the results of two variants, weak and strong, of our method with slightly different parameters (hardness), because we observed a considerable degradation of best robustness for the strong variant when combined with SWA. The reported results are averages over three runs, and the standard deviations are reported in Appendix D.5.

### State-of-the-art Data Augmentation for Adversarial Training

From Tab. 1 it can be seen that our method, IDBH, achieves the state-of-the-art best and end robustness among data augmentations, and its improvement over the previous best method is significant. The robust performance is further boosted when combined with SWA. Moreover, our method is the only one on WRN34-1 that successfully improves the end robustness to be higher than the best robustness achieved by the baseline. On PRN18, IDBH[strong] improves the end robustness over the baseline's best robustness by \(+1.78\%\), which is much larger than the existing best record (\(+0.5\%\)) achieved by AuA. This suggests that data augmentation alone, contrary to the previous failed attempts, can significantly beat the baseline augmentation with ES. More importantly, our method also exhibits the highest best and end accuracy on these architectures, both w. and w.o. SWA, except for WRN34-1 with SWA, where our method is very close to the best. Overall, our method improves both accuracy and robustness, achieving a much better trade-off between them.

Data augmentation was found to be sensitive to the capacity of the underlying model. As shown in Tab. 1, augmentations such as baseline, AuA and TA perform dramatically differently on the two architectures because they use the same configuration across the architectures. Meanwhile, augmentations like Cutout and ours achieve relatively better performance on both architectures, but with different settings for hardness. For example, the optimal strength of Cutout is 8x8 on WRN34-1, but 20x20 on PRN18. Therefore, it is vital for augmentation design to allow optimization over a wide enough range of hardness in order to generalize across models with different capacity.

### Benchmarking state-of-the-art robustness without extra data

Tab. 2 shows the robustness of recent robust training and regularization approaches. It can be seen that IDBH matches the best robustness achieved by these methods on PRN18, and outperforms them considerably in terms of best robustness on WRN34-10. This is despite IDBH not being optimized for WRN34-10. In addition, our method also produces an end accuracy that is comparable to the best achieved by other methods, suggesting a better trade-off between accuracy and robustness.
More importantly, the robustness can be further improved by combining SWA and/or AWP with IDBH. This suggests that IDBH improves adversarial robustness in a way complementary to other regularization techniques. We highlight that our method, when combined with both AWP and SWA, achieves state-of-the-art robustness without extra data. We compare our method with those relying on extra data in Appendix D.3.

\begin{table} \begin{tabular}{l c c c c c c c c c c c c} \hline \hline \multirow{3}{*}{Augmentation} & \multicolumn{6}{c|}{w.o. SWA} & \multicolumn{6}{c}{w. SWA} \\ \cline{2-13} & \multicolumn{3}{c}{Accuracy (\%)} & \multicolumn{3}{c|}{Robustness (\%)} & \multicolumn{3}{c}{Accuracy (\%)} & \multicolumn{3}{c}{Robustness (\%)} \\ \cline{2-13} & best & end & diff. & best & end & diff. & best & end & diff. & best & end & diff. \\ \hline \multicolumn{13}{c}{Wide ResNet34-1} \\ \hline baseline & 78.37 & 78.96 & -0.58 & 45.11 & 43.84 & 1.26 & 77.76 & 79.47 & -1.71 & 45.71 & 44.69 & 1.02 \\ Cutout & 77.65 & 78.41 & -0.76 & 45.22 & 44.43 & 0.79 & 76.86 & 79.09 & -2.23 & 45.74 & 45.37 & 0.37 \\ Cutmix & 74.12 & 75.52 & -1.40 & 45.10 & 44.49 & 0.61 & **77.95** & 78.06 & -0.11 & 45.27 & 45.32 & -0.05 \\ AuA & 74.59 & 74.93 & -0.34 & 42.62 & 43.28 & **-0.66** & 75.30 & 75.36 & **-0.06** & 43.44 & 43.52 & **-0.08** \\ TA & 73.19 & 73.37 & -0.18 & 42.06 & 41.92 & 0.14 & 73.41 & 73.53 & -0.12 & 42.79 & 42.74 & 0.06 \\ IDBH (ours) & **79.07** & **79.20** & **-0.13** & **46.15** & **45.65** & 0.50 & 77.82 & **79.83** & -2.01 & **46.70** & **46.26** & 0.44 \\ \hline \multicolumn{13}{c}{PreAct ResNet18} \\ \hline baseline & 82.50 & 83.99 & -1.49 & 48.21 & 42.46 & 5.74 & 79.22 & 84.67 & -5.45 & 49.18 & 42.93 & 6.25 \\ Cutout & 83.35 & 84.14 & -0.79 & 49.18 & 47.75 & 1.43 & 81.98 & 84.37 & -2.39 & 50.18 & 48.76 & 1.42 \\ Cutmix & 82.47 & 81.51 & **0.96** & 49.73 & 48.09 & 1.65 & 84.09 & 85.70 & -1.62 & 50.57 & 47.50 & 3.07 \\ AuA & 83.41 & 84.04 & -0.62 & 49.15 & 48.71 & 0.44 & 82.08 & 84.20 & -2.12 & 49.53 & 49.59 & -0.07 \\ TA & 81.68 & 82.26 & -0.58 & 48.83 & 48.62 & **0.21** & 81.63 & 82.73 & **-1.11** & 49.25 & 49.42 & **-0.17** \\ IDBH[weak] (ours) & **84.98** & **85.82** & -0.84 & 50.34 & 48.94 & 1.40 & **84.18** & **86.45** & -2.27 & **51.73** & 49.88 & 1.85 \\ IDBH[strong] (ours) & 83.96 & 84.92 & -0.97 & **50.74** & **49.99** & 0.75 & 82.98 & 85.49 & -2.51 & 51.49 & **50.77** & 0.72 \\ \hline \hline \end{tabular} \end{table} Table 1: Performance of various data augmentation methods (w.o. SWA) and their weight-averaged variants (w. SWA) for WRN34-1 and PRN18 on CIFAR10. The best record is highlighted for each metric in each block. All methods were trained in the same way, except that Cutmix was trained for longer for convergence, as in Rebuffi et al. (2021). Robustness is evaluated against AA.

### Generalization to Other Datasets

Our augmentation generalizes well to other datasets like SVHN and TIN (Tab. 3). It greatly reduces the severity of robust overfitting and improves both accuracy and robustness over the baseline augmentation on both datasets. The robustness on SVHN is dramatically improved, by \(+7.12\%\) for best and \(+13.23\%\) for end. The robustness improvement on TIN is less significant than that on the other datasets because we simply used the augmentation schedule of CIFAR10 without further optimization. A detailed comparison with the regularization methods on these two datasets can be found in Appendix D.4. Please refer to Appendix C for the training and evaluation settings.
### Ablation Test

We find that Cropshift outperforms Padcrop in our augmentation framework. To compare them, we replaced Cropshift with Padcrop in IDBH and kept the remaining layers unchanged. The strength of Padcrop was then optimized for the best robustness, separately for w. and w.o. SWA. As shown in Tab. 4, changing Cropshift to Padcrop in our augmentation observably degrades both accuracy and robustness, both w. and w.o. SWA.

## 6 Conclusion

This work has investigated data augmentation as a solution to robust overfitting. We found that improving robust generalization for adversarial training requires data augmentation to be as diverse as possible while having appropriate hardness for the task and network architecture. The optimal hardness of data augmentation is very sensitive to the capacity of the model. To mitigate robust overfitting, we propose a new image transformation, Cropshift, and a new data augmentation scheme, IDBH, incorporating Cropshift. Cropshift significantly boosts diversity and improves both accuracy and robustness compared to the conventional crop transformation. IDBH improves diversity and allows hardness to be better balanced compared to alternative augmentation methods. Empirically, IDBH achieves state-of-the-art accuracy and robustness among data augmentations in adversarial training. This proves that, contrary to previous findings, data augmentation alone can significantly improve robustness and beat the robustness achieved with baseline augmentation and early-stopping. A limitation of our work is that we did not have sufficient computational resources to perform a more advanced, and more expensive, automatic augmentation search like AutoAugment, which implies that the final augmentation schedule we have described may be suboptimal. Nevertheless, the proposed augmentation method still significantly improves both accuracy and robustness compared to the previous best practice.

## Acknowledgments

The authors acknowledge the use of the research computing facility at King's College London, King's Computational Research, Engineering and Technology Environment (CREATE), and the Joint Academic Data science Endeavour (JADE) facility. This research was funded by the King's - China Scholarship Council (K-CSC).

Reproducibility Statement. Our methods, including both Cropshift and IDBH, can be easily implemented using popular machine learning development frameworks like PyTorch (Paszke et al., 2019). The two algorithms, Cropshift and IDBH, are illustrated in pseudo-code in Algorithm 1 and Algorithm 2, respectively. The procedure for optimizing IDBH is described in detail in Appendix E. The full parameters of the optimal augmentation schedules we found are described in Tab. 12 and Tab. 11. The training and evaluation settings are described in Section 5 and Appendix C. To further facilitate reproducibility, we will share our code and the pre-trained models with the reviewers and area chairs once the discussion forum is open, and will publish them alongside the paper if accepted.
2304.14990
Robust Stackelberg Equilibria
This paper provides a systematic study of the robust Stackelberg equilibrium (RSE), which naturally generalizes the widely adopted solution concept of the strong Stackelberg equilibrium (SSE). The RSE accounts for any possible up-to-$\delta$ suboptimal follower responses in Stackelberg games and is adopted to improve the robustness of the leader's strategy. While a few variants of robust Stackelberg equilibrium have been considered in previous literature, the RSE solution concept we consider is importantly different -- in some sense, it relaxes previously studied robust Stackelberg strategies and is applicable to much broader sources of uncertainties. We provide a thorough investigation of several fundamental properties of RSE, including its utility guarantees, algorithmics, and learnability. We first show that the RSE we defined always exists and thus is well-defined. Then we characterize how the leader's utility in RSE changes with the robustness level considered. On the algorithmic side, we show that, in sharp contrast to the tractability of computing an SSE, it is NP-hard to obtain a fully polynomial approximation scheme (FPTAS) for any constant robustness level. Nevertheless, we develop a quasi-polynomial approximation scheme (QPTAS) for RSE. Finally, we examine the learnability of the RSE in a natural learning scenario, where both players' utilities are not known in advance, and provide almost tight sample complexity results on learning the RSE. As a corollary of this result, we also obtain an algorithm for learning SSE, which strictly improves a key result of Bai et al. in terms of both utility guarantee and computational efficiency.
Jiarui Gan, Minbiao Han, Jibang Wu, Haifeng Xu
2023-04-28T17:19:21Z
http://arxiv.org/abs/2304.14990v2
# Robust Stackelberg Equilibria+

###### Abstract

This paper provides a systematic study of the _robust Stackelberg equilibrium_ (RSE), which naturally generalizes the widely adopted solution concept of the strong Stackelberg equilibrium (SSE). The RSE accounts for _any_ possible up-to-\(\delta\) suboptimal follower responses in Stackelberg games and is adopted to improve the robustness of the leader's strategy. While a few variants of robust Stackelberg equilibrium have been considered in previous literature, the RSE solution concept we consider is importantly different -- in some sense, it relaxes previously studied robust Stackelberg strategies and is applicable to much broader sources of uncertainties. We provide a thorough investigation of several fundamental properties of RSE, including its utility guarantees, algorithmics, and learnability. We first show that the RSE we defined always exists and thus is well-defined. Then we characterize how the leader's utility in RSE changes with the robustness level considered. On the algorithmic side, we show that, in sharp contrast to the tractability of computing an SSE, it is NP-hard to obtain a fully polynomial approximation scheme (FPTAS) for any constant robustness level. Nevertheless, we develop a quasi-polynomial approximation scheme (QPTAS) for RSE. Finally, we examine the learnability of the RSE in a natural learning scenario, where both players' utilities are not known in advance, and provide almost tight sample complexity results on learning the RSE. As a corollary of this result, we also obtain an algorithm for learning SSE, which strictly improves a key result of [5] in terms of both utility guarantee and computational efficiency.

## 1 Introduction

In a Stackelberg game, a leader engages in a two-step sequential decision-making process with a follower, who acts after observing the leader's strategy. Depending on how the follower reacts to the observed strategy, the leader can adjust their strategy in return to optimize the outcome of the sequential interaction. The _strong Stackelberg equilibrium_ (SSE), in particular, assumes that the follower always responds to the leader's strategy by playing a utility-maximizing strategy and breaks ties in favor of the leader when there are multiple utility-maximizing responses. It is widely adopted as the standard solution concept for Stackelberg games and provides an action guide for the leader. Nevertheless, if the follower actually responds (even slightly) _suboptimally_ to the leader, the quality of the leader's SSE strategy may deteriorate substantially. Indeed, suboptimal responses by the follower are fairly common in practice for various reasons. There has been extensive literature on modeling suboptimal follower behaviors and addressing the issues they cause. At a high level, these approaches can be divided into two categories. The first is to explicitly model the follower's suboptimal decision-making process and use _probabilistic modeling_ to capture the distribution of the follower's suboptimal actions. A prominent example, among many others, is the well-known _quantal response_ (QR) model, which softens the assumption of optimal follower response: it assumes that the follower selects every pure strategy with a probability positively related to the utility it offers [24, 37]. The other category -- which is what this paper falls into -- uses _worst-case analysis_.
Specifically, it avoids explicitly modeling the follower's irrational behavior and instead considers the worst possible follower behavior within some plausible range. The benefit of this approach, compared to probabilistic modeling such as the quantal response model, is that it requires much less prior knowledge about the follower. Different variants of this approach have been studied in the literature on Stackelberg games. For example, Letchford et al. [20] and Kiekintveld et al. [16] consider situations where the leader does not precisely know the follower's utilities but only knows intervals containing them. They study robust Stackelberg strategies that optimize against the worst-case follower utilities. Recent work by Kroer et al. [18] further extends this approach to extensive-form games under limited lookahead by the opponent. While also concerning robust Stackelberg strategies, our proposed solution concept of _robust Stackelberg equilibrium_ (RSE) relaxes the aforementioned robust models by further reducing the required knowledge of follower behavior. Specifically, RSE simply assumes that the follower can take _any_ suboptimal action whose utility is within a small margin of the optimal one; worst-case analysis is then adopted, which assumes that the follower selects, from these suboptimal actions, the worst response against the leader's interest. As a result, optimal leader strategies obtained using this concept offer enhanced robustness against potentially suboptimal follower actions.

Notably, the RSE solution concept does not make any specific assumptions about the underlying cause of the follower's deviation from the perceived optimal actions -- they may come from the leader's inaccurate knowledge of the true follower utilities as in [20, 16, 18], or the follower's inability to exactly perceive leader strategies as in [4, 25], or the follower's inability to find an optimal response, or even a mixture of these factors [28]. These causes of deviation from optimality can all be viewed as some form of suboptimal follower behavior. A concern one may have is that RSE will be more conservative (thus resulting in worse leader utility) than previous robust models with specified causes for suboptimal follower actions. Fortunately, our results imply that the solution quality of RSE gracefully improves to the best possible (i.e., the leader utility of SSE) as the level of the follower's suboptimality, regardless of its cause, decreases to \(0\).

Despite its simplicity and power to capture a wide range of suboptimal behaviors, there appears to be a lack of thorough analysis in the literature of such a natural solution concept of robust Stackelberg equilibrium, regarding its properties as an equilibrium concept and its related algorithmic questions. For simultaneous-move games, similar \(\delta\)-robust Nash equilibrium concepts have indeed been proposed as a generalization of the Nash equilibrium and thoroughly analyzed [34, 33]. However, simultaneous-move games are quite different from Stackelberg games. For Stackelberg games, most relevant to ours is perhaps the applied study of [28], who proposed a mixed integer linear program for computing a subtle _variant_ of our RSE solution in order to find robust leader strategies. However, the running time of their program is exponential in the worst case; moreover, as we will elaborate in Section 3, the RSE variant they considered may not always exist.
Many fundamental questions still remain: How should the equilibrium concept be defined so that it always exists? If it exists, what is the computational complexity of finding the RSE? Are there provably sub-exponential algorithms for computing the RSE? What are the connections between RSE and other equilibrium concepts? These are the questions we aim to answer in this paper.

### Our Contributions

We start by formalizing the notion of \(\delta\)-RSE, where \(\delta>0\) is an upper bound on the follower's utility loss due to suboptimal behavior. We prove that under a properly chosen boundary condition, a \(\delta\)-RSE exists in every Stackelberg game for any \(\delta>0\). Consequently, the leader's utility in a \(\delta\)-RSE can be viewed as a well-defined function \(u_{\text{RSE}}(\delta)\) of \(\delta>0\). By analyzing the function \(u_{\text{RSE}}(\delta)\), we show several non-trivial properties of the \(\delta\)-RSE and compare it with other solution concepts, including the classic SSE and the maximin equilibrium. Under a minor non-degeneracy assumption, the function \(u_{\text{RSE}}(\delta)\) is proved to exhibit Lipschitz continuity within a regime of small \(\delta\), and approaches the leader utility under SSE as \(\delta\to 0\). Note that such continuity is a "surprisingly" nice property, since in discrete strategic games an agent's equilibrium utility typically is _not_ continuous in the other agents' parameters. For instance, in SSE, the leader's utility may drop significantly if the follower's payoff changes even slightly or the follower's response is slightly suboptimal, because these may cause the follower to switch to a response that is dramatically worse for the leader. Interestingly, our added layer of worst-case analysis smoothed the follower response and instilled the continuity property. This continuity property has multiple interesting implications. First, it implies that regardless of what causes the follower's up-to-\(\delta\) suboptimal behavior,1 the leader's utility will always be \(O(\delta)\) off from the SSE utility for small \(\delta\). Second, this property turns out to be very useful for learning both the RSE and SSE, which we will further elaborate on below.

Footnote 1: It is easy to see that any \(\delta\) perception error on the follower’s payoffs [20, 16] or the leader’s mixed strategy [4, 25] will lead to \(O(\delta)\) suboptimal follower responses when payoffs are bounded.

Next, we investigate the complexity of computing a \(\delta\)-RSE. In sharp contrast to the tractability of computing an SSE, we show that for any \(\delta>0\), it is NP-hard to even approximate (the leader's strategy in) a \(\delta\)-RSE; this inapproximability result rules out the possibility of a fully polynomial time approximation scheme (FPTAS), assuming P \(\neq\) NP. Our proof employs a highly nontrivial reduction from the eXact 3-set Cover (X3C) problem. The reduction is combinatorial in nature despite computing (continuous) mixed strategies, and a key technical challenge in the proof is to relate the continuous strategy space of the game to the combinatorial solution space of X3C. On the positive side, we present a quasi-polynomial time approximation scheme (QPTAS) to compute an approximate RSE. Our proof employs the _probabilistic method_ [2] to prove the existence of an approximate \(\delta\)-RSE, similar to [22] for proving the existence of an approximate Nash equilibrium with a simple format termed the _uniform strategy_ (which can be enumerated in quasi-polynomial time).
However, our algorithm for identifying the approximate \(\delta\)-RSE is significantly different -- in fact, the approximate \(\delta\)-RSE is not even a uniform strategy but instead is some strategy nearby. This is due to the nature of bi-level optimization in Stackelberg games. While a uniform leader strategy \(\overline{\mathbf{x}}\) and any nearby strategy \(\mathbf{x}\) will lead to similar leader utilities, they will lead to different follower responses, which in turn affect the leader utility induced in equilibrium. This challenge forces us to efficiently search the nearby region of a uniform strategy \(\overline{\mathbf{x}}\) for a leader strategy \(\mathbf{x}\) that induces the most favorable follower response. This challenge brought by follower responses does not arise in computing an approximate Nash equilibrium. We remark that it is an intriguing open question to close the gap between the above hardness result and the QPTAS.2

Last but not least, we turn to the learnability of the \(\delta\)-RSE in a setting where the payoff functions are not known in advance but need to be learned from samples of the players' utilities. Such a learning paradigm is crucial to today's common practice of "centralized training, decentralized execution" in multi-agent learning [23, 5]. We obtain almost tight results on the learnability of the \(\delta\)-RSE. Specifically, we construct a learning algorithm that, with high probability, outputs a strategy whose leader utility is \(O(\epsilon)\) or \(O(1)\) away from the \(\delta\)-RSE using \(O(1/\epsilon^{2})\) samples, depending on whether a continuity condition is satisfied or not. We then present hard instances with sample complexity lower bounds for each case. As a corollary of this learnability result and the aforementioned property that the \(\delta\)-RSE approaches the SSE as \(\delta\to 0\), we immediately obtain an algorithm for learning the SSE. This algorithm strictly improves a recent learning algorithm for the SSE by [5], with both a better utility guarantee and better computational efficiency.

### Additional Discussions on Related Work

Our work contributes to the rich literature on robust game theory, which studies how to address players' deviations from theoretically optimal actions [1, 7, 10, 21, 34, 33]. As mentioned previously, robust Stackelberg strategies have also been studied in many recent works, in contexts with uncertain follower utilities [20, 16, 18] or the follower's uncertain perception of the leader's mixed strategies [4, 25]. The key difference between our RSE solution concept and the previous robustness models is that the RSE does not make any specific assumptions about the underlying cause of suboptimal follower behavior. Thus, it is applicable to a wider range of scenarios for addressing suboptimal follower behavior. To our knowledge, [28] is the only work that studied a solution concept very close to our RSE. Nevertheless, they focused mostly on experimentally verifying the performance of algorithms based on mixed-integer linear programming for computing robust solutions. Finally, uncertainties about the follower's utilities have been considered in machine learning contexts. Notably, the recent work by Bai et al. [5] studied a setting with bandit feedback of the follower's utilities. We analyze the learnability of the RSE in the same learning setup and present strengthened results as a side product of our result on learning the RSE.
Stackelberg games have a wide range of applications in economics, finance, public policy-making, and security [36, 11, 35, 29, 15, 26, 32]. Previous work examined equilibrium scenarios where the follower always optimally responds to the leader's strategy. The theory of computing the Stackelberg equilibrium starts from the seminal work of [13], and has formed a celebrated sub-field of Stackelberg games [9, 17, 19]. These results assume that the players' utility information is public and known to each other. Under this assumption, they show several computationally efficient algorithms for computing the Stackelberg equilibrium. On the other hand, another line of research considers the case where the leader does not have full observation of the follower's utility information [6, 8, 20, 27]. These results take a learning-theoretic approach to the problem of unknown followers and show efficient algorithms to learn the Stackelberg equilibrium. However, all previous work focused on a setting where the follower always optimally responds to the leader's strategy. In these models, the follower responds with the action that maximizes their utility given the strategy the leader commits to, whereas in our model the follower may respond with any approximately optimal action.

## 2 Preliminaries

A Stackelberg game is played by two players, referred to as the leader and the follower, respectively. We let \(u_{l}\in\mathbb{R}^{m\times n}\) (resp. \(u_{f}\in\mathbb{R}^{m\times n}\)) be the leader's (resp. follower's) utility matrix, where \(m\) and \(n\) are the numbers of actions of the leader and the follower, respectively. We will correspondingly denote a game instance by the tuple \((u_{l},u_{f})\). Each entry \(u_{l}(i,j)\) of the utility matrix denotes the leader's utility when the leader plays action \(i\) and the follower plays action \(j\); and \(u_{f}(i,j)\) denotes the follower's utility for the same pair of actions. We denote the set of the leader's actions by \([m]:=\{1,\ldots,m\}\), and similarly, the set of the follower's actions by \([n]\). In the game, the leader moves first by committing to a _mixed strategy_, which is a distribution over the leader's actions. The leader's mixed strategy is denoted as \(\mathbf{x}=(x_{1},\cdots,x_{m})\in\Delta^{m}\), where \(\Delta^{m}=\{\mathbf{x}:\sum_{i\in[m]}x_{i}=1\text{ and }0\leq x_{i}\leq 1\}\) is an \((m-1)\)-dimensional simplex and each \(x_{i}\) represents the probability of the leader playing action \(i\). Then the follower responds to the strategy the leader commits to by choosing an action that maximizes his expected utility against the leader's strategy. We extend the notation and use \(u_{l}(\mathbf{x},j)=\sum_{i\in[m]}x_{i}\cdot u_{l}(i,j)\) to denote the leader's utility when she plays a mixed strategy \(\mathbf{x}\in\Delta^{m}\) and the follower plays a pure one \(j\in[n]\); the follower's utility \(u_{f}(\mathbf{x},j)\) is denoted in a similar way. Without loss of generality, we can restrict the follower's response to _pure strategies_, i.e., picking a deterministic action \(j\in[n]\) to play. In cases where there is more than one action that maximizes the follower's utility, it is assumed that the follower breaks ties in favor of the leader. For convenience, we let \(\mathrm{BR}(\mathbf{x}):=\operatorname*{argmax}_{j\in[n]}u_{f}(\mathbf{x},j)\) denote the best response set of the follower against a leader strategy \(\mathbf{x}\in\Delta^{m}\).
The Strong Stackelberg Equilibrium (SSE), represented as a strategy profile \(\langle\mathbf{x}^{*},j^{*}\rangle\) of the two players, is defined as follows:

**Definition 1** (Strong Stackelberg Equilibrium).: _A strategy profile \(\langle\mathbf{x}^{*},j^{*}\rangle\) is an SSE if it holds that:_

\[\mathbf{x}^{*}=\operatorname*{argmax}_{\mathbf{x}\in\Delta^{m}}\max_{j\in\mathrm{BR}(\mathbf{x})}u_{l}(\mathbf{x},j)\quad\text{ and }\quad j^{*}=\operatorname*{argmax}_{j\in\mathrm{BR}(\mathbf{x}^{*})}u_{l}(\mathbf{x}^{*},j). \tag{1}\]

A remark on approximation. In this paper, we work extensively with approximate solutions in both the computation and learning parts. To be consistent, throughout the paper we focus on _additive_ approximation error, in terms of both the follower's suboptimal actions and the leader's utilities. Therefore, all approximate solutions we provide in this paper are in the additive sense, unless otherwise clarified. Towards this end, we normalize the entries in both players' utility matrices to be within the interval \([0,1]\), as a convention for studying additive approximation. This is without loss of generality, since rescaling and additive shifting of the utilities do not change a game's equilibrium.

## 3 The Robust Stackelberg Equilibrium (RSE) and its Properties

The best response set \(\mathrm{BR}\) describes an ideal situation, where the follower always picks an optimal response without any error. In practice, the follower may pick suboptimal responses for various reasons, such as bounded rationality and limited observations [28]. A straightforward extension of the best response set, therefore, allows a small error \(\delta>0\) in the follower's choice of actions; we define the \(\delta\)-optimal response set of the follower as

\[\mathrm{BR}_{\delta}(\mathbf{x}):=\big{\{}j\in[n]\,\big{|}\,u_{f}(\mathbf{x},j)>\max_{j^{\prime}\in[n]}u_{f}(\mathbf{x},j^{\prime})-\delta\big{\}} \tag{2}\]

for any leader strategy \(\mathbf{x}\in\Delta^{m}\). With this extension, we can subsequently define the corresponding \(\delta\)-robust notion of the Stackelberg equilibrium as follows.

**Definition 2** (\(\delta\)-Robust Stackelberg Equilibrium (\(\delta\)-RSE)).: _For any \(\delta>0\), a strategy profile \(\langle\mathbf{x}^{*},j^{*}\rangle\) is a \(\delta\)-RSE if it holds that:_

\[\mathbf{x}^{*}=\operatorname*{argmax}_{\mathbf{x}\in\Delta^{m}}\min_{j\in\mathrm{BR}_{\delta}(\mathbf{x})}u_{l}(\mathbf{x},j)\quad\text{ and }\quad j^{*}=\operatorname*{argmin}_{j\in\mathrm{BR}_{\delta}(\mathbf{x}^{*})}u_{l}(\mathbf{x}^{*},j). \tag{3}\]

_The leader's \(\delta\)-RSE utility \(u_{\text{RSE}}(\delta)\) is the utility obtained in a \(\delta\)-RSE, i.e., \(u_{\text{RSE}}(\delta)=u_{l}(\mathbf{x}^{*},j^{*})\)._

It is not immediately clear that the notion of \(\delta\)-RSE is well-defined: the maximum of \(\min_{j\in\mathrm{BR}_{\delta}(\mathbf{x})}u_{l}(\mathbf{x},j)\) (as a function of \(\mathbf{x}\)) may well not exist. Indeed, some variants of the Stackelberg equilibrium do not always exist in some games; one such example is the _weak Stackelberg equilibrium_ [36]. We remark that the "\(>\)" in Equation (2) is a delicate choice. To understand why, we present an example showing that if the \(\delta\)-optimal response set included the exactly \(\delta\)-suboptimal responses (i.e., changing "\(>\)" to "\(\geq\)" in Equation (2)), then the equilibrium may not exist.
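To make these definitions concrete, the following numpy sketch evaluates the \(\delta\)-optimal response set of Equation (2) and the inner minimization of Equation (3) for a given leader strategy; it is a direct transcription of the definitions (not an equilibrium solver) and can be used, e.g., to check Example 1 below numerically.

```python
import numpy as np

def br_delta(x, u_f, delta):
    """Follower's delta-optimal response set BR_delta(x) of Eq. (2): actions
    whose expected utility is *strictly* within delta of the optimum."""
    vals = x @ u_f  # expected follower utility of each pure response j
    return np.flatnonzero(vals > vals.max() - delta)

def worst_case_leader_utility(x, u_l, u_f, delta):
    """Inner minimization of Eq. (3): worst leader utility over BR_delta(x)."""
    return min(float(x @ u_l[:, j]) for j in br_delta(x, u_f, delta))
```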
**Example 1** (Non-existence of \(\delta\)-RSE under an Alternative Definition).: _Table 1 illustrates an instance we constructed to show the non-existence of the \(\delta\)-RSE under a slightly revised definition of the \(\delta\)-optimal response set. More specifically, in the game represented by Table 1, the optimal leader strategy is to play \(i_{1}\) with an arbitrarily small probability \(\xi>0\) while playing \(i_{2}\) with probability \(1-\xi\). The follower's \(\delta\)-optimal response set when the leader plays \(\mathbf{x}=(\xi,1-\xi,0)\) would be \(\mathrm{BR}_{\delta}(\mathbf{x})=\{j_{2}\}\), resulting in leader utility \(u_{l}(\mathbf{x},j_{2})=1-\xi\). Thus, the smaller \(\xi\) is, the larger the leader's utility. But \(\xi\) cannot be equal to \(0\), as this would include \(j_{1}\) in the \(\delta\)-optimal follower response set and make the leader's utility \(0\). Therefore, since \(\xi\) needs to be arbitrarily small but strictly greater than \(0\), the \(\delta\)-RSE is not well defined and does not exist in this example._

Example 1 shows that it is not straightforward to argue the existence of the \(\delta\)-RSE. Specifically, to induce the desirable follower response behaviors, the leader optimizes over an _open_ set of mixed strategies and thus cannot achieve the exact optimal strategy. Such a wrinkle of the non-existence of a solution concept is always a concern for game-theoretic analysis. While one may be willing to compromise and settle for the "supremum" over leader strategies, instead of the exact "max" used in Equation (3), this is generally viewed as very undesirable. Fortunately, we are able to show that the \(\delta\)-RSE we defined via Equations (2) and (3) above exists in every game for any \(\delta>0\).

**Proposition 1**.: _A \(\delta\)-RSE exists in every game for any \(\delta>0\) and can be computed in \(O(2^{n}\operatorname{poly}(m,n))\) time._

The proof of Proposition 1 is constructive, as it also provides an algorithm for computing the \(\delta\)-RSE (though it runs in exponential time in the worst case). The key idea of our proof is the definition and analysis of different _sub-regions_ of the leader's strategy space \(\Delta^{m}\). Each sub-region, denoted by \(\mathcal{X}_{(S,\widetilde{j},j)}\), is characterized by three factors: (1) the follower's \(\delta\)-optimal response set \(S\in 2^{[n]}\); (2) the follower action \(\widetilde{j}\in S\) with maximal follower utility; (3) the follower action \(j\in S\) with the worst leader utility within \(S\) (\(j\) and \(\widetilde{j}\) are different in general). A few observations can be made about \(\mathcal{X}_{(S,\widetilde{j},j)}\). First, these sub-regions form a partition of \(\Delta^{m}\). Second, each \(\mathcal{X}_{(S,\widetilde{j},j)}\) is a _polytope_ and thus can be expressed by linear constraints, but with some _strict_ inequality constraints, since it may be an open set due to the \(\mathrm{BR}_{\delta}\) defined by Equation (2). The main idea of our algorithm is to search for the \(\delta\)-RSE candidate within each region \(\mathcal{X}_{(S,\widetilde{j},j)}\) by solving a linear program with _relaxed_ non-strict inequalities (thus, in total, we solve exponentially many LPs). The crux of our proof is to argue that, while in general the relaxation above is not tight, there always exists a region \(\mathcal{X}_{(S,\widetilde{j},j)}\) for which the relaxation is tight and, moreover, the leader achieves the best possible leader utility. This proves the existence of a \(\delta\)-RSE.
We refer readers to Appendix A.1 for the detailed proof.

\begin{table} \begin{tabular}{|c|c|c|} \hline \(u_{l},u_{f}\) & \(j_{1}\) & \(j_{2}\) \\ \hline \(i_{1}\) & \(0,-1\) & \(0,0\) \\ \hline \(i_{2}\) & \(0,-\delta\) & \(1,0\) \\ \hline \(i_{3}\) & \(0,1\) & \(0,0\) \\ \hline \end{tabular} \end{table} Table 1: An instance whose \(\delta\)-RSE does not exist when the \(\delta\)-optimal response set is defined differently, with “\(>\)” replaced by “\(\geq\)”, i.e., \(\mathrm{BR}_{\delta}(\mathbf{x})=\{j\mid u_{f}(\mathbf{x},j)\geq u_{f}(\mathbf{x},j^{\prime})-\delta,\ \forall j^{\prime}\neq j\}\).

The following are a few remarks about Proposition 1. First, it implies that for instances with a small \(n\), a \(\delta\)-RSE can be computed efficiently. Nevertheless, the exponential dependency on \(n\) in the time complexity appears to be inevitable, as we will show in the next section that computing a \(\delta\)-RSE is NP-hard. Second, the choice of "\(>\)" rather than "\(\geq\)" in Equation (2) may not appear to be a big deal, especially if one is willing to carry "\(\sup\)/\(\inf\)" through all derivations. However, we still believe it is helpful to figure out the right definition, so that we do not always need to worry about existence and how to tweak the solution to make it achievable.3 This also paves the way for much of our subsequent analysis.

Footnote 3: For example, Pita et al. [28] used non-strict inequalities for both the \(\delta\)-optimal follower responses and the non-\(\delta\)-optimal follower responses. This choice actually leads to a strange inconsistency in the follower behavior model, in which the follower effectively breaks ties _against_ the leader among actions strictly within the \(\delta\)-optimal response region but then _in favor of_ the leader at the boundary of the \(\delta\)-optimal response region.

Proposition 1 guarantees that a \(\delta\)-RSE always exists, so the leader's utility in a \(\delta\)-RSE, denoted by \(u_{\text{RSE}}(\delta)\) as in Definition 2, is also a well-defined function. Next, we derive several basic properties by analyzing the function \(u_{\text{RSE}}(\delta)\). The properties we present next hinge on a notion called the _inducibility gap_, denoted \(\Delta\). Specifically, the inducibility gap of a Stackelberg game is the largest constant \(\Delta\) such that for every follower action \(j\in[n]\), there exists some leader strategy \(\mathbf{x}^{j}\) that makes \(j\) at least \(\Delta\) better for the follower than any other action \(j^{\prime}\neq j\).

**Definition 3** (Inducibility Gap).: _The inducibility gap of a Stackelberg game is the largest \(\Delta\) such that for every follower action \(j\in[n]\), there exists a leader strategy \(\mathbf{x}^{j}\) such that \(u_{f}(\mathbf{x}^{j},j)\geq u_{f}(\mathbf{x}^{j},j^{\prime})+\Delta,\,\forall j^{\prime}\in[n]\) with \(j^{\prime}\neq j\)._

Additional discussions on the inducibility gap. We remark that under the classic strong Stackelberg equilibrium concept (see Equation (1)), it is without loss of generality to assume \(\Delta\geq 0\). This is because if a game has \(\Delta<0\), then there exists some follower action \(j\) that can never be a best response (and thus will never be played): more formally, by contraposition of Definition 3, there exists a follower action \(j\) such that for any \(\mathbf{x}\) we have \(u_{f}(\mathbf{x},j)<u_{f}(\mathbf{x},j^{\prime})+\Delta<u_{f}(\mathbf{x},j^{\prime})\) for some \(j^{\prime}\in[n]\). Therefore, it is without loss of generality to ignore/remove this never-to-be-played follower action \(j\) (whether solving or learning the game); consequently, we obtain a Stackelberg game in which every follower action has at least some chance of being a best response, and such a game has \(\Delta\geq 0\). However, for the \(\delta\)-RSE, it is only without loss of generality to assume \(\Delta\geq-\delta\), since \(\delta\)-suboptimal follower actions affect the \(\delta\)-RSE. Some of our results about the \(\delta\)-RSE hold under the assumption \(\Delta>0\), which slightly loses generality compared to \(\Delta\geq-\delta\). This discrepancy becomes smaller as \(\delta\) decreases (i.e., as the follower becomes less suboptimal). Overall, we believe it is a reasonable assumption when \(\delta\) is small.

With the definition of the inducibility gap, we are now ready to state our main results in this section about the characteristics of the \(\delta\)-RSE and the leader's utility under this solution concept.

**Theorem 1**.: _For any Stackelberg game, let \(u_{\text{SSE}}\) and \(u_{\text{MM}}\) be the leader's utility in the SSE and the maximin equilibrium, respectively. The following properties of the \(\delta\)-RSE always hold (where \(\Delta\) denotes the inducibility gap):_

1. _For any_ \(\delta,\delta^{\prime}\) _such that_ \(0<\delta\leq\delta^{\prime}\)_, it holds that_ \(u_{\text{RSE}}(\delta)\) _is monotone non-increasing and is bounded as follows:_ \[u_{\text{SSE}}\geq u_{\text{RSE}}(\delta)\geq u_{\text{RSE}}(\delta^{\prime})\geq u_{\text{MM}},\] (4) _where_ \(u_{\text{MM}}=\max_{\mathbf{x}\in\Delta^{m}}\min_{j\in[n]}u_{l}(\mathbf{x},j)\) _is the leader's maximin utility. Moreover, if_ \(\Delta>\delta\)_, it holds that:_ \[u_{\text{RSE}}(\delta)\geq u_{\text{SSE}}-\delta/\Delta.\] (5)

2. \(u_{\text{RSE}}(0^{+}):=\lim_{\delta\to 0^{+}}u_{\text{RSE}}(\delta)\) _exists; moreover, if_ \(\Delta>0\)_, then_ \(u_{\text{RSE}}(0^{+})=u_{\text{SSE}}\)_._

3. _For any_ \(\Delta\in(0,1]\) _and any constant_ \(L>1/\Delta\)_,_ \(u_{\text{RSE}}(\delta)\) _is Lipschitz continuous in the interval_ \((0,\Delta-\frac{1}{L}]\) _with Lipschitz constant_ \(L\)_, but_ \(u_{\text{RSE}}(\delta)\) _may be discontinuous at_ \(\delta=\Delta\)_._
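As an aside, the inducibility gap of Definition 3 can be computed with one small linear program per follower action, maximizing the margin \(t\) by which \(j\) beats every other action; the scipy-based sketch below is our own formulation (assuming \(n\geq 2\)), not an algorithm from the paper.

```python
import numpy as np
from scipy.optimize import linprog

def inducibility_gap(u_f):
    """Definition 3: Delta = min over j of the largest margin t such that some
    leader mixed strategy x makes j better than every other action by t."""
    m, n = u_f.shape
    margins = []
    for j in range(n):
        # Variables (x_1, ..., x_m, t); maximize t, i.e., minimize -t.
        c = np.zeros(m + 1)
        c[-1] = -1.0
        # Constraints x @ (u_f[:, j'] - u_f[:, j]) + t <= 0 for all j' != j.
        A_ub = [np.append(u_f[:, jp] - u_f[:, j], 1.0) for jp in range(n) if jp != j]
        A_eq = [np.append(np.ones(m), 0.0)]             # x sums to one
        res = linprog(c, A_ub=np.array(A_ub), b_ub=np.zeros(n - 1),
                      A_eq=np.array(A_eq), b_eq=[1.0],
                      bounds=[(0, 1)] * m + [(None, None)])
        margins.append(-res.fun)                        # best achievable margin for j
    return min(margins)
```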
Therefore, it is without loss of generality to ignore/remove this never-to-be-played follower action \(j\) (regardless of solving or learning the game); consequently, we obtain a Stackelberg game in which every follower action at least has some chance to be a best response, and such a game has \(\Delta\geq 0\). However, for \(\delta\)-RSE, it is only without loss of generality to assume \(\Delta\geq-\delta\) since \(\delta\)-suboptimal follower actions will affect the \(\delta\)-RSE. Some of our results about \(\delta\)-RSE hold under assumption \(\Delta>0\), which slightly loses generality compared to \(\Delta\geq-\delta\). This discrepancy becomes smaller as \(\delta\) decreases (i.e., follower becomes less sub-optimal). Overall we believe it is a reasonable assumption to pursue when \(\delta\) is small. With the definition of the inducibility gap, we are now ready to state our main results in this section about characteristics of \(\delta\)-RSE and the leader's utility under this solution concept. **Theorem 1**.: _For any Stackelberg game, let \(u_{\text{SSE}}\) and \(u_{\text{MM}}\) be the leader's utility in the SSE and maximin equilibrium accordingly, the following properties of \(\delta\)-RSE always hold (where \(\Delta\) denotes the inducibility gap):_ 1. _For any_ \(\delta,\delta^{\prime}\) _such that_ \(0<\delta\leq\delta^{\prime}\)_, it holds that_ \(u_{\text{RSE}}(\delta)\) _is monotone non-increasing and is bounded as follows:_ \[u_{\text{SSE}}\geq u_{\text{RSE}}(\delta)\geq u_{\text{RSE}}(\delta^{\prime}) \geq u_{\text{MM}},\] (4) _where_ \(u_{MM}=\max_{\mathbf{x}\in\Delta^{m}}\min_{j\in[n]}u_{l}(\mathbf{x},j)\) _is the leader's maximin utility. Moreover, if_ \(\Delta>\delta\)_, it holds that:_ \[u_{\text{RSE}}(\delta)\geq u_{\text{SSE}}-\delta/\Delta.\] (5) 2. \(u_{\text{RSE}}(0^{+}):=\lim_{\delta\to 0^{+}}u_{\text{RSE}}(\delta)\) _exists; moreover, if if_ \(\Delta>0\)_, then_ \(u_{\text{RSE}}(0^{+})=u_{\text{SSE}}\)_._ 3. _For any_ \(\Delta\in(0,1]\) _and any constant_ \(L>1/\Delta\)_,_ \(u_{\text{RSE}}(\delta)\) _is Lipschitz continuous in interval_ \((0,\Delta-\frac{1}{L}]\) _with Lipschitz constant_ \(L\)_, but_ \(u_{\text{RSE}}(\delta)\) _may be discontinuous at_ \(\delta=\Delta\) Before proceeding to the proof, we make a few remarks about Theorem 1. First, property (1) shows that with the inducibility gap \(\Delta\), the lower bound of the leader's utility under \(\delta\)-RSE can be improved from \(u_{MM}\) to be \(u_{SSE}-\delta/\Delta\). Second, Property (2) relies crucially on the non-degeneracy condition \(\Delta>0\). See Appendix B.1 for an instance with \(\Delta=0\) such that \(u_{\text{RSE}}(0^{+})<u_{\text{SSE}}\). Finally, property (3) shows that \(u_{\text{RSE}}(\delta)\) is Lipschitz continuous when \(\delta<\Delta\). This Lipschitz continuity turns out to be very useful for learning the \(\delta\)-RSE in contexts where the follower utility is not known in advance. We will demonstrate applicability to learning in Section 5. Proof of Theorem 1.: We start with Property (1). **Property (1)**. According to the definition of follower's \(\delta\)-optimal response set in Equation (2), \(\mathrm{BR}_{\delta}(\mathbf{x})\) expands with \(\delta\), i.e., for any leader strategy \(\mathbf{x}\), we have \[\mathrm{BR}_{\delta}(\mathbf{x})\subseteq\mathrm{BR}_{\delta^{\prime}}(\mathbf{x}), \quad\forall 0<\delta\leq\delta^{{}^{\prime}}. 
Recall that \(u_{\text{RSE}}(\delta)=\max_{\mathbf{x}\in\Delta^{m}}\min_{j\in\mathrm{BR}_{\delta}(\mathbf{x})}u_{l}(\mathbf{x},j)\). Hence, for any \(0<\delta\leq\delta^{\prime}\), we have

\[u_{\text{RSE}}(\delta)=\max_{\mathbf{x}\in\Delta^{m}}\min_{j\in\mathrm{BR}_{\delta}(\mathbf{x})}u_{l}(\mathbf{x},j)\geq\max_{\mathbf{x}\in\Delta^{m}}\min_{j\in\mathrm{BR}_{\delta^{\prime}}(\mathbf{x})}u_{l}(\mathbf{x},j)=u_{\text{RSE}}(\delta^{\prime}). \tag{7}\]

Now suppose \(\delta\geq 1\). Recall that both utility matrices are normalized so that all utility values lie within \([0,1]\). So for any \(\delta\geq 1\) and any \(\mathbf{x}\), we have \(\mathrm{BR}_{\delta}(\mathbf{x})=[n]\), and in turn

\[u_{\text{RSE}}(\delta)=\max_{\mathbf{x}\in\Delta^{m}}\min_{j\in\mathrm{BR}_{\delta}(\mathbf{x})}u_{l}(\mathbf{x},j)=\max_{\mathbf{x}\in\Delta^{m}}\min_{j\in[n]}u_{l}(\mathbf{x},j)=u_{\text{MM}}. \tag{8}\]

Equations (7) and (8) together imply that if \(0<\delta\leq\delta^{\prime}\), then

\[u_{\text{RSE}}(\delta)\geq u_{\text{RSE}}(\delta^{\prime})\geq u_{\text{MM}}. \tag{9}\]

Next, we show that \(u_{\text{SSE}}\geq u_{\text{RSE}}(\delta)\). Recall that \(u_{\text{SSE}}=\max_{\mathbf{x}\in\Delta^{m}}\max_{j\in\mathrm{BR}(\mathbf{x})}u_{l}(\mathbf{x},j)\), where

\[\mathrm{BR}(\mathbf{x})=\operatorname*{argmax}_{j^{\prime}\in[n]}\,u_{f}(\mathbf{x},j^{\prime}).\]

So we have \(\mathrm{BR}(\mathbf{x})\subseteq\mathrm{BR}_{\delta}(\mathbf{x})\) for any \(\delta>0\) and \(\mathbf{x}\in\Delta^{m}\). It follows that

\[u_{\text{SSE}}=\max_{\mathbf{x}\in\Delta^{m}}\max_{j\in\mathrm{BR}(\mathbf{x})}u_{l}(\mathbf{x},j)\geq\max_{\mathbf{x}\in\Delta^{m}}\min_{j\in\mathrm{BR}(\mathbf{x})}u_{l}(\mathbf{x},j)\geq\max_{\mathbf{x}\in\Delta^{m}}\min_{j\in\mathrm{BR}_{\delta}(\mathbf{x})}u_{l}(\mathbf{x},j)=u_{\text{RSE}}(\delta). \tag{10}\]

Equation (4) then follows from Equations (9) and (10). To prove Equation (5), we construct a leader strategy \(\widehat{\mathbf{x}}\) and show that playing \(\widehat{\mathbf{x}}\) against a \(\delta\)-rational follower yields a utility of at least \(u_{\text{SSE}}-\frac{\delta}{\Delta}\) for the leader. Let \(\langle\mathbf{x}^{*},j^{*}\rangle\) be the SSE of the game, and let \(\mathbf{x}^{j^{*}}\) be a strategy such that \(u_{f}(\mathbf{x}^{j^{*}},j^{*})\geq u_{f}(\mathbf{x}^{j^{*}},j^{\prime})+\Delta\) for all \(j^{\prime}\neq j^{*}\); by the definition of the inducibility gap, such a strategy must exist. Since \(\Delta>\delta\), we can set \(\widehat{\mathbf{x}}=(1-\frac{\delta}{\Delta})\mathbf{x}^{*}+\frac{\delta}{\Delta}\mathbf{x}^{j^{*}}\), which is a valid strategy.
We have \(\mathrm{BR}_{\delta}(\widehat{\mathbf{x}})=\{j^{*}\}\) due to the following derivation:

\[\begin{split} u_{f}(\widehat{\mathbf{x}},j^{*})&=(1-\frac{\delta}{\Delta})u_{f}(\mathbf{x}^{*},j^{*})+\frac{\delta}{\Delta}u_{f}(\mathbf{x}^{j^{*}},j^{*})\\ &\geq(1-\frac{\delta}{\Delta})u_{f}(\mathbf{x}^{*},j^{\prime})+\frac{\delta}{\Delta}\big{(}u_{f}(\mathbf{x}^{j^{*}},j^{\prime})+\Delta\big{)}\quad\forall j^{\prime}\neq j^{*}\\ &=(1-\frac{\delta}{\Delta})u_{f}(\mathbf{x}^{*},j^{\prime})+\frac{\delta}{\Delta}u_{f}(\mathbf{x}^{j^{*}},j^{\prime})+\delta\quad\forall j^{\prime}\neq j^{*}\\ &=u_{f}(\widehat{\mathbf{x}},j^{\prime})+\delta\quad\forall j^{\prime}\neq j^{*}.\end{split} \tag{11}\]

Hence, we have \(\min_{j\in\mathrm{BR}_{\delta}(\widehat{\mathbf{x}})}u_{l}(\widehat{\mathbf{x}},j)=u_{l}(\widehat{\mathbf{x}},j^{*})\), which can then be bounded from below as

\[u_{l}(\widehat{\mathbf{x}},j^{*})=(1-\frac{\delta}{\Delta})u_{l}(\mathbf{x}^{*},j^{*})+\frac{\delta}{\Delta}u_{l}(\mathbf{x}^{j^{*}},j^{*})\geq(1-\frac{\delta}{\Delta})u_{l}(\mathbf{x}^{*},j^{*})=(1-\frac{\delta}{\Delta})u_{\text{SSE}}, \tag{12}\]

where we used \(u_{l}(\mathbf{x}^{j^{*}},j^{*})\geq 0\). Since \(u_{\text{SSE}}\leq 1\), we have

\[u_{\text{RSE}}(\delta)\geq\min_{j\in\mathrm{BR}_{\delta}(\widehat{\mathbf{x}})}u_{l}(\widehat{\mathbf{x}},j)\geq(1-\delta/\Delta)u_{\text{SSE}}\geq u_{\text{SSE}}-\delta/\Delta.\]

**Property (2)**. \(u_{\text{RSE}}(0^{+})\) exists because, as shown in Property (1), \(u_{\text{RSE}}(\delta)\) is monotone non-increasing in \(\delta\) and is upper bounded by \(u_{\text{SSE}}\). Given that \(\Delta>0\), Equations (5) and (4), together with the squeeze theorem, imply \(u_{\text{RSE}}(0^{+})=u_{\text{SSE}}\).4

Footnote 4: Appendix B.1 shows an instance with \(u_{\text{RSE}}(0^{+})<u_{\text{SSE}}\) when \(\Delta=0\).

**Property (3)**. The discontinuity of \(u_{\text{RSE}}(\delta)\) is illustrated by the examples in Appendix B.2. In what follows, we prove its Lipschitz continuity. Pick an arbitrary \(L>1/\Delta\) and two arbitrary numbers \(\delta\) and \(\delta^{\prime}\) such that \(0<\delta<\delta^{\prime}\leq\Delta-1/L\). We show that \(|u_{\text{RSE}}(\delta)-u_{\text{RSE}}(\delta^{\prime})|\leq L(\delta^{\prime}-\delta)\) to complete the proof. Let \(\langle\mathbf{x}^{*},j^{*}\rangle\) denote a \(\delta\)-RSE, i.e.,

\[\mathbf{x}^{*}\in\operatorname*{argmax}_{\mathbf{x}\in\Delta^{m}}\min_{j\in\mathrm{BR}_{\delta}(\mathbf{x})}u_{l}(\mathbf{x},j)\text{ and }j^{*}\in\operatorname*{argmin}_{j\in\mathrm{BR}_{\delta}(\mathbf{x}^{*})}u_{l}(\mathbf{x}^{*},j).\]

Pick an arbitrary \(\widetilde{j}\in\operatorname*{argmax}_{j\in\mathrm{BR}_{\delta}(\mathbf{x}^{*})}u_{f}(\mathbf{x}^{*},j)\), and let \(\widetilde{\mathbf{x}}\) be a strategy such that \(u_{f}(\widetilde{\mathbf{x}},\widetilde{j})\geq u_{f}(\widetilde{\mathbf{x}},j)+\Delta\) for all \(j\neq\widetilde{j}\) (which exists according to the definition of the inducibility gap).
We construct a leader strategy

\[\mathbf{y}=\frac{\Delta-\delta^{\prime}}{\Delta-\delta}\mathbf{x}^{*}+\frac{\delta^{\prime}-\delta}{\Delta-\delta}\widetilde{\mathbf{x}}.\]

We have \(j\notin\mathrm{BR}_{\delta^{\prime}}(\mathbf{y})\) whenever \(j\notin\mathrm{BR}_{\delta}(\mathbf{x}^{*})\), because

\[u_{f}(\mathbf{y},\widetilde{j}) =\frac{\Delta-\delta^{\prime}}{\Delta-\delta}u_{f}(\mathbf{x}^{*},\widetilde{j})+\frac{\delta^{\prime}-\delta}{\Delta-\delta}u_{f}(\widetilde{\mathbf{x}},\widetilde{j}) \tag{13}\] \[\geq\frac{\Delta-\delta^{\prime}}{\Delta-\delta}\left(u_{f}(\mathbf{x}^{*},j)+\delta\right)+\frac{\delta^{\prime}-\delta}{\Delta-\delta}\left(u_{f}(\widetilde{\mathbf{x}},j)+\Delta\right)\] \[=\frac{\Delta-\delta^{\prime}}{\Delta-\delta}u_{f}(\mathbf{x}^{*},j)+\frac{\delta^{\prime}-\delta}{\Delta-\delta}u_{f}(\widetilde{\mathbf{x}},j)+\delta^{\prime}\] \[=u_{f}(\mathbf{y},j)+\delta^{\prime},\]

where \(u_{f}(\mathbf{x}^{*},\widetilde{j})\geq u_{f}(\mathbf{x}^{*},j)+\delta\) because \(\widetilde{j}\in\operatorname*{argmax}_{j\in\mathrm{BR}_{\delta}(\mathbf{x}^{*})}u_{f}(\mathbf{x}^{*},j)\) while \(j\notin\mathrm{BR}_{\delta}(\mathbf{x}^{*})\). Hence, \(\mathrm{BR}_{\delta^{\prime}}(\mathbf{y})\subseteq\mathrm{BR}_{\delta}(\mathbf{x}^{*})\) and

\[u_{\text{RSE}}(\delta^{\prime})\geq\min_{j\in\mathrm{BR}_{\delta^{\prime}}(\mathbf{y})}u_{l}(\mathbf{y},j)=\min_{j\in\mathrm{BR}_{\delta^{\prime}}(\mathbf{y})}\left(\frac{\Delta-\delta^{\prime}}{\Delta-\delta}u_{l}(\mathbf{x}^{*},j)+\frac{\delta^{\prime}-\delta}{\Delta-\delta}u_{l}(\widetilde{\mathbf{x}},j)\right)\geq\min_{j\in\mathrm{BR}_{\delta}(\mathbf{x}^{*})}\frac{\Delta-\delta^{\prime}}{\Delta-\delta}u_{l}(\mathbf{x}^{*},j)=\frac{\Delta-\delta^{\prime}}{\Delta-\delta}u_{\text{RSE}}(\delta).\]

This means that

\[u_{\text{RSE}}(\delta)-u_{\text{RSE}}(\delta^{\prime})\leq\frac{\delta^{\prime}-\delta}{\Delta-\delta}\cdot u_{\text{RSE}}(\delta)\leq\frac{\delta^{\prime}-\delta}{\Delta-\delta}\leq L(\delta^{\prime}-\delta),\]

where the last inequality is due to \(\delta\leq\Delta-1/L\). Since \(u_{\text{RSE}}\) is non-increasing, as we argued for Property (1), we then have \(|u_{\text{RSE}}(\delta)-u_{\text{RSE}}(\delta^{\prime})|=u_{\text{RSE}}(\delta)-u_{\text{RSE}}(\delta^{\prime})\leq L(\delta^{\prime}-\delta)\).

Theorem 1 indicates a connection between the leader's SSE strategies, \(\delta\)-RSE strategies, and maximin strategies. Given these different equilibrium concepts, a natural question to ask is: is it truly necessary to define and study the \(\delta\)-RSE solution concept? In particular, might it be that the SSE leader strategy or the maximin leader strategy already performs well, i.e., achieves an \(\epsilon\)-optimal \(u_{\text{RSE}}(\delta)\) under suboptimal follower responses? Unfortunately, our following result shows that both the SSE leader strategy and the maximin leader strategy are highly suboptimal, with an \(\Omega(1)\) utility loss for approximating the \(\delta\)-RSE. As a result, we cannot simply apply the leader strategy from another equilibrium concept (e.g., the SSE) to approximate the \(\delta\)-RSE.
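Before stating this formally, the following is a minimal numerical sketch of the quantities being compared: it computes the set \(\mathrm{BR}_{\delta}(\mathbf{x})\) of Equation (2) for a fixed leader strategy \(\mathbf{x}\), and the worst-case leader utility \(\min_{j\in\mathrm{BR}_{\delta}(\mathbf{x})}u_{l}(\mathbf{x},j)\). The language (Julia) and all names are our own choices purely for illustration; the paper prescribes no implementation. Note that this evaluation step is easy; it is the outer maximization over \(\mathbf{x}\) that Theorem 2 below shows to be hard.

```julia
using LinearAlgebra

# δ-optimal response set BR_δ(x) of Equation (2); note the strict ">":
# an action is excluded only if some alternative is at least δ better.
# Uf is the m×n follower payoff matrix; x is a mixed strategy over the m rows.
function br_delta(Uf::AbstractMatrix, x::AbstractVector, δ::Real)
    v = Uf' * x                           # follower's expected utility per action
    return findall(>(maximum(v) - δ), v)  # indices within δ of the best response
end

# Worst-case leader utility of a fixed x against a δ-rational follower
# (the inner minimization defining u_RSE(δ)).
function robust_utility(Ul::AbstractMatrix, Uf::AbstractMatrix,
                        x::AbstractVector, δ::Real)
    vl = Ul' * x
    return minimum(vl[br_delta(Uf, x, δ)])
end

# Example: a random 3×4 game with payoffs in [0,1], uniform leader strategy.
Ul, Uf = rand(3, 4), rand(3, 4)
x, δ = fill(1/3, 3), 0.1
@show br_delta(Uf, x, δ) robust_utility(Ul, Uf, x, δ)
```

Evaluating the SSE or maximin strategy through this worst-case objective is exactly how the suboptimality in the next proposition is measured.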
**Proposition 2** (Suboptimality of Other Equilibrium Strategies).: _There exists an instance whose SSE strategy \(\widehat{\mathbf{x}}_{1}\) and maximin strategy \(\widehat{\mathbf{x}}_{2}\) are highly suboptimal, with a gap of \(\frac{1}{4}\) for approximating the \(\delta\)-RSE:_

\[\min_{j\in\text{BR}_{\delta}(\widehat{\mathbf{x}}_{1})}u_{l}(\widehat{\mathbf{x}}_{1},j)\leq u_{\text{RSE}}(\delta)-\frac{1}{4}\quad\text{and}\quad\min_{j\in\text{BR}_{\delta}(\widehat{\mathbf{x}}_{2})}u_{l}(\widehat{\mathbf{x}}_{2},j)\leq u_{\text{RSE}}(\delta)-\frac{1}{4},\]

_where \(\widehat{\mathbf{x}}_{1}=\operatorname*{argmax}_{\mathbf{x}\in\Delta^{m}}\max_{j\in\text{BR}(\mathbf{x})}u_{l}(\mathbf{x},j)\) and \(\widehat{\mathbf{x}}_{2}=\operatorname*{argmax}_{\mathbf{x}\in\Delta^{m}}\min_{j\in[n]}u_{l}(\mathbf{x},j)\)._

The proof of this proposition is by construction; see Appendix A.2 for the constructed example.

## 4 Algorithmic Properties of RSE

In this section, we study the complexity of computing and approximating a \(\delta\)-RSE. Throughout this section, the utility matrices of both players are known beforehand as the input to the problem.

### Hardness of Approximating the \(\delta\)-RSE

We start by proving that it is NP-hard, in general, to obtain an \(\epsilon\)-optimal \(\delta\)-RSE leader strategy, as we show in Theorem 2 below. We remark that, though \(\delta\)-RSE appears to be a natural solution concept, we are not aware of any previous results on its computational complexity. The closest result we can find is the NP-hardness of computing an optimal leader strategy that is robust with respect to uncertainty about the follower's utility matrix, studied by [20]. Despite some similarity in spirit, these two problems cannot be seen as special cases of each other. Moreover, from a technical point of view, our hardness result also sheds light on the inapproximability of the problem, whereas the proof technique of [20] only implies the hardness of exact computation and leaves the inapproximability of their problem open.

**Theorem 2**.: _It is NP-hard to compute a \(\frac{1}{2n}\)-optimal \(\delta\)-RSE leader strategy._

As mentioned in the remark at the end of Section 2, all approximations in this paper are in an additive sense. The hardness proof is somewhat intricate, so we also describe the intuition behind the reduction as we present the formal proof.

Proof.: We show a reduction from the eXact 3-set Cover (X3C) problem. An X3C instance is given by an integer \(k\) and a collection of \(m\) subsets \(S_{1},\ldots,S_{m}\subseteq[3k]\), each of size 3. It is a yes-instance if there exists \(J\subseteq[m]\) such that \(|J|=k\) and \(\bigcup_{j\in J}S_{j}=[3k]\); we call such a \(J\) an _exact cover_. Otherwise, it is a no-instance. We reduce an instance of X3C to a game with the following utility matrices. The leader has \(m\) actions to choose from, each corresponding to a subset in the X3C instance. The follower has \(n=m+3k+1\) actions \(\{a\}\cup\{b_{j}:j\in[m]\}\cup\{c_{i}:i\in[3k]\}\), where each \(b_{j}\) corresponds to the subset \(S_{j}\), and each \(c_{i}\) corresponds to an element of \([3k]\). Suppose \(\epsilon>0\) is a constant, and let \(\lambda=\frac{\epsilon}{6m\cdot k^{2}}\). The follower's utility function is given as follows (see also Figure 1 for an illustration).

* For all \(\ell\in[m]\): \(u_{f}(S_{\ell},a)=1\).
* For all \(\ell\in[m]\) and \(j\in[m]\): \[u_{f}(S_{\ell},b_{j})=\begin{cases}\max\left\{1-\frac{\delta}{1-\lambda},\quad 0\right\},&\text{ if }j\neq\ell\\ \min\left\{1,\quad\frac{1-\delta}{\lambda}\right\},&\text{ if }j=\ell\end{cases}\] (14) This ensures that \(u_{f}(\mathbf{x},b_{j})\leq 1-\delta\) whenever \(x_{j}\leq\lambda\), and \(1-\delta<u_{f}(\mathbf{x},b_{j})\leq 1\) otherwise.
* For all \(\ell\in[m]\) and \(i\in[3k]\): \[u_{f}(S_{\ell},c_{i})=\begin{cases}\min\left\{(1-\delta)\cdot\frac{k}{k-1+\lambda k},\quad 1\right\},&\text{ if }i\notin S_{\ell}\\ \max\left\{0,\quad 1-\delta\cdot\frac{k}{1-\lambda k}\right\},&\text{ if }i\in S_{\ell}\end{cases}\] (15) This ensures that \(1-\delta<u_{f}(\mathbf{x},c_{i})\leq 1\) whenever \(\sum_{\ell:i\in S_{\ell}}x_{\ell}<\frac{1}{k}-\lambda\), and \(u_{f}(\mathbf{x},c_{i})\leq 1-\delta\) otherwise.

The leader's utility function is given as follows.

* For all \(\ell\in[m]\): \[u_{l}(S_{\ell},a)=\frac{1}{k},\quad\text{ and }\quad u_{l}(S_{\ell},c_{i})=0\quad\text{ for all }i\in[3k]\] (16)
* For all \(\ell\in[m]\) and \(j\in[m]\): \[u_{l}(S_{\ell},b_{j})=\begin{cases}0&\text{ if }j\neq\ell\\ 1&\text{ if }j=\ell\end{cases}\] (17)

We show that if the X3C instance is a yes-instance, then the leader obtains utility \(\frac{1}{k}\) in a robust Stackelberg equilibrium; otherwise, the leader obtains at most \(\frac{1}{2k}\cdot(1+\epsilon)\). Hence, no \(\frac{1}{2k}\cdot(1-\epsilon)\)-optimal algorithm exists unless P = NP. Intuitively, to obtain the higher utility of \(\frac{1}{k}\), the leader needs to prevent the follower's actions \(c_{i}\) from being \(\delta\)-optimal responses. According to the utility definition, this requires choosing the actions \(S_{\ell}\) that cover \(i\) with a sufficiently high total probability, close to \(1/k\); see Figure 1 (right). On the other hand, choosing each \(S_{\ell}\) comes with a price, as it will cause \(b_{\ell}\) to be a \(\delta\)-optimal response if the probability reaches \(\lambda\); see Figure 1 (left). Hence, in order to maintain utility \(\frac{1}{k}\) for the leader, each \(S_{\ell}\) should either be picked with probability close to zero (i.e., \(<\lambda\)) or be picked with probability at least \(\frac{1}{k}\). This is analogous to the discrete choice of \(S_{\ell}\) in the X3C instance.

Figure 1: Utility functions for the reduction of Theorem 2.

More specifically, the reduction proceeds as follows. First, suppose that the X3C instance is a yes-instance and \(J\) is an exact cover of this instance. Consider the following leader strategy \(\mathbf{x}=(x_{j})_{j\in[m]}\), whereby the leader plays each pure strategy \(S_{j}\) with probability \(x_{j}=\frac{1}{k}\) if \(j\in J\), and \(x_{j}=0\) if \(j\notin J\). The follower's utility for responding to \(\mathbf{x}\) with each pure strategy is as follows:

* Clearly, \(u_{f}(\mathbf{x},a)=1\) and \(u_{l}(\mathbf{x},a)=\frac{1}{k}\).
* For each \(b_{j}\), according to (14): * If \(j\in J\), we have \(x_{j}=\frac{1}{k}>\lambda\) and hence \(u_{f}(\mathbf{x},b_{j})>1-\delta\); meanwhile, \(u_{l}(\mathbf{x},b_{j})=\frac{1}{k}\) according to (17). * If \(j\notin J\), we have \(x_{j}=0<\lambda\) and hence \(u_{f}(\mathbf{x},b_{j})<1-\delta\).
* For each \(c_{i}\), we have \(i\in S_{j}\) for some \(j\in J\) since \(J\) is an exact cover. Hence, \(\sum_{\ell:i\in S_{\ell}}x_{\ell}\geq\frac{1}{k}>\frac{1}{k}-\lambda\), and we have \(u_{f}(\mathbf{x},c_{i})<1-\delta\).
As a result, \(\mathrm{BR}_{\delta}(\mathbf{x})=\{a\}\cup\{b_{j}:j\in J\}\), and \(\min_{j^{\prime}\in\mathrm{BR}_{\delta}(\mathbf{x})}u_{l}(\mathbf{x},j^{\prime})=\frac{1}{k}\).

Conversely, suppose that by playing some strategy \(\mathbf{x}\), the leader obtains utility at least \(\frac{1}{2k}\cdot(1+\epsilon)\). We show that the instance must be a yes-instance. In this case, we have \(\min_{y\in\mathrm{BR}_{\delta}(\mathbf{x})}u_{l}(\mathbf{x},y)\geq\frac{1}{2k}\cdot(1+\epsilon)\). According to the definition of the leader's utility function, this implies:

* \(c_{i}\notin\mathrm{BR}_{\delta}(\mathbf{x})\) for all \(i\in[3k]\). Hence, we have \[\sum_{\ell:i\in S_{\ell}}x_{\ell}\geq\frac{1}{k}-\lambda.\] (18) Since \(|S_{\ell}|=3\), we have \(\sum_{i\in[3k]}\sum_{\ell:i\in S_{\ell}}x_{\ell}=3\sum_{\ell\in[m]}x_{\ell}=3\), so it holds for all \(i\in[3k]\) that: \[\sum_{\ell:i\in S_{\ell}}x_{\ell}=3-\sum_{i^{\prime}\in[3k]\setminus\{i\}}\sum_{\ell:i^{\prime}\in S_{\ell}}x_{\ell}<\frac{1}{k}+3k\cdot\lambda\leq\frac{1}{k}\cdot(1+\epsilon/2).\] (19)
* If \(b_{j}\in\mathrm{BR}_{\delta}(\mathbf{x})\), then it must be that \(x_{j}\geq\frac{1}{2k}\cdot(1+\epsilon)\). Hence, for each \(j\in[m]\), either \(x_{j}\geq\frac{1}{2k}\cdot(1+\epsilon)\), or \(x_{j}\leq\lambda\) (as implied by \(b_{j}\notin\mathrm{BR}_{\delta}(\mathbf{x})\)).

This further implies that for each \(i\in[3k]\), there is exactly one \(\ell\) with \(i\in S_{\ell}\) and \(x_{\ell}\geq\frac{1}{2k}\cdot(1+\epsilon)\): the existence of two or more such \(\ell\) would violate (19); on the other hand, if no such \(\ell\) existed, we would have \(\sum_{\ell:i\in S_{\ell}}x_{\ell}\leq m\cdot\lambda<\frac{1}{k}-\lambda\), which violates (18). It follows that, for this \(\ell\), we have

\[x_{\ell}=\sum_{\ell^{\prime}:i\in S_{\ell^{\prime}}}x_{\ell^{\prime}}-\sum_{\ell^{\prime}:\ell^{\prime}\neq\ell\text{ and }i\in S_{\ell^{\prime}}}x_{\ell^{\prime}}\geq\sum_{\ell^{\prime}:i\in S_{\ell^{\prime}}}x_{\ell^{\prime}}-(m-1)\cdot\lambda\quad\geq\quad 1/k-m\cdot\lambda\quad>\quad 1/k-\epsilon/k^{2},\]

which implies that the set \(J:=\left\{\ell\in[m]:x_{\ell}\geq\frac{1}{2k}\cdot(1+\epsilon)\right\}\) has at most \(k\) elements: otherwise, \(\sum_{\ell\in J}x_{\ell}>(k+1)\cdot(1/k-\epsilon/k^{2})>1\). Therefore, \(J\) is an exact cover, and the X3C instance is a yes-instance. We have thus shown that no \(\frac{1}{2k}\cdot(1-\epsilon)\)-optimal algorithm exists unless P = NP. Since \(n>3k\), it is also NP-hard to compute a \(\frac{1-\epsilon}{n}\)-optimal \(\delta\)-RSE leader strategy for any \(\epsilon\in(0,1]\). This completes the proof.

Theorem 2 shows that it is NP-hard in general to approximate a \(\delta\)-RSE. However, as a corollary of Property (1) in Theorem 1, we show that there exists an efficient algorithm to compute a \(\frac{\delta}{\Delta}\)-optimal \(\delta\)-RSE leader strategy. This intuitively illustrates that the difficult instances for finding a \(\delta\)-RSE are those with a small inducibility gap \(\Delta\) but a large robustness requirement \(\delta\). We refer readers to the detailed algorithm for this corollary in Appendix C.1.

**Corollary 1**.: _For Stackelberg games with inducibility gap \(\Delta>\delta\), there exists an algorithm that computes a \(\frac{\delta}{\Delta}\)-optimal \(\delta\)-RSE leader strategy in \(O(n)\) time._

### A QPTAS for \(\delta\)-RSE

We now move on to develop a quasi-polynomial time approximation scheme (QPTAS) for computing an approximate \(\delta\)-RSE for any given \(\delta\).
This leaves an intriguing open problem of closing the gap between the efficiency of this algorithm and the above inapproximability result -- specifically, to understand whether a PTAS exists for \(\delta\)-RSE or whether Theorem 2 can be strengthened to hardness of obtaining a constant additive approximation (possibly under some assumption like the exponential time hypothesis, as used by [30] to rule out a PTAS for Nash equilibrium). Our preliminary investigation suggests that either direction seems to require ideas significantly different from our current techniques.

**Theorem 3**.: _For any \(\epsilon>0\), we can compute an \(\epsilon\)-optimal \(\delta\)-RSE leader strategy in quasi-polynomial time \(O\big{(}m^{\left\lceil\frac{\log 2n}{2\epsilon^{2}}\right\rceil}n\log n\big{)}\)._

Before presenting the formal proof, we briefly overview the high-level idea. Our algorithm starts with a probabilistic argument similar to that of [22] for arguing the existence of an approximate Nash equilibrium of a simple form, termed a _\(k\)-uniform strategy_. Formally, a mixed strategy \(\mathbf{x}\in\Delta^{m}\) is called \(k\)-uniform for some integer \(k\) if every \(x_{i}=k_{i}/k\) for some integer \(k_{i}\leq k\). [22] prove that in any two-player \(m\times m\) matrix game there always exists a pair of \(k\)-uniform strategies with \(k=\frac{12\ln m}{\epsilon^{2}}\) that forms an \(\epsilon\)-Nash equilibrium. Consequently, to find an \(\epsilon\)-Nash equilibrium, they only need to exhaustively search all possible \(k\)-uniform mixed strategy pairs. Unfortunately, searching for an \(\epsilon\)-optimal \(\delta\)-RSE turns out to require significantly more work due to the bi-level nature of our problem. In fact, the \(\epsilon\)-optimal \(\delta\)-RSE is in general not even a \(k\)-uniform strategy. This is because, while any \(k\)-uniform leader strategy \(\overline{\mathbf{x}}\) and a nearby strategy \(\mathbf{x}\) will lead to similar leader utilities, they may lead to different sets \(\mathrm{BR}_{\delta}(\mathbf{x})\) of follower \(\delta\)-optimal response actions, which in turn affects the induced leader utility. Consequently, the major algorithmic part of our proof is to efficiently search, through a carefully crafted binary search procedure, the entire _nearby convex region_ of each uniform strategy \(\overline{\mathbf{x}}\) in order to identify a leader strategy \(\mathbf{x}\) that induces the most favorable follower response action. Details are presented in the following proof.

Proof of Theorem 3.: Let \(\mathcal{G}_{k}\subseteq\Delta^{m}\) denote the set of all \(k\)-uniform mixed strategies for the leader (who has \(m\) actions). Note that there are \(O(m^{k})\) many \(k\)-uniform strategies5. The following lemma is originally from [3] and was later used by [22] for computing approximate Nash equilibria. It can be proved via the probabilistic method [2].

Footnote 5: The total number can be computed as the number of ways of dividing \(k\) items into \(m\) parts, where each part may have zero items.

**Lemma 1** ([3, 22]).: _Let \(A\in[0,1]^{m\times n}\) be the leader's payoff matrix. For any \(\epsilon>0\) and any leader strategy \(\mathbf{x}\in\Delta^{m}\), there exists a \(k\)-uniform strategy \(\overline{\mathbf{x}}\in\mathcal{G}_{k}\) with \(k=\lceil\frac{\log 2n}{2\epsilon^{2}}\rceil\) such that_

\[|u_{l}(\mathbf{x},j)-u_{l}(\overline{\mathbf{x}},j)|\leq\epsilon\quad\text{ for all }j=1,\cdots,n.\]

We now use Lemma 1 to construct subspaces of the leader's strategy space.
Specifically, for each \(\overline{\mathbf{x}}\in\mathcal{G}_{k}\), we construct \(\Delta^{\overline{\mathbf{x}}}\subseteq\Delta^{m}\) such that:

\[\Delta^{\overline{\mathbf{x}}}=\big{\{}\mathbf{x}\,\big{|}\,\mathbf{x}\in\Delta^{m}\text{ and }|u_{l}(\mathbf{x},j)-u_{l}(\overline{\mathbf{x}},j)|\leq\epsilon\text{ for all }j=1,\cdots,n\big{\}}\]

Note that each \(\Delta^{\overline{\mathbf{x}}}\) is a convex region, since it is defined by a set of linear constraints obtained by writing \(|u_{l}(\mathbf{x},j)-u_{l}(\overline{\mathbf{x}},j)|\leq\epsilon\) as \(u_{l}(\mathbf{x},j)-u_{l}(\overline{\mathbf{x}},j)\leq\epsilon\) and \(-u_{l}(\mathbf{x},j)+u_{l}(\overline{\mathbf{x}},j)\leq\epsilon\). Moreover, \(\bigcup_{\overline{\mathbf{x}}\in\mathcal{G}_{k}}\Delta^{\overline{\mathbf{x}}}=\Delta^{m}\), because Lemma 1 implies that any \(\mathbf{x}\in\Delta^{m}\) belongs to some \(\Delta^{\overline{\mathbf{x}}}\). The key to our proof is to compute an approximately optimal \(\delta\)-RSE leader strategy within each convex region \(\Delta^{\overline{\mathbf{x}}}\). Note that, fixing any follower response action \(j\), the mixed strategies \(\mathbf{x}\in\Delta^{\overline{\mathbf{x}}}\) give rise to roughly the same leader utility, up to at most \(\epsilon\) difference by Lemma 1. However, this does not imply that they are equally good, since different mixed strategies may lead to different sets \(\mathrm{BR}_{\delta}(\mathbf{x})\) of \(\delta\)-optimal follower responses, which in turn induce different leader utilities. So we need to search for an \(\mathbf{x}\in\Delta^{\overline{\mathbf{x}}}\) that maximizes the worst-case (over possible follower responses) leader utility, or formally, to solve the following

\[\text{optimization problem within }\Delta^{\overline{\mathbf{x}}}:\qquad\max_{\mathbf{x}\in\Delta^{\overline{\mathbf{x}}}}\min_{j\in\mathrm{BR}_{\delta}(\mathbf{x})}u_{l}(\mathbf{x},j) \tag{20}\]

Unfortunately, Problem (20) is generally intractable, since the feasible region of the inner \(\min\) depends on \(\mathbf{x}\). We instead solve the following more tractable variant,

\[\text{surrogate of Problem (20)}:\qquad\max_{\mathbf{x}\in\Delta^{\overline{\mathbf{x}}}}\min_{j\in\mathrm{BR}_{\delta}(\mathbf{x})}u_{l}(\overline{\mathbf{x}},j) \tag{21}\]

which substitutes \(u_{l}(\overline{\mathbf{x}},j)\) for \(u_{l}(\mathbf{x},j)\) in Problem (20). Observe that any optimal solution to Problem (21) must be an \(\epsilon\)-optimal solution to Problem (20), because their objective functions differ by at most \(\epsilon\) due to Lemma 1 and our restriction to \(\mathbf{x}\in\Delta^{\overline{\mathbf{x}}}\). What is nice about Problem (21) is that its objective function only directly depends on \(j\) (whose choice then depends on \(\mathbf{x}\)). This allows us to design Algorithm 1, which can efficiently search for the best \(\mathbf{x}\in\Delta^{\overline{\mathbf{x}}}\). We prove its correctness in the following lemma.

```
Input: Leader strategy \(\overline{\mathbf{x}}\in\mathcal{G}_{k}\), its corresponding \(\Delta^{\overline{\mathbf{x}}}\), and target utility \(\mu\).
Output: If \(\exists\,\mathbf{x}\in\Delta^{\overline{\mathbf{x}}}\) such that \(\min_{j\in\mathrm{BR}_{\delta}(\mathbf{x})}u_{l}(\overline{\mathbf{x}},j)\geq\mu\), output True and such an \(\mathbf{x}\); else output False.
1: \(Q\leftarrow\varnothing\);
2: for every follower action \(j\in[n]\) do
3:     if \(u_{l}(\overline{\mathbf{x}},j)<\mu\) then
4:         \(Q\leftarrow Q\cup\{j\}\);
5: for every follower action \(j\in[n]\) with \(j\notin Q\) do
6:     determine whether the following linear program is feasible:
           find \(\mathbf{x}\in\Delta^{\overline{\mathbf{x}}}\)
           s.t. \(u_{f}(\mathbf{x},j)\geq u_{f}(\mathbf{x},j^{\prime}),\ \forall j^{\prime}\in[n]\)
                \(u_{f}(\mathbf{x},j)\geq u_{f}(\mathbf{x},j^{\prime})+\delta,\ \forall j^{\prime}\in Q\)        (22)
7: if the above linear feasibility problem is feasible for some \(j\) then
8:     return True and any feasible solution \(\mathbf{x}\) of that problem;
9: return False.
```

**Algorithm 1**: Utility-Verification

**Lemma 2**.: _For any \(\mu\in[0,1]\), there is a polynomial time algorithm that asserts whether the optimal objective of Problem (21) is larger than \(\mu\) or not, and in the former case outputs an \(\mathbf{x}\in\Delta^{\overline{\mathbf{x}}}\) that achieves \(\min_{j\in\mathrm{BR}_{\delta}(\mathbf{x})}u_{l}(\overline{\mathbf{x}},j)\geq\mu\)._

Proof.: The details of this algorithm are presented in Algorithm 1. At a high level, we first identify all "bad" follower actions \(j\) that cannot satisfy our request, i.e., those with \(u_{l}(\overline{\mathbf{x}},j)<\mu\), and group them into the set \(Q\). We then check whether there exists an \(\mathbf{x}\in\Delta^{\overline{\mathbf{x}}}\) such that its follower \(\delta\)-optimal response set does not contain any bad action, i.e., \(\mathrm{BR}_{\delta}(\mathbf{x})\cap Q=\varnothing\). This latter question reduces to a series of linear feasibility problems, each for a \(j\notin Q\) (Program (22)), deciding whether there exists an \(\mathbf{x}\in\Delta^{\overline{\mathbf{x}}}\) under which \(j\) is the best follower action and the follower's utility from \(j\) is at least \(\delta\) larger than his utility from any \(j^{\prime}\in Q\).

Armed with Lemma 2, we can use binary search to find an \(\mathbf{x}\) that exactly solves Problem (21) after \(O(\log n)\) rounds, since we know the problem only has \(n\) possible values of \(\mu\) (i.e., \(u_{l}(\overline{\mathbf{x}},1),\cdots,u_{l}(\overline{\mathbf{x}},n)\)). This solution will be \(\epsilon\)-optimal for Problem (20). We do this for the \(O(m^{k})\) possible \(k\)-uniform strategies and output the strategy with the largest objective. This is an \(\epsilon\)-optimal \(\delta\)-RSE.

## 5 Learnability of RSE

In this section, we turn to another efficiency aspect of the RSE, namely the efficiency of learning an \(\epsilon\)-optimal \(\delta\)-RSE without knowing the leader's or follower's utility matrix. Motivated by the recent work on learning the strong Stackelberg equilibrium (SSE) by Bai et al. [5], we extend their setting to an online learning problem for the \(\delta\)-RSE. As in [5], the learner cannot directly observe the mean reward matrices of a Stackelberg game, \(u_{l},u_{f}\in\mathbb{R}^{m\times n}\), but has to learn to approximate the \(\delta\)-RSE from noisy bandit feedback. The motivation for this learning paradigm comes from a common multi-agent learning practice today of "centralized training, decentralized execution" [23]. That is, in many robotics and game-playing applications (see, e.g., OpenAI Gym), the learning environments are well-defined such that the game parameters can be learned in a centralized fashion by controlling agents' action profiles.
Thus, the agents can learn to estimate the game parameters from their noisy feedback, and then deploy the learned strategies in the decentralized environment to play against unknown opponents. We describe the learning setting in Definition 4 and start this section by presenting a sample-efficient learning algorithm that can learn an approximate \(\delta\)-RSE with a utility guarantee. Notably, as a corollary, our sample complexity result strictly strengthens that of [5] -- with both an improved _utility guarantee_ and better _computational efficiency_6 -- for non-degenerate Stackelberg games (i.e., \(\Delta>0\)).

Footnote 6: Interestingly, the algorithm for learning the mixed strategy SSE in [5] happens to be solving an approximate \(\delta\)-RSE. Quoting the authors' own words in their paper, "_it is unclear whether this program (the \(\delta\)-RSE problem) can be reformulated to be solved efficiently in polynomial time_". Our Theorem 2 confirms that their program is indeed NP-hard.

**Definition 4** (Learning \(\delta\)-RSE from Bandit Feedback [5]).: _At each round, the learner can query an action pair \((i,j)\) and observe noisy bandit feedback, \(r_{l}(i,j)=u_{l}(i,j)+\xi,r_{f}(i,j)=u_{f}(i,j)+\xi^{\prime}\), where \(\xi,\xi^{\prime}\) are i.i.d. zero-mean noises with finite variances._

**Theorem 4**.: _There exists a learning algorithm that can learn an approximate \(\delta\)-RSE of any Stackelberg game \((u_{l},u_{f})\in\mathbb{R}^{m\times n}\) with leader's utility at least \(u_{\text{RSE}}(\delta+4\epsilon)-2\epsilon\), using \(O(mn\log(mn/\iota)/\epsilon^{2})\) samples, with probability at least \(1-\iota\)._

Proof.: We prove its existence by explicitly constructing the learning algorithm. It starts with the following sampling procedure: play each action pair \((i,j)\) for \(T=\frac{1}{2\epsilon^{2}}\log(\frac{2mn}{\iota})\) rounds to get the mean reward estimates \(\widetilde{u}_{l}(i,j)=\frac{1}{T}\sum_{t=1}^{T}r_{l}^{t}(i,j)\) and \(\widetilde{u}_{f}(i,j)=\frac{1}{T}\sum_{t=1}^{T}r_{f}^{t}(i,j)\). By a standard concentration inequality, each of the estimates \(\widetilde{u}_{l}(i,j),\widetilde{u}_{f}(i,j)\) is within error \(\epsilon\) with probability \(1-\frac{\iota}{mn}\). Thus, by a union bound, with probability \(1-\iota\), the utility estimates \(\widetilde{u}_{l},\widetilde{u}_{f}\) satisfy \(||\widetilde{u}_{l}-u_{l}||_{\infty}\leq\epsilon\) and \(||\widetilde{u}_{f}-u_{f}||_{\infty}\leq\epsilon\). The sample complexity in total is \(\frac{mn}{2\epsilon^{2}}\log(\frac{2mn}{\iota})=O(mn\log(mn/\iota)/\epsilon^{2})\).

To clarify the context of different games against followers of different rationality, we denote by \(V(\mathbf{x};u_{l},u_{f},\delta)\) the leader utility of strategy \(\mathbf{x}\) against a \(\delta\)-rational follower (who takes an action from the \(\delta\)-optimal response set) in the Stackelberg game \((u_{l},u_{f})\), and we let \(V^{*}(u_{l},u_{f},\delta)\) denote the leader's utility in the \(\delta\)-RSE of the Stackelberg game \((u_{l},u_{f})\). Let \(\langle\mathbf{x},j^{*}\rangle\) be the \((\delta+2\epsilon)\)-RSE of the Stackelberg game \((\widetilde{u}_{l},\widetilde{u}_{f})\). We claim the learning algorithm can simply output \(\langle\mathbf{x},j^{*}\rangle\) as the approximate \(\delta\)-RSE of the Stackelberg game \((u_{l},u_{f})\).
So it remains to prove the following series of inequalities:

\[V(\mathbf{x};u_{l},u_{f},\delta)\geq V(\mathbf{x};u_{l},\widetilde{u}_{f},\delta+2\epsilon)\geq V(\mathbf{x};\widetilde{u}_{l},\widetilde{u}_{f},\delta+2\epsilon)-\epsilon=V^{*}(\widetilde{u}_{l},\widetilde{u}_{f},\delta+2\epsilon)-\epsilon\geq V^{*}(\widetilde{u}_{l},u_{f},\delta+4\epsilon)-\epsilon\geq V^{*}(u_{l},u_{f},\delta+4\epsilon)-2\epsilon.\]

There are four inequalities in the above chain. The first and third inequalities follow from Lemma 3 below. The equality is by the construction of \(\mathbf{x}\). The second and last inequalities are based on the fact that, for any \(\widetilde{u}\) with \(||\widetilde{u}-u||_{\infty}\leq\epsilon\) and any \(\mathbf{x}\in\Delta^{m}\), \(\widetilde{u}(\mathbf{x},j)-u(\mathbf{x},j)=\mathbf{x}(\widetilde{u}-u)e_{j}\in[-\epsilon,\epsilon]\). This concludes our main proof.

**Lemma 3**.: _For any \(\widetilde{u}_{f}\) with \(||\widetilde{u}_{f}-u_{f}||_{\infty}\leq\epsilon\) and any \(\mathbf{x}\in\Delta^{m}\), \(V(\mathbf{x};u_{l},u_{f},\delta)\geq V(\mathbf{x};u_{l},\widetilde{u}_{f},\delta+2\epsilon)\), and thus \(V^{*}(u_{l},u_{f},\delta)\geq V^{*}(u_{l},\widetilde{u}_{f},\delta+2\epsilon)\)._

Proof of Lemma 3.: Let \(\mathrm{BR}_{\delta}(\mathbf{x},u_{f})\) denote the set of \(\delta\)-optimal response(s) to \(\mathbf{x}\) for a follower with utility function \(u_{f}\). We start by showing that \(\forall\mathbf{x}\in\Delta^{m}\), \(\mathrm{BR}_{\delta+2\epsilon}(\mathbf{x},\widetilde{u}_{f})\supseteq\mathrm{BR}_{\delta}(\mathbf{x},u_{f})\); it suffices to argue that for any \(j\in[n]\), if \(j\not\in\mathrm{BR}_{\delta+2\epsilon}(\mathbf{x},\widetilde{u}_{f})\), then \(j\not\in\mathrm{BR}_{\delta}(\mathbf{x},u_{f})\). That is, given \(j\not\in\mathrm{BR}_{\delta+2\epsilon}(\mathbf{x},\widetilde{u}_{f})\), we know there exists \(j^{\prime}\) such that \(\widetilde{u}_{f}(\mathbf{x},j)-\widetilde{u}_{f}(\mathbf{x},j^{\prime})\leq-\delta-2\epsilon\). Reusing the fact that for any \(\widetilde{u}\) with \(||\widetilde{u}-u||_{\infty}\leq\epsilon\) and any \(\mathbf{x}\in\Delta^{m}\), \(|\widetilde{u}(\mathbf{x},j)-u(\mathbf{x},j)|\leq\epsilon\), we have \(u_{f}(\mathbf{x},j)-u_{f}(\mathbf{x},j^{\prime})\leq\widetilde{u}_{f}(\mathbf{x},j)-\widetilde{u}_{f}(\mathbf{x},j^{\prime})+2\epsilon\leq-\delta\), and thus \(j\not\in\mathrm{BR}_{\delta}(\mathbf{x},u_{f})\). Since \(\mathrm{BR}_{\delta+2\epsilon}(\mathbf{x},\widetilde{u}_{f})\supseteq\mathrm{BR}_{\delta}(\mathbf{x},u_{f})\), we have, by definition,

\[V(\mathbf{x};u_{l},u_{f},\delta)=\min_{j\in\mathrm{BR}_{\delta}(\mathbf{x},u_{f})}u_{l}(\mathbf{x},j)\geq\min_{j\in\mathrm{BR}_{\delta+2\epsilon}(\mathbf{x},\widetilde{u}_{f})}u_{l}(\mathbf{x},j)=V(\mathbf{x};u_{l},\widetilde{u}_{f},\delta+2\epsilon).\]

This leads to the following inequalities,

\[V^{*}(u_{l},\widetilde{u}_{f},\delta+2\epsilon)=V(\mathbf{x}_{1};u_{l},\widetilde{u}_{f},\delta+2\epsilon)\leq V(\mathbf{x}_{1};u_{l},u_{f},\delta)\leq V(\mathbf{x}_{2};u_{l},u_{f},\delta)=V^{*}(u_{l},u_{f},\delta),\]

where \(\mathbf{x}_{1}\) is the \((\delta+2\epsilon)\)-RSE strategy of the Stackelberg game \((u_{l},\widetilde{u}_{f})\), and \(\mathbf{x}_{2}\) is the \(\delta\)-RSE strategy of the Stackelberg game \((u_{l},u_{f})\).

A few remarks on Theorem 4 follow. First, we note that the above learning algorithm is sample-efficient but not computationally efficient, since it requires solving the exact \((\delta+2\epsilon)\)-RSE of a Stackelberg game, which we already know is NP-hard to compute.
However, it is possible to employ the QPTAS of Theorem 3 (built on Algorithm 1) to find an \(\epsilon\)-optimal \(\delta\)-RSE, and this would not change the order of our learning algorithm's approximation ratio. Second, Theorem 4 combined with Theorem 1 implies the following corollary about the efficient learning of an approximate SSE. Specifically, leveraging the convergence property of the \(\delta\)-RSE, the following Corollary 2 strengthens the SSE learning results in the recent work of [5], in terms of providing a better utility guarantee and a computationally efficient learning algorithm. Specifically, [5] states that under the same sample complexity, a learning algorithm can only learn \(u_{\text{SSE}}\) up to \(u_{\text{RSE}}(\delta)\), while the gap between them can be arbitrarily large. But our result shows that, as long as the game instance is not degenerate (e.g., with inducibility gap \(\Delta>0\)), the gap can be bounded, so that the SSE can be efficiently learned up to \(\epsilon\) utility loss. Moreover, as highlighted by the authors of [5], their learning algorithm does not have a computational efficiency guarantee and may take exponential time in general, since there is no efficient algorithm to compute the approximate SSE with pessimistic follower tie-breaking. But our result implies that, as long as the game instance is non-degenerate with inducibility gap \(\Delta>\epsilon\), we can efficiently compute the \(\epsilon\)-RSE according to Corollary 1.

**Corollary 2** (Efficient Learning of SSE).: _For any Stackelberg game with inducibility gap \(\Delta>0\), an \(\epsilon\)-optimal SSE can be learned from \(O(\frac{1}{\epsilon^{2}})\) samples in polynomial time for any \(\epsilon<\Delta\)._

Third, we point out that this learning result is almost tight, due to Proposition 3 below, in which we present hard instances that fundamentally limit the learnability of the \(\delta\)-RSE. Specifically, in the case when \(\Delta<\delta\), \(u_{\text{RSE}}(\delta+4\epsilon)\) can be arbitrarily worse than \(u_{\text{RSE}}(\delta)\) due to the discontinuity of the function \(u_{\text{RSE}}(\delta)\). This is an intrinsic barrier, further reflected in the \(\Omega(1)\) utility gap below. Meanwhile, by Theorem 1, when \(\Delta\geq\delta+1/L\) for some constant \(L\), the Lipschitz continuity of \(u_{\text{RSE}}(\delta)\) implies that \(u_{\text{RSE}}(\delta+4\epsilon)\) can be replaced by \(u_{\text{RSE}}(\delta)-O(\epsilon)\), so that the sample complexity dependence on \(\epsilon\) matches the \(\Omega(1/\sqrt{T})\) bound of the lower bound instance.

**Proposition 3**.: _For any sample size \(T\), there exists a Stackelberg game instance with inducibility gap \(\Delta\) such that any algorithm with \(T\) samples for learning the \(\delta\)-RSE is \(\Omega(1/\sqrt{T})\) sub-optimal if \(\delta<\Delta\), or \(\Omega(1)\) sub-optimal if \(\delta\geq\Delta\). More specifically, any output leader strategy \(\widehat{\mathbf{x}}\) satisfies the following with probability at least \(\frac{1}{3}\):_

\[\min_{j\in BR_{\delta}(\widehat{\mathbf{x}})}u_{l}(\widehat{\mathbf{x}},j)\leq\begin{cases}u_{\text{RSE}}(\delta)-\frac{1/\sqrt{T}}{\Delta-\delta+1/\sqrt{T}}&\delta<\Delta\\ u_{\text{RSE}}(\delta)-1/2&\delta\geq\Delta\end{cases}. \tag{23}\]

Proof.: To illustrate our proof technique, we present here the proof for the \(\delta<\Delta\) case, and defer the curious reader to Appendix D.1 for the proof of the \(\delta\geq\Delta\) case. The proof technique is similar but features a different hard instance.
Table 2 illustrates a game in which the inducibility gap is \(\Delta\). Suppose \(\Delta>\delta\). Consider the following two Stackelberg games \(G_{1}\), \(G_{2}\), where \(r_{l}(i,j)\sim\text{Bern}(u_{l}(i,j))\) and \(r_{f}(i,j)\sim\text{Bern}(u_{f}(i,j))\), with mean values shown in Table 2, and where \(\epsilon\in[0,1]\) is a parameter to be determined. In the Stackelberg game \(G_{1}\), the leader strategy in the \(\delta\)-RSE can be computed as

\[\mathbf{x}_{1}^{*}=\operatorname*{argmax}_{\mathbf{x}\in\Delta^{m}}\min_{j\in BR_{\delta}(\mathbf{x})}u_{l}(\mathbf{x},j)=(1,0,0).\]

That is, in the \(\delta\)-RSE of the Stackelberg game \(G_{1}\), the leader plays the pure strategy \(i_{1}\) and \(BR_{\delta}(i_{1})=\{j_{1}\}\). As a result, we have \(u_{\text{RSE}}(\delta)=1\) for \(G_{1}\). On the other hand, for the Stackelberg game \(G_{2}\), note that \(u_{f}(i_{1},j_{1})-u_{f}(i_{1},j_{2})=\delta-\epsilon<\delta\). Thus, \(j_{2}\) is included in the follower's \(\delta\)-optimal response set if the leader plays the pure strategy \(i_{1}\), i.e., the strategy \(\mathbf{x}_{1}^{*}\) from the other game \(G_{1}\). Consequently, the leader receives \(0\) utility by playing the pure strategy \(i_{1}\) in the Stackelberg game \(G_{2}\). However, the leader can play a mixed strategy \(\mathbf{x}_{2}^{*}\) that includes both pure actions \(i_{1}\) and \(i_{3}\) to increase the follower's utility difference between responding with \(j_{1}\) and \(j_{2}\). When the probability of playing \(i_{3}\) is high enough, \(\mathbf{x}_{2}^{*}\) can make \(u_{f}(\mathbf{x}_{2}^{*},j_{1})-u_{f}(\mathbf{x}_{2}^{*},j_{2})=\delta\). Specifically, let \(\mathbf{x}_{2}^{*}=(p,0,1-p)\); then we have the following constraint on \(p\) in order to exclude \(j_{2}\) from the \(\delta\)-optimal response set:

\[p\cdot\left(\frac{1-\epsilon}{2}+\delta\right)+(1-p)\cdot\Delta-p\cdot\frac{1+\epsilon}{2}=\delta, \tag{24}\]

which, after rearranging (the \(i_{1}\) terms contribute \(p\delta-p\epsilon\)), gives \(p=\frac{\Delta-\delta}{\Delta-\delta+\epsilon}\). As a result, for the Stackelberg game \(G_{2}\), we have \(BR_{\delta}(\mathbf{x}_{2}^{*})=\{j_{1}\}\) and \(u_{\text{RSE}}(\delta)=u_{l}(\mathbf{x}_{2}^{*},j_{1})=\frac{\Delta-\delta}{\Delta-\delta+\epsilon}\). Therefore, if the learning algorithm cannot distinguish the above two Stackelberg games with finite samples and makes a wrong estimation, the output leader strategy will suffer an error of at least \(\frac{\epsilon}{\Delta-\delta+\epsilon}\). Specifically, the following incorrect estimations can happen:

1. If the true game is \(G_{1}\), but the learning algorithm estimates the game as \(G_{2}\) and outputs the leader strategy \(\widehat{\mathbf{x}}=\mathbf{x}_{2}^{*}\), then the leader utility of playing \(\mathbf{x}_{2}^{*}\) will be \(\frac{\Delta-\delta}{\Delta-\delta+\epsilon}\) while \(u_{\text{RSE}}(\delta)=1\); this incurs a learning error of \(\frac{\epsilon}{\Delta-\delta+\epsilon}\).
2. If the true game is \(G_{2}\), but the learning algorithm estimates the game as \(G_{1}\) and outputs the leader strategy \(\widehat{\mathbf{x}}=\mathbf{x}_{1}^{*}\), then the leader utility of playing \(\mathbf{x}_{1}^{*}\) will be \(0\) while \(u_{\text{RSE}}(\delta)=\frac{\Delta-\delta}{\Delta-\delta+\epsilon}\); this incurs a learning error of \(\frac{\Delta-\delta}{\Delta-\delta+\epsilon}\), which is even larger than \(\frac{\epsilon}{\Delta-\delta+\epsilon}\).

Consequently, if the learning algorithm can output a leader strategy \(\widehat{\mathbf{x}}\) that violates equation (23), i.e.,
\(\min_{j\in BR_{\delta}(\widehat{\mathbf{x}})}u_{l}(\widehat{\mathbf{x}},j)>u_{\text{RSE}}(\delta)-\frac{\epsilon}{\Delta-\delta+\epsilon}\), then for the game \(G_{1}\) we have \(u_{l}(\widehat{\mathbf{x}},j)>\frac{\Delta-\delta}{\Delta-\delta+\epsilon}\), while for the game \(G_{2}\) we have \(u_{\text{RSE}}(\delta)=\frac{\Delta-\delta}{\Delta-\delta+\epsilon}\). In other words, the learning algorithm can distinguish \(G_{1}\) from \(G_{2}\) with \(T\) samples. Therefore, consider a learning algorithm that samples \(r_{f}(i,j)\) \(T\) times, with the goal of identifying whether \(u_{f}\in G_{1}\) or \(u_{f}\in G_{2}\). We prove the proposition by contradiction. Suppose Proposition 3 is not correct; then with \(T\) samples we can identify whether \(u_{f}\in G_{1}\) or \(u_{f}\in G_{2}\) with probability more than \(\frac{2}{3}\). Since \(u_{f}(i_{2},j_{1})\), \(u_{f}(i_{2},j_{2})\), \(u_{f}(i_{3},j_{1})\), and \(u_{f}(i_{3},j_{2})\) are the same for \(G_{1}\) and \(G_{2}\), the algorithm can only distinguish \(G_{1}\) from \(G_{2}\) by sampling \(u_{f}(i_{1},j_{1})\) or \(u_{f}(i_{1},j_{2})\). Formally, we define the problem as follows. Let \(\Omega=[0,1]^{T}\) be the sample space for the outcomes of \(T\) samples of \(r_{f}(i_{1},j_{1})\); our goal is to have a decision rule

\[\text{Rule: }\Omega\rightarrow\{G_{1},G_{2}\}, \tag{25}\]

which satisfies the following two properties for any \(\omega\in\Omega\):

\[\mathbf{Pr}[u_{f}\in G_{1}|\text{Rule}(\omega)=G_{1}]>\frac{2}{3}\text{ and }\mathbf{Pr}[u_{f}\in G_{2}|\text{Rule}(\omega)=G_{2}]>\frac{2}{3}. \tag{26}\]

As a result, let \(\omega_{0}\in\Omega\) be an event on which this rule returns \(G_{1}\) (i.e., \(\text{Rule}(\omega_{0})=G_{1}\)). Then we have:

\[\mathbf{Pr}[u_{f}\in G_{1}|\omega_{0}]-\mathbf{Pr}[u_{f}\in G_{2}|\omega_{0}]>\frac{1}{3}. \tag{27}\]

Next, for any event \(\omega\in\Omega\), let \(P_{k}(\omega)=\mathbf{Pr}[u_{f}\in G_{k}|\omega]\) for \(k=1,2\). Then we have \(P_{k}=P_{k,1}\times\cdots\times P_{k,T}\), where \(P_{k,t}\) is the distribution of the \(t\)-th sample of \(r_{f}(i_{1},j_{1})\). As a result, by applying a basic KL-divergence argument to the distributions \(P_{1}\) and \(P_{2}\) [31, Lemma 2.5], for any event \(\omega\) we have \(|P_{1}(\omega)-P_{2}(\omega)|\leq\epsilon\sqrt{T}\). Plugging in \(\omega=\omega_{0}\) and \(\epsilon=\frac{1}{3\sqrt{T}}\), we have \(|P_{1}(\omega)-P_{2}(\omega)|\leq\frac{1}{3}\), contradicting equation (27). Therefore, the Stackelberg game instances \(G_{1}\) and \(G_{2}\) in Table 2, with \(\epsilon=\frac{1}{3\sqrt{T}}\), prove the proposition when \(\delta<\Delta\).

## 6 Conclusion

In practice, there are many situations where a follower fails to make the optimal decision in a Stackelberg game. However, the classic solution concept of SSE is not robust to sub-rational follower responses, which leads to poor performance for the leader. In this paper, we provide a systematic study of a robust version of the Stackelberg equilibrium that accounts for sub-rational responses from the follower. We propose a well-defined notion of robust Stackelberg equilibrium, the \(\delta\)-RSE, and establish several useful properties of the leader's utility under a \(\delta\)-RSE. We identify the computational complexity of computing or approximating the \(\delta\)-RSE. In particular, we show that there does not exist a computationally efficient algorithm to even approximate the \(\delta\)-RSE, unless P=NP. On the other hand, we also propose a QPTAS to compute an \(\epsilon\)-optimal \(\delta\)-RSE for any \(\epsilon>0\).
Finally, we also provide sample complexity results for learning the \(\delta\)-RSE in contexts where the follower utility is not known initially. Our results open up many other interesting questions. For example, our positive and negative computational results in Section 4 leave a small gap, due to the logarithmic exponent term in the computational complexity of the QPTAS. An immediate direction for future research is to close this gap, by either exhibiting a PTAS or strengthening the hardness result to constant-factor (additive) inapproximability. In addition, it would be interesting to consider the applicability of this robust solution concept to specific Stackelberg games, such as pricing games (played between a seller and a buyer), persuasion games (played between a sender and a receiver), contract design (played between a principal and an agent), and security games (played between a defender and an attacker).
2305.12010
Chemellia: An Ecosystem for Atomistic Scientific Machine Learning
Chemellia is an open-source framework for atomistic machine learning in the Julia programming language. The framework takes advantage of Julia's high speed as well as the ability to share and reuse code and interfaces through the paradigm of multiple dispatch. Chemellia is designed to make use of existing interfaces and avoid ``reinventing the wheel'' wherever possible. A key aspect of the Chemellia ecosystem is the ChemistryFeaturization interface for defining and encoding features -- it is designed to maximize interoperability between featurization schemes and elements thereof, to maintain provenance of encoded features, and to ensure easy decodability and reconfigurability to enable feature engineering experiments. This embodies the overall design principles of the Chemellia ecosystem: separation of concerns, interoperability, and transparency. We illustrate these principles by discussing the implementation of crystal graph convolutional neural networks for material property prediction.
Anant Thazhemadam, Dhairya Gandhi, Venkatasubramanian Viswanathan, Rachel C. Kurchin
2023-05-19T21:37:37Z
http://arxiv.org/abs/2305.12010v1
# Chemellia: An Ecosystem for Atomistic Scientific Machine Learning

###### Abstract

Chemellia is an open-source framework for atomistic machine learning in the Julia programming language. The framework takes advantage of Julia's high speed as well as the ability to share and reuse code and interfaces through the paradigm of multiple dispatch. Chemellia is designed to make use of existing interfaces and avoid "reinventing the wheel" wherever possible. A key aspect of the Chemellia ecosystem is the ChemistryFeaturization interface for defining and encoding features - it is designed to maximize interoperability between featurization schemes and elements thereof, to maintain provenance of encoded features, and to ensure easy decodability and reconfigurability to enable feature engineering experiments. This embodies the overall design principles of the Chemellia ecosystem: separation of concerns, interoperability, and transparency. We illustrate these principles by discussing the implementation of crystal graph convolutional neural networks for material property prediction.

## I Introduction

With the increase in available computational power as well as the democratization of machine learning (ML) methods, application of ML to atomistic problems is rapidly gaining popularity [1; 2; 3; 4; 5]. Typically, the ML is interfaced with some simulation technique, such as density functional theory (DFT), molecular dynamics (MD), etc. This interfacing may be in the form of: (i) **Surrogatization**, replacing simulation entirely in order to run drastically faster, usually in cases where it is prohibitively expensive to run the full model the number of times that would be necessary, such as in design/discovery of new materials including electrocatalysts [6], high-entropy alloys [7], intermetallics [8], and 2D topological insulators [9], among many others; (ii) **Acceleration**, where some expensive step is accelerated with the help of ML but the final result is still physically validated, such as generating ansatze for quantum Monte Carlo [10] or guiding DFT-based structural optimization [11]; or (iii) **Augmentation**, in which a core aspect of the simulation is replaced with an ML model to improve its performance/accuracy, such as ML potentials in MD [12] or ML functionals in DFT [13; 14].

In any of these applications, it is important to be able to interact with physically meaningful representations of the data that are ingestible by the ML model and annotated with relevant features. The representation of the data (usually a crystal or molecular structure) can be fully three-dimensional, or some simplified representation such as a graph. This representation is then _featurized_ before being fed into the model. Featurization may be as simple as annotation with an atomic or isotopic identity, or may explicitly impute other information about atoms, bonding environments, or overall structural "fingerprints." There is a large variety of featurization and ML modeling approaches that have been utilized, but these are often implemented in a "one-off" way, without much concern for interoperability or extensibility. These practices can also contribute to growing concerns about reproducibility in applications of ML to STEM problems [15]. There are several existing efforts to create broader frameworks, each of which has its own focus area and functionality goals. For example, DeepChem [16] is a widely-used package aimed primarily at molecular systems (e.g.
drug discovery) that supports a wide variety of featurizations and models through a common interface. Automatminer [17], in contrast, is substantially more materials-focused and aims to be very "plug-and-play," allowing the user to abstract away specifics of model architecture, hyperparameters, and training procedure. Meanwhile, SchNetPack [18; 19] focuses specifically on incorporating newer developments in deep neural architectures that incorporate physical symmetries, e.g. via equivariant message-passing, and also incorporates GPU-accelerated molecular dynamics directly. As a final example, DScribe [20] provides an interface to many common featurization schemes, such as Coulomb matrices [21], atom-centered symmetry functions [22], or smooth overlap of atomic positions [23].

It is well-established that the solution to a proliferation of standard approaches is usually not another standard approach [24]. Therefore, we introduce a new framework here not with the goal that it will become universally adopted, but rather in the hopes that it might become the primary such framework _in the Julia language_. Separately, we also believe that Julia will come to represent an increasing share of scientific computing use cases. As is discussed in further detail below, the design of the language facilitates high performance as well as broad _interoperability_, which is particularly crucial in the second two ML applications described above (acceleration and augmentation), wherein the ML model needs to interface directly with simulation. The remainder of this document serves to justify why Julia is a promising language for atomistic ML, and also to introduce the principles of the Chemellia framework, in which we try to adopt established best practices as well as learn from prior design choices that haven't served the community as well. We finish with a "showcase" demonstrating a concrete implementation of the framework in the form of crystal graph convolutional neural networks and an associated featurization.

## II Design principles

### Why Julia?

In designing any computational framework, choice of programming language suffuses virtually every other design and implementation decision [25]. Like human languages, some programming languages can express certain concepts or structures in more concise/intuitive ways than others. Unlike human languages, programming languages can also offer orders of magnitude of separation in performance, lending potentially tremendous weight to this decision [26]. The Julia language is a relatively young (decade-old) entrant to the scientific computing scene, but one with potential to be a major part of its future. This is largely because it achieves a kind of Pareto optimality between expressiveness and performance, solving the so-called "two-language problem," i.e. the common paradigm of prototyping in a high-level but low-performance language such as Python and then re-implementing in a lower-level fast language such as C [27]. The primary paradigm of Julia is multiple dispatch, which has emerged as an effective solution to the age-old expression problem [28], allowing both users and developers to extend interfaces and types nearly orthogonally. For example, the JuliaGraphs [29] ecosystem defines an interface for custom graph types; by dispatching a relatively minimal set of functions (for example, to return the number of nodes in a graph or whether there exists an edge between two given nodes), many sophisticated functionalities in the ecosystem's packages (e.g.
centrality measures, algorithms for graph traversal, etc.) will "just work" on a user-defined graph type. Julia also has strong, language-wide support for automatic differentiation (AD) in both forward and reverse mode via a variety of packages [30]. It allows end users and libraries alike to specify evaluation rules for their custom types via multiple dispatch through the common interface provided by ChainRules.jl [31]. This eases the burden of any single library having to internally support AD for all of its types. This is in sharp contrast to AD frameworks such as JAX [32], which need to re-implement differentiable versions of any functions they wish to support (cf. jax.numpy).

### Separation of Concerns

Separation of concerns is a fundamental software design principle [33]. Broadly, it relates to _modularity_ of software, i.e., a given section of code should only deal with the information relevant for it to do its intended task. This principle fits naturally within the multiple dispatch paradigm of Julia, and is especially important in creating functional, composable interfaces. Within Chemellia, separation of concerns emerges at a few levels. First, as is common practice in the Julia ecosystem broadly, we generally aim for smaller and more scope-focused modular packages that use generic interfaces. Second, separation of concerns also helps in defining the logical constituents of a framework, as well as promoting more maintainable and extensible code. In Chemellia, this manifests primarily in the logical boundary between method-level feature-engineering functions (e.g. encoding/decoding operations) and the actual model architecture composition itself (which can sometimes include featurization operations as part of its action as well).

To elucidate these ideas in a more concrete way, we can consider a few specific model architectures. Consider CGCNN [34], the earliest work that demonstrated the effectiveness of GCNs for crystalline structures, as a baseline. In the original CGCNN, neighbor relationships between atoms were represented by the presence or absence of an edge between the corresponding nodes (or multiple edges for periodic images). However, GeoCGCNN [35] introduced manual features, which include topological distance and spatial distance as well. The architectural composition of the model itself remains largely the same in both cases; it is the feature engineering that sets these two models apart. In contrast, consider MT-CGCNN [36], which is essentially CGCNN coupled with multi-task learning. Here, the model architecture itself is modified, but the way the features are represented remains the same. Effectively separating out concerns in this way also allows us to recognize how model architectures generalize across domains; that is, which layers or featurizations could (or should) be composable/interoperable. For instance, different pooling mechanisms, such as EigenPooling [37], can be effectively applied to other GCNs [38]. Designing the framework with this outlook of separation of concerns also helps abstract the model architectures away from the intricacies and specifics of the domain itself, to a great extent, as well as maximize interoperability (i.e. minimize the need to re-implement the same functionality over and over again to fit with different architectures).

### Interoperability

Julia, powered by the multiple dispatch paradigm, enables interoperability at a fundamental language level.
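As a toy illustration of this mechanism (using hypothetical names, not the actual JuliaGraphs interface mentioned above), a generic algorithm can be written once against a small set of functions, and any type that implements those functions via dispatch gets the algorithm "for free":

```julia
# A minimal interface: any subtype should implement num_nodes and connected.
abstract type MyAbstractGraph end
num_nodes(g::MyAbstractGraph) = error("num_nodes not implemented")
connected(g::MyAbstractGraph, i, j) = error("connected not implemented")

# Generic code written once against the interface; it never mentions a
# concrete graph type.
function density(g::MyAbstractGraph)
    n = num_nodes(g)
    nedges = sum(connected(g, i, j) for i in 1:n, j in 1:n if i < j)
    return nedges / binomial(n, 2)
end

# A user-defined adjacency-matrix graph type opts into the interface by
# dispatching the two required methods.
struct DenseGraph <: MyAbstractGraph
    adj::Matrix{Bool}
end
num_nodes(g::DenseGraph) = size(g.adj, 1)
connected(g::DenseGraph, i, j) = g.adj[i, j]

g = DenseGraph(Bool[0 1 1; 1 0 0; 1 0 0])
density(g)  # returns 2/3: the generic method dispatches to DenseGraph's implementations
```

The real JuliaGraphs interface requires a different (and larger) set of methods, but the pattern is the same: implement a minimal set of methods for a new type, and the ecosystem's generic algorithms compose with it.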
The scope of interoperability Chemellia and its libraries offer, and how they build upon that offered by the language itself, can be considered along several dimensions, which we explore briefly in this section. Implementing the right abstractions helps introduce interoperability at the abstract level. This ensures that logically distinct concepts supported by the framework, such as featurizers, different encoding mechanisms, types of layers, etc., have clear and well-defined boundaries. This also ensures that users have an easier time navigating the framework and using it. Once these abstractions are well defined, the next task is implementation. While implementing a scheme of abstraction and guaranteeing interoperability between these abstract concepts might seem obvious and trivial, more often than not, the code begins to slowly evolve in a manner that blurs the lines between the abstractions and further complicates the relationships between them, especially if these abstractions are not clearly defined. This causes friction both while developing and using the framework. However, when carefully done, it empowers users with the freedom to mix and match between different components, in a plug-and-play fashion, within reasonable limits. Furthermore, in cases where logical components that are canonically correct on their own are not immediately compatible with one another for practical reasons, ChemistryFeaturization (thanks to the structure of the Julia language) typically makes it easy to write simple "adapters." One example of this is the idea of a Codec, described in more detail in the Showcase below (see also Figure 1).

Specifics of implementation can also have a major role in interoperability at the ecosystem level. This relates to how well types and components from other packages across the ecosystem compose with the framework. In this context, the ideal user experience is being able to quickly integrate relevant components from appropriate packages and prototype, rather than spending time jumping through hoops trying to get the framework to integrate properly with said packages or vice versa. Because much of the Julia language is built around this kind of interoperability, it is generally very smooth to leverage other packages within the broader (open-source) Julia ecosystem. For example, Chemellia supports AtomsBase [39], an interface designed to improve interoperability in specification of atomic geometries (for example, to pass between a simulation and ML model, or for visualization or file I/O). Another example, discussed in more detail below, is the ability to utilize model layers defined in GeometricFlux [40]. In addition, Julia also provides excellent foreign function interface (FFI) support. This allows users to utilize more well-established ecosystems and frameworks for their specific requirements, if the need arises, through built-in methods like ccall or packages like PythonCall [41], PyCall [42], RCall, and JavaCall [43]. For instance, one of Chemellia's libraries, AtomGraphs, uses the pymatgen [44] library via PythonCall for certain operations related to graph building.

### Transparency

Transparency in code is valuable along myriad dimensions. It can also refer to a number of distinct concepts. For our purposes, we define transparency as the ease with which a user can understand what the code is doing. This is important to ensure that users know how to use the code in the first place, and also to allow them to validate correctness of results.
The first part of this (how to use it) is largely enabled (along with thorough documentation, of course) by giving functions and types clear and descriptive names. When designing an interface such as ChemistryFeaturization that may have diverse concrete implementations, this can be particularly challenging, and the development team has in some cases had extensive discussions about the best name of a single entity in the codebase!

The second part (validating correctness of results) comes primarily from readability of the source code itself. The vast majority of machine learning models ultimately boil down to matrix multiplication, along with a few other simple transformations (e.g. elementwise application of a nonlinearity). However, in many of the widely used implementations, it can be frustratingly difficult, given a model object, to ascertain which matrices are multiplied to produce the model's results! The Julia packages (in particular Flux [45; 46]) upon which Chemellia is based largely avoid the massive type hierarchy often present in Python-based frameworks. Julia's metaprogramming tools are also important in code validation. For example, it is easy to use the @which macro to find exactly what source code will be run in a given method invocation.

Figure 1: Overview of components of the Chemellia framework, and connections between them.

Another important aspect of transparency (that encompasses aspects of both of the prior two points) pertains specifically to featurization. Namely, how much information is actually encoded in a given featurization approach, and how much of the full input could be reconstructed from what has been encoded? The ChemistryFeaturization interface explicitly addresses this by exposing, along with the encode function for computing a featurization of a given input structure, a decode function that should invert that featurization procedure, to the extent possible. (We note briefly here that in this context, we are using the words "encode" and "decode" specifically to mean translating from human-readable feature values to machine-readable representations and back again, respectively. This is a slightly narrower definition than, for example, the one implicit in the idea of an autoencoder.) A common approach for encoding atomic features is a so-called "one-hot" scheme, where a single one is inserted in a string of zeros to represent the "bin" in which an encoded value falls. Of course, for categorical variables (e.g. group in the periodic table), this is a lossless encoding scheme. However, for continuous-valued ones (e.g. atomic radius), the choice of binning scheme determines the "resolution" with which this information is actually passed to the network. See Section III.2 and Figure 3 below for more on this.

In general, prioritizing transparency in all of the above senses helps code be only as complicated as it needs to be. By the same token, however, it is worth noting also that more transparency is not _always_ better, as it can become overwhelming and impede usability. Sometimes, the ability to control how much detail is abstracted away by an interface can be important, especially to users learning that interface for the first time. However, if the balance is struck properly, transparency should be an aid in the learning process.
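To illustrate the encode/decode round trip for a continuous-valued feature, here is a minimal, self-contained sketch of the one-hot binning just described. The function names and binning choices are ours for illustration; they are not the ChemistryFeaturization API (the OneHotOneCold codec in the Showcase below is the real counterpart).

```julia
# Minimal one-hot binning of a continuous feature, with an explicit decode.
# Illustrative names only; not the ChemistryFeaturization API.
function onehot_encode(x, edges)   # edges: sorted bin boundaries, length nbins + 1
    k = clamp(searchsortedlast(edges, x), 1, length(edges) - 1)
    [i == k for i in 1:(length(edges) - 1)]    # single `true` marks the bin
end

# Decoding recovers only the bin, i.e. the "resolution" at which x was stored:
onehot_decode(v, edges) = (edges[findfirst(v)], edges[findfirst(v) + 1])

edges = collect(range(0.5, 3.0; length = 11))  # hypothetical atomic-radius bins
v = onehot_encode(1.22, edges)
onehot_decode(v, edges)   # returns (1.0, 1.25): any detail below 0.25 is lost
```

Changing the binning scheme (e.g. logarithmic edges, or more bins) changes this resolution without touching the encode/decode logic, which is precisely the kind of experiment an enforced decode function makes easy to reason about.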
More transparent, less overwhelming designs also help enforce better separation of concerns, since interfaces are then built with the goal of exposing only the relevant details to users of a given functionality, thus compartmentalizing concepts more effectively. Overall, adopting these principles can help reduce the divide between users and developers, lowering barriers to usability, adoption, and (eventually) contribution.

## III Showcase

In order to demonstrate the principles we've outlined above, we will showcase a particular set of functionalities within Chemellia around building graph representations of crystals and models that can ingest them. This section closely mirrors a Pluto [47] notebook that can be found at [https://github.com/Chemellia/Chemellia_showcase](https://github.com/Chemellia/Chemellia_showcase) and run live on JuliaHub at [https://juliahub.com/ui/Notebooks/thazhemadam/Chemellia%20Showcase/jcp_showcase.jl](https://juliahub.com/ui/Notebooks/thazhemadam/Chemellia%20Showcase/jcp_showcase.jl), and the functionality can be found in the open-source packages ChemistryFeaturization, AtomGraphs, and AtomicGraphNets, all hosted (with documentation) on GitHub within the Chemellia organization and registered in the Julia General registry.

### Graph Building

The first step is to convert a standard representation of a crystal structure (e.g. a CIF or XYZ file) into a graph representation that can be used by the model we'll build later. We make use of the AtomsBase [39] interface to handle 3D crystal structure representations - this abstract interface exposes a standard set of functions (e.g. position, bounding_box) for accessing information about these structures that may be stored in different types of data structures in different concrete implementations. It facilitates interoperability between packages for tasks like chemical simulation, file I/O, and visualization. The AtomGraph data type that we'll use for crystal graph representations is implemented in the AtomGraphs package within the Chemellia ecosystem. In contrast to some other atomic graph representations, an AtomGraph is a weighted graph representation (in particular, it utilizes the SimpleWeightedGraph type from the JuliaGraphs [29] ecosystem/interface). The neighbor lists (i.e. other atoms/nodes to which to draw these weighted edges) can be chosen via setting distance/number cutoffs or (for periodic systems) by Voronoi tessellation, and the edge weights can be set by a user-defined function or one of several built-in options (such as exponential or inverse-square decay). In the notebook, the resulting graph topology from different choices of these options is explored in more detail. For example, Figure 2 shows how the neighbor cutoff affects connectivity of the graph. The graph visualizations were produced using the GraphPlot package from JuliaGraphs.

Figure 2: Demonstration of impact of neighbor list cutoff on connectivity of constructed graph: a smaller cutoff (above) leads to a sparser adjacency matrix (fewer graph edges).

### Featurization

With the structure representations (in this case graphs) constructed, they next need to be featurized - that is, "annotated" with information about the constituent atoms, bonds, or groups thereof upon which the final material property prediction may depend (common choices include atomic number, valence, and electronegativity, among many others).
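Before turning to the featurization machinery, the effect of the cutoff shown in Figure 2 can be made concrete with a minimal sketch of cutoff-based weighted graph construction. This is illustrative only; the actual AtomGraphs implementation additionally handles periodic images, number cutoffs, and Voronoi-based neighbor lists.

```julia
# Minimal cutoff-based weighted adjacency builder (illustrative only).
function weighted_adjacency(positions::Vector{<:AbstractVector}, cutoff;
                            weight = d -> exp(-d))   # e.g. exponential decay
    n = length(positions)
    A = zeros(n, n)
    for i in 1:n, j in (i + 1):n
        d = sqrt(sum(abs2, positions[i] .- positions[j]))
        if d <= cutoff
            A[i, j] = A[j, i] = weight(d)            # symmetric weighted edge
        end
    end
    A   # a smaller cutoff yields a sparser A, as in Figure 2
end
```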
The ChemistryFeaturization interface facilitates featurization and feature engineering through two primary abstractions:

* Feature descriptors, which are used to uniquely describe a specific feature of an atom, bond, neighborhood, or entire structure. For this showcase, we will focus on ElementFeatureDescriptors, which describe an atom and require only its chemical identity to be defined.
* Codecs, which specify mechanisms for encoding and decoding data of a specific type. A common choice is so-called "one-hot" encoding, where a feature is represented as a bitstring with all zeros and a single one for the "bin" into which the value falls. In ChemistryFeaturization, this is accomplished via the OneHotOneCold codec, which facilitates choice of binning scheme (e.g. linear vs. logarithmic scaling, number of bins), and allows easy encoding _and decoding_ so that a user can easily query the "precision" with which a continuous-valued feature is represented.

In this language, a "featurization" (which subtypes the AbstractFeaturization type) comprises a set of feature descriptors and associated codecs, as well as a specification for how these encoded features should be combined in order to be ingested by a model. In GraphNodeFeaturization, for example, we choose a set of ElementFeatureDescriptors and associated OneHotOneCold codecs, and the resulting node feature vectors are "stacked" into a feature matrix. In the demonstration notebook, this is illustrated with a featurization with just two features (one categorical and one continuous-valued): the block of the periodic table in which the element resides (\(s\), \(p\), \(d\), or \(f\)) and the atomic mass of the element (see Figure 3). We also show the capacity to customize behavior in more detail as described above. Since the featurization scheme, as a logical concept and a programming abstraction, is distinct from the actual data (e.g., the AtomGraphs), we can, with minimal modification, apply the same featurization scheme to different values and types of representations as well. In addition, the separation of concerns between the feature descriptor and the codec, as well as the transparency enabled by enforcing the presence of a decode function, facilitates feature engineering experiments, since codecs and sets of feature descriptors can be switched out or reconfigured independently.

The result of employing a featurization scheme onto a representation of the crystal is a FeaturizedAtoms object, which encapsulates the featurization scheme, the original representation itself (which, within reasonable expectations, enables provenance and validates serialization), and the featurized representation obtained by encoding the featurization scheme onto the original representation. It is this FeaturizedAtoms object that is fed into the model (see next section).

### Model Building

AtomicGraphNets is a minimal implementation of a model conceptually similar to the original CGCNN [34]. However, as mentioned above, the graphs themselves have continuous-valued weights, and the convolution operation is accomplished via the graph Laplacian, allowing for fast performance in both the reverse (training) and forward directions. AtomicGraphNets is built on the Flux.jl [45; 46] ML framework, which in turn utilizes Zygote [48] for AD support.
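As a sketch of what such a Laplacian-based convolution can look like when written as plain linear algebra, consider the following simplified stand-in. This is not the exact AtomicGraphNets parametrization (whose layers carry their own weights and conventions); it only illustrates the structure of the operation.

```julia
using LinearAlgebra

# Simplified Laplacian-based graph convolution on a (features × atoms) matrix X
# with weighted adjacency A. A stand-in for the AtomicGraphNets layer, whose
# exact parametrization differs; assumes no isolated nodes (all degrees > 0).
function graph_conv(X, A, W_self, W_conv; σ = tanh)
    d = vec(sum(A; dims = 2))                        # weighted node degrees
    L̂ = Diagonal(1 ./ sqrt.(d)) * (Diagonal(d) - A) * Diagonal(1 ./ sqrt.(d))
    σ.(W_self * X + W_conv * X * L̂)                 # learnable W_self, W_conv
end
```

Because this is ordinary differentiable array code, Zygote can compute gradients through it unchanged, which is exactly the property that lets such layers compose freely with the rest of the Flux stack.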
In keeping with the discussion of interoperability above, the Flux stack has been widely used across a variety of applications that necessitate composability and interoperability between "traditional" ML model elements and other functionality - for example, in state-of-the-art implementations of neural differential equations [49], wherein a neural network is coupled with ODE solvers for tasks such as model surrogatization or parameter estimation. AtomicGraphNets defines some custom layers that can be easily composed with other Flux layers (or any differentiable function with an appropriate call signature). In the notebook, we demonstrate using these layers (via a convenience function) to build a model very similar to the standard CGCNN [34] architecture and show how to perform a training step.

Another package, GeometricFlux [40], also provides implementations of a number of standard layers/operations common in graph-based ML generally. It is straightforward to dispatch the operations of these layers onto our FeaturizedAtoms objects to take advantage of this functionality without needing to re-implement anything or copy code (this is shown in more detail in the notebook for two particular example layers: convolutional and pooling operations).

Figure 3: Schematic of encoding and decoding. The element gallium is fed as input to two different element feature descriptors: block (a categorical-valued feature), and atomic mass (a continuous-valued one). These are one-hot encoded and concatenated into a final node feature vector. Decoding is also demonstrated, showing how the "resolution" of encoding for continuous-valued features can easily be queried.

The functionality demonstrated here is just the beginning. Work is ongoing in implementing additional feature descriptors and codecs in ChemistryFeaturization, as well as new model architectures, and perhaps more importantly, supporting existing ones already implemented elsewhere.

## IV Conclusion

Atomistic modeling in Julia (both data-driven and otherwise) is still in its early days. While a young language, Julia is nonetheless poised to become a major player in the future of scientific computing thanks to its speed and interoperability facilitated by multiple dispatch, but also due to the community of users and developers that has sprung up around it. We have built the framework for Chemellia to fit within these existing pieces (e.g. adopting interfaces such as AtomsBase) but also to be lightweight and adaptable so it can evolve along with the language and tooling. We believe that the emphasis on retaining provenance and "decodability" is crucial for reproducible science and can also be of tremendous value to users learning these concepts for the first time - making it easier to "peek under the hood" helps to encourage a new user to understand what is going on in the code rather than just how to use it. To be sure, the Chemellia ecosystem has functionalities that are yet to be "fleshed out" in detail, but we hope it can be a part of this bright future for atomistic modeling and, in particular, seamless interplay between data-driven approaches and traditional simulation. The Chemellia developer team is currently small, but open to growth as well as contributions of packages/functionality!

###### Acknowledgements.

The information, data, or work presented herein was funded in part by the Advanced Research Projects Agency-Energy (ARPA-E), U.S. Department of Energy, under Award Number DE-AR0001211.
The views and opinions of authors expressed herein do not necessarily state or reflect those of the United States Government or any agency thereof. The authors also wish to acknowledge financial support from the Google Summer of Code Program, the Carnegie Mellon Manufacturing Futures Initiative Postdoctoral Fellowship, and the Molecular Sciences Software Institute Postdoctoral Fellowship.
2306.04322
Efficiency and accuracy of GPU-parallelized Fourier spectral methods for solving phase-field models
Phase-field models are widely employed to simulate microstructure evolution during processes such as solidification or heat treatment. The resulting partial differential equations, often strongly coupled together, may be solved by a broad range of numerical methods, but this often results in a high computational cost, which calls for advanced numerical methods to accelerate their resolution. Here, we quantitatively test the efficiency and accuracy of semi-implicit Fourier spectral-based methods, implemented in Python programming language and parallelized on a graphics processing unit (GPU), for solving a phase-field model coupling Cahn-Hilliard and Allen-Cahn equations. We compare computational performance and accuracy with a standard explicit finite difference (FD) implementation with similar GPU parallelization on the same hardware. For a similar spatial discretization, the semi-implicit Fourier spectral (FS) solvers outperform the FD resolution as soon as the time step can be taken 5 to 6 times higher than afforded for the stability of the FD scheme. The accuracy of the FS methods also remains excellent even for coarse grids, while that of FD deteriorates significantly. Therefore, for an equivalent level of accuracy, semi-implicit FS methods severely outperform explicit FD, by up to 4 orders of magnitude, as they allow much coarser spatial and temporal discretization.
A. D. Boccardo, M. Tong, S. B. Leen, D. Tourret, J. Segurado
2023-06-07T10:37:06Z
http://arxiv.org/abs/2306.04322v2
# Efficiency and accuracy of GPU-parallelized Fourier spectral methods for solving phase-field models ###### Abstract Phase-field models are widely employed to simulate microstructure evolution during processes such as solidification or heat treatment. The resulting partial differential equations, often strongly coupled together, may be solved by a broad range of numerical methods, but this often results in a high computational cost, which calls for advanced numerical methods to accelerate their resolution. Here, we quantitatively test the efficiency and accuracy of semi-implicit Fourier spectral-based methods, implemented in Python programming language and parallelized on a graphics processing unit (GPU), for solving a phase-field model coupling Cahn-Hilliard and Allen-Cahn equations. We compare computational performance and accuracy with a standard explicit finite difference (FD) implementation with similar GPU parallelization on the same hardware. For a similar spatial discretization, the semi-implicit Fourier spectral (FS) solvers outperform the FD resolution as soon as the time step can be taken 5 to 6 times higher than afforded for the stability of the FD scheme. The accuracy of the FS methods also remains excellent even for coarse grids, while that of FD deteriorates significantly. Therefore, for an equivalent level of accuracy, semi-implicit FS methods severely outperform explicit FD, by up to 4 orders of magnitude, as they allow much coarser spatial and temporal discretization. _Keywords_: Phase-Field Model, Fourier Spectral Method, Python Programming Language, Graphic Processing Unit ## 1 Introduction The Phase-Field (PF) method is extensively employed to simulate microstructure evolution during solidification and solid-state transformations of alloys, as well as many other problems involving complex pattern formation and their evolution [1, 2, 3, 4, 5, 6]. PF models consist of one or several coupled partial differential equations that are solved, in general, in 2D or 3D domains. The resolution may be implemented with a range of numerical methods, e.g., finite difference (FD), finite element, finite volume, and Fourier spectral (FS), and different hardware, such as central processing units (CPUs) and graphics processing units (GPUs). However, the need for a fine resolution at the scale of the smallest morphological details, e.g., at the scale of dendrite tips for solidification, often results in high computational cost, particularly when large 3D domains are considered. During the last couple of decades, numerous strategies have been explored to accelerate PF calculations. The most widely used numerical method to solve PF models considers a standard first-order forward Euler finite difference (explicit) for time discretization and second-order finite difference for space discretization. Performance improvement may be achieved by means of parallel computing. Notably, George and Warren [7], as well as Nestler and colleagues [8, 9] employed a multi-CPU parallelization by means of message passing interface (MPI). Provatas _et al._[10, 11] used an adaptive mesh refinement algorithm together with multi-CPU parallelization. Non-uniform meshing allows locally refined grid size at the interface and, in this way, it is possible to reduce the total number of degrees of freedom. The extra time required for the remeshing process is usually compensated for problems in which the (refined) interface region represents a small subset of the total domain. 
Mullis, Jimack and colleagues [12] employed a second-order fully implicit scheme, along with adaptive mesh refinement and multi-CPU parallelization [13]. Takaki and colleagues [14, 15] combined the compute unified device architecture (CUDA) and massive parallelization over hundreds of GPUs and CPU cores. Advanced multi-GPU strategies [16], most recently combined with adaptive mesh refinement and load balancing algorithms [17], have led to PF simulations among the largest reported to date. Such implementations have made it possible to tackle challenging multiscale problems, for instance in three-dimensional simulations of solidification considering both thermal and solute fields [13], polycrystalline cellular/dendritic growth [18; 19] or eutectic growth [20] at experimentally-relevant length and time scales. A very common alternative to the finite element method or FD in many types of microscopic simulations is the use of Fourier spectral solvers due to their intrinsic periodic nature, which is often an acceptable assumption for microscopic problems at the scale of representative volume elements, and due to their superior numerical performance [21]. The reason behind the efficiency of FS solvers is the use of the fast Fourier transform (FFT) algorithm to perform the Fourier transforms at a computational cost that scales as \(N\log(N)\), where \(N\) is the size of the data. Regarding their application to PF, semi-implicit Fourier spectral methods (often named directly FFT methods) have been proposed to solve PF models with periodic boundary conditions (BCs) [22; 23; 24]. Their semi-implicit nature allows the use of larger time steps than popular explicit Euler FD schemes. Moreover, the spatial discretization in the Fourier space gives an exponential convergence rate, in contrast to the second order offered by the above-mentioned finite difference method. In addition, FFTs may be used to solve general FD schemes in Fourier space, while keeping the semi-implicit nature of the integration. As a result, FS resolution of phase-field models was found to offer excellent accuracy and acceleration by orders of magnitude compared to the usual explicit FD algorithms, with particular efficiency for problems involving long-range interactions [22; 23]. To further improve computational efficiency, semi-implicit FS methods have also been combined with adaptive meshing, using a non-uniform grid for the physical domain, a uniform mesh for the computational one, and a time-dependent mapping between them [24]. Thanks to their outstanding performance, semi-implicit FFT-based implementations have been used beyond standard PF simulations. For example, they show great potential for solving problems coupling microstructural evolution with micromechanics [25], taking advantage of the similar frameworks used for phase-field and crystal plasticity models. Another example is the use of FFT solvers for phase-field crystal (PFC) models [26; 27]. Independently of the intrinsic performance of the numerical method, massive parallelization is a fundamental ingredient for efficient PF simulations of large domains. As previously stated, CPU and GPU approaches have been widely explored in FD implementations of PF models. Regarding spectral solvers, the first implementations were developed for CPU parallelization [22; 23], while the most recent developments rely on GPU parallelization and show great potential [28; 29; 30].
However, rigorous analyses of the efficiency and accuracy of combined Fourier based methods and GPU parallelization on reference benchmark problems are lacking. In this paper, we perform a quantitative assessment of the accuracy and efficiency of a semi-implicit Fourier spectral method parallelized on GPU compared to the most common standard approach in PF, i.e., an explicit FD scheme with similar GPU parallelization. This comparison is made for a typical phase-field benchmark problem representative of the Ostwald ripening phenomenon. The Fourier approaches include both a _pure_ spectral approach and several different FD schemes, all integrated in time using a semi-implicit scheme. First, we summarize the phase-field problem formulation in Section 2.1. The semi-implicit Fourier spectral resolution scheme is presented in Section 2.2. In Section 3, the resulting accuracy and computational performance are compared with that of a standard explicit finite difference scheme, also implemented in Python and parallelized on a single GPU. Finally, we summarize our conclusions in Section 4. ## 2 Methods ### Problem formulation We use the Ostwald ripening phase-field simulation benchmark proposed in Ref. [31] to test our FS-GPU implementation. The simulation represents the growth and coarsening of \(p=4\) variants of \(\beta\) particles into an \(\alpha\) matrix. The total free energy of the system formed by \(\alpha\) and \(\beta\) phases is [31, 32, 33]: \[F=\int_{V}\Big{(}f_{chem}+\frac{\kappa_{c}}{2}|\nabla c|^{2}+\sum_{i=1}^{p} \frac{\kappa_{\eta}}{2}|\nabla\eta_{i}|^{2}\Big{)}dV \tag{1}\] where \(f_{chem}\) is the chemical free energy density, \(c\) is the composition field, \(\eta_{i}\) are order parameters (i.e., the phase fields), \(\kappa_{c}\) and \(\kappa_{\eta}\) are the gradient energy coefficients for \(c\) and \(\eta\), respectively, and \(V\) is the volume. Regions in the domain where \(\{\eta_{i}=0,\,\forall\,\,i\}\) correspond to the \(\alpha\) matrix, while regions where \(\{\eta_{i}=1\) and \(\eta_{j}=0\), \(\forall\,\,j\neq i\}\) correspond to \(\beta\) particles of variant \(i\) (e.g., grain orientation index). The chemical free energy density is defined as \(f_{chem}=f^{\alpha}(1-h)+f^{\beta}h+wg\), where \(f^{\alpha}=\varrho^{2}(c-c_{\alpha})^{2}\) and \(f^{\beta}=\varrho^{2}(c_{\beta}-c)^{2}\) are the chemical free energy densities of \(\alpha\) and \(\beta\) phases, respectively, \(\varrho\) parametrizes the concentration-dependence of the free energies, \(c_{\alpha}\) and \(c_{\beta}\) are the equilibrium concentrations of \(\alpha\) and \(\beta\) phases, respectively, \(h\) is an interpolation function, \(g\) is a double-well function, and \(w\) is a parameter that controls the height of the double-well barrier. The interpolation and double-well functions are defined as \(h=\sum_{i=1}^{p}\eta_{i}^{3}(6\eta_{i}^{2}-15\eta_{i}+10)\) and \(g=\sum_{i=1}^{p}[\eta_{i}^{2}(1-\eta_{i})^{2}]+\alpha\sum_{i=1}^{p}\sum_{j\neq i }^{p}(\eta_{i}^{2}\eta_{j}^{2})\), where \(\alpha\) is a parameter that prevents the overlapping of different non-zero \(\eta_{i}\). 
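Since the evolution equations below are driven by the derivatives of \(f_{chem}\), and these form the explicit part of the time stepping in Section 2.2, it is convenient to write them out; direct differentiation of the definitions above gives

\[\frac{\partial f_{chem}}{\partial c}=2\varrho^{2}\big[(c-c_{\alpha})(1-h)-(c_{\beta}-c)h\big],\]

\[\frac{\partial f_{chem}}{\partial\eta_{i}}=\big(f^{\beta}-f^{\alpha}\big)\,30\eta_{i}^{2}(1-\eta_{i})^{2}+w\Big[2\eta_{i}(1-\eta_{i})(1-2\eta_{i})+4\alpha\,\eta_{i}\sum_{j\neq i}\eta_{j}^{2}\Big].\]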
The evolution of the \(c\) and \(\eta_{i}\) fields are computed with Cahn-Hilliard [34] and Allen-Cahn [35] partial differential equations, respectively: \[\frac{\partial c}{\partial t} =\nabla\cdot\Big{\{}M\nabla\Big{(}\frac{\partial f_{chem}}{\partial c }-\kappa_{c}\nabla^{2}c\Big{)}\Big{\}} \tag{2}\] \[\frac{\partial\eta_{i}}{\partial t} =-L\Big{(}\frac{\partial f_{chem}}{\partial\eta_{i}}-\kappa_{ \eta}\nabla^{2}\eta_{i}\Big{)} \tag{3}\] where \(M\) is the mobility of the solute and \(L\) is a kinetic coefficient. Eqs. (2) and (3) are solved by imposing periodic BCs. Following the two-dimensional (2D) benchmark for square domains of size 200 presented in [31], \(\kappa_{c}=3\), \(\kappa_{\eta}=3\), \(\varrho^{2}=2\), \(c_{\alpha}=0.3\), \(c_{\beta}=0.7\), \(w=1\), \(\alpha=5\), \(M=5\), \(L=5\), and \(p=4\). The concentration and order parameter fields are initialized with: \[c(x,y)=c_{0}+\epsilon \big{\{}\cos(0.105x)\cos(0.11y)+\left[\cos(0.13x)\cos(0.087y) \right]^{2}\] \[\quad+\cos(0.025x-0.15y)\cos(0.07x-0.02y)\big{\}} \tag{4}\] \[\eta_{i}(x,y)=\eta_{\nu} \big{\{}\cos((0.01i)x-4)\cos((0.007+0.01i)y)\] \[\quad+\cos((0.11+0.01i)x)\cos((0.11+0.01i)y)\] \[\quad+\psi\big{[}\cos((0.046+0.001i)x\] \[\quad+(0.0405+0.001i)y)\cos((0.031+0.001i)x\] \[\quad-(0.004+0.001i)y)\big{]}^{2}\big{\}}^{2} \tag{5}\] where \(c_{0}=0.5\), \(\epsilon=0.05\), \(\eta_{\nu}=0.1\) and \(\psi=1.5\). These initial conditions were chosen to lead to nontrivial solutions [31]. However they lack the spatial periodicity required to rigorously test and compare them using periodic BCs, which is also essential to apply the FFT-based solver. In order to address the lack of periodicity, we let the system evolve, applying Eqs (4) and (5) as initial conditions and periodic BCs, for a dimensionless time from 0 to 1, using finite differences with a fine spatial grid and small time step. The resulting periodic fields at \(t=1\) are used as initial condition for all the cases studied. In addition to a thorough investigation of the 2D benchmark as proposed in [31], we also explore a three-dimensional (3D) test case extrapolated from the original case. In this case, we consider cubic domain of edge size 100. The phase fields \(\eta_{i}\) are initialized to the same 2D field at \(t=1\) as in 2D cases along the (\(z=l_{z}/2\)) plane, with \(l_{z}=100\) the height of the domain, and they are extrapolated along the \(z\) direction to produce 3D shapes using the following function: \[\eta_{i}^{\rm(3D)}(x,y,z)=0.5\bigg{\{}1+\tanh\bigg{[}\operatorname{arctanh} \bigg{(}2\eta_{i}^{\rm(2D)}(2x,2y)-1\bigg{)}-\bigg{(}\frac{z-l_{z}/2}{5} \bigg{)}^{4}\bigg{]}\bigg{\}} \tag{6}\] where factors 2 within the \(\eta^{\rm(2D)}\) function are due to the rescaling of the domain from 200 to 100 in size. The exponent 4 and denominator 5 of the last term of Eq. (6) were simply adjusted to produce (\(\eta_{i}=0.5\)) shapes relatively close to ellipsoids without changing the location of the (\(\eta_{i}=0.5\)) interface in the (\(z=l_{z}/2\)) plane. The initial condition for the concentration field is set as \(c(x,y,z)=c_{0}\) in the entire domain. ### Numerical resolution The system of partial differential equations formed by Eqs. (2) and (3) is solved by means of a semi-implicit non-iterative Fourier spectral method with first-order finite difference for time discretization. 
Hence, \(\partial f_{chem}/\partial c\) and \(\partial f_{chem}/\partial\eta_{i}\) are computed considering their value at the previous time step (explicit part) and the Laplacian of \(c\) and \(\eta_{i}\) are computed at the current time step (implicit part). Following the scheme proposed by Chen and Shen [22], the resolution in the Fourier space, at time \(t+\Delta t\), is: \[c^{(t+\Delta t)}+M\Delta t\kappa_{c}\nabla^{2}\big{(}\nabla^{2}c ^{(t+\Delta t)}\big{)} =c^{(t)}+M\Delta t\nabla^{2}\Bigg{(}\frac{\partial f^{(t)}_{ chem}}{\partial c}\Bigg{)} \tag{7}\] \[\eta^{(t+\Delta t)}_{i}-L\Delta t\kappa_{\eta}\nabla^{2}\eta^{(t+ \Delta t)}_{i} =\eta^{(t)}_{i}-L\Delta t\frac{\partial f^{(t)}_{chem}}{\partial \eta_{i}} \tag{8}\] where \(\Delta t\) is the time step. For the sake of clarity, the unknown fields \(c^{t+\Delta t}\) and \(\eta^{t+\Delta t}_{i}\) will be referred to as \(c\) and \(\eta_{i}\), respectively. By definition of the Fourier transform, the gradient of a field \(f\) is: \[\widehat{\nabla f}=i\boldsymbol{\xi}\widehat{f} \tag{9}\] with \(i\) the imaginary unit, \(\boldsymbol{\xi}\) the frequency vector, and \(\widehat{\ }\) denotes the Fourier transform of the affected variable. The previous differential equations can be transformed to the Fourier space, resulting in: \[\big{(}1+M\Delta t\kappa_{c}\|\boldsymbol{\xi}\|^{4}\big{)} \widehat{c}(\boldsymbol{\xi}) =\widehat{c}^{(t)}(\boldsymbol{\xi})-M\Delta t\|\boldsymbol{\xi} \|^{2}\widehat{\frac{\partial f^{(t)}_{chem}}{\partial c}}(\boldsymbol{\xi}) \tag{10}\] \[\big{(}1+L\Delta t\kappa_{\eta}\|\boldsymbol{\xi}\|^{2}\big{)} \widehat{\eta_{i}}(\boldsymbol{\xi}) =\widehat{\eta_{i}}^{(t)}(\boldsymbol{\xi})-L\Delta t\widehat{ \frac{\partial f^{(t)}_{chem}}{\partial\eta_{i}}}(\boldsymbol{\xi}) \tag{11}\] where the expressions in Eqs. (10) and (11) are two linear systems of algebraical equations in which the right hand side is the independent term \(\mathbf{b}\), depending on the values of the fields at the previous time step: \[\widehat{b}_{c}^{(t)}(\boldsymbol{\xi}) =\widehat{c}^{(t)}-M\Delta t\|\boldsymbol{\xi}\|^{2}\widehat{ \frac{\partial f_{chem}^{(t)}}{\partial c}} \tag{12}\] \[\widehat{b}_{\eta_{i}}^{(t)}(\boldsymbol{\xi}) =\widehat{\eta_{i}}^{(t)}-L\Delta t\widehat{\frac{\partial f_{ chem}^{(t)}}{\partial\eta_{i}}} \tag{13}\] The equations are decoupled for each frequency, so the system can be solved in Fourier space by inverting the left hand side as: \[\widehat{c}(\boldsymbol{\xi}) =\big{(}1+M\Delta t\kappa_{c}\|\boldsymbol{\xi}\|^{4}\big{)}^{-1} \widehat{b}_{c}^{(t)}(\boldsymbol{\xi}) \tag{14}\] \[\widehat{\eta_{i}}(\boldsymbol{\xi}) =\big{(}1+L\Delta t\kappa_{\eta}\|\boldsymbol{\xi}\|^{2}\big{)}^{ -1}\widehat{b}_{\eta_{i}}^{(t)}(\boldsymbol{\xi}) \tag{15}\] and the fields are readily obtained as the inverse Fourier transform of \(\widehat{c}\) and \(\widehat{\eta_{i}}\). If the domain under consideration is rectangular of size \(l_{x}\times l_{y}\) and it is discretized into a grid containing \(p_{x}\) and \(p_{y}\) points in each direction, the discrete Fourier frequency vector corresponds to \(\boldsymbol{\xi}=2\pi[n_{x}/l_{x},n_{y}/l_{y}]\) and the square of the frequency gradient is: \[\|\boldsymbol{\xi}\|^{2}=4\pi^{2}[(n_{x}/l_{x})^{2}+(n_{y}/l_{y})^{2}] \tag{16}\] with \(n_{x}\) and \(n_{y}\) the two-dimensional meshgrid matrices generated with two vectors of the form \([0,...,(p_{x}/2),-(p_{x}/2-1),...,-1]\) and \([0,...,(p_{y}/2),-(p_{y}/2-1),...,-1]\). 
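To fix ideas, the following is a minimal single-CPU Julia sketch of one semi-implicit step, Eqs. (14)-(15), on a 2D grid; the paper's implementation performs the same algebra in Python with GPU-side FFTs through Scikit-CUDA and PyCUDA, and the function names here are ours. The arrays dfc and dfη hold \(\partial f_{chem}/\partial c\) and \(\partial f_{chem}/\partial\eta_{i}\) evaluated pointwise at time \(t\), e.g. from the expressions written out in Section 2.1.

```julia
using FFTW

# Frequency vector of Eq. (16) for p grid points on a domain of length l:
ξvec(p, l) = 2π .* vcat(0:p ÷ 2, -(p ÷ 2 - 1):-1) ./ l

# One semi-implicit step of Eqs. (14)-(15); c is a px × py array, η a vector of
# such arrays, dfc and dfη the explicit-part derivatives at time t.
function fs_step!(c, η, dfc, dfη, Δt; M = 5.0, L = 5.0, κc = 3.0, κη = 3.0,
                  lx = 200.0, ly = 200.0)
    ξ2 = ξvec(size(c, 1), lx) .^ 2 .+ transpose(ξvec(size(c, 2), ly)) .^ 2  # ‖ξ‖²
    ĉ = (fft(c) .- M * Δt .* ξ2 .* fft(dfc)) ./ (1 .+ M * Δt * κc .* ξ2 .^ 2)
    c .= real.(ifft(ĉ))                                                  # Eq. (14)
    for i in eachindex(η)
        η̂ = (fft(η[i]) .- L * Δt .* fft(dfη[i])) ./ (1 .+ L * Δt * κη .* ξ2)
        η[i] .= real.(ifft(η̂))                                           # Eq. (15)
    end
    return c, η
end
```

As discussed next, the FS-FD variants only replace the precomputed ξ2 grid, leaving the rest of the step, and hence its per-step cost, unchanged.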
An alternative to the standard Fourier spectral approach, first proposed in [36] to reduce the aliasing effect in the presence of non-smooth functions, consists in replacing the definition of the derivative in Eq. (9) with a finite difference derivative, but computing it through the use of Fourier transform. To this end, the finite difference stencil under consideration (e.g. backward, forward or central differences) is obtained by transporting the function in Fourier space using the shift theorem [36]. This method allows calculation of the spatial derivatives by considering the local values of the fields, reducing possible oscillations in their values but in general losing accuracy. This approach results in preserving Eq. (9) for computing the gradient but redefining the frequencies. The square of the frequency gradient in Eqs. (14) and (15) is thus redefined as: \[\|\mathbf{\xi}_{c2}\|^{2}= -2\bigg{\{}\frac{\cos(\Delta x\xi_{x})-1}{\Delta x^{2}}+\frac{\cos( \Delta y\xi_{y})-1}{\Delta y^{2}}\bigg{\}} \tag{17}\] \[\|\mathbf{\xi}_{c4}\|^{2}= -\frac{1}{6}\bigg{\{}\frac{16\cos(\Delta x\xi_{x})-\cos(2\Delta x \xi_{x})-15}{\Delta x^{2}}\] \[+\frac{16\cos(\Delta y\xi_{y})-\cos(2\Delta y\xi_{y})-15}{\Delta y ^{2}}\bigg{\}} \tag{18}\] for centered \(\mathcal{O}(\Delta l^{2})\) FD and centered \(\mathcal{O}(\Delta l^{4})\) FD, respectively, where \(\Delta x\) and \(\Delta y\) are the distance between two consecutive grid points in the \(x\) and \(y\) directions, respectively. \(\Delta l\) indicates the approximation of the spatial discretization in the \(x\) (\(\Delta l=\Delta x\)) and \(y\) (\(\Delta l=\Delta y\)) directions. If the domain under consideration is a rectangular parallelepiped of size \(l_{x}\times l_{y}\times l_{z}\) and it is discretized into a grid containing \(p_{x}\), \(p_{y}\), and \(p_{z}\) points in each direction, the discrete Fourier frequency vector corresponds to \(\mathbf{\xi}=2\pi[n_{x}/l_{x},n_{y}/l_{y},n_{z}/l_{z}]\) and the square of the frequency gradient is: \[\|\mathbf{\xi}\|^{2}=4\pi^{2}[(n_{x}/l_{x})^{2}+(n_{y}/l_{y})^{2}+(n_{z}/l_{z})^{2}] \tag{19}\] with \(n_{x}\), \(n_{y}\), and \(n_{z}\) the three-dimensional meshgrid matrices generated with three vectors of the form \([0,...,(p_{x}/2),-(p_{x}/2-1),...,-1]\), \([0,...,(p_{y}/2),-(p_{y}/2-1),...,-1]\), and \([0,...,(p_{z}/2),-(p_{z}/2-1),...,-1]\). For the case of \(\mathcal{O}(\Delta l^{2})\) and \(\mathcal{O}(\Delta l^{4})\) FS-FD methods, the square of the frequency gradient is redefined as: \[\|\mathbf{\xi}_{c2}\|^{2}= -2\bigg{\{}\frac{\cos(\Delta x\xi_{x})-1}{\Delta x^{2}}+\frac{ \cos(\Delta y\xi_{y})-1}{\Delta y^{2}}+\frac{\cos(\Delta z\xi_{z})-1}{\Delta z ^{2}}\bigg{\}} \tag{20}\] \[\|\mathbf{\xi}_{c4}\|^{2}= -\frac{1}{6}\bigg{\{}\frac{16\cos(\Delta x\xi_{x})-\cos(2\Delta x \xi_{x})-15}{\Delta x^{2}}+\frac{16\cos(\Delta y\xi_{y})-\cos(2\Delta y\xi_{y })-15}{\Delta y^{2}}\] \[+\frac{16\cos(\Delta z\xi_{z})-\cos(2\Delta z\xi_{z})-15}{\Delta z ^{2}}\bigg{\}} \tag{21}\] where \(\Delta z\) is the distance between two consecutive grid points along the \(z\) direction. The resolution scheme is presented in Algorithm 1 for 3D domains. The computational scheme is implemented using the Python programming language. For each time step, the computation of discrete Fourier transform (\(\mathcal{F}\)) and discrete inverse Fourier transform (\(\mathcal{F}^{-1}\)) is performed, on the GPU device using Scikit-CUDA [37]. To solve Eqs. 
(14) and (15) on the GPU device, CUDA kernels are programmed through PyCUDA [38], where the arrays are defined in double precision. The GPU-parallelized FS resolution algorithm is compared to a standard explicit FD solver, also parallelized on GPU through PyCUDA. The FD algorithm uses common explicit first-order forward Euler time stepping, a second-order centered FD scheme for spatial derivatives, and a 5-point (2D domain) or 7-point (3D domain) stencil Laplacian operator discretized on a regular grid of square or cubic elements. Within a time iteration, all calculations are performed on the GPU (device) within three kernels calculating (i) the \(\mu=\partial f_{chem}/\partial c-\kappa_{c}\nabla^{2}c\) term from Eq. (2), (ii) \(c^{(t+\Delta t)}\) from Eq. (2), (iii) \(\eta_{i}^{(t+\Delta t)}\) from Eq. (3). Periodic BCs are applied at the end of each kernel on the calculated field (namely: \(\mu\), \(c\), and \(\eta_{i}\)) using one extra layer of grid points on each side of the domain. Time stepping is applied at the end of the time loop by swapping addresses of current (\(t\)) and next (\(t+\Delta t\)) arrays after the three kernel calls (i.e., on the CPU). Time-consuming memory copies between GPU (device) and CPU (host) are performed only when an output file of the fields is required. Performance comparisons were performed without file output, hence not wasting any time on file writing. The GPU block size was set to \(16\times 16\) for 2D cases and \(8\times 8\times 8\) for 3D cases, which was found to lead to near-optimal performance for all cases. This algorithm represents nearly optimal performance for a single-GPU FD implementation - and hence a fair comparison for FS-based simulations. Both FS and FD simulations were performed using a single GPU on a computer with the following hardware features: Intel Xeon Gold 6130 microprocessor, 187 GB RAM, GeForce RTX 2080Ti GPU (4352 Cuda cores and 11 GB RAM), and software features: CentOS Linux 7.6.1810, Python 3.8, PyCUDA 2021.1, Scikit-CUDA 0.5.3, and CUDA 10.1 (Toolkit 10.1.243).

### Simulations

The developed FS-GPU Python code is employed to simulate the diffusion-driven growth of \(\beta\) grains, with 4 crystal orientations, within an \(\alpha\) matrix. The nucleation of \(\beta\) is generated by the non-uniform fields of the initial condition. To test the computational performance and accuracy of the results, the time and space are discretized in different ways for 2D cases. As shown in Table 1, 15 cases are considered for each FS method by defining different combinations of \(\Delta t\) and numbers of grid points (\(n\)) in regular grids. Furthermore, we also use an FD-GPU Python code, considering an explicit centered FD method in order to compare its results with the proposed FS methods. The 5 cases solved with the FD method are also listed in Table 1. For 3D cases, the computational performance is tested by modifying the spatial discretization. As shown in Table 2, 5 cases are considered for each FS and FD method by defining different numbers of grid points in regular grids.
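For reference, the per-step update of the explicit FD baseline has the following shape (a single-CPU Julia sketch using circular shifts for the periodic BCs; the paper's version implements the same three stages as PyCUDA kernels with an explicit ghost layer, as described above). As in the FS sketch of Section 2.2, dfc and dfη denote the grids of \(\partial f_{chem}/\partial c\) and \(\partial f_{chem}/\partial\eta_{i}\) at time \(t\).

```julia
# 5-point periodic Laplacian in 2D via circular shifts:
lap(f, Δx) = (circshift(f, (1, 0)) .+ circshift(f, (-1, 0)) .+
              circshift(f, (0, 1)) .+ circshift(f, (0, -1)) .- 4 .* f) ./ Δx^2

# One explicit forward-Euler step mirroring the three GPU kernels described above:
function fd_step!(c, η, dfc, dfη, Δt, Δx; M = 5.0, L = 5.0, κc = 3.0, κη = 3.0)
    μ = dfc .- κc .* lap(c, Δx)                # kernel (i): chemical potential
    c .+= M * Δt .* lap(μ, Δx)                 # kernel (ii): Cahn-Hilliard, Eq. (2)
    for i in eachindex(η)                      # kernel (iii): Allen-Cahn, Eq. (3)
        η[i] .-= L * Δt .* (dfη[i] .- κη .* lap(η[i], Δx))
    end
    return c, η
end
```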
\begin{table} \begin{tabular}{|c|c|c|} \hline & \multicolumn{2}{c|}{\(\boldsymbol{\Delta\mathbf{t}}\)} \\ \hline **Number of grid points** & FD method & FS methods \\ \hline \(128^{2}\) & \(8.138\times 10^{-3}\) & \(10^{-4}\); \(10^{-3}\); \(5\times 10^{-3}\) \\ \(256^{2}\) & \(5.086\times 10^{-4}\) & \(10^{-4}\); \(10^{-3}\); \(5\times 10^{-3}\) \\ \(512^{2}\) & \(3.560\times 10^{-5}\) & \(10^{-4}\); \(10^{-3}\); \(5\times 10^{-3}\) \\ \(1024^{2}\) & \(2.225\times 10^{-6}\) & \(10^{-4}\); \(10^{-3}\); \(5\times 10^{-3}\) \\ \(2048^{2}\) & \(1.490\times 10^{-7}\) & \(10^{-4}\); \(10^{-3}\); \(5\times 10^{-3}\) \\ \hline \end{tabular} \end{table} Table 1: Analyzed cases for 2D FS and FD numerical methods. ## 3 Results and discussion The maximum values of \(\Delta t\) for the FS methods and the FD method were determined by computational trial-and-error exploration of the algorithm stability. As expected, the semi-implicit FS algorithm is more stable than explicit FD, particularly when \(n\) is high, as it was possible to employ much higher values of \(\Delta t\). Moreover, as in [22], the unconditional stability of the semi-implicit scheme was observed when grid size is increased, such that, unlike with FD, it is not necessary to decrease \(\Delta t\) as \(n\) increases. Below, we first describe the system and free energy evolution for representative simulations with relatively fine discretization in both FS and FD methods (Section 3.1), before discussing computational performance (Section 3.2) and accuracy (Section 3.3) when changing \(n\) and \(\Delta t\). ### System evolution The evolutions of the total free energy and of its individual components (Eq. (1)), computed with FD (\(\Delta t=2.225\times 10^{-6}\) and \(n=1024^{2}\)) and FS (\(\Delta t=10^{-3}\) and \(n=1024^{2}\)) methods for 2D cases, are presented in Figure 1. A monotonic decrease in total free energy is observed, consistent with the expected energy minimization during microstructure evolution. The chemical contribution (i.e., integrating only the first term in Eq. (1)) behaves similarly as the total free energy because the system tends to equilibrium. The interfacial energy contributions with respect to \(c\) (integrating the second term in Eq. (1)) or \(\eta_{i}\) (integrating independently the last term in Eq. (1) for each \(i\)) increase and decrease when the surface areas of the particles increase and decrease, respectively. Fluctuations in \(\eta_{i}\) components are attributed to shrinkage of some particles whilst others grow by dissolution of neighbor particles. As expected from the fine grids used here and hence the high accuracy of the two simulations (see Section 3.3), the two methods predict the same energy evolution. \begin{table} \begin{tabular}{|c|c|c|} \hline & \multicolumn{2}{|c|}{\(\mathbf{\Delta t}\)} \\ \hline **Number of grid points** & FD method & FS methods \\ \hline \(16^{3}\) & \(4.06\times 10^{-2}\) & \(5\times 10^{-3}\) \\ \(32^{3}\) & \(2.54\times 10^{-2}\) & \(5\times 10^{-3}\) \\ \(64^{3}\) & \(3.17\times 10^{-3}\) & \(5\times 10^{-3}\) \\ \(128^{3}\) & \(7.94\times 10^{-5}\) & \(5\times 10^{-3}\) \\ \(256^{3}\) & \(9.93\times 10^{-6}\) & \(5\times 10^{-3}\) \\ \hline \end{tabular} \end{table} Table 2: Analyzed cases for 3D FS and FD numerical methods. Figure 2 shows the evolution of the 2D microstructure simulated on a \(1024^{2}\) grid using FD (\(\Delta t=2.225\times 10^{-6}\)), FS (\(\Delta t=10^{-3}\)), and \(\mathcal{O}(\Delta l^{4})\) FS-FD (\(\Delta t=10^{-3}\)) methods. 
The \(\eta_{1}\), \(\eta_{2}\), \(\eta_{3}\), and \(\eta_{4}\) variants of the phase field (i.e., regions with \(\eta_{i}=1\)) are represented in blue, green, yellow, and red, respectively, and the concentration field is represented by solid, dashed, and dotted contour lines at values 0.35, 0.5, and 0.65, respectively. In all cases, the microstructure evolves as expected in an Ostwald ripening simulation. Early in the simulation, several particles of each variant exist and interact with one another. Later on, some particles disappear and others of the same variant merge. Finally, after a sufficiently long time, the microstructure reaches steady state with only one \(\eta_{1}\) particle remaining. The phase evolution results obtained with the different methods are nearly indistinguishable from each other - thus illustrating that the negligible error levels assessed at \(t=100\) in Section 3.3 below can be generalized to the entire duration of the simulation.

Figure 1: Free energy evolution, for 2D cases, obtained for a \(1024^{2}\) grid with FD (\(\Delta t=2.225\times 10^{-6}\)) and FS (\(\Delta t=10^{-3}\)) numerical methods.

Figure 2: Evolution of phase (color) and concentration (iso-value lines) fields, for 2D cases, at different times (rows) for a grid size of \(1024^{2}\) obtained with different methods (columns), namely: (left) FD (\(\Delta t=2.225\times 10^{-6}\)); (center) FS (\(\Delta t=10^{-3}\)), and (right) \(\mathcal{O}(\Delta l^{4})\) FS-FD (\(\Delta t=10^{-3}\)).

Figure 3 shows the evolution of the 3D microstructure simulated on a \(256^{3}\) grid using the FS method. As in Fig. 2, the (\(\eta_{i}=0.5\)) interfaces for \(i=1\), \(2\), \(3\), and \(4\) are represented in blue, green, yellow, and red, respectively. Isovalues of the concentration field along the central (\(z=l_{z}/2\)) plane are represented from \(c=0.3\) to \(c=0.8\) with steps of \(0.05\). In all cases, the microstructure evolves as expected in an Ostwald ripening simulation. Several small particles of each variant exist at early times and one big (here, planar) particle of \(\eta_{1}\) remains when the microstructure reaches steady state. The microstructure evolves faster in the 3D cases because the domain size and particles are relatively smaller than in 2D cases.

### Computational performance

The wall clock time required to complete the 2D simulations from time \(1\) to \(100\) (dimensionless) is presented in Figure 4.a. For each studied case, \(6\) simulations are run with the same parameters and the wall clock time is determined by averaging the results, because a variation in the wall clock time consumed by Scikit-CUDA is observed. Different line thicknesses or symbol sizes represent different \(\Delta t\) (thicker/bigger: lower \(\Delta t\)). Solid blue lines show the FS algorithm, symbols show the FS-FD algorithm with \(\mathcal{O}(\Delta l^{2})\) (yellow \(\circ\)) or \(\mathcal{O}(\Delta l^{4})\) (green \(\times\)), and dashed red lines show the corresponding FD algorithm. As expected, the wall clock time decreases as \(\Delta t\) increases, since fewer steps are necessary to reach \(t=100\). For a given \(\Delta t\), the performance of all FS methods is similar because \(\|\boldsymbol{\xi}\|^{2}\) is computed only once at the initialization stage.

Figure 3: Evolution of (\(\eta_{i}=0.5\)) interfaces (color) and concentration (isovalue lines along the (\(z=l_{z}/2\)) plane) fields, for a 3D simulation with \(n=256^{3}\) obtained with the FS method.

Figure 4: Computational performance for 2D cases:
a) wall clock time, b) wall clock time per time step per grid point, and c) wall clock time per grid point, all of them for different numbers of grid points.

The wall clock time to compute one time step per grid point is shown in Figure 4.b for 2D simulations. This metric is independent of the FS method and the chosen time step. For all methods, it is significantly higher for \(n=128^{2}\), which is attributed to parallelization inefficiency when the grid size is small. Indeed, a \(128^{2}\) domain corresponds to less than 3.8 grid points per CUDA core, which is not optimal in the absence of any load balancing supervision. The required solution time per grid point per time step using the FS methods is between 5.2 and 6.5 times higher than using the FD method. This is due to the required computation, at every step, of \(\mathcal{F}\) of the partial derivatives of \(f_{chem}\) at time \(t\), and \(\mathcal{F}^{-1}\) of \(\widehat{c}\) and \(\widehat{\eta_{i}}\) at time \(t+\Delta t\), which are not required in FD. Hence, FS methods perform best when \(\Delta t\) is sufficiently larger than with the FD method, namely by a factor of 6.5 or more, in order to compensate for the extra time needed to calculate \(\mathcal{F}\) and \(\mathcal{F}^{-1}\). This compensation is observed at \(n\geq 256^{2}\) for \(\Delta t=5\times 10^{-3}\) (i.e., \(\Delta t_{\text{FS}}/\Delta t_{\text{FD}}=9.8\)), at \(n\geq 512^{2}\) for \(\Delta t=10^{-3}\) (i.e., \(\Delta t_{\text{FS}}/\Delta t_{\text{FD}}=28.1\)), and at \(n\geq 1024^{2}\) for \(\Delta t=10^{-4}\) (i.e., \(\Delta t_{\text{FS}}/\Delta t_{\text{FD}}=44.9\)), as seen in Figures 4.a and 4.c. When \(n=128^{2}\), the FD method has better performance than the FS ones, but its accuracy is not as good, as discussed in Section 3.3.

The wall clock time required to complete the 3D simulations from time 1 to 100 (dimensionless) is presented in Figure 5.a. Solid blue lines show the FS algorithm, dashed yellow lines show the FS-FD algorithm with \(\mathcal{O}(\Delta l^{2})\), dashed green lines show the FS-FD algorithm with \(\mathcal{O}(\Delta l^{4})\), and dashed red lines show the corresponding FD algorithm. For each case with \(n=128^{3}\) and \(n=256^{3}\) computed with FS-based methods, the wall clock time is determined by averaging the results obtained from 6 simulations, because the wall clock time consumed by Scikit-CUDA presents some variations. The same behavior as in the 2D cases is observed, and the performance is in favor of FS methods when \(\Delta t\) is sufficiently larger than with the FD method, by a factor of 4.3 or more, in order to compensate for the extra time needed to calculate \(\mathcal{F}\) and \(\mathcal{F}^{-1}\). This compensation is observed at \(n\geq 128^{3}\) (\(\Delta t_{\text{FS}}/\Delta t_{\text{FD}}=62.9\)). When \(n<128^{3}\), the FD method has better performance than the FS ones, but its accuracy is not as good. The wall clock time to compute one step per grid point is shown in Figure 5.b. This metric is independent of the FS method and the chosen time step. In all cases, it increases significantly for \(n=16^{3}\) and \(n=32^{3}\), which is related to parallelization inefficiency when the grid size is small. The required solution time per time step per grid point using the FS methods is between 2.3 and 4.3 times higher than using the FD method.
This ratio is lower than in 2D cases because FS methods compute \(\|\boldsymbol{\xi}\|^{2}\) at the beginning of the simulation, and in this way the number of operations per grid point is the same for 2D and 3D; this is not the case for the FD method, which uses a 5-point stencil in 2D and a 7-point stencil in 3D, increasing the computational cost per grid point, as illustrated in Figure 5.b with 2D cases shown as symbols. Regarding GPU memory (RAM) consumption, the FS and FD 2D simulations with \(n=2048^{2}\) require 1.93 GB and 0.46 GB, respectively, and the FS and FD 3D simulations with \(n=256^{3}\) require 7.14 GB and 1.52 GB, respectively. This difference arises from the fact that 12 complex arrays are needed for the FS scheme compared to the 6 real ones for FD. The increase in performance (and accuracy, as shown in Section 3.3) provided by FS methods thus comes at the cost of a higher memory consumption - but even the 7 GB required by the largest \(256^{3}\) cases with 5 tracked fields easily fits within almost any modern GPU.

Figure 5: Computational performance for 3D cases for different numbers of grid points (see Table 2): a) wall clock time and b) wall clock time per time step per grid point.

### Accuracy of the results

The accuracy of results obtained for different values of \(\Delta t\) and \(n\) is quantified, in comparison to a reference solution, by measuring the L\({}_{2}\) normalized global error of a field \(\phi\) computed at \(t=100\) as: \[E_{\phi}=\frac{\int_{V}(\phi_{0}-\phi)^{2}\,dV}{\int_{V}\phi_{0}^{2}\,dV} \tag{22}\] where \(\phi_{0}\) is the reference solution field and \(\phi\) is the assessed solution field for a given test condition. In the absence of a reference (e.g., analytical) solution, the reference solution for 2D simulations is chosen as the FS implementation with the lowest \(\Delta t=10^{-4}\) and the highest \(n=2048^{2}\). For the phase fields, the solutions of \(\eta_{i}\) (\(i=1\) to \(4\)) are added together into a field \(\eta\). We only compare accuracy for the 2D calculations, as we consider that the finest \(256^{3}\) grid is not fine enough to be considered a reference, and we expect the trends observed in 2D to generalize to 3D. Figure 6 shows the errors, in percentage, for (a) concentration and (b) phase fields with different numbers of grid points and time step size for 2D cases. As expected, the errors decrease with increasing number of grid points and decreasing \(\Delta t\). For the coarsest \(n=128^{2}\) grid, good accuracy (i.e., an error lower than 1%) is achieved, except for FD and \(\mathcal{O}(\Delta l^{2})\) FS-FD methods, giving errors of \(E_{c}[\%]=2.03\) and \(E_{\eta}[\%]=7.74\). FS and \(\mathcal{O}(\Delta l^{4})\) FS-FD methods provide more accurate results than FD or \(\mathcal{O}(\Delta l^{2})\) FS-FD because they lead to more accurate spatial derivatives even on coarser grids. Comparing two cases with equivalent errors, namely \(E_{c}[\%]\approx 10^{-6}\) and \(E_{\eta}[\%]\approx 2\times 10^{-5}\), the wall clock time ratio between FS (\(n=256^{2}\) and \(\Delta t=10^{-3}\)) and FD (\(n=2048^{2}\) and \(\Delta t=1.490\times 10^{-7}\)) is over four orders of magnitude (\(6\times 10^{4}\)), at 46 seconds and 32 days, respectively. Using the \(\mathcal{O}(\Delta l^{4})\) FS-FD algorithm, a lower time step \(\Delta t=10^{-4}\) is required to achieve a similar accuracy.
Since the required time step is one order of magnitude lower than with FS, the resulting speed-up compared to FD at a similar error is one order of magnitude lower (\(\mathcal{O}(\Delta l^{4})\) FS-FD is still faster than FD by over three orders of magnitude). In summary, for simulations requiring a sufficient level of accuracy, the FS method, and to a lesser extent the \(\mathcal{O}(\Delta l^{4})\) FS-FD method, are significantly more efficient than the classical FD algorithm, due to less stringent requirements on both spatial and temporal discretization.

Figure 6: Error for 2D cases of a) concentration field and b) phase field for different numbers of grid points.

## 4 Conclusions

Here, we compared different numerical resolution methods for a benchmark Ostwald-ripening phase-field simulation. Our spectral solver makes use of a semi-implicit Fourier spectral-based numerical method, implemented in the Python language, and parallelized on a single GPU. By comparison with first-order forward Euler finite difference for time discretization and second-order centered finite difference for space discretization, also parallelized on a single GPU, we conclude that:

* Fourier spectral-based methods significantly outperform finite differences when a large number of spatial grid points is required. This advantage is due to the much larger stable time step afforded by semi-implicit FS methods compared to explicit FD, in spite of the additional operations (transformation and anti-transformation of complex/scalar variables) per time step required by the FS solver. For our implementation, the performance is in favor of FS methods as long as \(\Delta t\) is larger than the maximum stable time step for the FD method by a factor of 6.5 in 2D or 4.3 in 3D.
* The time step stability does not strongly depend on the grid size in the semi-implicit FS-based methods, which represents a significant advantage when a fine spatial discretization is required.
* Under the same temporal and spatial discretization conditions, all of the Fourier spectral-based methods (namely FS or FS-FD) have the same computational performance, but the highest accuracy is obtained with the Fourier spectral scheme.
* For a similar level of accuracy (i.e., of error) for both phase and concentration fields, the computational performance of the FS and FS-FD \(\mathcal{O}(\Delta l^{4})\) methods exceeds that of explicit FD by more than four orders of magnitude.
* The Python programming language allowed easy implementation of the model and exploitation of the benefits of the GPU device through the CUDA kernel implementation.
* For the considered benchmark, Fourier spectral-based methods required 4.7 times more memory (RAM) than the explicit FD, which may limit the applicability of the former for 3D domains with fine grids using single-GPU hardware.

## Acknowledgements

This research was supported by the Science Foundation Ireland (SFI) under grant number 16/RC/3872. D.T. acknowledges the financial support from the Spanish Ministry of Science through the Ramon y Cajal grant RYC2019-028233-I. A.B. acknowledges the support of ECHV and financial support from HORIZON-TMA-MSCA-PF-EF 2021 (grant agreement 101063099).

## Data Availability

Data will be made available on request.
2307.14218
Interest rate convexity in a Gaussian framework
The contributions of this paper are twofold: we define and investigate the properties of a short rate model driven by a general Gaussian Volterra process and, after defining precisely a notion of convexity adjustment, derive explicit formulae for it.
Antoine Jacquier, Mugad Oumgari
2023-07-26T14:35:28Z
http://arxiv.org/abs/2307.14218v2
# Interest rate convexity in a Gaussian framework

###### Abstract.

The contributions of this paper are twofold: we define and investigate the properties of a short rate model driven by a general Gaussian Volterra process and, after defining precisely a notion of convexity adjustment, derive explicit formulae for it.

Key words and phrases: interest rates, fractional Brownian motion, convexity adjustment. 2010 Mathematics Subject Classification: 60G15, 91-10. The authors would like to thank Damiano Brigo for helpful comments. AJ is supported by the EPSRC grants EP/W032643/1 and EP/T032146/1.

(Two remarks from the preliminaries: the \(L^{2}\) property ensures that the stochastic integral \((\varphi(t-\cdot)\circ\mathfrak{W})_{t}\) is well defined, and the assumption does not imply that the short rate process itself, while Gaussian, is a semimartingale.)

**Proposition 2.4**.: _The price of the zero-coupon bond at time \(t\) reads_

\[P_{t,T}=\exp\left\{-\Theta_{t,T}+\frac{1}{2}\int_{t}^{T}\Xi_{T}(u,u)^{2}\mathrm{d}u+\left(\Xi_{T}(t,\cdot)\circ\mathfrak{W}\right)_{t}\right\},\]

_and the discounted bond price \(\widetilde{P}_{t,T}:=P_{t,T}\exp\left\{-\int_{0}^{t}r_{s}\mathrm{d}s\right\}\) is a \(\mathbb{Q}\)-martingale satisfying_

\[\frac{\mathrm{d}\widetilde{P}_{t,T}}{\widetilde{P}_{t,T}}=\Xi_{T}(t,t)\,\mathrm{d}\mathfrak{W}_{t}.\]

**Corollary 2.5**.: _The instantaneous forward rate satisfies \(f_{T,T}=r_{T}\) and, for all \(t\in[0,T)\),_

\[f_{t,T}=\theta(T)+\int_{0}^{t}\varphi(T,u)\mathrm{d}\mathfrak{W}_{u}+\int_{t}^{T}\varphi(T,u)\Xi_{T}(u,u)\mathrm{d}u.\]

In differential form, for any fixed \(T>0\) and \(t\in[0,T]\), this is equivalent to

\[\mathrm{d}f_{t,T}=\varphi(T-t)\mathrm{d}\mathfrak{W}_{t}-\varphi(T-t)\Xi_{T}(t,t)\mathrm{d}t.\]

**Algorithm 2.6**.: For simulation purposes, we assume a time grid of the form \(\mathcal{T}:=\{0=t_{0}<t_{1}<\cdots<t_{N}=T\}\), and we discretise the stochastic integral along this grid with left-point approximations as

\[\left(\Xi_{T}(t_{i},\cdot)\circ\mathfrak{W}\right)_{t_{i}}=\int_{0}^{t_{i}}\Xi_{T}(t_{i},u)\mathrm{d}\mathfrak{W}_{u}\approx\sum_{k=0}^{i-1}\Xi_{T}(t_{i},t_{k})\left(\mathfrak{W}_{t_{k+1}}-\mathfrak{W}_{t_{k}}\right),\qquad\text{for each }i=1,\ldots,N.\]

The vector \(\left(\Xi_{T}(t_{i},\cdot)\circ\mathfrak{W}\right)_{t_{i}\in\mathcal{T}}\) of stochastic integrals can then be simulated along the time mesh directly as

\[\begin{pmatrix}\left(\Xi_{T}(t_{1},\cdot)\circ\mathfrak{W}\right)_{t_{1}}\\ \vdots\\ \left(\Xi_{T}(t_{N},\cdot)\circ\mathfrak{W}\right)_{t_{N}}\end{pmatrix}\approx\begin{pmatrix}\Xi_{T}(t_{1},t_{0})&&&\\ \Xi_{T}(t_{2},t_{0})&\Xi_{T}(t_{2},t_{1})&&\\ \vdots&\vdots&\ddots&\\ \Xi_{T}(t_{N},t_{0})&\Xi_{T}(t_{N},t_{1})&\cdots&\Xi_{T}(t_{N},t_{N-1})\end{pmatrix}\begin{pmatrix}\mathfrak{W}_{t_{1}}-\mathfrak{W}_{t_{0}}\\ \vdots\\ \mathfrak{W}_{t_{N}}-\mathfrak{W}_{t_{N-1}}\end{pmatrix},\]

where the middle matrix is lower triangular (we omit the null terms everywhere for clarity).
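Algorithm 2.6 thus amounts to a single lower-triangular matrix-vector product per maturity. The following minimal Python sketch (ours, not from the paper; the exponential kernel used in the example anticipates Section 2.3.1) implements it directly:

```python
import numpy as np

def simulate_stochastic_integrals(Xi, t, dW):
    """Left-point discretisation of Algorithm 2.6.

    Builds the lower-triangular matrix M[i-1, k] = Xi(t_i, t_k) for k < i
    and returns M @ dW, the vector of integrals at t_1, ..., t_N.
    """
    N = len(dW)
    M = np.zeros((N, N))
    for i in range(1, N + 1):
        for k in range(i):
            M[i - 1, k] = Xi(t[i], t[k])
    return M @ dW

# Example with a Brownian driver and the exponential kernel of Section 2.3.1,
# Xi_T(t, u) = (exp(-alpha (T - u)) - exp(-alpha (t - u))) / alpha:
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 101)                 # t_0, ..., t_N with N = 100
dW = rng.normal(0.0, np.sqrt(np.diff(t)))      # increments W_{t_{k+1}} - W_{t_k}
alpha, T = 1.0, 1.0
Xi = lambda s, u: (np.exp(-alpha * (T - u)) - np.exp(-alpha * (s - u))) / alpha
I = simulate_stochastic_integrals(Xi, t, dW)   # integrals at t_1, ..., t_N
```

For a general Gaussian Volterra driver \(\mathfrak{W}\), the increments would be drawn jointly from their Gaussian covariance rather than independently as in this Brownian example.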
**Corollary 2.7**.: _With \(\varphi(t)=\sigma\mathrm{e}^{-\kappa t}\) for some \(\sigma>0\), \(\mathfrak{W}=W\) a standard Brownian motion, and \(\theta(t):=r_{0}\mathrm{e}^{-\kappa t}+\mu\left(1-\mathrm{e}^{-\kappa t}\right)\), we recover exactly the Vasicek model [19], with dynamics_

\[\mathrm{d}r_{t}=\kappa(\mu-r_{t})\mathrm{d}t+\sigma\mathrm{d}W_{t},\qquad\text{starting from }r_{0}.\]

Proof of Proposition 2.4.: The price of the zero-coupon bond at time \(t\) reads

\[P_{t,T}:=\mathbb{E}_{t}^{\mathbb{Q}}\left[\exp\left\{-\int_{t}^{T}r_{s}\mathrm{d}s\right\}\right]=\mathbb{E}_{t}^{\mathbb{Q}}\left[\exp\left\{-\int_{t}^{T}\left(\theta(s)+\int_{0}^{s}\varphi(s,u)\mathrm{d}\mathfrak{W}_{u}\right)\mathrm{d}s\right\}\right]=\mathrm{e}^{-\Theta_{t,T}}\mathbb{E}_{t}^{\mathbb{Q}}\left[\exp\left\{-\int_{t}^{T}\left(\int_{0}^{s}\varphi(s,u)\mathrm{d}\mathfrak{W}_{u}\right)\mathrm{d}s\right\}\right]. \tag{2.1}\]

Using Fubini, we can write

\[-\int_{t}^{T}\left(\int_{0}^{s}\varphi(s,u)\mathrm{d}\mathfrak{W}_{u}\right)\mathrm{d}s=-\int_{0}^{t}\left(\int_{t}^{T}\varphi(s,u)\mathrm{d}s\right)\mathrm{d}\mathfrak{W}_{u}-\int_{t}^{T}\left(\int_{u}^{T}\varphi(s,u)\mathrm{d}s\right)\mathrm{d}\mathfrak{W}_{u}=\int_{0}^{t}\Xi_{T}(t,u)\mathrm{d}\mathfrak{W}_{u}+\int_{t}^{T}\Xi_{T}(u,u)\mathrm{d}\mathfrak{W}_{u}, \tag{2.2}\]

using (1.2). Plugging this into (2.1), the zero-coupon bond then reads

\[P_{t,T}=\mathrm{e}^{-\Theta_{t,T}}\exp\left\{\int_{0}^{t}\Xi_{T}(t,u)\mathrm{d}\mathfrak{W}_{u}\right\}\mathbb{E}_{t}^{\mathbb{Q}}\left[\exp\left\{\int_{t}^{T}\Xi_{T}(u,u)\mathrm{d}\mathfrak{W}_{u}\right\}\right]=\mathrm{e}^{-\Theta_{t,T}}\exp\left\{\left(\Xi_{T}(t,\cdot)\circ\mathfrak{W}\right)_{t}\right\}\mathbb{E}_{t}^{\mathbb{Q}}\left[\mathrm{e}^{(\Xi_{T}\circ\mathfrak{W})_{t,T}}\right].\]

Conditional on \(\mathcal{F}_{t}\), the random variable \(\left(\Xi_{T}\circ\mathfrak{W}\right)_{t,T}\) is centered Gaussian with variance

\[\mathbb{V}_{t}\left[\left(\Xi_{T}\circ\mathfrak{W}\right)_{t,T}\right]=\int_{t}^{T}\Xi_{T}(u,u)^{2}\mathrm{d}u,\]

so that

\[P_{t,T}=\mathrm{e}^{-\Theta_{t,T}}\exp\left\{\left(\Xi_{T}(t,\cdot)\circ\mathfrak{W}\right)_{t}+\frac{1}{2}\int_{t}^{T}\Xi_{T}(u,u)^{2}\mathrm{d}u\right\}.\]

Note that, using Fubini and Assumption 2.1,

\[\left(\Xi_{T}(t,\cdot)\circ\mathfrak{W}\right)_{t}=\int_{0}^{t}\Xi_{T}(t,u)\mathrm{d}\mathfrak{W}_{u}=\int_{0}^{t}\left(\Xi_{T}(u,u)+\int_{u}^{t}\partial_{s}\Xi_{T}(s,u)\mathrm{d}s\right)\mathrm{d}\mathfrak{W}_{u}=\int_{0}^{t}\Xi_{T}(u,u)\mathrm{d}\mathfrak{W}_{u}+\int_{0}^{t}\int_{u}^{t}\partial_{s}\Xi_{T}(s,u)\mathrm{d}s\,\mathrm{d}\mathfrak{W}_{u}=\int_{0}^{t}\Xi_{T}(u,u)\mathrm{d}\mathfrak{W}_{u}+\int_{0}^{t}\int_{0}^{s}\partial_{s}\Xi_{T}(s,u)\mathrm{d}\mathfrak{W}_{u}\,\mathrm{d}s=\int_{0}^{t}\Xi_{T}(u,u)\mathrm{d}\mathfrak{W}_{u}+\int_{0}^{t}\int_{0}^{s}\varphi(s,u)\mathrm{d}\mathfrak{W}_{u}\,\mathrm{d}s.\]

This is an \(L^{1}\)-Dirichlet process [18, Definition 2], written as a decomposition of a local martingale and a term with zero quadratic variation. Therefore \(\langle\log(P_{\cdot,T}),\log(P_{\cdot,T})\rangle_{t}=\int_{0}^{t}\Xi_{T}(u,u)^{2}\mathrm{d}u\) and

\[\mathrm{d}\log(P_{\cdot,T})=\left(\theta(t)+\left(\partial_{t}\Xi_{T}(t,\cdot)\circ\mathfrak{W}\right)_{t}-\frac{1}{2}\Xi_{T}(t,t)^{2}\right)\mathrm{d}t+\Xi_{T}(t,t)\mathrm{d}\mathfrak{W}_{t}.
\tag{2.3}\] Now, Ito's formula using (2.3) yields \(P_{T,T}=P_{t,T}+\int_{t}^{T}P_{s,T}\mathrm{d}X_{s}+\frac{1}{2}\int_{t}^{T}P_{s,T}\mathrm{d}\langle X,X\rangle_{s}\), hence, for each \(T>0\), \(\mathrm{d}P_{T,T}=\mathrm{d}P_{t,T}-P_{t,T}\mathrm{d}X_{t}-\frac{1}{2}P_{t,T} \mathrm{d}\langle X,X\rangle_{t}\), and therefore, since \(P_{T,T}=1\), \[\frac{\mathrm{d}P_{t,T}}{P_{t,T}} =\mathrm{d}X_{t}+\frac{1}{2}\mathrm{d}\langle X,X\rangle_{t}\] \[=\left(\underbrace{\theta(t)+\left(\partial_{t}\Xi_{T}(t,\cdot) \circ\mathfrak{W}\right)_{t}}_{t_{t}}-\frac{1}{2}\Xi_{T}(t)^{2}\right)\mathrm{d }t+\Xi_{T}(t,t)\mathrm{d}\mathfrak{W}_{t}+\frac{1}{2}\mathrm{d}\left(\int_{0}^ {t}\Xi_{T}(u,u)^{2}\mathrm{d}u\right)\] \[=r_{t}\mathrm{d}t+\Xi_{T}(t,t)\mathrm{d}\mathfrak{W}_{t}-\frac{1 }{2}\Xi_{T}(t)^{2}\mathrm{d}t+\frac{1}{2}\Xi_{T}(t,t)^{2}\mathrm{d}t\] \[=r_{t}\mathrm{d}t+\Xi_{T}(t,t)\mathrm{d}\mathfrak{W}_{t}.\] The dynamics of the discounted zero-coupon bond price in the lemma follows immediately. Proof of Corollary 2.5.: The instantaneous forward rate process (1.4) reads \[f_{t,T} =\partial_{T}\Theta_{t,T}-\partial_{T}\int_{0}^{t}\Xi_{T}(t,u) \mathrm{d}\mathfrak{W}_{u}-\frac{1}{2}\partial_{T}\int_{t}^{T}\Xi_{T}(u)^{2} \mathrm{d}u\] \[=\partial_{T}\Theta_{t,T}-\partial_{T}\int_{0}^{t}\left(-\int_{t} ^{T}\varphi(s,u)\mathrm{d}s\right)\mathrm{d}\mathfrak{W}_{u}-\frac{1}{2} \partial_{T}\int_{t}^{T}\left(-\int_{u}^{T}\varphi(s,u)\mathrm{d}s\right)^{2} \mathrm{d}u\] \[=\theta(T)+\int_{0}^{t}\partial_{T}\left(\int_{t}^{T}\varphi(s,u )\mathrm{d}s\right)\mathrm{d}\mathfrak{W}_{u}-\frac{1}{2}\partial_{T}\int_{t}^ {T}\left(\int_{u}^{T}\varphi(s,u)\mathrm{d}s\right)^{2}\mathrm{d}u\] \[=\theta(T)+\int_{0}^{t}\varphi(T,u)\mathrm{d}\mathfrak{W}_{u}- \frac{1}{2}\left(\int_{T}^{T}\varphi(s,T)^{2}\mathrm{d}s+\int_{t}^{T}\partial _{T}\left[\left(\int_{u}^{T}\varphi(s,u)\mathrm{d}s\right)^{2}\right]\mathrm{ d}u\right)\] \[=\theta(T)+\int_{0}^{t}\varphi(T,u)\mathrm{d}\mathfrak{W}_{u}- \int_{t}^{T}\varphi(T,u)\left(\int_{u}^{T}\varphi(s,u)\mathrm{d}s\right) \mathrm{d}u\] \[=\theta(T)+\int_{0}^{t}\varphi(T,u)\mathrm{d}\mathfrak{W}_{u}+ \int_{t}^{T}\varphi(T,u)\Xi_{T}(u,u)\mathrm{d}u,\] as claimed. **Remark 2.8**.: The two lemmas above correspond to the two sides of the Heath-Jarrow-Morton framework. From the expression of the instantaneous forward rate, let \(\alpha_{t,T}:=\varphi(T-t)\Xi_{T}(t,t)\) and \(\beta_{t,T}:=\varphi(T-t)\), so that \(\mathrm{d}f_{t,T}=\beta_{t,T}\mathrm{d}\mathfrak{W}_{t}-\alpha_{t,T}\mathrm{ d}t\), and consider the discounted bond price \[\widetilde{P}_{t,T}:=P_{t,T}\exp\left\{-\int_{0}^{t}r_{s}\mathrm{d}s\right\}= \exp\left\{-\int_{0}^{t}r_{s}\mathrm{d}s-\int_{t}^{T}f_{t,s}\mathrm{d}s\right\} =:\mathrm{e}^{Z_{t}}.\] Itos' formula then yields \[\frac{\mathrm{d}\widetilde{P}_{t,T}}{\widetilde{P}_{t,T}}=\mathrm{d}Z_{t}+ \frac{1}{2}\mathrm{d}\langle Z,Z\rangle_{t}. 
\tag{2.4}\] From the differential form of \(f_{t,T}\), we can write, for any \(t\in[0,T)\), \[f_{t,T}=f_{0,T}+\int_{0}^{t}\mathrm{d}f_{s,T}=f_{0,T}+\int_{0}^{t}\left( \varphi(T,u)\mathrm{d}\mathfrak{W}_{u}-\varphi(T,u)\Xi_{T}(u,u)\mathrm{d}u \right)=f_{0,T}+\int_{0}^{t}\beta_{u,T}\mathrm{d}\mathfrak{W}_{u}+\int_{0}^{t} \alpha_{u,T}\mathrm{d}u,\] so that, using stochastic Fubini, we obtain \[F_{t,T} :=\int_{t}^{T}f_{t,s}\mathrm{d}s=\int_{t}^{T}\left(f_{0,s}+\int_{ 0}^{t}\beta_{u,s}\mathrm{d}\mathfrak{W}_{u}+\int_{0}^{t}\alpha_{u,s}\mathrm{d }u\right)\mathrm{d}s\] \[=\int_{t}^{T}f_{0,s}\mathrm{d}s+\int_{0}^{t}\int_{t}^{T}\beta_{u,s}\mathrm{d}s\mathrm{d}\mathfrak{W}_{u}+\int_{0}^{t}\int_{t}^{T}\alpha_{u,s} \mathrm{d}s\mathrm{d}u.\] Now, \[\int_{t}^{T}f_{0,s}\mathrm{d}s =\int_{t}^{T}\left(f_{s,s}-\int_{0}^{s}\partial_{u}f_{u,s}\mathrm{ d}u\right)\mathrm{d}s\] \[=\int_{t}^{T}r_{s}\mathrm{d}s-\int_{0}^{t}\int_{t}^{T}\partial_{u }f_{u,s}\mathrm{d}s\mathrm{d}u-\int_{t}^{T}\int_{u}^{T}\partial_{u}f_{u,s} \mathrm{d}s\mathrm{d}u\] \[=\int_{t}^{T}r_{s}\mathrm{d}s-\int_{0}^{t}\left(\int_{t}^{T} \partial_{u}f_{u,s}\mathrm{d}s-\int_{u}^{T}\partial_{u}f_{u,s}\mathrm{d}s \right)\mathrm{d}u-\int_{0}^{T}\int_{u}^{T}\partial_{u}f_{u,s}\mathrm{d}s \mathrm{d}u\] \[=\int_{t}^{T}r_{s}\mathrm{d}s+\int_{0}^{t}\int_{u}^{t}\partial_{u }f_{u,s}\mathrm{d}s\mathrm{d}u-\int_{0}^{T}\int_{u}^{T}\partial_{u}f_{u,s}s \mathrm{d}u,\] using Fubini, so that \[F_{t,T}=\underbrace{\int_{t}^{T}r_{s}\mathrm{d}s+\int_{0}^{t}\int_{u}^{t}\partial _{u}f_{u,s}\mathrm{d}s\mathrm{d}u-\int_{0}^{T}\int_{u}^{T}\partial_{u}f_{u,s} \mathrm{d}s\mathrm{d}u}_{\int_{t}^{T}f_{0,s}\mathrm{d}s}+\int_{0}^{t}\int_{t}^ {T}\beta_{u,s}\mathrm{d}s\mathrm{d}\mathfrak{W}_{u}+\int_{0}^{t}\int_{t}^{T} \alpha_{u,s}\mathrm{d}s\mathrm{d}u,\] and \[\mathrm{d}F_{t,T}=\left(\int_{t}^{T}\alpha_{t,s}\mathrm{d}s-r_{t}\right) \mathrm{d}t+\left(\int_{t}^{T}\beta_{t,s}\mathrm{d}s\right)\mathrm{d} \mathfrak{W}_{t},\] Therefore, \[\mathrm{d}Z_{t}=\mathrm{d}\left(-\int_{0}^{t}r_{s}\mathrm{d}s-\int_{t}^{T}f_{ t,s}\mathrm{d}s\right)=-r_{t}\mathrm{d}t-\mathrm{d}F_{t,T}=-r_{t}\mathrm{d}t- \mathrm{d}F_{t,T}=-\left(\int_{t}^{T}\alpha_{t,s}\mathrm{d}s\right)\mathrm{d} t-\left(\int_{t}^{T}\beta_{t,s}\mathrm{d}s\right)\mathrm{d}\mathfrak{W}_{t},\] and (2.4) gives \[\frac{\mathrm{d}\widetilde{P}_{t,T}}{\widetilde{P}_{t,T}}=-\left(\int_{t}^{T} \alpha_{t,s}\mathrm{d}s-\frac{1}{2}\left(\int_{t}^{T}\beta_{t,s}\mathrm{d}s \right)^{2}\right)\mathrm{d}t-\left(\int_{t}^{T}\beta_{t,s}\mathrm{d}s\right) \mathrm{d}\mathfrak{W}_{t}.\] The discounted price process \((\widetilde{P}_{t,T})_{t\in[0,T]}\) is therefore a (local) martingale if and only if the drift is null. Now, for all \(t\in(0,T)\), \[\partial_{T}\left\{\int_{t}^{T}\alpha_{t,s}\mathrm{d}s-\frac{1}{2}\left[\int_{ t}^{T}\beta_{t,s}\mathrm{d}s\right]^{2}\right\}=\alpha_{t,T}-\beta_{t,T}\int_{t}^ {T}\beta_{t,s}\mathrm{d}s=\varphi(T-t)\left[\Xi_{T}(t,t)-\int_{t}^{T}\varphi( s,t)\mathrm{d}s\right],\] which is equal to zero by definition of the functions. Therefore the drift (as a function of \(T\)) is constant. Since it is trivially equal to zero at \(T=t\), it is null everywhere and \((\widetilde{P}_{t,T})_{t\in[0,T]}\) is a \(\mathbb{Q}\)-local martingale. ### Convexity adjustments We now enter the core of the paper, investigating the influence of the Gaussian driver on the convexity of bond prices. 
We first start with the following simple proposition: **Proposition 2.9**.: _For any \(T,\tau\geq 0\),_ \[\mathrm{d}\left(\frac{1}{P_{t,\tau}}\right) =\frac{\left(\Xi_{\tau}(t,t)^{2}\gamma_{\mathfrak{W}}^{\prime}(t) -r_{t}\right)\mathrm{d}t}{P_{t,\tau}}-\frac{\Xi_{\tau}(t,t)}{P_{t,\tau}} \mathrm{d}\mathfrak{W}_{t},\] \[\mathrm{d}\left(\frac{P_{t,T}}{P_{t,\tau}}\right) =\frac{P_{t,T}}{P_{t,\tau}}\Big{(}\Xi_{T}(t,t)-\Xi_{\tau}(t,t) \Big{)}\Big{\{}-\Xi_{\tau}(t,t)\gamma_{\mathfrak{W}}^{\prime}(t)\mathrm{d}t+ \mathrm{d}\mathfrak{W}_{t}\Big{\}},\] _and there exists a probability measure \(\mathbb{Q}^{\tau}\) such that \(\mathfrak{W}_{t}^{\mathbb{Q}^{\tau}}\) is a \(\mathbb{Q}^{\tau}\)-Gaussian martingale and_ \[\mathrm{d}\left(\frac{P_{t,T}}{P_{t,\tau}}\right)=\frac{P_{t,T}}{P_{t,\tau}} \Sigma_{t}^{T,\tau}\mathrm{d}\mathfrak{W}_{t}^{\mathbb{Q}^{\tau}}, \tag{2.5}\] _under \(\mathbb{Q}^{\tau}\), where \(\Sigma_{t}^{T,\tau}:=\Xi_{T}(t,t)-\Xi_{\tau}(t,t)\)._ Note that, from the definition of \(\Xi_{T}\) in (1.2), \(\Sigma_{t}^{T,\tau}\) is non-negative whenever \(\tau\geq T\). Proof.: From the definition of the zero-coupond price in (1.3) and Proposition 2.4, \(P_{t,T}\) is strictly positive almost surely and \[\frac{\mathrm{d}P_{t,T}}{P_{t,T}}=r_{t}\mathrm{d}t+\Xi_{T}(t,t)\mathrm{d} \mathfrak{W}_{t},\] and therefore Ito's formula implies that, for any \(0\leq t\leq\tau\), \[\mathrm{d}\left(\frac{1}{P_{t,\tau}}\right)=-\frac{\mathrm{d}P_{t,\tau}}{P_{t, \tau}^{2}}+\frac{\mathrm{d}\langle P_{t,\tau},P_{t,\tau}\rangle}{P_{t,\tau}^{3 }}=\frac{\Big{(}\Xi_{\tau}(t,t)^{2}\gamma_{\mathfrak{W}}^{\prime}(t)-r_{t} \Big{)}\mathrm{d}t}{P_{t,\tau}}-\frac{\Xi_{\tau}(t,t)\mathrm{d}\mathfrak{W}_{t }}{P_{t,\tau}}.\] Therefore \[\mathrm{d}\left(\frac{P_{t,T}}{P_{t,\tau}}\right) =P_{t,T}\mathrm{d}\left(\frac{1}{P_{t,\tau}}\right)+\frac{\mathrm{d }P_{t,T}}{P_{t,\tau}}+\mathrm{d}P_{t,T}\cdot\mathrm{d}\left(\frac{1}{P_{t,\tau}}\right)\] \[=\frac{P_{t,T}}{P_{t,\tau}}\Big{\{}\Big{(}\Xi_{\tau}(t,t)^{2} \gamma^{\prime}_{\mathfrak{m}\mathfrak{I}}(t)-r_{t}\Big{)}\mathrm{d}t-\Xi_{ \tau}(t,t)\mathfrak{d}\mathfrak{I}_{t}+\Big{(}r_{t}\mathrm{d}t+\Xi_{T}(t,t) \mathrm{d}\mathfrak{I}_{t}\Big{)}-\Xi_{T}(t,t)\Xi_{\tau}(t,t)\gamma^{\prime}_{ \mathfrak{m}\mathfrak{I}}(t)\mathrm{d}t\Big{\}}\] \[=\frac{P_{t,T}}{P_{t,\tau}}\Big{(}\Xi_{T}(t,t)-\Xi_{\tau}(t,t) \Big{)}\Big{\{}-\Xi_{\tau}(t,t)\gamma^{\prime}_{\mathfrak{m}\mathfrak{I}}(t) \mathrm{d}t+\mathrm{d}\mathfrak{I}_{t}\Big{\}}.\] Define now the Doleans-Dade exponential \[M_{t}:=\exp\left\{\int_{0}^{t}\Xi_{\tau}(s,s)\gamma^{\prime}_{\mathfrak{m} \mathfrak{I}}(s)\mathrm{d}\mathfrak{I}_{s}-\frac{1}{2}\int_{0}^{t}\left[\Xi_ {\tau}(s,s)\gamma^{\prime}_{\mathfrak{m}\mathfrak{I}}(s)\right]^{2}\mathrm{d }s\right\},\] and the Radon-Nikodym derivative \(\frac{\mathrm{d}\mathbb{Q}^{\tau}}{\mathrm{d}\mathbb{P}}:=M\). Girsanov's Theorem [15, Theorem 8.6.4] then implies that \(\mathfrak{I}_{t}^{\mathbb{Q}^{\tau}}:=\mathfrak{M}_{t}-\int_{0}^{t}\Xi_{\tau} (s,s)\gamma^{\prime}_{\mathfrak{m}\mathfrak{I}}(s)\mathrm{d}s\) is a Gaussian martingale and the ratio \(\frac{P_{t,T}}{P_{t,\tau}}\) satisfies (2.5) under \(\mathbb{Q}^{\tau}\). The following proposition is key and provides a closed-form expression for the convexity adjustments in our setup: **Proposition 2.10**.: _For any \(\tau\geq 0\) let \(\mathfrak{t}_{1},\mathfrak{t}_{2}\geq 0\). 
We then have_

\[\mathbb{E}^{\mathbb{Q}^{\tau}}\left[\frac{P_{t,\mathfrak{t}_{1}}}{P_{t,\mathfrak{t}_{2}}}\right]=\frac{P_{0,\mathfrak{t}_{1}}}{P_{0,\mathfrak{t}_{2}}}\,\mathfrak{C}_{t}^{\tau}(\mathfrak{t}_{1},\mathfrak{t}_{2}),\qquad\text{for any }t\in[0,\mathfrak{t}_{1}\wedge\mathfrak{t}_{2}],\]

_where \(\mathfrak{C}_{t}^{\tau}(\mathfrak{t}_{1},\mathfrak{t}_{2}):=\exp\left\{\int_{0}^{t}\left(\Sigma_{s}^{\mathfrak{t}_{2},\tau}-\Sigma_{s}^{\mathfrak{t}_{1},\tau}\right)\Sigma_{s}^{\mathfrak{t}_{2},\tau}\gamma_{\mathfrak{W}}^{\prime}(s)\mathrm{d}s\right\}\) is the convexity adjustment factor._

**Remark 2.11**.:

* If \(\mathfrak{t}_{1}=\mathfrak{t}_{2}\) or if \(\frac{P_{t,\mathfrak{t}_{1}}}{P_{t,\mathfrak{t}_{2}}}\) is constant, there is no convexity adjustment and \(\mathfrak{C}_{t}^{\tau}(\mathfrak{t}_{1},\mathfrak{t}_{2})=1\).
* More interestingly, if \(\mathfrak{t}_{2}=\tau\), then \(\Sigma_{t}^{\mathfrak{t}_{2},\tau}=\Sigma_{t}^{\mathfrak{t}_{2},\mathfrak{t}_{2}}=\Xi_{\mathfrak{t}_{2}}(t,t)-\Xi_{\mathfrak{t}_{2}}(t,t)=0\) and \[\mathfrak{C}_{t}^{\tau}(\mathfrak{t}_{1},\mathfrak{t}_{2})=\mathfrak{C}_{t}^{\mathfrak{t}_{2}}(\mathfrak{t}_{1},\mathfrak{t}_{2})=\exp\left\{\int_{0}^{t}\left(\Sigma_{s}^{\mathfrak{t}_{2},\mathfrak{t}_{2}}-\Sigma_{s}^{\mathfrak{t}_{1},\mathfrak{t}_{2}}\right)\Sigma_{s}^{\mathfrak{t}_{2},\mathfrak{t}_{2}}\gamma_{\mathfrak{W}}^{\prime}(s)\mathrm{d}s\right\}=1,\] and the process \(\left(\frac{P_{t,\mathfrak{t}_{1}}}{P_{t,\mathfrak{t}_{2}}}\right)_{t\geq 0}\) is a \(\mathbb{Q}^{\tau}\)-martingale on \([0,\mathfrak{t}_{1}\wedge\mathfrak{t}_{2}]\).
* Regarding the sign of the convexity adjustment, we have \[\Sigma_{s}^{\mathfrak{t}_{2},\tau}-\Sigma_{s}^{\mathfrak{t}_{1},\tau}=\Big(\Xi_{\mathfrak{t}_{2}}(s,s)-\Xi_{\tau}(s,s)\Big)-\Big(\Xi_{\mathfrak{t}_{1}}(s,s)-\Xi_{\tau}(s,s)\Big)=\Xi_{\mathfrak{t}_{2}}(s,s)-\Xi_{\mathfrak{t}_{1}}(s,s)=-\int_{s}^{\mathfrak{t}_{2}}\varphi(z,s)\mathrm{d}z+\int_{s}^{\mathfrak{t}_{1}}\varphi(z,s)\mathrm{d}z=-\int_{\mathfrak{t}_{1}}^{\mathfrak{t}_{2}}\varphi(z,s)\mathrm{d}z.\] Since \(\varphi(\cdot)\) is strictly positive, then \(\mathrm{sgn}(\Sigma_{s}^{\mathfrak{t}_{2},\tau}-\Sigma_{s}^{\mathfrak{t}_{1},\tau})=\mathrm{sgn}(\mathfrak{t}_{1}-\mathfrak{t}_{2})\). Furthermore, since \[\Sigma_{s}^{\mathfrak{t}_{2},\tau}=\Xi_{\mathfrak{t}_{2}}(s,s)-\Xi_{\tau}(s,s)=-\int_{s}^{\mathfrak{t}_{2}}\varphi(z,s)\mathrm{d}z+\int_{s}^{\tau}\varphi(z,s)\mathrm{d}z=\int_{\mathfrak{t}_{2}}^{\tau}\varphi(z,s)\mathrm{d}z,\] then \(\mathrm{sgn}(\Sigma_{s}^{\mathfrak{t}_{2},\tau})=\mathrm{sgn}(\tau-\mathfrak{t}_{2})\), and therefore, assuming \(\gamma_{\mathfrak{W}}^{\prime}\) strictly positive (as will be the case in all the examples considered here), \[\begin{array}{|c|c|c|}\hline\mathrm{sgn}(\log\mathfrak{C}_{t}^{\tau}(\mathfrak{t}_{1},\mathfrak{t}_{2}))&\mathfrak{t}_{1}>\mathfrak{t}_{2}&\mathfrak{t}_{1}<\mathfrak{t}_{2}\\ \hline\tau<\mathfrak{t}_{2}&\text{negative}&\text{positive}\\ \tau>\mathfrak{t}_{2}&\text{positive}&\text{negative}\\ \hline\end{array}\] Considering, without loss of generality, \(\mathfrak{t}_{1}<\mathfrak{t}_{2}\), the convexity adjustment is therefore greater than \(1\) for \(\tau<\mathfrak{t}_{2}\) and less than \(1\) above.
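These sign relations are easy to confirm numerically. Below is a small check (ours, not from the paper) with the exponential kernel \(\varphi(t)=\mathrm{e}^{-\alpha t}\) of Section 2.3.1 and a Brownian driver, so that \(\gamma_{\mathfrak{W}}^{\prime}\equiv 1\):

```python
import numpy as np
from scipy.integrate import quad

alpha = 1.0  # kernel parameter (illustrative)

def Sigma(s, T, tau):
    # Sigma_s^{T,tau} = int_T^tau phi(z - s) dz for phi(t) = exp(-alpha t)
    val, _ = quad(lambda z: np.exp(-alpha * (z - s)), T, tau)
    return val

def log_adjustment(t, t1, t2, tau):
    # log C_t^tau(t1, t2) with gamma'_W = 1 (Brownian driver)
    val, _ = quad(lambda s: (Sigma(s, t2, tau) - Sigma(s, t1, tau))
                            * Sigma(s, t2, tau), 0.0, t)
    return val

# With t1 < t2: positive for tau < t2, negative for tau > t2, as in the table.
print(log_adjustment(1.0, 2.0, 3.0, 2.5) > 0.0)  # True
print(log_adjustment(1.0, 2.0, 3.0, 3.5) < 0.0)  # True
```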
Proof of Proposition 2.10.: Under \(\mathbb{Q}^{\tau}\), the process defined as \(X_{t}:=P_{t,T}/P_{t,\tau}\) satisfies \(\mathrm{d}X_{t}=X_{t}\Sigma_{t}^{T,\tau}\mathrm{d}\mathfrak{W}_{t}^{\mathbb{Q} ^{\tau}}\), is clearly lognormal and hence Ito's formula implies \[\mathrm{d}\log(X_{t})=\frac{\mathrm{d}X_{t}}{X_{t}}-\frac{1}{2}\frac{\mathrm{d }\langle X,X\rangle_{t}}{X_{t}^{2}}=\Sigma_{t}^{T,\tau}\mathrm{d}\mathfrak{W}_{ t}^{\mathbb{Q}^{\tau}}-\frac{1}{2}\left(\Sigma_{t}^{T,\tau}\right)^{2}\gamma_{ \mathfrak{W}}^{\prime}(t)\mathrm{d}t,\] so that \[X_{t}=X_{0}\exp\left\{\int_{0}^{t}\Sigma_{s}^{T,\tau}\mathrm{d}\mathfrak{W}_{ s}-\frac{1}{2}\int_{0}^{t}\left(\Sigma_{s}^{T,\tau}\right)^{2}\gamma_{ \mathfrak{W}}^{\prime}(s)\mathrm{d}s\right\},\] and therefore \[\frac{P_{t,T}}{P_{t,\tau}}=\frac{P_{0,T}}{P_{0,\tau}}\exp\left\{\int_{0}^{t} \Sigma_{s}^{T,\tau}\mathrm{d}\mathfrak{W}_{s}-\frac{1}{2}\int_{0}^{t}\left( \Sigma_{s}^{T,\tau}\right)^{2}\gamma_{\mathfrak{W}}^{\prime}(s)\mathrm{d}s \right\}.\] With successively \(T=\mathfrak{t}_{1}\) and \(T=\mathfrak{t}_{2}\), we can then write \[\frac{P_{t,\mathfrak{t}_{1}}}{P_{t,\tau}} =\frac{P_{0,\mathfrak{t}_{1}}}{P_{0,\tau}}\exp\left\{\int_{0}^{t} \Sigma_{s}^{\mathfrak{t}_{1},\tau}\mathrm{d}\mathfrak{W}_{s}-\frac{1}{2}\int_{ 0}^{t}\left(\Sigma_{s}^{\mathfrak{t}_{1},\tau}\right)^{2}\gamma_{\mathfrak{W} }^{\prime}(s)\mathrm{d}s\right\},\] \[\frac{P_{t,\mathfrak{t}_{2}}}{P_{t,\tau}} =\frac{P_{0,\mathfrak{t}_{2}}}{P_{0,\tau}}\exp\left\{\int_{0}^{t} \Sigma_{s}^{\mathfrak{t}_{2},\tau}\mathrm{d}\mathfrak{W}_{s}-\frac{1}{2}\int_{ 0}^{t}\left(\Sigma_{s}^{\mathfrak{t}_{2},\tau}\right)^{2}\gamma_{\mathfrak{W} }^{\prime}(s)\mathrm{d}s\right\},\] so that \[\frac{P_{t,\mathfrak{t}_{1}}}{P_{t,\mathfrak{t}_{2}}} =\frac{P_{0,\mathfrak{t}_{1}}}{P_{0,\mathfrak{t}_{2}}}\exp\left\{ \int_{0}^{t}\Sigma_{s}^{\mathfrak{t}_{1},\tau}\mathrm{d}\mathfrak{W}_{s}-\frac {1}{2}\int_{0}^{t}\left(\Sigma_{s}^{\mathfrak{t}_{1},\tau}\right)^{2}\gamma_{ \mathfrak{W}}^{\prime}(s)\mathrm{d}s-\int_{0}^{t}\Sigma_{s}^{\mathfrak{t}_{2},\tau}\mathrm{d}\mathfrak{W}_{s}+\frac{1}{2}\int_{0}^{t}\left(\Sigma_{s}^{ \mathfrak{t}_{2},\tau}\right)^{2}\gamma_{\mathfrak{W}}^{\prime}(s)\mathrm{d}s\right\}\] \[=\frac{P_{0,\mathfrak{t}_{1}}}{P_{0,\mathfrak{t}_{2}}}\exp\left\{ \int_{0}^{t}\left(\Sigma_{s}^{\mathfrak{t}_{1},\tau}-\Sigma_{s}^{\mathfrak{t}_{2 },\tau}\right)\mathrm{d}\mathfrak{W}_{s}-\frac{1}{2}\int_{0}^{t}\left(\Sigma_{s }^{\mathfrak{t}_{1},\tau}-\Sigma_{s}^{\mathfrak{t}_{2},\tau}\right)^{2}\gamma _{\mathfrak{W}}^{\prime}(s)\mathrm{d}s\right\}\] \[\qquad\exp\left\{\frac{1}{2}\int_{0}^{t}\left[\left(\Sigma_{s}^{ \mathfrak{t}_{1},\tau}\right)^{2}+\left(\Sigma_{s}^{\mathfrak{t}_{2},\tau} \right)^{2}-2\Sigma_{s}^{\mathfrak{t}_{1},\tau}\Sigma_{s}^{\mathfrak{t}_{2},\tau}\right]\gamma_{\mathfrak{W}}^{\prime}(s)\mathrm{d}s+\frac{1}{2}\int_{0}^{ t}\left[\left(\Sigma_{s}^{\mathfrak{t}_{2},\tau}\right)^{2}-\left(\Sigma_{s}^{ \mathfrak{t}_{1},\tau}\right)^{2}\right]\gamma_{\mathfrak{W}}^{\prime}(s) \mathrm{d}s\right\}\] \[=\frac{P_{0,\mathfrak{t}_{1}}}{P_{0,\mathfrak{t}_{2}}}\exp\left\{ \int_{0}^{t}\left(\Sigma_{s}^{\mathfrak{t}_{1},\tau}-\Sigma_{s}^{\mathfrak{t}_{2 },\tau}\right)\mathrm{d}\mathfrak{W}_{s}-\frac{1}{2}\int_{0}^{t}\left(\Sigma_{s }^{\mathfrak{t}_{1},\tau}-\Sigma_{s}^{\mathfrak{t}_{2},\tau}\right)^{2}\gamma _{\mathfrak{W}}^{\prime}(s)\mathrm{d}s\right\}\] \[\qquad\exp\left\{\int_{0}^{t}\left[\left(\Sigma_{s}^{\mathfrak{t} 
_{2},\tau}\right)^{2}-\Sigma_{s}^{\mathfrak{t}_{1},\tau}\Sigma_{s}^{\mathfrak{t}_{2},\tau}\right]\gamma_{\mathfrak{W}}^{\prime}(s)\mathrm{d}s\right\}.\]

The first exponential is a Doleans-Dade exponential martingale under \(\mathbb{Q}^{\tau}\), thus has \(\mathbb{Q}^{\tau}\)-expectation equal to one, and the proposition follows.

### Examples

Let \(\mathfrak{W}=W\) be a standard Brownian motion, so that \(\gamma_{\mathfrak{W}}(t)=t\) and \(\gamma_{\mathfrak{W}}^{\prime}(t)=1\).

#### 2.3.1. Exponential kernels

Assume that \(\varphi(t)=\mathrm{e}^{-\alpha t}\) for some \(\alpha>0\); then the short rate process is of Ornstein-Uhlenbeck type and

\[\Xi_{T}(t,u)=\Phi(T-u)-\Phi(t-u)\qquad\text{with}\qquad\Phi(z):=\frac{1}{\alpha}\mathrm{e}^{-\alpha z}.\]

We can further compute \(\Xi_{\tau}(t,t)=\Phi(\tau-t)-\Phi(0)\), and

\[\Sigma_{t}^{T,\tau}=\Xi_{T}(t,t)-\Xi_{\tau}(t,t)=\Phi(T-t)-\Phi(0)-\Phi(\tau-t)+\Phi(0)=\Phi(T-t)-\Phi(\tau-t).\]

Therefore the Girsanov drift \(\Xi_{\tau}(t,t)\) and the diffusion coefficient \(\Sigma_{t}^{T,\tau}\) read

\[\Xi_{\tau}(t,t)=\frac{1}{\alpha}\left(\mathrm{e}^{-\alpha(\tau-t)}-1\right)\qquad\text{and}\qquad\Sigma_{t}^{T,\tau}=\frac{1}{\alpha}\left(\mathrm{e}^{-\alpha(T-t)}-\mathrm{e}^{-\alpha(\tau-t)}\right).\]

Finally, regarding the convexity adjustment,

\[\log\mathfrak{C}_{t}^{\tau}(\mathfrak{t}_{1},\mathfrak{t}_{2})=\frac{\mathrm{e}^{2\alpha t}-1}{2\alpha^{3}}\left\{\left(\mathrm{e}^{-\alpha\mathfrak{t}_{1}}-\mathrm{e}^{-\alpha\mathfrak{t}_{2}}\right)\mathrm{e}^{-\alpha\tau}+\mathrm{e}^{-2\alpha\mathfrak{t}_{2}}-\mathrm{e}^{-\alpha(\mathfrak{t}_{1}+\mathfrak{t}_{2})}\right\}.\]

Note that, as \(\alpha\) tends to zero, namely \(r_{t}=\theta(t)+W_{t}\) in the limit, we obtain

\[\mathfrak{C}_{t}^{\tau}(\mathfrak{t}_{1},\mathfrak{t}_{2})=\exp\left\{(\mathfrak{t}_{2}-\mathfrak{t}_{1})(\mathfrak{t}_{2}-\tau)t\right\}.\]

#### 2.3.2. Riemann-Liouville kernels

Let \(H\in(0,1)\) and \(H_{\pm}:=H\pm\frac{1}{2}\). If \(\varphi(t)=t^{H_{-}}\), the short rate process (1.1) is driven by a Riemann-Liouville fractional Brownian motion with Hurst exponent \(H\). Furthermore,

\[\Xi_{T}(t,u)=\Phi(T-u)-\Phi(t-u)\qquad\text{with}\qquad\Phi(z):=-\frac{z^{H_{+}}}{H_{+}}.\]

Therefore the diffusion coefficient \(\Sigma_{t}^{T,\tau}\) and the Girsanov drift \(\Xi_{\tau}(t,t)\) read

\[\Xi_{\tau}(t,t)=-\frac{(\tau-t)^{H_{+}}}{H_{+}}\qquad\text{and}\qquad\Sigma_{t}^{T,\tau}=\frac{(\tau-t)^{H_{+}}-(T-t)^{H_{+}}}{H_{+}}.\]

Regarding the convexity adjustment, we instead have

\[\mathfrak{C}_{t}^{\tau}(\mathfrak{t}_{1},\mathfrak{t}_{2})=\exp\left\{\int_{0}^{t}\left(\Sigma_{s}^{\mathfrak{t}_{2},\tau}-\Sigma_{s}^{\mathfrak{t}_{1},\tau}\right)\Sigma_{s}^{\mathfrak{t}_{2},\tau}\mathrm{d}s\right\}.\]

Unfortunately, there does not seem to be a closed-form simplification here; the integral is, however, straightforward to evaluate numerically, as sketched below.
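A minimal Python sketch (ours; parameter values are illustrative) evaluating the Riemann-Liouville adjustment by quadrature, with the result checked against the leading-order small-\(t\) expansion given in Lemma 2.12 below:

```python
import numpy as np
from scipy.integrate import quad

H = 0.3
Hp = H + 0.5  # H_+

def Sigma(s, T, tau):
    # Sigma_s^{T,tau} = ((tau - s)**Hp - (T - s)**Hp) / Hp
    return ((tau - s) ** Hp - (T - s) ** Hp) / Hp

def log_adjustment(t, t1, t2, tau):
    val, _ = quad(lambda s: (Sigma(s, t2, tau) - Sigma(s, t1, tau))
                            * Sigma(s, t2, tau), 0.0, t)
    return val

t, t1, t2, tau = 0.01, 2.0, 3.0, 2.5
print(log_adjustment(t, t1, t2, tau))
# Leading order for small t: t (t2^{H+} - t1^{H+})(t2^{H+} - tau^{H+}) / H+^2
print(t * (t2**Hp - t1**Hp) * (t2**Hp - tau**Hp) / Hp**2)
```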
We can however provide the following approximations: **Lemma 2.12**.: _The following asymptotic expansions are straightforward and provide some closed-form expressions that may help the reader grasp a flavour on the roles of the parameters:_ * _As_ \(t\) _tends to zero,_ \[\log\mathfrak{C}_{t}^{\tau}(\mathfrak{t}_{1},\mathfrak{t}_{2})=\frac{t}{H_{+ }^{2}}\left(\mathfrak{t}_{2}^{H_{+}}-\mathfrak{t}_{1}^{H_{+}}\right)\left( \mathfrak{t}_{2}^{H_{+}}-\tau^{H_{+}}\right)+\mathcal{O}\left(t^{2}\right).\] * _For any_ \(\eta>0\)_, as_ \(\varepsilon\) _tends to zero,_ \[\log\mathfrak{C}_{t}^{\mathfrak{t}_{1}-\varepsilon}(\mathfrak{t}_{1}, \mathfrak{t}_{1}+\varepsilon)=\frac{1+\eta}{2H}\Big{(}\mathfrak{t}_{1}^{2H}-( \mathfrak{t}_{1}-t)^{2H}\Big{)}\varepsilon^{2}+\mathcal{O}\left(\varepsilon^{ 3}\right).\] Proof.: From the explicit computation of \(\Sigma_{t}^{T,\tau}\) above, we can write, as \(s\) tends to zero, \[\Sigma_{s}^{T,\tau}=\frac{(\tau-s)^{H_{+}}-(T-s)^{H_{+}}}{H_{+}}=\frac{\tau^{ H_{+}}-T^{H_{+}}}{H_{+}}+\mathcal{O}(s).\] As a function of \(s\), \(\Sigma_{s}^{\mathfrak{t}_{2},\tau}\) is continuously differentiable. Because we are integrating over the compact \([0,t]\), we can integrate term by term, so that \[\log\mathfrak{C}_{t}^{\tau}(\mathfrak{t}_{1},\mathfrak{t}_{2}) =\int_{0}^{t}\left(\Sigma_{s}^{\mathfrak{t}_{2},\tau}-\Sigma_{s}^ {\mathfrak{t}_{1},\tau}\right)\Sigma_{s}^{\mathfrak{t}_{2},\tau}\mathrm{d}s\] \[=\int_{0}^{t}\left\{\left(\frac{\mathfrak{t}_{1}^{H_{+}}- \mathfrak{t}_{2}^{H_{+}}}{H_{+}}+\mathcal{O}(s)\right)\left(\frac{\tau^{H_{+} }-\mathfrak{t}_{2}^{H_{+}}}{H_{+}}+\mathcal{O}(s)\right)\right\}\mathrm{d}s\] \[=\frac{\mathfrak{t}_{1}^{H_{+}}-\mathfrak{t}_{2}^{H_{+}}}{H_{+}} \frac{\tau^{H_{+}}-\mathfrak{t}_{2}^{H_{+}}}{H_{+}}t+\mathcal{O}(t^{2}),\] where we can check by direct computations that the term \(\mathcal{O}(t^{2})\) is indeed non null. ### Extension to smooth Gaussian Volterra semimartingale drivers Let now \(\mathfrak{W}\) in (1.1) be a Gaussian Volterra process with a smooth kernel of the form \[\mathfrak{W}_{t}=\int_{0}^{t}K(t,u)\mathrm{d}W_{u},\] for some standard Brownian motion \(W\). Assuming that \(K\) is a convolution kernel absolutely continuous with square integrable derivative, it follows by [3] that \(\mathfrak{W}\) is a Gaussian semimartingale (yet not necessarily a martingale) with the decomposition \[\mathfrak{W}_{t}=\int_{0}^{t}K(u,u)\mathrm{d}W_{u}+\int_{0}^{t}\left(\int_{0}^ {u}\partial_{1}K(u,s)\mathrm{d}W_{s}\right)\mathrm{d}u=:\int_{0}^{t}K(u,u) \mathrm{d}W_{u}+A(t),\] where \(A\) is a process of bounded variation satisfying \(\mathrm{d}A(t)=A^{\prime}(t)\mathrm{d}t=\left(\int_{0}^{t}\partial_{1}K(t,s) \mathrm{d}W_{s}\right)\mathrm{d}t\) and hence the Ito differential of \(\mathfrak{W}_{t}\) reads \(\mathrm{d}\mathfrak{W}_{t}=K(t,t)\mathrm{d}W_{t}+A^{\prime}(t)\mathrm{d}t\), and its quadratic variation is \(\mathrm{d}\langle\mathfrak{W},\mathfrak{W}\rangle_{t}=\int_{0}^{t}K(u,u)^{2} \mathrm{d}u\). The short rate process (1.1) therefore reads \[r_{t}=\theta(t)+\int_{0}^{t}\varphi(t-u)\mathrm{d}\mathfrak{W}_{u}=\theta(t)+ \int_{0}^{t}\varphi(t-u)\left(K(u,u)\mathrm{d}W_{u}+A^{\prime}(u)\mathrm{d}u \right)=\widetilde{\theta}_{t}+\int_{0}^{t}\varphi(t-u)K(u,u)\mathrm{d}W_{u},\] where \(\widetilde{\theta}_{t}:=\theta+\int_{0}^{t}\varphi(t-u)A^{\prime}(u)\mathrm{ d}u\) and \(\widetilde{\varphi}(t,u):=\varphi(t-u)K(u,u)\). If \(\widetilde{\varphi}\) satisfies Assumption 2.1, then the analysis above still holds. #### 2.4.1. 
Comments on the bond process Let \(R_{t,T}:=\int_{t}^{T}r_{s}\mathrm{d}s\) be the integrated short rate process and \(B_{t,T}:=\mathrm{e}^{-R_{t,T}}\) the bond price process on \([0,T]\). **Lemma 2.13**.: _The process \((B_{t,T})_{t\in[0,T]}\) satisfies \(B_{T,T}=1\) and, for \(t\in[0,T)\),_ \[\frac{\mathrm{d}B_{t,T}}{B_{t,T}}=r_{t}\mathrm{d}t=\left(\theta(t)+\int_{0}^{ t}\varphi(t-u)A^{\prime}(u)\mathrm{d}u+\int_{0}^{t}\varphi(t-u)K(u,u)\mathrm{d}W_{u }\right)\mathrm{d}t.\] Proof.: For any \(t\in[0,T)\), we can write \[r_{t}=\theta(t)+\int_{0}^{t}\varphi(t-u)\mathrm{d}\left(\int_{0}^{u}K(s,s) \mathrm{d}W_{s}+A(u)\right)=\theta(t)+\int_{0}^{t}\varphi(t-u)A^{\prime}(u) \mathrm{d}u+\int_{0}^{t}\varphi(t-u)K(u,u)\mathrm{d}W_{u}.\] and therefore \[\mathrm{d}R_{t,T}=-r_{t}\mathrm{d}t=-\left(\theta(t)+\int_{0}^{t}\varphi(t-u )A^{\prime}(u)\mathrm{d}u+\int_{0}^{t}\varphi(t-u)K(u,u)\mathrm{d}W_{u}\right) \mathrm{d}t. \tag{2.6}\] Ito's formula [1, Theorem 4] then yields \[B_{T,T} =B_{t,T}-\int_{t}^{T}B_{s,T}\mathrm{d}R_{s,T}+\frac{1}{2}\int_{t }^{T}B_{s,T}\mathrm{d}\langle R,R\rangle_{s,T}\] \[=B_{t,T}+\int_{t}^{T}B_{s,T}\left\{\left(\theta(s)+\int_{0}^{s} \varphi(s,u)A^{\prime}(u)\mathrm{d}u\right)+\int_{0}^{s}\varphi(s,u)K(u,u) \mathrm{d}W_{u}\right\}\mathrm{d}s.\] so that, since \(B_{T,T}=1\), \[\mathrm{d}B_{t,T} =-\mathrm{d}\left(\int_{t}^{T}B_{s,T}\left\{\left(\theta(s)+\int _{0}^{s}\varphi(s,u)A^{\prime}(u)\mathrm{d}u\right)+\int_{0}^{s}\varphi(s,u)K( u,u)\mathrm{d}W_{u}\right\}\mathrm{d}s\right)\] \[=B_{t,T}\left\{\left(\theta(t)+\int_{0}^{t}\varphi(t-u)A^{\prime}( u)\mathrm{d}u\right)+\int_{0}^{t}\varphi(t-u)K(u,u)\mathrm{d}W_{u}\right\} \mathrm{d}t,\] and the lemma follows. **Remark 2.14**.: We can also write \(R_{t,T}\) in integral form as follows, using stochastic Fubini: \[R_{t,T} =\int_{t}^{T}\left[\theta(s)+\int_{0}^{s}\varphi(s,u)A^{\prime}(u) \mathrm{d}u+\int_{0}^{s}\varphi(s,u)K(u,u)\mathrm{d}W_{u}\right]\mathrm{d}s\] \[=\Theta_{t,T}+\int_{t}^{T}\left(\int_{0}^{s}\varphi(s,u)A^{\prime} (u)\mathrm{d}u\right)\mathrm{d}s+\int_{t}^{T}\left(\int_{0}^{s}\varphi(s,u)K( u,u)\mathrm{d}W_{u}\right)\mathrm{d}s\] \[=\Theta_{t,T}+\int_{0}^{t}\left(\int_{t}^{T}\varphi(s,u)\mathrm{ d}s\right)A^{\prime}(u)\mathrm{d}u+\int_{0}^{t}\left(\int_{t}^{T}\varphi(s,u) \mathrm{d}s\right)K(u,u)\mathrm{d}W_{u}\] \[\qquad\qquad+\int_{t}^{T}\left(\int_{u}^{T}\varphi(s,u)\mathrm{ d}s\right)A^{\prime}(u)\mathrm{d}u+\int_{t}^{T}\left(\int_{u}^{T}\varphi(s,u) \mathrm{d}s\right)K(u,u)\mathrm{d}W_{u}\] \[=\Theta_{t,T}+\int_{0}^{t}\Phi_{t}(u)A^{\prime}(u)\mathrm{d}u+ \int_{0}^{t}\Phi_{t}^{K}(u)\mathrm{d}W_{u}+\int_{t}^{T}\Phi_{u}(u)A^{\prime}( u)\mathrm{d}u+\int_{t}^{T}\Phi_{u}^{K}(u)\mathrm{d}W_{u},\] with \(\Phi_{t}(u):=\int_{t}^{T}\varphi(s,u)\mathrm{d}s\) and \(\Phi_{t}^{K}(u):=\Phi_{t}(u)K(u,u)\). 
As a consistency check, we have \[\mathrm{d}R_{t,T} =-\theta(t)\mathrm{d}t+\Phi_{t}(t)A^{\prime}(t)\mathrm{d}t+\Phi_ {t}^{K}(t)\mathrm{d}W_{t}-\Phi_{t}(t)A^{\prime}(t)\mathrm{d}t-\Phi_{t}^{K}(t) \mathrm{d}W_{t}+\int_{0}^{t}\partial_{t}\Phi_{t}(u)A^{\prime}(u)\mathrm{d}u \mathrm{d}t+\int_{0}^{t}\partial_{t}\Phi_{t}^{K}(u)\mathrm{d}W_{u}\mathrm{d}t\] \[=\left(-\theta(t)+\Phi_{t}(t)A^{\prime}(t)-\Phi_{t}(t)A^{\prime}( t)+\int_{0}^{t}\partial_{t}\Phi_{t}(u)A^{\prime}(u)\mathrm{d}u+\int_{0}^{t} \partial_{t}\Phi_{t}^{K}(u)\mathrm{d}W_{u}\right)\mathrm{d}t+\left(\Phi_{t}^{ K}(t)-\Phi_{t}^{K}(t)\right)\mathrm{d}W_{t}\] \[=\left(-\theta(t)+\Phi_{t}(t)A^{\prime}(t)-\Phi_{t}(t)A^{\prime}( t)+\int_{0}^{t}\partial_{t}\Phi_{t}(u)A^{\prime}(u)\mathrm{d}u\right)\mathrm{d}t+ \int_{0}^{t}\partial_{t}\Phi_{t}^{K}(u)\mathrm{d}W_{u}\mathrm{d}t\] \[=\left(-\theta(t)+\int_{0}^{t}\partial_{t}\Phi_{t}(u)A^{\prime}( u)\mathrm{d}u\right)\mathrm{d}t+\int_{0}^{t}\partial_{t}\Phi_{t}^{K}(u) \mathrm{d}W_{u}\mathrm{d}t\] \[=-\left(\theta(t)+\int_{0}^{t}\varphi(t-u)A^{\prime}(u)\mathrm{d }u\right)\mathrm{d}t-\int_{0}^{t}\varphi(t-u)K(u,u)\mathrm{d}W_{u}\mathrm{d}t,\] which corresponds precisely to (2.6). ## 3. Pricing OIS products and options #### 3.0.1. Simple compounded rate Using Proposition 2.4, we can compute several OIS products and options Consider the simple compounded rate \[r^{S}(t_{0},T):=\frac{1}{\mathfrak{D}(t_{0},T)}\left(\prod_{i=0}^{n-1}\frac{1} {P_{t_{i},t_{i+1}}}-1\right), \tag{3.1}\] where \(\mathfrak{D}(t_{0},T)\) is the day count fraction and \(n\) the number of business days in the period \([t_{0},t_{n}]\). The following then holds directly: \[r^{S}(t_{0}^{R},T)=\frac{1}{\mathfrak{D}(t_{0},T)}\left(\prod_{i=0}^{n-1}\exp \left\{\Theta_{t_{i}^{R},t_{i+1}^{R}}-\frac{1}{2}\int_{t_{i}^{R}}^{t_{i+1}^{R }}\Xi(u,u)^{2}\mathrm{d}u-\left(\Xi(t_{i}^{R},\cdot)\circ\mathfrak{W} \right)_{t_{i}^{R}}\right\}-1\right),\] where the superscript \({}^{R}\) refers to reset dates; we use the superscript \({}^{A}\) to refer to accrual dates below. #### 3.0.2. Compounded rate cashflows with payment delay The present value at time zero of a compounded rate cashflow is given by \[\mathrm{PV}_{\text{flow}} =P_{0,T_{p}}\mathfrak{D}(t_{0}^{A},t_{n}^{A})\mathbb{E}^{\mathbb{ Q}^{T_{p}}}\left[r^{S}\right]\] \[=P_{0,T_{p}}\mathfrak{D}(t_{0}^{A},t_{n}^{A})\mathbb{E}^{\mathbb{ Q}^{T_{p}}}\left[\frac{1}{\mathfrak{D}(t_{0}^{A},t_{n}^{A})}\left\{\prod_{i=0}^{n-1} \left(1+\frac{\mathfrak{D}(t_{i}^{A},t_{i+1}^{A})}{\mathfrak{D}(t_{i}^{R},t_{ i+1}^{R})}\left(\frac{P_{t,t_{i}^{R}}}{P_{t,t_{i+1}^{R}}}-1\right)\right)-1 \right\}\right],\] where \(r^{S}\) denotes the compound RFR rate. In the case where there is no reset delays, namely \(t_{i}^{R}=t_{i}^{A}\) for all \(i=0,\ldots,n\), then \[\operatorname{PV}_{\text{flow}}=P_{0,T_{p}}\mathbb{E}^{Q^{T_{p}}} \left[\prod_{i=0}^{n-1}\left(\frac{P_{t,t_{i}^{R}}}{P_{t,t_{i+1}^{R}}}\right)-1\right] =P_{0,T_{p}}\mathbb{E}^{Q^{T_{p}}}\left[\frac{P_{t,t_{0}^{R}}}{P_{t,t_ {n}^{R}}}-1\right]\] \[=P_{0,T_{p}}\left(\frac{P_{0,t_{0}^{R}}}{P_{0,t_{n}^{R}}} \mathfrak{C}_{t}^{T_{p}}(t_{0}^{R},t_{n}^{R})-1\right)\] \[=P_{0,T_{p}}\left(\frac{P_{0,T_{RS}}}{P_{0,T_{RE}}}\mathfrak{C}_ {t}^{T_{p}}(T_{RS},T_{RE})-1\right),\] where \(t_{0}^{R}=T_{RS}\) and \(t_{n}^{R}=T_{RE}\), using the convexity adjustment formula given in Proposition 2.10. #### 3.0.3. 
Compounded rate cashflows with reset delay Assuming now that \(t_{i}^{R}\neq t_{i}^{A}\), we can write, from (3.1), \[r_{t}^{S}=\widetilde{r}_{t}^{S}+r_{t}^{S,adj},\] where \[\widetilde{r}_{t}^{S}:=\frac{1}{\mathfrak{D}(t_{0}^{R},t_{n}^{R})}\left(\frac {P_{t,T_{RS}}}{P_{t,T_{RE}}}-1\right),\] and \(r_{t}^{S,adj}\) is implied from the decomposition above. Therefore \[\operatorname{PV}_{\text{flow}} =P_{0,T_{p}}\mathfrak{D}(t_{0}^{A},t_{n}^{A})\mathbb{E}^{Q^{T_{p }}}\left[r_{t}^{S}\right]\] \[=P_{0,T_{p}}\mathfrak{D}(t_{0}^{A},t_{n}^{A})\mathbb{E}^{Q^{T_{p }}}\left[\widetilde{r}_{t}^{S}+r_{t}^{S,adj}\right]\] \[=P_{0,T_{p}}\mathfrak{D}(t_{0}^{A},t_{n}^{A})\mathbb{E}^{Q^{T_{p }}}\left[\frac{1}{\mathfrak{D}(t_{0}^{R},t_{n}^{R})}\left(\frac{P_{t,T_{RS}}}{ P_{t,T_{RE}}}-1\right)+r_{t}^{S,adj}\right]\] \[=P_{0,T_{p}}\mathfrak{D}(t_{0}^{A},t_{n}^{A})\left\{\frac{1}{ \mathfrak{D}(t_{0}^{R},t_{n}^{R})}\left(\frac{P_{0,T_{RS}}}{P_{0,T_{RE}}} \mathfrak{C}_{t}^{T_{p}}(T_{RS},T_{RE})-1\right)+\mathbb{E}^{Q^{T_{p}}}\left[ r_{t}^{S,adj}\right]\right\}\] \[=P_{0,T_{p}}\frac{\mathfrak{D}(t_{0}^{A},t_{n}^{A})}{\mathfrak{D} (t_{0}^{R},t_{n}^{R})}\left\{\frac{P_{0,T_{RS}}}{P_{0,T_{RE}}}\mathfrak{C}_ {t}^{T_{p}}(T_{RS},T_{RE})-1+\mathfrak{D}(t_{0}^{R},t_{n}^{R})\mathbb{E}^{Q^{T _{p}}}\left[r_{t}^{S,adj}\right]\right\}.\] Assume now that \(\mathbb{E}^{Q^{T_{p}}}\left[r_{t}^{S,adj}\right]=r_{0}^{S,adj}\), so that we can simplify the above as \[\operatorname{PV}_{\text{flow}} =P_{0,T_{p}}\frac{\mathfrak{D}(t_{0}^{A},t_{n}^{A})}{\mathfrak{D} (t_{0}^{R},t_{n}^{R})}\left\{\frac{P_{0,T_{RS}}}{P_{0,T_{RE}}}\mathfrak{C}_{t} ^{T_{p}}(T_{RS},T_{RE})-1+\mathfrak{D}(t_{0}^{R},t_{n}^{R})r_{0}^{S,adj}\right\}\] \[=P_{0,T_{p}}\frac{\mathfrak{D}(t_{0}^{A},t_{n}^{A})}{\mathfrak{D} (t_{0}^{R},t_{n}^{R})}\left\{\frac{P_{0,T_{RS}}}{P_{0,T_{RE}}}\mathfrak{C}_{t} ^{T_{p}}(T_{RS},T_{RE})-1+\mathfrak{D}(t_{0}^{R},t_{n}^{R})\left(r_{0}^{S}- \widetilde{r}_{0}^{S}\right)\right\}\] \[=P_{0,T_{p}}\frac{\mathfrak{D}(t_{0}^{A},t_{n}^{A})}{\mathfrak{D} (t_{0}^{R},t_{n}^{R})}\left\{\frac{P_{0,T_{RS}}}{P_{0,T_{RE}}}\mathfrak{C}_{t} ^{T_{p}}(T_{RS},T_{RE})-1+\mathfrak{D}(t_{0}^{R},t_{n}^{R})\left(r_{0}^{S}- \frac{1}{\mathfrak{D}(t_{0}^{R},t_{n}^{R})}\left(\frac{P_{0,T_{RS}}}{P_{0,T_{RE }}}-1\right)\right)\right\}\] \[=P_{0,T_{p}}\frac{\mathfrak{D}(t_{0}^{A},t_{n}^{A})}{\mathfrak{D} (t_{0}^{R},t_{n}^{R})}\left\{\frac{P_{0,T_{RS}}}{P_{0,T_{RE}}}\left(\mathfrak{C} _{t}^{T_{p}}(T_{RS},T_{RE})-1\right)+\mathfrak{D}(t_{0}^{R},t_{n}^{R})r_{0}^{S}\right\}\] ## 4. Numerics ### Zero-coupon dynamics In Figures 1 and 2, we analyse the impact of the parameter (\(\alpha\) in the Exponential kernel case and \(H\) in the Riemann-Liouville case) on the dynamics of the zero-coupon bond over a time span \([0,1]\) and considering a constant curve \(\theta(\cdot)=6\%\). In order to compare them properly, the underlying Brownian path is the same for all kernels. Unsurprisingly, we observe that the Riemann-Liouville case creates a lot more variance of the dynamics. ### Impact of the roughness on convexity We compare in Figures 3 and 4 the impact of the (roughness of the) kernel on the convexity adjustment. We consider a constant curve \(\theta(\cdot)=6\%\) as well as \((t,\mathsf{t}_{1},\mathsf{t}_{2},tau)=(1,2,3,2)\). We note that, as \(\alpha\) tends to zero in the exponential kernel case and as \(H\) tends to \(\frac{1}{2}\) in the Riemann-Liouville case, the convexity adjustments converge to the same value (as expected), approximately equal to \(2.718\). Figure 1. 
Dynamics of the zero-coupon bond in the Exponential kernel case.

Figure 2. Dynamics of the zero-coupon bond in the Riemann-Liouville kernel case.
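As a quick consistency check (ours) of the common limit \(\approx 2.718\) reported for Figures 3 and 4: with \((t,\mathfrak{t}_{1},\mathfrak{t}_{2},\tau)=(1,2,3,2)\), the \(\alpha\to 0\) closed form of Section 2.3.1 gives \(\exp\{(\mathfrak{t}_{2}-\mathfrak{t}_{1})(\mathfrak{t}_{2}-\tau)t\}=\mathrm{e}\approx 2.718\):

```python
import numpy as np

t, t1, t2, tau = 1.0, 2.0, 3.0, 2.0
print(np.exp((t2 - t1) * (t2 - tau) * t))  # e ~ 2.71828, the alpha -> 0 limit

# Exponential-kernel closed form of Section 2.3.1 for a small alpha:
alpha = 1e-4
log_adj = (np.exp(2 * alpha * t) - 1) / (2 * alpha**3) * (
    (np.exp(-alpha * t1) - np.exp(-alpha * t2)) * np.exp(-alpha * tau)
    + np.exp(-2 * alpha * t2) - np.exp(-alpha * (t1 + t2)))
print(np.exp(log_adj))  # -> 2.718... as alpha -> 0
```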
2305.02203
Massless KG-oscillators in Som-Raychaudhuri cosmic string spacetime in a fine tuned rainbow gravity
A fine tuned rainbow gravity describes both relativistic quantum particles and anti-particles alike. That is, the ratio $y=E/E_{P}$ in the rainbow functions $g_{_{0}}\left( y\right) $ and $% g_{_{1}}\left( y\right) $ should be fine tuned into $0\leq y=E/E_{P}\leq 1\Rightarrow y=\left\vert E\right\vert /E_{P}$, otherwise rainbow gravity will only secure Planck's energy scale $E_p$ invariance for relativistic particles and the anti-particles are left unfortunate (in the sense that their energies will be indefinitely unbounded). Using this fine tuning we discuss the rainbow gravity effect on Klein-Gordon (KG) oscillators in Som-Raychaudhuri cosmic string rainbow gravity spacetime background. We use the rainbow functions: (i) $g_{_{0}}\left( y\right) =1$, $% g_{_{1}}\left( y\right) =\sqrt{1-\epsilon y^{n}}, n=0,1$, loop quantum gravity motivated pairs, (ii) $% g_{_{0}}\left( y\right) =g_{_{1}}\left( y\right) =\left( 1-\epsilon y\right) ^{-1}$, a horizon problem motivated pair, and (iii) $g_{_{0}}\left( y\right) =\left( e^{\epsilon y}-1\right) /\epsilon y$, $g_{_{1}}\left( y\right) =1$, a gamma-ray bursts motivated pair. We show that the energies obtained using the first two rainbow functions in (i) completely comply with the rainbow gravity model (on the invariance of the Planck's energy scale $E_{p}$). The rainbow function pair in (ii) has no effect on massless KG-oscillators. Whereas, the one in (iii) does not show any eminent tendency towards the invariance of the Planck's energy scale. Yet, we suggest a new rainbow function pair $(g_0(y)=(1-\epsilon y)^{-1}, g_1 (y)=1)$, and show that it secures invariance of the Planck's energy scale $E_p$. Moreover, similar performance is observed when this new pair is used for KG-oscillators and KG-Coulombic particles in cosmic string rainbow gravity spacetime and magnetic fields.
Omar Mustafa
2023-04-24T08:22:45Z
http://arxiv.org/abs/2305.02203v1
# Massless KG-oscillators in Som-Raychaudhuri cosmic string spacetime in a fine tuned rainbow gravity

###### Abstract

A fine tuned rainbow gravity describes both relativistic quantum particles and anti-particles alike. That is, the ratio \(y=E/E_{P}\) in the rainbow functions \(g_{{}_{0}}\left(y\right)\) and \(g_{{}_{1}}\left(y\right)\) should be fine tuned into \(0\leq y=E/E_{P}\leq 1\Rightarrow y=\left|E\right|/E_{P}\), otherwise rainbow gravity will only secure Planck's energy scale \(E_{p}\) invariance for relativistic particles and the anti-particles are left unfortunate (in the sense that their energies will be indefinitely unbounded). Using this fine tuning we discuss the rainbow gravity effect on Klein-Gordon (KG) oscillators in a Som-Raychaudhuri cosmic string rainbow gravity spacetime background. We use the rainbow functions: (i) \(g_{{}_{0}}\left(y\right)=1\), \(g_{{}_{1}}\left(y\right)=\sqrt{1-\epsilon y^{n}}\), \(n=1,2\), loop quantum gravity motivated pairs, (ii) \(g_{{}_{0}}\left(y\right)=g_{{}_{1}}\left(y\right)=\left(1-\epsilon y\right)^{-1}\), a horizon problem motivated pair, and (iii) \(g_{{}_{0}}\left(y\right)=\left(e^{\epsilon y}-1\right)/\epsilon y\), \(g_{{}_{1}}\left(y\right)=1\), a gamma-ray bursts motivated pair. We show that the energies obtained using the two rainbow function pairs in (i) completely comply with the rainbow gravity model (on the invariance of the Planck's energy scale \(E_{p}\)). The rainbow function pair in (ii) has no effect on massless KG-oscillators, whereas the one in (iii) does not show any eminent tendency towards the invariance of the Planck's energy scale. Yet, we suggest a new rainbow function pair \((g_{{}_{0}}(y)=\left(1-\epsilon y\right)^{-1},\,g_{{}_{1}}(y)=1)\), and show that it secures the invariance of the Planck's energy scale \(E_{p}\). Moreover, similar performance is observed when this new pair is used for KG-oscillators and KG-Coulombic particles in cosmic string rainbow gravity spacetime and magnetic fields.

**PACS** numbers: 05.45.-a, 03.50.Kk, 03.65.-w

**Keywords:** Som-Raychaudhuri cosmic string spacetime, Klein-Gordon (KG) particles/oscillators, rainbow gravity.

## I Introduction

A common suggestion of most approaches to quantum gravity (e.g., string field theory [1], loop quantum gravity [2], and non-commutative geometry [3]) is to modify the standard relativistic energy-momentum dispersion relation in the ultraviolet regime into

\[E^{2}g_{{}_{0}}\left(y\right)^{2}-p^{2}c^{2}g_{{}_{1}}\left(y\right)^{2}=m^{2}c^{4};\;y=E/E_{p}, \tag{1}\]

where \(g_{{}_{0}}\left(y\right)\), \(g_{{}_{1}}\left(y\right)\) are the rainbow functions, \(mc^{2}\) is the rest mass energy of the particle, and the limit \(y\to 0\Rightarrow g_{{}_{k}}\left(y\right)=1;\;k=0,1\), retrieves the standard energy-momentum dispersion relation in the infrared regime (i.e., the usual general relativity is recovered). Einstein's field equations are consequently affected by such a modification and read \(G_{\mu\nu}(E/E_{p})=8\pi G(E/E_{p})T_{\mu\nu}(E/E_{p})\), where \(G(E/E_{p})\) is, in this case, an energy-dependent Newton's universal gravitational constant that collapses into the conventional one \(G=G(0)\) in the limit \(E/E_{p}\to 0\). It should be interesting to mention that the modified dispersion relations are naturally produced by the doubly/deformed special relativity (DSR) [4; 5; 6; 7; 8; 9].
DSR extends special relativity and introduces yet another invariant energy scale (the Planck energy \(E_{p}\), as the maximum energy scale) alongside the invariance of the speed of light. A generalization of DSR that includes curvature is provided by doubly general relativity [10], where the spacetime metric depends on the energy of the probe particle and forms the so-called rainbow gravity (RG). Rainbow gravity has been an inviting research field over the years [9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25]. For example, the effects of RG are reported in the thermodynamical aspects of black holes [12; 13; 26; 27; 28; 29; 30; 32], in the tests of thresholds for ultra high-energy cosmic rays [21; 22; 23], TeV photons [24], gamma-ray bursts [14], nuclear physics experiments [25], dynamical stability conditions of neutron stars [31], charged black holes in massive RG [33], on geometrical thermodynamics and heat engines of black holes in RG [34], f(R) theories [35], the initial singularity problem for closed rainbow cosmology [36], the black hole entropy [37], the removal of the singularity of the early universe [38], the Casimir effect in the rainbow Einstein's universe [15], massive scalar fields in the RG Schwarzschild metric [39], Yang-Mills black holes [40], etc. In different spacetime rainbow gravity backgrounds, on the other hand, recent studies have been carried out on Klein-Gordon (KG) particles (i.e., spin-0 mesons), Dirac particles (spin-1/2 fermionic particles), and Duffin-Kemmer-Petiau (DKP) particles (spin-1 particles like bosons and photons). For example, in a cosmic string spacetime background in rainbow gravity, Landau levels via the Schrodinger and KG equations are studied by Bezerra et al. [16], the Dirac oscillator and the Aharonov-Bohm effect are studied by Bakke and Mota [41; 42], DKP-particles by Hosseinpour et al. [9], the quantum dynamics of photons by Sogut et al. [19], KG-particles in a topologically trivial Godel-type spacetime by Kangal et al. [20], etc. Recently, moreover, position-dependent mass (PDM) KG-particles in different spacetime backgrounds have been introduced [43; 44; 45; 46; 47; 48; 49; 50]. This notion has been used in the study of PDM KG-Coulomb particles [43] in cosmic string rainbow gravity spacetime, PDM KG-oscillators in cosmic string spacetime within Kaluza-Klein theory [51], in (2+1)-dimensional Gurses spacetime backgrounds [52], and in Minkowski spacetime with a space-like dislocation [53]. It could be interesting to mention that PDM is a notion used to describe coordinate transformations/deformations that render the mass in the Schrodinger equation effectively position-dependent [45; 46; 47; 48; 49; 50]. However, in our very recent study on PDM KG-Coulomb particles [43] in cosmic string rainbow gravity spacetime, we have followed the usual practice in the literature (e.g., [9; 10; 12; 15; 16; 19; 20]) that \(y=E/E_{p}\) in the rainbow functions (1). This has led to some unfortunate results (except for one of the rainbow function models) for the KG-anti-particles (hence the notion that anti-particles are left unfortunate), as they did not satisfy the rainbow gravity model (i.e., their energies are reported to be \(|E|>>E_{p}\)). In the current methodical proposal, we use a necessary and convenient fine tuning on \(y=E/E_{p}\) of the rainbow functions (1) and constrain \(y\) so that \(0\leq y\leq 1\) for both relativistic particles and anti-particles. This suggestion would allow us to write \(y=\left|E\right|/E_{p}\).
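To make the point concrete, the following toy snippet (ours, not from the paper) contrasts the two conventions for the loop quantum gravity motivated function \(g_{{}_{1}}(y)=\sqrt{1-\epsilon y}\): with \(y=E/E_{p}\), an anti-particle energy \(E=-|E|\) leaves \(g_{{}_{1}}\) real no matter how large \(|E|\) is, while \(y=|E|/E_{p}\) bounds particles and anti-particles alike:

```python
import numpy as np

eps, E_p = 1.0, 1.0  # illustrative values

def g1(y):
    # Loop quantum gravity motivated g_1(y) = sqrt(1 - eps * y)
    return np.sqrt(1.0 - eps * y)

E = -5.0 * E_p             # an anti-particle energy, E = -|E|
print(g1(E / E_p))         # y = E/E_p: real for any |E|, so no bound
print(g1(abs(E) / E_p))    # y = |E|/E_p: nan, i.e. |E| > E_p/eps is excluded
```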
We shall use this fine tuning and study massless KG-oscillators in a Godel-type Som-Raychaudhuri cosmic string rainbow gravity spacetime. We consider three pairs of rainbow functions: (a) \(g_{{}_{0}}\left(y\right)=1\), \(g_{{}_{1}}\left(y\right)=\sqrt{1-\epsilon y^{2}}\), and \(g_{{}_{0}}\left(y\right)=1\), \(g_{{}_{1}}\left(y\right)=\sqrt{1-\epsilon y}\), which are loop quantum gravity [54; 55] motivated pairs, (b) \(g_{{}_{0}}\left(y\right)=g_{{}_{1}}\left(y\right)=\left(1-\epsilon y\right)^{-1}\), a horizon problem [5; 21] motivated pair, and (c) \(g_{{}_{0}}\left(y\right)=\left(e^{\epsilon y}-1\right)/\epsilon y\), \(g_{{}_{1}}\left(y\right)=1\), motivated by gamma-ray bursts at cosmological distances [6]. We propose yet a new experimental (metaphorically speaking) rainbow functions pair \(g_{{}_{0}}\left(y\right)=\left(1-\epsilon y\right)^{-1}\), \(g_{{}_{1}}\left(y\right)=1\), and study it for different KG-particles and anti-particles in different spacetime backgrounds. Hence, the Som-Raychaudhuri cosmic string spacetime metric (in the natural units \(c=\hbar=G=1\))

\[ds^{2}=-\left(dt+\alpha\Omega r^{2}d\varphi\right)^{2}+dr^{2}+\alpha^{2}\,r^{2}d\varphi^{2}+dz^{2}, \tag{2}\]

under RG, takes the energy-dependent form

\[ds^{2}=-\left(\frac{dt}{g_{{}_{0}}\left(y\right)}+\frac{\alpha\Omega r^{2}d\varphi}{g_{{}_{1}}\left(y\right)}\right)^{2}+\frac{1}{g_{{}_{1}}\left(y\right)^{2}}\left[dr^{2}+\alpha^{2}\,r^{2}d\varphi^{2}+dz^{2}\right];\;y=\left|E\right|/E_{p}, \tag{3}\]

where \(\alpha=1-4\mu\) is the deficit angle, \(\mu\) is the linear mass density of the cosmic string so that \(\alpha<1\), \(E=E_{\pm}=\pm\left|E\right|\) are the energies of the probe particle, \(E_{+}=+\left|E\right|\), and anti-particle, \(E_{-}=-\left|E\right|\), in Som-Raychaudhuri cosmic string rainbow gravity spacetime, and \(E_{p}=\sqrt{\hbar c^{5}/G}\) is the Planck energy. Here, the signature of the line elements (2) and (3) is \(\left(-,+,+,+\right)\). Moreover, the corresponding metric tensor \(g_{\mu\nu}\) reads

\[g_{\mu\nu}=\left(\begin{array}{cccc}\frac{-1}{g_{{}_{0}}\left(y\right)^{2}}&0&\frac{-\alpha\Omega r^{2}}{g_{{}_{0}}\left(y\right)g_{{}_{1}}\left(y\right)}&0\\ 0&\frac{1}{g_{{}_{1}}\left(y\right)^{2}}&0&0\\ \frac{-\alpha\Omega r^{2}}{g_{{}_{0}}\left(y\right)g_{{}_{1}}\left(y\right)}&0&\frac{\left(\alpha^{2}\,r^{2}-\alpha^{2}\,\Omega^{2}r^{4}\right)}{g_{{}_{1}}\left(y\right)^{2}}&0\\ 0&0&0&\frac{1}{g_{{}_{1}}\left(y\right)^{2}}\end{array}\right);\;\mu,\nu=t,r,\varphi,z, \tag{4}\]

to imply

\[\det\left(g_{\mu\nu}\right)=g=-\frac{\alpha^{2}\,r^{2}}{g_{{}_{0}}\left(y\right)^{2}g_{{}_{1}}\left(y\right)^{6}},\]

and

\[g^{\mu\nu}=\left(\begin{array}{cccc}-g_{{}_{0}}\left(y\right)^{2}\left(1-\Omega^{2}r^{2}\right)&0&-\frac{\Omega g_{{}_{0}}\left(y\right)g_{{}_{1}}\left(y\right)}{\alpha}&0\\ 0&g_{{}_{1}}\left(y\right)^{2}&0&0\\ -\frac{\Omega g_{{}_{0}}\left(y\right)g_{{}_{1}}\left(y\right)}{\alpha}&0&\frac{g_{{}_{1}}\left(y\right)^{2}}{\alpha^{2}\,r^{2}}&0\\ 0&0&0&g_{{}_{1}}\left(y\right)^{2}\end{array}\right). \tag{5}\]

It should be noted that massless particles are believed to dominate the very early universe [56]. In the current methodical proposal, we consider (in section 2) massless KG-oscillators in Som-Raychaudhuri cosmic string spacetime. We study the effect of rainbow gravity on the spectroscopic structure of such relativistic KG-particles.
We show that the rainbow functions pair \(g_{{}_{0}}\left(y\right)=g_{{}_{1}}\left(y\right)=\left(1-\epsilon y\right)^{-1}\) introduces no effect whatsoever on the spectroscopic structure of the massless KG-oscillators at hand. In subsections 2-A and 2-B, we use \(g_{{}_{0}}\left(y\right)=1\), \(g_{{}_{1}}\left(y\right)=\sqrt{1-\epsilon|E|^{2}/E_{p}^{2}}\) and \(g_{{}_{0}}\left(y\right)=1\), \(g_{{}_{1}}\left(y\right)=\sqrt{1-\epsilon|E|/E_{p}}\), respectively. We show that, if such rainbow functions are to serve the invariance of the Planck's energy scale \(E_{p}\), then the rainbow parameter \(\epsilon\) should satisfy \(\epsilon\geq 1\). However, one may switch off the rainbow parameter (i.e., set \(\epsilon=0\)) to retrieve the energies of KG-particles and anti-particles in Som-Raychaudhuri cosmic string spacetime without rainbow gravity. In subsection 2-C, we use \(g_{{}_{0}}\left(y\right)=\left(e^{\epsilon y}-1\right)/\epsilon y\) and \(g_{{}_{1}}\left(y\right)=1\) and show that the exponential structure of the rainbow function manifestly introduces a logarithmic solution for the energy levels. Consequently, such a logarithmic solution does not provide upper limits for the energies of both particles and anti-particles toward the Planck energy scale \(E_{p}\). We propose, in subsection 2-D, a new (to the best of our knowledge) rainbow functions pair, \(g_{{}_{0}}\left(y\right)=\left(1-\epsilon y\right)^{-1}\), \(g_{{}_{1}}\left(y\right)=1\). We show that this new pair reproduces the same rainbow gravity effects as those motivated by loop quantum gravity [54; 55] (i.e., \(g_{{}_{0}}\left(y\right)=1\), \(g_{{}_{1}}\left(y\right)=\sqrt{1-\epsilon|E|^{n}/E_{p}^{n}}\); \(n=1,2\)) in the sense that it preserves the invariance of the Planck's energy scale \(E_{p}\) as well as the symmetrization of the energy levels about \(E=0\) for both KG-particles and anti-particles. In section 3, we use our new rainbow functions pair and discuss rainbow gravity effects on massless KG-oscillators and KG-Coulombic particles in cosmic string rainbow gravity spacetime and magnetic fields. We find that the new pair shows consistent performance in the sense that it secures the invariance of the Planck's energy scale \(E_{p}\) and maintains the symmetrization of the energy levels about the \(E=0\) value. Our concluding remarks are given in section 4.

## II Massless KG-particles in Som-Raychaudhuri cosmic string rainbow gravity spacetime background

In the Som-Raychaudhuri cosmic string rainbow gravity spacetime background (3), a massless KG-particle is described (in \(c=\hbar=G=1\) units) by the KG-equation

\[\frac{1}{\sqrt{-g}}\partial_{\mu}\sqrt{-g}g^{\mu\nu}\partial_{\nu}\,\Psi=0, \tag{6}\]

which, in a straightforward manner, yields

\[\left\{-g_{{}_{0}}\left(y\right)^{2}\partial_{t}^{2}+\left[g_{{}_{0}}\left(y\right)\Omega\,r\,\partial_{t}-\frac{g_{{}_{1}}\left(y\right)}{\alpha\,r}\partial_{\varphi}\right]^{2}+\frac{g_{{}_{1}}\left(y\right)^{2}}{r}\partial_{r}\,r\,\partial_{r}+g_{{}_{1}}\left(y\right)^{2}\partial_{z}^{2}\right\}\Psi\left(t,r,\varphi,z\right)=0.
\tag{7}\] We may now use the substitution \[\Psi\left(t,r,\varphi,z\right)=\exp\left(i\left[\ell\varphi+k_{z}z-Et\right] \right)\psi\left(r\right)=\exp\left(i\left[\ell\varphi+k_{z}z-Et\right] \right)\frac{R\left(r\right)}{\sqrt{r}}, \tag{8}\] to cast our KG-equation (7) as \[\left\{\partial_{r}^{2}-\frac{\left(\tilde{\ell}^{2}-1/4\right)}{r^{2}}- \tilde{\Omega}^{2}r^{2}+\tilde{\lambda}\right\}R\left(r\right)=0, \tag{9}\] where \[\tilde{\lambda}=\frac{g_{{}_{0}}\left(y\right)^{2}E^{2}-2g_{{}_{0}}\left(y \right)g_{{}_{1}}\left(y\right)\tilde{\ell}\tilde{E}-g_{{}_{1}}\left(y\right) ^{2}k_{z}^{2}}{g_{{}_{1}}\left(y\right)^{2}} \tag{10}\] and \[\tilde{\Omega}^{2}=\left(\frac{g_{{}_{0}}\left(y\right)}{g_{{}_{1}}\left(y \right)}\tilde{E}\right)^{2},\,\,\tilde{E}=\Omega E,\tilde{\ell}=\frac{\ell} {\alpha}. \tag{11}\] It is obvious that equation (9) resembles the two-dimensional radial Schrodinger oscillators, which in turn manifestly introduces the notion of KG-oscillators. Equation (9) admits exact solution in the form of hypergeometric function so that \[R\left(r\right)\sim\,r^{|\tilde{\ell}|+1/2}\exp\left(-\frac{|\tilde{\Omega}| \,r^{2}}{2}\right)\,\,_{1}F_{1}\left(\frac{1}{2}+\frac{|\tilde{\ell}|}{2}- \frac{\tilde{\lambda}}{4|\tilde{\Omega}|},1+2|\tilde{\ell}|,|\tilde{\Omega}| \,r^{2}\right). \tag{12}\] However, to secure finiteness and square integrability we need to terminate the hypergeometric function into a polynomial of degree \(n_{r}\geq 0\) so that the condition \[\frac{1}{2}+\frac{\left|\tilde{\ell}\right|}{2}-\frac{\tilde{\lambda}}{4|\tilde {\Omega}|}=-n_{r}. \tag{13}\] is satisfied. This would in turn imply that \[\tilde{\lambda}_{n_{r},\ell}=2|\tilde{\Omega}|\left(2n_{r}+|\tilde{\ell}|+1 \right). \tag{14}\] and \[\psi\left(r\right)=\frac{R\left(r\right)}{\sqrt{r}}=\mathcal{N}\,r^{|\tilde{ \ell}|}\exp\left(-\frac{|\tilde{\Omega}|\,r^{2}}{2}\right)\,\,_{1}F_{1}\left( \frac{1}{2}+\frac{|\tilde{\ell}|}{2}-\frac{\tilde{\lambda}}{4|\tilde{\Omega} |},1+2|\tilde{\ell}|,|\tilde{\Omega}|\,r^{2}\right). \tag{15}\] Consequently, Eq.(10) would read \[\left(\frac{g_{{}_{0}}\left(y\right)^{2}}{g_{{}_{1}}\left(y\right)^{2}}\right) E^{2}-2\left(\frac{g_{{}_{0}}\left(y\right)}{g_{{}_{1}}\left(y\right)}\right) \left[\Omega E\tilde{\ell}\,+|\Omega E|\left(2n_{r}+|\tilde{\ell}|+1\right) \right]=\,k_{z}^{2}. \tag{16}\] At this point, one should notice that \(|\Omega E|=+\Omega_{\pm}E_{\pm}\) or \(|\Omega E|=-\Omega_{\mp}E_{\pm}\) (with \(\Omega_{\pm}=\pm|\Omega|\), representing positive and negative vorticities and \(E_{\pm}=\pm|E|\), representing particle and anti-particle energies). This would result \[\tilde{E}_{\pm}^{2}-2\tilde{E}_{\pm}\Omega_{\pm}K_{+}=k_{z}^{2}\Rightarrow \tilde{E}_{\pm}=\frac{g_{{}_{0}}\left(y\right)}{g_{{}_{1}}\left(y\right)}E_{ \pm}=\Omega_{\pm}K_{+}\pm\sqrt{\Omega^{2}K_{+}^{2}+k_{z}^{2}}, \tag{17}\] for \(|\Omega E|=+\Omega_{\pm}E_{\pm}\), and \[\tilde{E}_{\pm}^{2}+2\tilde{E}_{\pm}\Omega_{\mp}K_{-}=k_{z}^{2}\Rightarrow \tilde{E}_{\pm}=\frac{g_{{}_{0}}\left(y\right)}{g_{{}_{1}}\left(y\right)}E_{ \pm}=-\Omega_{\mp}K_{-}\pm\sqrt{\Omega^{2}K_{-}^{2}+k_{z}^{2}}, \tag{18}\] for \(|\Omega E|=-\Omega_{\mp}E_{\pm}\), where \[\tilde{E}_{\pm}=g_{{}_{0}}\left(y\right)E_{\pm}/g_{{}_{1}}\left(y\right),\,K_ {+}=2n_{r}+|\tilde{\ell}|+\ell+1.\,K_{-}=2n_{r}+|\tilde{\ell}|-\ell+1. \tag{19}\] Nevertheless, the two results in (17) and (18) could be rearranged according to the vorticity signatures. 
That is, equation (17) suggests that \[\frac{g_{{}_{0}}\left(y\right)}{g_{{}_{1}}\left(y\right)}E_{+}^{\Omega_{+}}=+|\Omega|K_{+}+\sqrt{\Omega^{2}K_{+}^{2}+k_{z}^{2}}, \tag{20}\] \[\frac{g_{{}_{0}}\left(y\right)}{g_{{}_{1}}\left(y\right)}E_{-}^{\Omega_{-}}=-|\Omega|K_{+}-\sqrt{\Omega^{2}K_{+}^{2}+k_{z}^{2}}, \tag{21}\] for \(|\Omega E|=+\Omega_{\pm}E_{\pm}\), and (18) yields \[\frac{g_{{}_{0}}\left(y\right)}{g_{{}_{1}}\left(y\right)}E_{+}^{\Omega_{-}}=+|\Omega|K_{-}+\sqrt{\Omega^{2}K_{-}^{2}+k_{z}^{2}}, \tag{22}\] \[\frac{g_{{}_{0}}\left(y\right)}{g_{{}_{1}}\left(y\right)}E_{-}^{\Omega_{+}}=-|\Omega|K_{-}-\sqrt{\Omega^{2}K_{-}^{2}+k_{z}^{2}}, \tag{23}\] for \(|\Omega E|=-\Omega_{\mp}E_{\pm}\), where \(E_{\pm}^{\Omega_{+}}=E_{\pm}^{(+)}\) and \(E_{\pm}^{\Omega_{-}}=E_{\pm}^{(-)}\) denote energies for positive and negative vorticities, respectively. However, it would be more convenient and instructive to report these energies for positive and negative vorticities. That is, \[\frac{g_{{}_{0}}\left(y\right)}{g_{{}_{1}}\left(y\right)}E_{\pm}^{(+)}=\pm|\Omega|K_{\pm}\pm\sqrt{\Omega^{2}K_{\pm}^{2}+k_{z}^{2}}=\pm\tilde{K}_{\pm}, \tag{24}\] \[\frac{g_{{}_{0}}\left(y\right)}{g_{{}_{1}}\left(y\right)}E_{\pm}^{\left(-\right)}=\pm|\Omega|K_{\mp}\pm\sqrt{\Omega^{2}K_{\mp}^{2}+k_{z}^{2}}=\pm\tilde{K}_{\mp}, \tag{25}\] where \[\tilde{K}_{\pm}=\left[|\Omega|K_{\pm}+\sqrt{\Omega^{2}K_{\pm}^{2}+k_{z}^{2}}\right]. \tag{26}\] It should be noted that for all rainbow functions where \(g_{{}_{0}}\left(y\right)=g_{{}_{1}}\left(y\right)\) (including the case \(g_{{}_{0}}\left(y\right)=g_{{}_{1}}\left(y\right)=\left(1-\epsilon y\right)^{-1}\)), equations (24) and (25) (along with (26)) represent the energies of massless KG-oscillators in Som-Raychaudhuri cosmic string spacetime. The rainbow gravity has no effect in this case. However, for \(g_{{}_{0}}\left(y\right)\neq g_{{}_{1}}\left(y\right)\) equations (24) and (25) describe the rainbow gravity effect on the spectroscopic structure of massless KG-oscillators in Som-Raychaudhuri cosmic string spacetime. Moreover, equations (24) and (25) obviously suggest that \[E_{\pm}^{\left(-\right)}\left|{}_{\ell=\pm|\ell|}\right.=E_{\pm}^{\left(+\right)}\left|{}_{\ell=\mp|\ell|}\right., \tag{27}\] since \(K_{\pm}\left|{}_{\ell=\pm|\ell|}\right.=K_{\mp}\left|{}_{\ell=\mp|\ell|}\right.\Rightarrow\tilde{K}_{\pm}\left|{}_{\ell=\pm|\ell|}\right.=\tilde{K}_{\mp}\left|{}_{\ell=\mp|\ell|}\right.\). It is clear that equation (27) identifies the so-called vorticity-energy correlations. Moreover, one should be able to observe that the relation \(\tilde{K}_{\pm}\left|{}_{\ell=\pm|\ell|}\right.=\tilde{K}_{\mp}\left|{}_{\ell=\mp|\ell|}\right.\) identifies degeneracies associated with the Som-Raychaudhuri cosmic string spacetime. These degeneracies are manifestly introduced by the fact that \(|\Omega E|=+\Omega_{\pm}E_{\pm}\) or \(|\Omega E|=-\Omega_{\mp}E_{\pm}\) (hence, such degeneracies are called spacetime associated degeneracies (STADs)). Such STADs are observed as follows:

1. \(K_{+}=\left(2n_{r}+1\right);\,\forall\tilde{\ell}=-|\tilde{\ell}|\), i.e., all states with \(\tilde{\ell}=-|\tilde{\ell}|\neq 0\) would merge and combine with \(\ell=0\) states for a given value of \(n_{r}\),

2. \(K_{-}=\left(2n_{r}+1\right);\,\forall\tilde{\ell}=+|\tilde{\ell}|\), i.e., all states with \(\tilde{\ell}=+|\tilde{\ell}|\neq 0\) would merge and combine with \(\ell=0\) states for a given value of \(n_{r}\),
3. \(K_{+}=\left(2n_{r}+2|\tilde{\ell}|+1\right);\,\forall\tilde{\ell}=+|\tilde{\ell}|\), and

4. \(K_{-}=\left(2n_{r}+2|\tilde{\ell}|+1\right);\,\forall\tilde{\ell}=-|\tilde{\ell}|\).

For example, for \(\forall\tilde{\ell}=+|\tilde{\ell}|\) and positive vorticity \(\Omega=+\left|\Omega\right|\), equation (24) yields \[\frac{g_{{}_{0}}\left(y\right)}{g_{{}_{1}}\left(y\right)}E_{+}^{\left(+\right)}=+|\Omega|K_{+}+\sqrt{\Omega^{2}K_{+}^{2}+k_{z}^{2}}=|\Omega|\left(2n_{r}+2|\tilde{\ell}|+1\right)+\sqrt{\Omega^{2}\left(2n_{r}+2|\tilde{\ell}|+1\right)^{2}+k_{z}^{2}}, \tag{28}\] and \[\frac{g_{{}_{0}}\left(y\right)}{g_{{}_{1}}\left(y\right)}E_{-}^{\left(+\right)}=-|\Omega|K_{-}-\sqrt{\Omega^{2}K_{-}^{2}+k_{z}^{2}}=-\left[|\Omega|\left(2n_{r}+1\right)+\sqrt{\Omega^{2}\left(2n_{r}+1\right)^{2}+k_{z}^{2}}\right], \tag{29}\] whereas equation (25), for \(\forall\tilde{\ell}=+|\tilde{\ell}|\) and negative vorticity \(\Omega=-\left|\Omega\right|\), yields \[\frac{g_{{}_{0}}\left(y\right)}{g_{{}_{1}}\left(y\right)}E_{+}^{\left(-\right)}=|\Omega|\left(2n_{r}+1\right)+\sqrt{\Omega^{2}\left(2n_{r}+1\right)^{2}+k_{z}^{2}}, \tag{30}\] and \[\frac{g_{{}_{0}}\left(y\right)}{g_{{}_{1}}\left(y\right)}E_{-}^{\left(-\right)}=-\left[|\Omega|\left(2n_{r}+2|\tilde{\ell}|+1\right)+\sqrt{\Omega^{2}\left(2n_{r}+2|\tilde{\ell}|+1\right)^{2}+k_{z}^{2}}\right]. \tag{31}\] At this point, it should be made clear that the above mentioned vorticity-energy correlations as well as STADs are specifically consequences/byproducts of the Som-Raychaudhuri cosmic string spacetime and have nothing to do with rainbow gravity. However, it is obvious that rainbow gravity will affect the energies of the probe massless KG-oscillators for \(g_{{}_{0}}\left(y\right)\neq g_{{}_{1}}\left(y\right)\). In what follows we report the rainbow gravity effects for different fine-tuned rainbow functions used in the literature.

### A. Rainbow functions \(g_{{}_{0}}\left(y\right)=1\), \(g_{{}_{1}}\left(y\right)=\sqrt{1-\epsilon|E|^{2}/E_{p}^{2}}\)

For the rainbow functions \(g_{{}_{0}}\left(y\right)=1\), \(g_{{}_{1}}\left(y\right)=\sqrt{1-\delta E^{2}};\ \delta=\epsilon/E_{p}^{2}\), we re-cast (24) and (25) as \[\frac{E_{\pm}^{\left(+\right)\,2}}{1-\delta E_{\pm}^{\left(+\right)\,2}}=\tilde{K}_{\pm}^{2}\Rightarrow E_{\pm}^{\left(+\right)}=\pm\frac{\tilde{K}_{\pm}}{\sqrt{\tilde{K}_{\pm}^{2}\delta+1}}=\pm\frac{E_{p}}{\sqrt{\epsilon+\left(\frac{E_{p}}{\tilde{K}_{\pm}}\right)^{2}}}, \tag{32}\] and \[\frac{E_{\pm}^{\left(-\right)\,2}}{1-\delta E_{\pm}^{\left(-\right)\,2}}=\tilde{K}_{\mp}^{\,2}\Rightarrow E_{\pm}^{\left(-\right)}=\pm\frac{\tilde{K}_{\mp}}{\sqrt{\tilde{K}_{\mp}^{2}\delta+1}}=\pm\frac{E_{p}}{\sqrt{\epsilon+\left(\frac{E_{p}}{\tilde{K}_{\mp}}\right)^{2}}}. \tag{33}\] Notably, when expanding about \(\delta=0\), we get \(\left|E_{\pm}^{\left(+\right)}\right|\sim\tilde{K}_{\pm}-\tilde{K}_{\pm}^{3}\delta/2+O\left(\delta^{2}\right)\) and \(\left|E_{\pm}^{\left(-\right)}\right|\sim\tilde{K}_{\mp}-\tilde{K}_{\mp}^{3}\delta/2+O\left(\delta^{2}\right)\). That is, the energies in both cases are less than the energies when rainbow gravity is switched off (i.e., at \(\delta=0\) we have \(\left|E_{\pm}^{\left(+\right)}\right|=\tilde{K}_{\pm}\) and \(\left|E_{\pm}^{\left(-\right)}\right|=\tilde{K}_{\mp}\)). 
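To make the quadratic-rainbow spectra (32) and (33) concrete, the following is a minimal numerical sketch (plain Python with NumPy; the parameter values are purely illustrative and not taken from the figures):

```python
import numpy as np

def K_tilde(n_r, l, Omega, k_z, alpha=1.0):
    """tilde{K}_± of Eq. (26), with K_± = 2 n_r + |l~| ± l~ + 1 and l~ = l/alpha."""
    lt = l / alpha
    K_plus = 2 * n_r + abs(lt) + lt + 1
    K_minus = 2 * n_r + abs(lt) - lt + 1
    tilde = lambda K: abs(Omega) * K + np.sqrt(Omega**2 * K**2 + k_z**2)
    return tilde(K_plus), tilde(K_minus)

def E_quadratic_rainbow(Kt, delta):
    """|E| of Eq. (32): |E| = K~ / sqrt(1 + delta K~^2) for g1 = sqrt(1 - delta E^2)."""
    return Kt / np.sqrt(1 + delta * Kt**2)

delta = 0.1
for Omega in [0.1, 1.0, 10.0, 100.0]:
    Ktp, _ = K_tilde(n_r=0, l=0, Omega=Omega, k_z=1.0)
    print(Omega, E_quadratic_rainbow(Ktp, delta), "bound:", 1 / np.sqrt(delta))
```

As \(|\Omega|\) grows, the printed energies approach, but never exceed, the bound \(1/\sqrt{\delta}\) discussed below.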
Yet, for \(|\Omega|\rightarrow\infty\) we have \(\tilde{K}_{\pm}\to 2|\Omega|K_{\pm}\) so that a Taylor expansion would yield \[E_{\pm}^{\left(\pm\right)}\simeq\pm 1/\sqrt{\delta}+O\left(1/\Omega^{2}\right)=\pm E_{p}/\sqrt{\epsilon}\Rightarrow|E_{\pm}^{\left(\pm\right)}|=E_{p}/\sqrt{\epsilon}. \tag{34}\] This would suggest that, as long as the rainbow gravity model is at work, the rainbow parameter \(\epsilon\) should satisfy \(\epsilon\geq 1\). This would, in turn, emphasize that the Planck energy \(E_{p}\) is the maximum possible probe particle and anti-particle energy. To observe the effect of rainbow gravity, we plot in Figures 1(a) and 1(b) the energy levels without (\(\delta=0\)) and with rainbow gravity (\(\delta=0.1\)), respectively, for \(n_{r}=0,1,2,3\) and \(\ell=0\) (i.e., \(s\)-states) at different values of the vorticity parameter \(\Omega\). Moreover, to observe the effect of the rainbow parameter \(\delta\), we plot in Figure 1(c) the same energy levels for \(\delta=1\). A comparison between Figures 1(a) and 1(b) clearly documents that rainbow gravity puts an upper bound on the energies, \(\left|E_{\pm}^{\left(\pm\right)}\right|_{\max.}=1/\sqrt{\delta}=1/\sqrt{0.1}\simeq 3.16\), as given in (34). A comparison between Figures 1(b), at \(\delta=0.1\), and 1(c), at \(\delta=1\), shows that the energy levels are pushed closer to the \(E=0\) value and the energy gap about \(E=0\) gets narrower for larger \(\delta\) values. It is also obvious that \(s\)-states, \(\ell=0\), are symmetric about the \(E=0\) value (as shown in 1(a), 1(b) and 1(c)). This symmetry breaks when different magnetic quantum numbers \(\ell\neq 0\) are considered. This is documented in Figures 2(a), 2(b), and 2(c). Figure 2(a) shows the energies for \(n_{r}=0\), \(\ell=0,\pm 1,\pm 2,\pm 3\), at \(\delta=0.1\) and different values of the vorticity parameter \(\Omega\). One observes that the spacetime associated degeneracies (STADs) discussed above become active and break the symmetry of the energy levels about the \(E=0\) value. To clearly show how STADs (discussed following (27)) work, we plot in 2(b) and 2(c), for \(\Omega=+0.1\) and \(\Omega=-0.1\), respectively, using \(n_{r}=0\), \(\ell=0,1,2,3\) for different \(\delta\) values. That is, for \(\Omega=+0.1\) we have to use (32), where \(E_{+}^{(+)}=+\tilde{K}_{+}/\sqrt{\tilde{K}_{+}^{2}\delta+1}\), \(K_{+}=\left(2n_{r}+2|\tilde{\ell}|+1\right);\;\forall\tilde{\ell}=|\tilde{\ell}|\), and \(E_{-}^{(+)}=-\tilde{K}_{-}/\sqrt{\tilde{K}_{-}^{2}\delta+1}\), \(K_{-}=\left(2n_{r}+1\right);\;\forall\tilde{\ell}=|\tilde{\ell}|\) (as is obvious in Fig. 2(b)). Whereas, for \(\Omega=-0.1\) we have to use (33), where \(E_{+}^{(-)}=+\tilde{K}_{-}/\sqrt{\tilde{K}_{-}^{2}\delta+1}\) and \(E_{-}^{(-)}=-\tilde{K}_{+}/\sqrt{\tilde{K}_{+}^{2}\delta+1}\) (as is obvious in Fig. 2(c)). 
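To make the STAD bookkeeping behind Figure 2 concrete, the short sketch below (plain Python; the quantum-number ranges are illustrative) tabulates \(K_{+}\) of Eq. (19) and groups the states sharing the same value:

```python
from collections import defaultdict

def K_plus(n_r, lt):
    # K_+ = 2 n_r + |l~| + l~ + 1, cf. Eq. (19)
    return 2 * n_r + abs(lt) + lt + 1

groups = defaultdict(list)
for n_r in range(3):
    for lt in range(-3, 4):
        groups[K_plus(n_r, lt)].append((n_r, lt))

for K, states in sorted(groups.items()):
    print(f"K_+ = {K}: {states}")
# every state with l~ = -|l~| collapses onto K_+ = 2 n_r + 1, i.e., the l~ = 0 value
```

The output reproduces STAD item 1 above: all \(\tilde{\ell}=-|\tilde{\ell}|\) states merge with the \(\ell=0\) states of the same \(n_{r}\).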
### B. Rainbow functions \(g_{{}_{0}}\left(y\right)=1\), \(g_{{}_{1}}\left(y\right)=\sqrt{1-\epsilon|E|/E_{p}}\)

With the rainbow functions \(g_{{}_{0}}\left(y\right)=1\), \(g_{{}_{1}}\left(y\right)=\sqrt{1-2\beta\left|E\right|}\), where \(\beta=\epsilon/2E_{p}\) and \(\left|E\right|=\pm E_{\pm}\), equations (24) and (25), respectively, imply that \[\frac{E_{\pm}^{(+)\,2}}{1-2\beta\left|E_{\pm}^{(+)}\right|}=\tilde{K}_{\pm}^{2}\Rightarrow E_{\pm}^{(+)\,2}+2\beta\tilde{K}_{\pm}^{2}\left|E_{\pm}^{(+)}\right|=\tilde{K}_{\pm}^{\,2} \tag{35}\] and \[\frac{E_{\pm}^{(-)\,2}}{1-2\beta\left|E_{\pm}^{(-)}\right|}=\tilde{K}_{\mp}^{2}\Rightarrow E_{\pm}^{(-)\,2}+2\beta\tilde{K}_{\mp}^{2}\left|E_{\pm}^{(-)}\right|=\tilde{K}_{\mp}^{\,2}. \tag{36}\] Under such settings, one would use (35), with \(\left|E\right|=\pm E_{\pm}^{(+)}=\pm E_{\pm}^{(-)}\), to obtain \[\left\{\begin{array}{l}\left(E_{+}^{(+)}+\beta\tilde{K}_{\pm}^{2}\right)^{2}=\tilde{K}_{\pm}^{2}+\beta^{2}\tilde{K}_{\pm}^{4}\\ \left(E_{-}^{(+)}-\beta\tilde{K}_{\pm}^{2}\right)^{2}=\tilde{K}_{\pm}^{2}+\beta^{2}\tilde{K}_{\pm}^{4}\end{array}\right\}\Longrightarrow E_{\pm}^{(+)}=\mp\beta\tilde{K}_{\pm}^{2}\pm\sqrt{\tilde{K}_{\pm}^{2}+\beta^{2}\tilde{K}_{\pm}^{4}} \tag{37}\] and use (36) to obtain \[\left\{\begin{array}{l}\left(E_{+}^{(-)}+\beta\tilde{K}_{\mp}^{2}\right)^{2}=\tilde{K}_{\mp}^{2}+\beta^{2}\tilde{K}_{\mp}^{4}\\ \left(E_{-}^{(-)}-\beta\tilde{K}_{\mp}^{2}\right)^{2}=\tilde{K}_{\mp}^{2}+\beta^{2}\tilde{K}_{\mp}^{4}\end{array}\right\}\Longrightarrow E_{\pm}^{(-)}=\mp\beta\tilde{K}_{\mp}^{2}\pm\sqrt{\tilde{K}_{\mp}^{2}+\beta^{2}\tilde{K}_{\mp}^{4}}. \tag{38}\] Clearly, the symmetry about the \(E=0\) value is unbroken as a result of the structure of the rainbow functions. This may clearly be observed in Figures 3(a), 3(b), and 3(c). Moreover, for \(\left|\Omega\right|\rightarrow\infty\) a Taylor expansion would yield \[E_{+}^{(+)}\Big{|}_{\left|\Omega\right|\rightarrow\infty}\simeq\frac{1}{2\beta}+O\left(\frac{1}{\Omega^{2}}\right)\simeq\frac{E_{p}}{\epsilon},\text{ and }E_{-}^{(+)}\Big{|}_{\left|\Omega\right|\rightarrow\infty}\sim-\frac{1}{2\beta}+O\left(\frac{1}{\Omega^{2}}\right) \tag{39}\] for (35). Similarly, \[E_{+}^{(-)}\Big{|}_{\left|\Omega\right|\rightarrow\infty}\simeq\frac{1}{2\beta}+O\left(\frac{1}{\Omega^{2}}\right)\simeq\frac{E_{p}}{\epsilon},\text{ and }E_{-}^{(-)}\Big{|}_{\left|\Omega\right|\rightarrow\infty}\sim-\frac{1}{2\beta}+O\left(\frac{1}{\Omega^{2}}\right) \tag{40}\] for (36). In Figure 3(a) our \(2\beta=0.1\Rightarrow|E_{\pm}^{(\pm)}|_{max}=10\) and in 3(b) our \(2\beta=1\Rightarrow|E_{\pm}^{(\pm)}|_{max}=1\) are obvious energy bounds. Hence our probe KG-particles' and anti-particles' energies comply with the DSR/rainbow gravity model in such a way that \(\left|E_{\pm}^{(\pm)}\right|\leq E_{p}\), provided that the rainbow parameter satisfies the relation \(\epsilon\geq 1\). 
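A quick numerical cross-check of the solution (37) and the limits (39)-(40) can be done as follows (plain Python; the \(\tilde{K}\) values are illustrative):

```python
import numpy as np

def E_linear_rainbow(Kt, beta):
    """Positive branch of Eq. (37): E = -beta K~^2 + sqrt(K~^2 + beta^2 K~^4)."""
    return -beta * Kt**2 + np.sqrt(Kt**2 + beta**2 * Kt**4)

beta = 0.05  # 2*beta = 0.1, so the bound is 1/(2*beta) = 10
for Kt in [1, 10, 100, 1000]:
    print(Kt, E_linear_rainbow(Kt, beta))
# the energies saturate at 1/(2*beta) = E_p/epsilon as K~ (i.e., |Omega|) grows
```

One may verify the same saturation for the anti-particle branch, which mirrors the particle one about \(E=0\).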
### C. Rainbow functions \(g_{{}_{0}}\left(y\right)=\left(e^{\epsilon y}-1\right)/\epsilon y\) and \(g_{{}_{1}}\left(y\right)=1\), \(y=\left|E\right|/E_{p}\)

Under such a rainbow functions setting, equations (24) and (25) would, respectively, yield \[\left(\frac{e^{\zeta\left|E\right|}-1}{\zeta\left|E\right|}\right)E_{\pm}^{(+)}=\pm\tilde{K}_{\pm}\Rightarrow\left(e^{\zeta\left|E\right|}-1\right)E_{\pm}^{(+)}=\pm\zeta\left|E\right|\tilde{K}_{\pm}\Rightarrow\left(e^{\pm\zeta E_{\pm}^{(+)}}-1\right)=\zeta\tilde{K}_{\pm}, \tag{41}\] \[\left(\frac{e^{\zeta\left|E\right|}-1}{\zeta\left|E\right|}\right)E_{\pm}^{(-)}=\pm\tilde{K}_{\mp}\Rightarrow\left(e^{\zeta\left|E\right|}-1\right)E_{\pm}^{(-)}=\pm\zeta\left|E\right|\tilde{K}_{\mp}\Rightarrow\left(e^{\pm\zeta E_{\pm}^{(-)}}-1\right)=\zeta\tilde{K}_{\mp}, \tag{42}\] where \(\zeta=\epsilon/E_{p}\). Consequently, (41), with \(\left|E\right|=\pm E_{\pm}^{(+)}=\pm E_{\pm}^{(-)}\), would read \[\left(e^{\pm\zeta E_{\pm}^{(+)}}-1\right)=\zeta\tilde{K}_{\pm}\Rightarrow E_{\pm}^{(+)}=\pm\frac{1}{\zeta}\ln\left(1+\zeta\tilde{K}_{\pm}\right), \tag{43}\] and (42) reads \[\left(e^{\pm\zeta E_{\pm}^{(-)}}-1\right)=\zeta\tilde{K}_{\mp}\Rightarrow E_{\pm}^{(-)}=\pm\frac{1}{\zeta}\ln\left(1+\zeta\tilde{K}_{\mp}\right). \tag{44}\] One observes that such energy levels are unbounded and may grow indefinitely as a result of the logarithmic term (which grows with the increasing frequency \(\left|\Omega\right|\), in (26), of the KG-oscillator at hand). As such, one should not expect the corresponding energies \(\left|E_{\pm}^{(\pm)}\right|\) to be less than the Planck energy \(E_{p}\). Such a rainbow function structure therefore violates the Planck energy scale invariance. However, the rainbow function \(g_{{}_{0}}\left(y\right)=\left(e^{\epsilon y}-1\right)/\epsilon y\to 1\) as \(\epsilon\to 0\). Consequently, \(E_{\pm}^{(+)}=\pm\tilde{K}_{\pm}\) and \(E_{\pm}^{(-)}=\pm\tilde{K}_{\mp}\) when rainbow gravity is switched off (i.e., for KG-oscillators in Som-Raychaudhuri cosmic string spacetime, without the effect of rainbow gravity). Nevertheless, with the effect of rainbow gravity, a Taylor expansion around \(\zeta=0\) for (43) and (44) would, respectively, imply \[E_{\pm}^{(+)}\simeq\pm\left[\tilde{K}_{\pm}-\frac{1}{2}\zeta\tilde{K}_{\pm}^{\,2}+O\left(\zeta^{2}\right)\right]\Rightarrow\left|E_{\pm}^{(+)}\right|<\tilde{K}_{\pm}, \tag{45}\] and \[E_{\pm}^{(-)}\simeq\pm\left[\tilde{K}_{\mp}-\frac{1}{2}\zeta\tilde{K}_{\mp}^{\,2}+O\left(\zeta^{2}\right)\right]\Rightarrow\left|E_{\pm}^{(-)}\right|<\tilde{K}_{\mp}. \tag{46}\] Moreover, the logarithmic nature of the solutions (43) and (44) (manifested by the exponential structure of the rainbow function here), as well as Figures 4(a) and 4(b), suggests that there are no evident upper limits for the energies of both particles and anti-particles toward the Planck energy scale \(E_{p}\).

### D. Rainbow functions \(g_{{}_{0}}\left(y\right)=\left(1-\epsilon y\right)^{-1},\,g_{{}_{1}}\left(y\right)=1\)

We have reported above that for the case \(g_{{}_{0}}\left(y\right)=g_{{}_{1}}\left(y\right)\), equations (24) and (25) imply no rainbow gravity effect on the spectroscopic structure of the massless KG-oscillators at hand. Therefore, the rainbow functions \(g_{{}_{0}}\left(y\right)=g_{{}_{1}}\left(y\right)=\left(1-\epsilon y\right)^{-1}\), used to resolve the horizon problem [5; 21], would have no effect on the spectra. 
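The unbounded, logarithmic growth of (43) is easy to exhibit numerically (plain Python; the \(\zeta\) and \(\tilde{K}\) values are illustrative):

```python
import numpy as np

zeta = 0.1  # zeta = epsilon / E_p
for Kt in [1e1, 1e3, 1e5, 1e7]:
    E = np.log(1 + zeta * Kt) / zeta  # Eq. (43)
    print(Kt, E)
# E keeps growing (logarithmically in K~): no Planck-scale ceiling emerges
```

This contrasts sharply with the bounded spectra of subsections 2-A and 2-B.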
However, it could be interesting to report that the rainbow functions \(g_{{}_{0}}\left(y\right)=\left(1-\sigma\left|E\right|\right)^{-1};\,\sigma=\epsilon/E_{p},\,\,g_{{}_{1}}\left(y\right)=1\) yield results that comply with the rainbow gravity model and preserve the invariance of the Planck's energy scale \(E_{p}\). Such a rainbow functions pair would yield \[E_{\pm}^{\left(+\right)}=\pm\left(1-\sigma\left|E\right|\right)\tilde{K}_{\pm}\Longrightarrow\left\{\begin{array}{l}E_{+}^{\left(+\right)}=+\left(1-\sigma E_{+}^{\left(+\right)}\right)\tilde{K}_{+}\\ E_{-}^{\left(+\right)}=-\left(1+\sigma E_{-}^{\left(+\right)}\right)\tilde{K}_{-}\end{array}\right\}\Longrightarrow E_{\pm}^{\left(+\right)}=\pm\frac{\tilde{K}_{\pm}}{1+\sigma\tilde{K}_{\pm}}, \tag{47}\] by (24) and \[E_{\pm}^{\left(-\right)}=\pm\left(1-\sigma\left|E\right|\right)\tilde{K}_{\mp}\Longrightarrow\left\{\begin{array}{l}E_{+}^{\left(-\right)}=+\left(1-\sigma E_{+}^{\left(-\right)}\right)\tilde{K}_{-}\\ E_{-}^{\left(-\right)}=-\left(1+\sigma E_{-}^{\left(-\right)}\right)\tilde{K}_{+}\end{array}\right\}\Longrightarrow E_{\pm}^{\left(-\right)}=\pm\frac{\tilde{K}_{\mp}}{1+\sigma\tilde{K}_{\mp}} \tag{48}\] by (25). One should notice that for \(\left|\Omega\right|\rightarrow\infty\) the energies \(\left|E_{\pm}^{\left(\pm\right)}\right|\sim 1/\sigma+O\left(1/\Omega^{2}\right)=E_{p}/\epsilon\). Again this result shows that the rainbow parameter should satisfy \(\epsilon\geq 1\). In Figures 5(a) and 5(b) the energy levels are shown for \(\sigma=1/5\) and \(\sigma=1/2\) for different values of the vorticity parameter \(\Omega\). Figures 5(a) and 5(b) clearly show that for \(\sigma=1/5\) the maximum energy is \(\left|E_{\pm}^{\left(\pm\right)}\right|\sim 5\) and for \(\sigma=1/2\) the maximum energy is \(\left|E_{\pm}^{\left(\pm\right)}\right|\sim 2\). Moreover, the energy gap narrows as \(\sigma\) increases. Yet an expansion about \(\sigma=0\) would yield \[\left|E_{\pm}^{\left(+\right)}\right|=\tilde{K}_{\pm}-\sigma\tilde{K}_{\pm}^{\,2}+O\left(\sigma^{2}\right)<\left|E_{\pm}^{\left(+\right)}\right|_{\sigma=0}=\tilde{K}_{\pm}, \tag{49}\] and \[\left|E_{\pm}^{\left(-\right)}\right|=\tilde{K}_{\mp}-\sigma\tilde{K}_{\mp}^{\,2}+O\left(\sigma^{2}\right)<\left|E_{\pm}^{\left(-\right)}\right|_{\sigma=0}=\tilde{K}_{\mp}. \tag{50}\] Interestingly, this new experimental pair reproduces the same rainbow gravity effects as those used in loop quantum gravity [54; 55] (i.e., \(g_{{}_{0}}\left(y\right)=1\), \(g_{{}_{1}}\left(y\right)=\sqrt{1-\epsilon|E|^{j}/E_{p}^{j}}\); \(j=1,2\)). That is, it preserves the invariance of the Planck's energy scale \(E_{p}\) as well as the symmetrization of the energy levels about \(E=0\) for both KG-particles and anti-particles. 
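The saturation of the new pair's spectrum (47)-(48) at \(1/\sigma=E_{p}/\epsilon\), visible in Figures 5(a) and 5(b), can be checked with a few lines (plain Python; the values are illustrative):

```python
def E_new_pair(Kt, sigma):
    """Eq. (47)/(48): |E| = K~ / (1 + sigma K~) for g0 = (1 - sigma|E|)^(-1), g1 = 1."""
    return Kt / (1 + sigma * Kt)

for sigma in (0.2, 0.5):
    print(sigma, [E_new_pair(Kt, sigma) for Kt in (1, 10, 100, 1e4)], "bound:", 1 / sigma)
# sigma = 1/5 caps |E| at 5 and sigma = 1/2 at 2, matching the Figure 5 discussion
```

Since both branches are given by \(E_{\pm}=\pm\tilde{K}/(1+\sigma\tilde{K})\), the symmetry of the spectrum about \(E=0\) is manifest.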
## III Massless KG-particles in cosmic string spacetime and magnetic fields in a rainbow gravity introduced by \(g_{{}_{0}}\left(y\right)=\left(1-\epsilon y\right)^{-1},\)\(g_{{}_{1}}\left(y\right)=1\) In this section, we wish to show that the effects of rainbow gravity introduced by the new rainbow functions pair \(g_{{}_{0}}\left(y\right)=\left(1-\epsilon y\right)^{-1},\)\(g_{{}_{1}}\left(y\right)=1\), are as good as those introduced by the loop quantum gravity [54; 55] (i.e., \(g_{{}_{0}}\left(y\right)=1\), \(g_{{}_{1}}\left(y\right)=\sqrt{1-\epsilon\left|E\right|^{j}/E_{p}^{j}};\)\(j=1,2\)), in the sense that this pair preserves the invariance of the Planck's energy scale \(E_{p}\) and the symmetry of the energy levels about \(E=0\) for both KG-particles and anti-particles. Hereby, we consider massless KG-particles in cosmic string rainbow gravity (i.e., \(\Omega=0\), no vorticity) and in magnetic fields so that the KG-equation would read \[\frac{1}{\sqrt{-g}}\left(\partial_{\mu}-ieA_{\mu}\right)\sqrt{-g}g^{\mu\nu}\left(\partial_{\nu}-ieA_{\nu}\right)\Psi=0, \tag{51}\] where we set \(\Omega=0\) in the Som-Raychaudhuri cosmic string rainbow gravity spacetime metric (3) so that it reduces into just a cosmic string rainbow gravity spacetime metric. In a straightforward manner, one may follow the same steps as in section 2, with \(A_{\mu}=\left(0,0,A_{\varphi},0\right)\), and obtain \[\left\{\mathcal{E}^{2}+g_{{}_{1}}\left(y\right)^{2}\left[\partial_{r}^{2}+\frac{1}{r}\partial_{r}-\frac{\left(\ell-eA_{\varphi}\right)^{2}}{\alpha^{2}r^{2}}\right]\right\}\Psi\left(r\right)=0, \tag{52}\] with \[\mathcal{E}^{2}=g_{{}_{0}}\left(y\right)^{2}E^{2}-g_{{}_{1}}\left(y\right)^{2}k_{{}_{z}}^{2}. \tag{53}\] At this point, one should notice that \(A_{\varphi}=\frac{1}{2}B_{{}_{0}}r^{2}\) would result in a non-uniform magnetic field \(\mathbf{B=\nabla\times A}=\frac{3}{2}B_{{}_{0}}r\,\hat{z}\), whereas \(A_{\varphi}=\frac{1}{2}B_{{}_{0}}r\) results in a uniform magnetic field \(\mathbf{B=\nabla\times A}=B_{{}_{0}}\,\hat{z}\). Whilst the former, in (52), would yield KG-oscillators \[\left\{\partial_{r}^{2}-\frac{\left(\tilde{\ell}^{2}-1/4\right)}{r^{2}}-\frac{1}{4}\tilde{B}^{2}r^{2}+\Lambda_{Osc.}\right\}R\left(r\right)=0;\ \Lambda_{Osc.}=\frac{\mathcal{E}^{2}+g_{{}_{1}}\left(y\right)^{2}\tilde{\ell}\tilde{B}}{g_{{}_{1}}\left(y\right)^{2}},\,\tilde{B}=\frac{eB_{{}_{0}}}{\alpha}, \tag{54}\] the latter yields KG-Coulombic particles \[\left\{\partial_{r}^{2}-\frac{\left(\tilde{\ell}^{2}-1/4\right)}{r^{2}}+\frac{\tilde{\ell}\tilde{B}}{r}+\Lambda_{Coul.}\right\}R\left(r\right)=0;\ \Lambda_{Coul.}=\frac{\mathcal{E}^{2}-g_{{}_{1}}\left(y\right)^{2}\tilde{B}^{2}/4}{g_{{}_{1}}\left(y\right)^{2}}. \tag{55}\] Both are textbook Schrödinger-like problems and their exact solutions are given by \[\Lambda_{Osc.}=\left|\tilde{B}\right|\left(2n_{r}+\left|\tilde{\ell}\right|+1\right),\ \ \text{and}\ \Lambda_{Coul.}=-\frac{\tilde{\ell}^{2}\tilde{B}^{2}}{4\left(n_{r}+\left|\tilde{\ell}\right|+1/2\right)^{2}}. 
\tag{56}\] Under such settings, equation (54) would, for \(g_{{}_{0}}\left(y\right)=\left(1-\sigma\left|E\right|\right)^{-1},\)\(\sigma=\epsilon/E_{p},\ g_{{}_{1}}\left(y\right)=1\), yield \[\frac{E_{osc.}^{2}}{\left(1-\sigma\left|E_{osc.}\right|\right)^{2}}=\mathcal{B}_{n_{r},\ell},\ \mathcal{B}_{n_{r},\ell}=\left|\tilde{B}\right|\left(2n_{r}+\left|\tilde{\ell}\right|+1\right)-\tilde{\ell}\tilde{B}+k_{z}^{2}, \tag{57}\] and equation (55) gives \[\frac{E_{Coul.}^{2}}{\left(1-\sigma\left|E_{Coul.}\right|\right)^{2}}=\mathcal{C}_{n_{r},\ell},\;\mathcal{C}_{n_{r},\ell}=\frac{\tilde{B}^{2}}{4}\left(1-\frac{\tilde{\ell}^{2}}{\left(n_{r}+\left|\tilde{\ell}\right|+1/2\right)^{2}}\right)+k_{z}^{2}. \tag{58}\] Obviously, the two results can be rewritten as \[E^{2}\left(1-\sigma^{2}\mathcal{K}\right)+2\sigma\mathcal{K}\left|E\right|-\mathcal{K}=0\Leftrightarrow E_{\pm}^{2}\left(1-\sigma^{2}\mathcal{K}\right)\pm 2\sigma\mathcal{K}E_{\pm}-\mathcal{K}=0;\,E_{\pm}=\pm\left|E\right|, \tag{59}\] where \(\mathcal{K}=\mathcal{B}_{n_{r},\ell}\) for KG-oscillators and \(\mathcal{K}=\mathcal{C}_{n_{r},\ell}\) for KG-Coulombic particles. Such a quadratic equation admits a solution of the form \[E_{\pm}=\frac{\mp\sigma\mathcal{K}\pm\sqrt{\mathcal{K}}}{1-\sigma^{2}\mathcal{K}}. \tag{60}\] Consequently, an expansion about \(\sigma=0\) would yield \(\left|E_{\pm}\right|\approx\sqrt{\mathcal{K}}-\sigma\mathcal{K}+O\left(\sigma^{2}\right)<\sqrt{\mathcal{K}}\) (where \(\sqrt{\mathcal{K}}\) is the exact eigenvalue when rainbow gravity is switched off at \(\epsilon=0\)). Moreover, in the limit \(\tilde{B}\rightarrow\infty\) we obtain \(\left|E_{\pm}\right|=1/\sigma=E_{p}/\epsilon\Rightarrow\epsilon\geq 1\). In Figures 6(a) and 6(b), we show the KG-oscillators' and KG-Coulombic particles' energies, respectively, at different \(\left|eB_{\circ}\right|\) values. Notably, at \(\mathcal{K}=1/\sigma^{2}\) in (60), energy states fly away and disappear from the spectrum. We have, therefore, estimated and avoided such singularities while presenting our results in Figure 6 (i.e., we have used \(\left|eB_{\circ}\right|\geq 1.6\) for 6(a) and \(\left|eB_{\circ}\right|\geq 1.8\) for 6(b)). In both cases, an obvious convergence to \(\left|E_{\pm}\right|=1/\sigma=2\) (for \(\sigma=0.5\) used in the plots) is observed as \(\left|eB_{\circ}\right|\gg 1.6\) in 6(a) and as \(\left|eB_{\circ}\right|\gg 1.8\) in 6(b). ## IV Concluding remarks In the current study, we have considered massless KG-oscillators in the Som-Raychaudhuri cosmic string rainbow gravity spacetime background. In the light of our observations above, and within the fine tuning of the rainbow functions, we give our concluding remarks as follows. Among the rainbow functions we have used, we have observed that the family of rainbow functions \(g_{{}_{0}}\left(y\right)=1,\)\(g_{{}_{1}}\left(y\right)=\sqrt{1-\epsilon y^{j}};\)\(j=1,2\) (used in loop quantum gravity) has fully delivered what rainbow gravity is designed for. Such rainbow functions ensure that the energy levels (for massless KG-particles and anti-particles alike) are between \(\pm mc^{2}\) and \(\pm E_{p}\) so that \(mc^{2}\leq|E|\leq E_{p}\). That is, the energies of the particles \(E_{+}\) are within the limits \(mc^{2}\leq E_{+}\leq E_{p}\) and those for the anti-particles \(E_{-}\) are within \(-E_{p}\leq E_{-}\leq-mc^{2}\). 
This would secure the invariance of the Planck's energy scale \(E_{p}\) and consequently would justify our fine tuning of the rainbow functions (i.e., \(y=E/E_{p}\) should be replaced by \(y=|E|/E_{p}\)). The rainbow functions pair \(g_{{}_{0}}\left(y\right)=g_{{}_{1}}\left(y\right)=\left(1-\epsilon y\right)^{-1}\) (used in the horizon problem) had no effect on the energy levels of the massless KG-oscillators, as clearly suggested by equations (24) and (25). However, the pair \(g_{{}_{0}}\left(y\right)=\left(e^{\epsilon y}-1\right)/\epsilon y,\)\(g_{{}_{1}}\left(y\right)=1\) (a gamma-ray bursts byproduct) has the effect of slowing down the migration of the energy levels towards infinity, as may obviously be obtained by a simple comparison between Figs. 1(a), 4(a), and 4(b). This is attributed to the exponential nature of this pair, which manifestly yields a logarithmic nature of the energies as reported in (43) and (44). In connection with the new rainbow functions pair \(g_{{}_{0}}\left(y\right)=\left(1-\epsilon y\right)^{-1},\)\(g_{{}_{1}}\left(y\right)=1\), we have to emphasize that this is an experimental (metaphorically speaking) toy-model that has been accidentally discovered. This toy-model has shown consistent performance in terms of enforcing the energies of the particles \(E_{+}\) to be contained within \(mc^{2}\leq E_{+}\leq E_{p}\) and those of the anti-particles \(E_{-}\) within \(-E_{p}\leq E_{-}\leq-mc^{2}\). Hence, it secures the invariance of the Planck's energy scale \(E_{p}\). This is observed not only for the massless KG-oscillators in Som-Raychaudhuri cosmic string rainbow gravity spacetime (documented in section 2) but also for massless KG-particles in cosmic string rainbow gravity spacetime and in non-uniform and uniform magnetic fields (documented in section 3). Such encouraging and interesting performance should, in our opinion, stimulate thorough investigations into the validity and reliability of the proposed experimental rainbow functions toy-model, which readily lies far beyond the scope of the current study. To the best of our knowledge, the current study did not appear elsewhere. **Data availability statement:** The authors declare that the data supporting the findings of this study are available within the paper. **Declaration of interest:** The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
2303.00942
Meta-information-aware Dual-path Transformer for Differential Diagnosis of Multi-type Pancreatic Lesions in Multi-phase CT
Pancreatic cancer is one of the leading causes of cancer-related death. Accurate detection, segmentation, and differential diagnosis of the full taxonomy of pancreatic lesions, i.e., normal, seven major types of lesions, and other lesions, is critical to aid the clinical decision-making of patient management and treatment. However, existing works focus on segmentation and classification for very specific lesion types (PDAC) or groups. Moreover, none of the previous work considers using lesion prevalence-related non-imaging patient information to assist the differential diagnosis. To this end, we develop a meta-information-aware dual-path transformer and exploit the feasibility of classification and segmentation of the full taxonomy of pancreatic lesions. Specifically, the proposed method consists of a CNN-based segmentation path (S-path) and a transformer-based classification path (C-path). The S-path focuses on initial feature extraction by semantic segmentation using a UNet-based network. The C-path utilizes both the extracted features and meta-information for patient-level classification based on stacks of dual-path transformer blocks that enhance the modeling of global contextual information. A large-scale multi-phase CT dataset of 3,096 patients with pathology-confirmed pancreatic lesion class labels, voxel-wise manual annotations of lesions from radiologists, and patient meta-information, was collected for training and evaluations. Our results show that our method can enable accurate classification and segmentation of the full taxonomy of pancreatic lesions, approaching the accuracy of the radiologist's report and significantly outperforming previous baselines. Results also show that adding the common meta-information, i.e., gender and age, can boost the model's performance, thus demonstrating the importance of meta-information for aiding pancreatic disease diagnosis.
Bo Zhou, Yingda Xia, Jiawen Yao, Le Lu, Jingren Zhou, Chi Liu, James S. Duncan, Ling Zhang
2023-03-02T03:34:28Z
http://arxiv.org/abs/2303.00942v1
# Meta-information-aware Dual-path Transformer for Differential Diagnosis of Multi-type Pancreatic Lesions in Multi-phase CT ###### Abstract Pancreatic cancer is one of the leading causes of cancer-related death. Accurate detection, segmentation, and differential diagnosis of the full taxonomy of pancreatic lesions, i.e., normal, seven major types of lesions, and "other" lesions, is critical to aid the clinical decision-making of patient management and treatment. However, existing work focuses on segmentation and classification for very specific lesion types (PDAC) or groups. Moreover, none of the previous work considers using lesion prevalence-related non-imaging patient information to assist the differential diagnosis. To this end, we develop a meta-information-aware dual-path transformer and exploit the feasibility of classification and segmentation of the full taxonomy of pancreatic lesions. Specifically, the proposed method consists of a CNN-based segmentation path (S-path) and a transformer-based classification path (C-path). The S-path focuses on initial feature extraction by semantic segmentation using a UNet-based network. The C-path utilizes both the extracted features and meta-information for patient-level classification based on stacks of dual-path transformer blocks that enhance the modeling of global contextual information. A large-scale multi-phase CT dataset of 3,096 patients with pathology-confirmed pancreatic lesion class labels, voxel-wise manual annotations of lesions from radiologists, and patient meta-information was collected for training and evaluations. Our results show that our method can enable accurate classification and segmentation of the full taxonomy of pancreatic lesions, approaching the accuracy of the radiologist's report and significantly outperforming previous baselines. Results also show that adding the common meta-information, i.e., gender and age, can boost the model's performance, thus demonstrating the importance of meta-information for aiding pancreatic disease diagnosis. Keywords: Pancreatic Lesion, Dual-path Transformer, Meta-information Aware, Differential Diagnosis. ## 1 Introduction Pancreatic cancer is the third leading cause of death among all cancers in the United States, and has the poorest prognosis among all solid malignancies with a 5-year survival rate of about 10% [4]. Early diagnosis and treatment are crucial, which can potentially increase the 5-year survival rate to about 50% [3]. In clinical practice, pancreatic patient management is based on the pancreatic lesion type and the potential of the lesion to become invasive cancer. However, pancreatic lesions are often hard to reach by biopsy needle because of the deep location in the abdomen and the complex structure of surrounding organs and vessels. To this end, accurate imaging-based differential diagnosis of pancreatic lesion type is critical to aid the clinical decision-making of patient management and treatment, e.g., surgery, monitoring, or discharge [11, 18]. Multi-phase Computed Tomography (CT) is the first-line imaging tool for pancreatic disease diagnosis. However, accurate differential diagnosis of pancreatic lesions is very challenging because 1) the same type of lesion may have different textures, shapes, and contrast patterns across multi-phase CT, and 2) pancreatic ductal adenocarcinoma (PDAC) accounts for the majority of cases, e.g., \(>\)60%, in the pathology-confirmed patient population, leading to a long-tail problem. 
Most related work in automatic pancreatic CT image analysis focuses on segmentation of certain types of pancreatic lesions, e.g., PDAC and pancreatic neuroendocrine tumor (PNET). UNet-based detection-by-segmentation approaches have been extensively studied for the detection of PDAC [16, 17, 19, 20, 22] and PNET [21]. Shape-induced information, e.g., the tubular structure of a dilated duct, has been exploited to improve PDAC detection [9, 13]. A graph-based classification network has been proposed for pancreatic patient risk stratification and management [18]. There are also recent attempts at the detection and classification of PDAC and nonPDAC using non-contrast CT [14]. However, none of the previous work has yet attempted to address the key clinical need for detection and classification of the full taxonomy of pancreatic lesions, i.e., PDAC, PNET, solid pseudopapillary tumor (SPT), intraductal papillary mucinous lesion (IPMN), mucinous cystic lesion (MCN), chronic pancreatitis (CP), serous cystic lesion (SCN) [11], and other rare types that can be further classified into other benign and other malignant. Furthermore, no methods consider adding lesion prevalence-related non-imaging patient information to aid the diagnosis. For example, based on epidemiological data, the incidence of MCN, SCN, and SPT in women is significantly higher than that in men, and MCN, SCN, and SPT have a higher prevalence in young-age, middle-age, and old-age females, respectively [7]. Integrating easily-accessible clinical patient meta-information, e.g., gender and age in the DICOM head, as classification feature inputs could potentially further improve the diagnosis accuracy without needing radiologists' manual input. To address these challenges and unmet needs, we propose a meta-information-aware dual-path transformer (MDPFormer) for classification and segmentation of the full taxonomy of pancreatic lesions, including normal, seven major types of pancreatic lesions, other malignant, and other benign. Motivated by the recent dual-path design of Mask Transformers [1, 12], the proposed MDPFormer consists of a segmentation path (S-path) and a classification path (C-path). The S-path focuses on initial feature extraction by semantic segmentation (normal, PDAC, and nonPDAC) using a CNN-based network. Then, the C-path utilizes both meta-information and the extracted features for individual-level classification (normal, PDAC, PNET, SPT, IPMN, MCN, CP, SCN, other benign, and other malignant) based on stacked dual-path transformer blocks that enhance the modeling of global contextual information. We curated a large-scale multi-phase CT dataset with pathology-confirmed pancreatic lesion class labels, voxel-wise manual annotations of lesions from radiologists, and patient meta-information. To our knowledge, this model is the most comprehensive to date, and is trained on a labeled dataset (2,372 patients' multi-phase CT scans) larger than that used in previous studies [10, 15, 18]. We independently test our method on a test set consisting of one whole year of 724 consecutive patients with pancreatic lesions from a high-volume pancreatic cancer center. The experimental results show that our method enables accurate classification and segmentation of the full taxonomy of pancreatic lesions, approaching the accuracy of radiologists' reports (by second-line senior readers via referring to current and previous imaging, patient history, and clinical meta-information). 
Our method without meta-information input demonstrates superior classification and segmentation performance as compared to previous baselines. Adding the meta-information-aware design further boosts the model's performance, demonstrating the importance of meta-information for improving pancreatic disease diagnosis. ## 2 Methods The general pipeline of our method is illustrated in Figure 1. Our pipeline consists of two stages. In the first stage, we use a localization UNet [2] to segment out the pancreas from the whole CT volume. The sub-volume containing the pancreas is then cropped out based on the segmentation mask. In the second stage, the resized sub-volume is inputted into the meta-information-aware dual-path transformer (MDPFormer) to segment and classify the pancreatic lesions. Details are elaborated in the following sections. **Meta-information-aware Dual-path Transformer.** For classification, we denote \(\mathcal{H}_{c}=\{0,1,2,\cdots,9\}\) for the ten patient/lesion classes, i.e., normal, PDAC, PNET, SPT, IPMN, MCN, CP, SCN, other benign, and other malignant. For segmentation, we group the last eight classes into nonPDAC and denote \(\mathcal{H}_{s}=\{0,1,2\}\) for the grouped three patient classes, i.e., normal, PDAC, and nonPDAC. The goal is to enable a more balanced initial class distribution for segmentation, while enabling feature extraction for the full pancreatic lesion taxonomy classification. The training set is thus formulated as \(S=\{(X_{i},M_{i},Y_{i},Z_{i})|i=1,2,\cdots,N\}\), where \(X_{i}\) is the cropped pancreas sub-volume of the i-th patient, \(M_{i}\) is the patient meta-information (gender and age), \(Y_{i}\in\mathcal{H}_{s}\) is the 3-class voxel-wise annotation with the same spatial size as \(X_{i}\), and \(Z_{i}\in\mathcal{H}_{c}\) is the 10-class volume-wise label confirmed by pathology or clinical records. The MDPFormer consists of two paths, including a segmentation path (S-Path) and a classification path (C-Path). The goal of the S-path is to extract rich feature representations of the lesion and pancreas at multiple scales by first segmenting the image into three general classes. Given an input \(X\) and a segmentation network \(G_{s}\), we have \[V_{s},F_{d1},F_{d2},F_{d3},F_{d4},F_{e1},F_{e2},F_{e3},F_{e4}=G_{s}(X) \tag{1}\] where \(V_{s}\) is the segmentation output, \(F_{d1},F_{d2},F_{d3},F_{d4}\) are the multi-scale features from the decoder, and \(F_{e1},F_{e2},F_{e3},F_{e4}\) are the multi-scale features from the encoder. Here, we deploy a 3D UNet [2] as the S-Path backbone network. Instead of directly using the decoder features as the C-path input, we combine the multi-scale encoder and decoder features by \[F_{c}=f_{c}(F_{d}*\sigma(F_{e}))+Q \tag{2}\] where \(\sigma\) is the sigmoid function for generating attention from the encoder features to guide decoder feature outputs, \(f_{c}\) is a convolution layer that further refines the S-Path feature output, and \(Q\) is the learnable position embedding feature that provides position representation to aid the transformer in the C-path. \(F_{c}\) is the extracted feature from the S-Path which is used as the C-Path input. The C-Path consists of four consecutive dual-path transformer blocks, where each block takes both the S-Path feature and the global memory feature as inputs. Denote \(D\) as the initial 1D memory feature, which consists of randomly initialized learnable parameters [12]. Figure 1: The overall pipeline and the detailed structure of our MDPFormer. In stage 1, the pancreas sub-volume is cropped based on a coarse pancreas segmentation mask. In stage 2, the resized pancreas sub-volume is inputted into the MDPFormer for segmentation (left path) and classification (right path). The design of the dual-path transformer block in the classification path is illustrated on the bottom right (grey box). 
We fuse the patient meta-information with the initial memory feature by \[F_{m}=[D,M], \tag{3}\] where \(D\) and \(M\) are concatenated in the length dimension and \(M\) is the meta-information, i.e., patient gender and age, in this work. In each block, we use a cross-attention module to fuse \(F_{m}\) and \(F_{c}\). First, we compute S-Path queries \(q^{s}\), keys \(k^{s}\), and values \(v^{s}\) by learnable linear projections of the S-Path feature \(F_{c}\) at each feature location. Similarly, queries \(q^{c}\), keys \(k^{c}\), and values \(v^{c}\) are computed from the C-Path global memory feature \(F_{m}\) with another set of projection matrices. The cross-attention output can then be calculated as follows: \[y^{c}=softmax(q^{c}\cdot k^{cs})v^{cs}, \tag{4}\] \[k^{cs}=\begin{bmatrix}k^{c}\\ k^{s}\end{bmatrix},v^{cs}=\begin{bmatrix}v^{c}\\ v^{s}\end{bmatrix}, \tag{5}\] where \([\cdot]\) is the concatenation operator in the channel dimension to fuse the values and keys from both paths. The output \(y^{c}\) is then passed to the next block as the \(F_{m}\) memory feature input. Using the C-path feature output from the last dual-path transformer block, we predict the final classification \(P\) with two fully connected layers and a softmax. The overall training objective can thus be formulated as: \[\mathcal{L}_{all}=\mathcal{L}_{s}(V_{s},Y)+\mathcal{L}_{c}(P,Z), \tag{6}\] where \(\mathcal{L}_{s}(\cdot)\) is the Dice loss function for segmentation training, and \(\mathcal{L}_{c}(\cdot)\) is the cross-entropy loss for classification training. ## 3 Experimental Results **Data Preparation.** We collected a large-scale multi-phase CT dataset consisting of 3,096 patients from a high-volume pancreatic cancer institution. Each multi-phase CT consists of noncontrast, arterial, and venous phase CT. The data were consecutively collected from 2015-2020. All the 724 patients scanned during 2020 were used as the independent test set, and the rest of the 2,372 patients scanned from 2015-2019 were used as the training set. The training set includes 707 normal, 1,088 PDAC, 110 PNET, 68 SPT, 162 IPMN, 32 MCN, 64 CP, 93 SCN, 48 other benign, and 24 other malignant cases. The test set includes 202 normal, 283 PDAC, 34 PNET, 25 SPT, 73 IPMN, 9 MCN, 29 CP, 38 SCN, 14 other benign, and 17 other malignant cases. All patients with lesions were confirmed by surgical pathology, while normal patients were confirmed by radiology reports and at least 2-year follow-ups. The annotation of lesions was performed collaboratively by an experienced radiologist (with 14 years of specialized experience in pancreatic imaging) and an auto-segmentation model on either arterial or venous phase CT, whichever with better lesion visibility. More specifically, the radiologist first annotates some data to train an auto-segmentation model to segment the remaining data, which is then checked/edited by the radiologist. The CT phases were registered using DEEDS [6]. The gender and age information were extracted from the DICOM head as meta-information inputs. The gender is converted to a binary value, i.e., 0 for female and 1 for male. The age is normalized between 0-1 by dividing the value by 100. 
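Before turning to the implementation details, a minimal sketch of the meta-aware memory and the cross-attention of Eqs. (3)-(5) may help fix ideas. This is our own illustrative PyTorch rendering, not the authors' released code: the feature dimension, the number of memory tokens, the \(1/\sqrt{d}\) attention scaling, and the linear meta-embedding layer are all assumptions, and the residual/FFN parts of a full transformer block are omitted.

```python
import torch
import torch.nn as nn

class DualPathBlock(nn.Module):
    """Sketch of one dual-path transformer block, cf. Eqs. (4)-(5)."""
    def __init__(self, dim=256):
        super().__init__()
        self.q_c = nn.Linear(dim, dim)       # queries from the memory path
        self.kv_c = nn.Linear(dim, 2 * dim)  # keys/values from the memory path
        self.kv_s = nn.Linear(dim, 2 * dim)  # keys/values from the S-path feature

    def forward(self, F_m, F_c):
        # F_m: (B, n_mem, dim) memory tokens; F_c: (B, N, dim) flattened S-path feature
        q = self.q_c(F_m)
        k_c, v_c = self.kv_c(F_m).chunk(2, dim=-1)
        k_s, v_s = self.kv_s(F_c).chunk(2, dim=-1)
        k = torch.cat([k_c, k_s], dim=1)  # Eq. (5): fuse keys/values from both paths
        v = torch.cat([v_c, v_s], dim=1)
        attn = torch.softmax(q @ k.transpose(1, 2) / q.shape[-1] ** 0.5, dim=-1)
        return attn @ v                   # Eq. (4): cross-attention output y^c

# Eq. (3): append a meta token (gender, normalized age) to the learnable memory
B, dim = 2, 256
D = torch.randn(8, dim).unsqueeze(0).expand(B, -1, -1)  # memory (nn.Parameter in practice)
meta = torch.tensor([[0.0, 0.62], [1.0, 0.45]])         # (gender, age/100)
M = nn.Linear(2, dim)(meta).unsqueeze(1)                # hypothetical meta embedding
F_m = torch.cat([D, M], dim=1)
y_c = DualPathBlock(dim)(F_m, torch.randn(B, 1024, dim))
print(y_c.shape)  # torch.Size([2, 9, 256])
```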
**Implementation Details.** All CT volumes were resampled into \(0.68\times 0.68\times 3.0\) mm spacing and normalized into zero mean and unit variance. In the training phase of MDPFormer, we cropped the foreground 3D bounding box of the pancreas region, randomly padded small margins on each dimension, and resized the sub-volume into \(160\times 256\times 40\) (Y\(\times\)X\(\times\)Z) for input. We deployed a 5-fold cross-validation strategy using the 2,372-patient training set to train and validate five models. During inference, the five models' predictions were ensembled by averaging the prediction results. For each fold, we first pre-trained the S-path network for 1000 epochs, and then trained the whole model in an end-to-end fashion with an SGD optimizer. The initial learning rate was set to \(1\times 10^{-3}\) with cosine decay, and the batch size was set to 3. The localization UNet in the first stage followed the same training protocol. **Compared Methods and Evaluation Metrics.** Our method is compared with two types of baseline approaches. One is the "segmentation for classification (S4C)" method, where a segmentation network, i.e., nnUNet [8] or (nn)UNETR [5, 8], is first deployed for semantic segmentation of the ten classes on the cropped sub-volume. We then classify the patient based on the class-wise lesion segmentation size. Specifically, if one or multiple lesion classes are present in the segmentation, we classify the patient to the lesion class with the largest segmentation size; otherwise, we classify the patient as normal. Note that we implement UNETR [5] in the nnUNet framework [8], called (nn)UNETR, which shows substantially better results than the original UNETR implementation on our data. The other baseline is the CNN-based segmentation-to-classification method. We use the exact same structure as the S-path in MDPFormer, and extract all encoder and decoder multi-scale features. Then, we apply global max pooling on each feature map, concatenate them, and forward them into two fully connected layers for classification. We also compared our performance with the radiology report, which represents the clinical read performance of second-line senior radiologists (via referring to current and previous imaging, patient history, and clinical information) in the high-volume pancreatic cancer center. The classification performance was evaluated by class-wise accuracy, regular accuracy, and balanced accuracy. The confusion matrices were also reported for detailed evaluation. The segmentation performance was evaluated by the Dice score on each class of pancreatic lesion and the normal pancreas. ### 3.1 Main Results The classification results are summarized in Table 1. Comparing DPFormer (our model without the meta-information-aware design) to the previous baseline methods, i.e., UNet-based S4C, UNETR-based S4C, and S-Path+FC, we can see that DPFormer can already outperform all the baselines in 9 out of 10 classes and achieve the highest balanced accuracy of 49.71%. In general, it is challenging to use the conventional segmentation approaches to directly segment out the 10 classes and perform classification based on them. For the S4C approaches, we can see that the classification accuracies of MCN, CP, other benign, and other malignant are all zero or near zero. While S-Path+FC provides a slightly better classification result with the additional FC layers, DPFormer, with its dual-path transformer and better feature fusion, provides better results. 
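The S4C decision rule described above (the lesion class with the largest segmented volume wins, otherwise normal) is simple enough to state in a few lines; the sketch below is our own illustration with hypothetical label ids (0 = normal, 1-9 = lesion classes):

```python
import numpy as np

def s4c_classify(seg, lesion_labels=range(1, 10)):
    """Segmentation-for-classification: pick the lesion class with the
    largest segmented volume; return 0 (normal) if no lesion voxels exist."""
    sizes = {c: int((seg == c).sum()) for c in lesion_labels}
    sizes = {c: n for c, n in sizes.items() if n > 0}
    return max(sizes, key=sizes.get) if sizes else 0

seg = np.array([0, 0, 2, 2, 2, 5])   # toy label map: class 2 outweighs class 5
print(s4c_classify(seg))             # -> 2
```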
With the meta-information-aware design that incorporates additional gender and age information, our MDPFormer utilizes this easily-accessible tumor-type-related non-imaging information, thus achieving a further improved balanced classification accuracy of 56.17%. The classification results compared to the radiology report are also shown in Table 1 and elaborated in Figure 2. The balanced classification accuracy of the radiology report is 61.41%. Adding meta-information improves our method's balanced classification performance from 49.71% to 56.17%, approaching the performance of the radiology report. Our method also provides better PDAC (96.5% vs. 93.3%) and SCN (55.4% vs. 42.1%) diagnosis accuracy as compared to the reports, which is critical since PDAC is of the highest priority among all pancreatic abnormalities with a 5-year survival rate of approximately 10% and is the most common type (>60% of all pathology-confirmed pancreatic lesions). In general, the radiology reports, which draw on more meta-information, e.g., patient history, tumor markers, previous reports, etc., provide better classification accuracy. Thus, adding additional meta-information may further improve our method's performance. In addition, unlike radiology reports that only give the final diagnosis, our MDPFormer provides both classification probabilities and class-wise lesion segmentation outputs with explainability. Examples of our MDPFormer's classification and segmentation results are shown in Figure 3. The accuracy of the "Report" for the normal class is 100% (Table 1 and Figure 2). This is because our normal cases were selected based on the radiology reports reporting an absence of pancreatic lesions. Actually, the radiologists' specificity for the normal pancreas is 93%-96% in a pancreas CT interpretation setting [10]. Our MDPFormer has a higher specificity (99.5%) than radiologists, making it a reliable detection tool for pancreatic lesions in practice. \begin{table} \begin{tabular}{c|c|c||c|c|c||c} \hline _CLASSIFY_ & **nnUNet** & **(nn)UNETR** & **SPath+FC** & **DPFormer** & **MDPFormer** & **Report** \\ \hline **Normal** & 96.0 & 96.2 & 97.0 & **99.0** & **99.5** & 100 \\ \hline **PDAC** & **94.3** & 94.1 & **94.3** & **94.3** & **96.5** & 93.3 \\ \hline **PNET** & 38.2 & 37.5 & 35.3 & **47.1** & **47.1** & 70.6 \\ \hline **SPT** & 64.0 & 62.8 & 60.0 & **64.0** & **72.0** & 84.0 \\ \hline **IPMN** & **69.9** & **68.1** & 43.8 & 60.3 & 65.8 & 68.5 \\ \hline **MCN** & 0.0 & 0.0 & **11.1** & **11.1** & **33.3** & 33.3 \\ \hline **CP** & 6.9 & 17.2 & 24.1 & **31.0** & **44.8** & 69.0 \\ \hline **SCN** & 44.7 & 42.1 & **50.0** & **50.0** & **55.3** & 42.1 \\ \hline **Other-BEN** & 0.0 & 0.0 & 21.4 & **28.6** & **35.7** & 35.7 \\ \hline **Other-MLG** & 0.0 & 0.0 & 0.0 & **11.7** & **11.7** & 17.6 \\ \hline \hline **Regular Acc** & 77.4 & 77.4 & 76.2 & **79.8** & **82.9** & 84.0 \\ \hline **Balance Acc** & 41.4 & 41.8 & 43.6 & **49.7** & **56.2** & 61.4 \\ \hline \end{tabular} \end{table} Table 1: Evaluation of classification performance on lesion diagnosis (%). Both averaged accuracy (second last row) and balanced accuracy (last row) are reported. Ablative studies for the segmentation performance are summarized in Table 2. For MDPFormer, DPFormer, and SPath+FC, please note that the nonPDAC segmentation class is assigned by the final classification prediction. 
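For reference, the balanced accuracy reported in Table 1 is the mean of the per-class recalls; a minimal sketch (our own, with a toy 3-class example) is:

```python
import numpy as np

def balanced_accuracy(y_true, y_pred, n_classes):
    """Mean of per-class recalls over the classes present in y_true."""
    recalls = [(y_pred[y_true == c] == c).mean()
               for c in range(n_classes) if (y_true == c).any()]
    return float(np.mean(recalls))

y_true = np.array([0, 0, 0, 1, 1, 2])
y_pred = np.array([0, 0, 1, 1, 1, 0])
print(balanced_accuracy(y_true, y_pred, 3))  # (2/3 + 1 + 0) / 3 ≈ 0.556
```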
Similar to the observation from the classification evaluations, it is difficult for nnUNet and UNETR to directly perform 10-class segmentation, with averaged Dice scores of 0.360 and 0.373 reported, respectively. On the other hand, our MDPFormer can provide significantly better segmentation performances for all 10 normal and lesion classes (\(p<0.001\)) and achieve an averaged Dice score of 0.604. Comparing MDPFormer to DPFormer, we can also see that adding the meta-information improves the segmentation performance (averaged Dice of 0.604 versus 0.502). Note that the Dice scores reported in Table 2 are generally higher than those reported in previous studies [15, 17, 18]. This is mainly because the ground truth annotations are generated semi-automatically. Nevertheless, the above results clearly demonstrate the superiority of our MDPFormer over the compared methods. Figure 2: Comparison of classification performance using confusion matrices. Next, we provide three patient case studies to show the impact of adding meta-information for classifying the pancreas lesion. The studies are illustrated in Figure 4, including three patients with MCN, SCN, and SPT, respectively. Using DPFormer without patient meta-information, the MCN, SCN, and SPT were misclassified as other benign, IPMN, and other malignant, respectively. The MDPFormer, adding the gender and age information to the imaging information, provides more accurate tumor probability predictions. For example, for the female 68-year-old patient with SCN, the maximal probability predicted by DPFormer is 51.63% for IPMN, while MDPFormer with meta-information provides the maximal probability of 82.37% for SCN. ### Discussion In this work, we present a meta-information-aware dual-path transformer (MDPFormer) for the classification and segmentation of pancreatic lesions in multi-phase CT. The MDPFormer consists of an S-path and a C-path, where the S-path focuses on initial feature extraction by group-level segmentation and the C-path utilizes both meta-information and the extracted features for individual-level classification. Compared to previous baselines, our method without meta-information input already shows superior classification and segmentation performance. Figure 3: Examples of classification and segmentation outputs from our MDPFormer. Ground truth lesion classes are annotated on the left and the predicted classes are shown on the right. Segmented pancreas is depicted in Red; lesion in Green or Blue. 
\begin{table} \begin{tabular}{c|c|c|c|c|c} \hline \hline _SEGMENT_ & **nnUNet** & **(nn)UNETR** & **SPath+FC** & **DPFormer** & **MDPFormer** \\ \hline **Normal** & 0.950\(\pm\)0.118 & 0.940\(\pm\)0.109 & 0.951\(\pm\)0.107 & **0.953\(\pm\)0.096** & **0.958\(\pm\)0.069** \\ \hline **PDAC** & 0.863\(\pm\)0.157 & 0.860\(\pm\)0.149 & **0.866\(\pm\)0.189** & 0.865\(\pm\)0.199 & **0.869\(\pm\)0.196** \\ \hline **PNET** & 0.259\(\pm\)0.302 & 0.288\(\pm\)0.310 & 0.352\(\pm\)0.381 & **0.355\(\pm\)0.391** & **0.456\(\pm\)0.390** \\ \hline **SPT** & 0.513\(\pm\)0.370 & 0.537\(\pm\)0.352 & 0.624\(\pm\)0.326 & **0.662\(\pm\)0.429** & **0.766\(\pm\)0.414** \\ \hline **IPMN** & 0.475\(\pm\)0.304 & 0.468\(\pm\)0.302 & 0.515\(\pm\)0.340 & **0.518\(\pm\)0.390** & **0.598\(\pm\)0.382** \\ \hline **MCN** & 0.071\(\pm\)0.159 & 0.098\(\pm\)0.189 & 0.211\(\pm\)0.446 & **0.312\(\pm\)0.395** & **0.416\(\pm\)0.441** \\ \hline **CP** & 0.051\(\pm\)0.098 & 0.112\(\pm\)0.253 & 0.280\(\pm\)0.323 & **0.349\(\pm\)0.338** & **0.382\(\pm\)0.335** \\ \hline **SCN** & 0.431\(\pm\)0.351 & 0.428\(\pm\)0.348 & 0.484\(\pm\)0.303 & **0.587\(\pm\)0.441** & **0.765\(\pm\)0.438** \\ \hline **Other-BEN** & 0.0\(\pm\)0.0 & 0.0\(\pm\)0.0 & 0.227\(\pm\)0.397 & **0.293\(\pm\)0.364** & **0.459\(\pm\)0.422** \\ \hline **Other-MLG** & 0.0\(\pm\)0.0 & 0.0\(\pm\)0.0 & 0.088\(\pm\)0.247 & **0.129\(\pm\)0.284** & **0.373\(\pm\)0.394** \\ \hline \hline **Average** & 0.361 & 0.373 & 0.464 & **0.502** & **0.604** \\ \hline \hline \end{tabular} \end{table} Table 2: Evaluation of segmentation performance on normal and lesion classes (Dice). Figure 4: Case studies of three patients with MCN, SCN, and SPT. The classification probability predictions of the DPFormer and MDPFormer models are shown on the right. Adding the meta-information-aware design further boosts these performances, demonstrating the importance of meta-information when diagnosing specific pancreatic lesion types. Our MDPFormer is an open framework with several key components adjustable, which could potentially further improve future performance. First, we used a simple UNet architecture with two consecutive convolution layers at each scale level for feature extraction. Using more advanced segmentation network blocks may provide richer feature representations for better classification and segmentation performances. Second, we only used meta-information of patient gender and age as inputs, which can be automatically extracted from every DICOM data in practice. Adding additional non-imaging information, e.g., family history, symptoms (weight loss, jaundice), and other patient records (CA 19-9 blood test), may further potentially improve MDPFormer to better match the performance of the radiologists who have access to such non-imaging information for diagnosis. Those are important research directions for our future work. ## 4 Conclusion This paper presents a new meta-information-aware dual-path transformer for classification and segmentation of the full taxonomy of pancreatic lesions. Our experimental results show that the proposed dual-path transformer can efficiently incorporate the patient meta-information and the extracted image features from the CNN-based segmentation path to make accurate pancreatic lesion classification and segmentation. We demonstrate that our method achieves better performance than previous baselines and approaches the accuracy of radiology reports. 
Our system could be a useful assistant tool for pancreatic lesion detection, segmentation, and diagnosis in the clinical reading environment.
2304.14748
On the power of standard information for tractability for $L_\infty$ approximation of periodic functions in the worst case setting
We study multivariate approximation of periodic function in the worst case setting with the error measured in the $L_\infty$ norm. We consider algorithms that use standard information $\Lambda^{\rm std}$ consisting of function values or general linear information $\Lambda^{\rm all}$ consisting of arbitrary continuous linear functionals. We investigate the equivalences of various notions of algebraic and exponential tractability for $\Lambda^{\rm std}$ and $\Lambda^{\rm all}$ under the absolute or normalized error criterion, and show that the power of $\Lambda^{\rm std}$ is the same as the one of $\Lambda^{\rm all}$ for some notions of algebraic and exponential tractability. Our result can be applied to weighted Korobov spaces and Korobov spaces with exponential weight. This gives a special solution to Open problem 145 as posed by Novak and Wo\'zniakowski (2012).
Jiaxin Geng, Heping Wang
2023-04-28T10:43:05Z
http://arxiv.org/abs/2304.14748v1
On the power of standard information for tractability for \(L_{\infty}\) approximation of periodic functions in the worst case setting ###### Abstract. We study multivariate approximation of periodic functions in the worst case setting with the error measured in the \(L_{\infty}\) norm. We consider algorithms that use standard information \(\Lambda^{\mathrm{std}}\) consisting of function values or general linear information \(\Lambda^{\mathrm{all}}\) consisting of arbitrary continuous linear functionals. We investigate the equivalences of various notions of algebraic and exponential tractability for \(\Lambda^{\mathrm{std}}\) and \(\Lambda^{\mathrm{all}}\) under the absolute or normalized error criterion, and show that the power of \(\Lambda^{\mathrm{std}}\) is the same as that of \(\Lambda^{\mathrm{all}}\) for some notions of algebraic and exponential tractability. Our result can be applied to weighted Korobov spaces and Korobov spaces with exponential weight. This gives a special solution to Open Problem 145 as posed by Novak and Wozniakowski (2012) [39]. Key words and phrases: Tractability, standard information, general linear information, worst case setting. 2010 Mathematics Subject Classification: 41A63; 65C05; 65D15; 65Y20 ## 1. Introduction We study multivariate approximation \(I=\{I_{\infty,d}\}_{d\in\mathbb{N}}\), where \[I_{\infty,d}:H(K_{d})\to L_{\infty}(D_{d})\ \ \text{with}\ \ I_{\infty,d}\left(f\right)=f\] is the compact embedding operator, \(H(K_{d})\) is a separable reproducing kernel Hilbert function space on \(D_{d}\) with kernel \(K_{d}\), \(D_{d}\subset\mathbb{R}^{d}\), and the dimension \(d\) is large or even huge. We also investigate the approximation problem \(\mathrm{APP}=\{\mathrm{APP}_{\infty,d}\}_{d\in\mathbb{N}}\), where \[\mathrm{APP}_{\infty,d}:H^{\omega}(\mathbb{T}^{d})\to L_{\infty}(\mathbb{T}^{d})\ \ \text{with}\ \ \mathrm{APP}_{\infty,d}\left(f\right)=f,\] where \(\mathbb{T}^{d}=[0,1]^{d}\) is the \(d\)-dimensional torus and \(H^{\omega}(\mathbb{T}^{d})\) is a smoothness space of multivariate periodic functions, whose definition is given in Subsection 2.2. We consider algorithms that use finitely many information evaluations. Here an information evaluation means a continuous linear functional on \(H(K_{d})\) (general linear information) or a function value at some point (standard information). We use \(\Lambda^{\mathrm{all}}\) and \(\Lambda^{\mathrm{std}}\) to denote the class of all continuous linear functionals and the class of all function values, respectively. For a given error threshold \(\varepsilon\in(0,1)\), the information complexity \(n(\varepsilon,d)\) is defined to be the minimal number of information evaluations for which the approximation error of some algorithm is at most \(\varepsilon\). Tractability is aimed at studying how the information complexity \(n(\varepsilon,d)\) depends on \(\varepsilon\) and \(d\). There are two kinds of tractability, based on polynomial convergence and on exponential convergence. The algebraic tractability (ALG-tractability) describes how the information complexity \(n(\varepsilon,d)\) behaves as a function of \(d\) and \(\varepsilon^{-1}\), while the exponential tractability (EXP-tractability) describes how \(n(\varepsilon,d)\) behaves as a function of \(d\) and \(1+\ln\varepsilon^{-1}\).
## 2. Preliminaries

Let \(H(K_{d})\) be a separable reproducing kernel Hilbert space of functions on \(D_{d}\subset\mathbb{R}^{d}\) with reproducing kernel \(K_{d}\), and let \(\varrho_{d}\) be a probability measure on \(D_{d}\). The reproducing property \(f(\mathbf{x})=\langle f,K_{d}(\cdot,\mathbf{x})\rangle_{H(K_{d})}\), \(\mathbf{x}\in D_{d}\), ensures that point evaluations are continuous functionals on \(H(K_{d})\). In the sequel we always assume that \[\|K_{d}\|_{\infty}:=\sup_{\mathbf{x}\in D_{d}}\sqrt{K_{d}(\mathbf{x},\mathbf{x})}<\infty, \tag{2.1}\] which implies that \(H(K_{d})\) is continuously embedded into \(L_{\infty}(D_{d})\), i.e., \[\|f\|_{L_{\infty}(D_{d})}\leq\|K_{d}\|_{\infty}\cdot\|f\|_{H(K_{d})}. \tag{2.2}\] Note that we do not need the measure \(\varrho_{d}\) for this embedding. We consider the multivariate problem \(S=\{S_{d}\}_{d\in\mathbb{N}}\) in the worst case setting, where \(S_{d}:H(K_{d})\to G_{d}\) is a continuous linear operator from \(H(K_{d})\) to a Banach space \(G_{d}\) with norm \(\|\cdot\|_{G_{d}}\). We approximate \(S_{d}(f)\) by algorithms \(A_{n,d}(f)\) of the form \[A_{n,d}(f)=\phi_{n,d}(L_{1}(f),L_{2}(f),\ldots,L_{n}(f)), \tag{2.3}\] where \(L_{1},L_{2},\ldots,L_{n}\) are arbitrary continuous linear functionals or function values on \(H(K_{d})\), and \(\phi_{n,d}:\ \mathbb{R}^{n}\to G_{d}\) is an arbitrary mapping. The worst case approximation error for the algorithm \(A_{n,d}\) of the form (2.3) is defined as \[e(S_{d};A_{n,d}):=\sup_{\|f\|_{H(K_{d})}\leq 1}\|S_{d}(f)-A_{n,d}(f)\|_{G_{d}}.\] The \(n\)th minimal worst case error is defined by \[e(n,S_{d};\Lambda):=\inf_{A_{n,d}\ \text{with}\ L_{i}\in\Lambda}e(S_{d};A_{n,d}),\] where \(\Lambda\in\{\Lambda^{\text{all}},\Lambda^{\text{std}}\}\) and the infimum is taken over all algorithms of the form (2.3). Clearly, we have \[e(n,S_{d};\Lambda^{\text{all}})\leq e(n,S_{d};\Lambda^{\text{std}}). \tag{2.4}\] For \(n=0\), we use \(A_{0,d}=0\).
We obtain the so-called initial error \(e(0,S_{d};\Lambda)\), defined by \[e(0,S_{d}):=e(0,S_{d};\Lambda)=\sup_{\|f\|_{H(K_{d})}\leq 1}\|S_{d}(f)\|_{G_{d}}.\] Since \(H(K_{d})\) is a Hilbert space, it follows from [37, Theorem 4.8] that linear algorithms are optimal and hence \(e(n,S_{d};\Lambda^{\text{all}})\) is equal to the approximation number \(a_{n+1}(S_{d})\) of \(S_{d}\), defined by \[a_{n+1}(S_{d}):=\inf_{\begin{subarray}{c}\text{linear}\ A_{n,d}:H(K_{d})\to G_{d}\\ \text{rank}\ A_{n,d}\leq n\end{subarray}}\sup_{\|f\|_{H(K_{d})}\leq 1}\|S_{d}(f)-A_{n,d}(f)\|_{G_{d}}.\] That is, \[a_{n+1}(S_{d})=e(n,S_{d};\Lambda^{\text{all}}). \tag{2.5}\] We remark that \(e(n,S_{d};\Lambda^{\text{std}})\) is also called the optimal recovery of \(S_{d}\) or the sampling number \(g_{n}(S_{d})\), i.e., \[g_{n}(S_{d})=e(n,S_{d};\Lambda^{\text{std}}). \tag{2.6}\] Now for \(1\leq q\leq\infty\), we consider the operators \[I_{q,d}:H(K_{d})\to L_{q}(\varrho_{d})\ \text{ with }\ I_{q,d}(f)=f.\] If (2.1) holds, then \(I_{2,d}\) satisfies the finite trace condition of the kernel \[\operatorname{Tr}\left(K_{d}\right):=\|K_{d}\|_{2}^{2}=\int_{D_{d}}K_{d}({\bf x},{\bf x})d\varrho_{d}({\bf x})\leq\|K_{d}\|_{\infty}^{2}<\infty,\] and hence is compact and Hilbert-Schmidt (see [46, Lemma 2.3]). From [37] we know that \(e(n,I_{2,d};\Lambda^{\rm all})\) depends on the eigenpairs \(\big{\{}(\lambda_{k,d},e_{k,d})\big{\}}_{k=1}^{\infty}\) of the operator \[W_{d}=I_{2,d}^{*}\,I_{2,d}\colon H(K_{d})\to H(K_{d}),\] where \(I_{2,d}^{*}\) is the adjoint operator of \(I_{2,d}\), and \[\lambda_{1,d}\geq\lambda_{2,d}\geq\cdots\geq\lambda_{n,d}\geq\cdots\geq 0.\] That is, \(\{e_{k,d}\}_{k\in\mathbb{N}}\) is an orthonormal basis in \(H(K_{d})\), and \[W_{d}\,e_{k,d}=\lambda_{k,d}\,e_{k,d}.\] Without loss of generality, we may assume that all the eigenvalues of \(W_{d}\) are positive. We set \[\sigma_{k,d}=\sqrt{\lambda_{k,d}},\ \eta_{k,d}=\lambda_{k,d}^{-1/2}e_{k,d},\ \ k\in\mathbb{N}.\] Then \(\sigma_{k,d}\), \(\eta_{k,d},\ k\in\mathbb{N}\), are also called the singular numbers (values) and singular functions of \(I_{2,d}\). By the Mercer theorem we have \[K_{d}(\mathbf{x},\mathbf{y})=\sum_{k=1}^{\infty}e_{k,d}(\mathbf{x})\overline{e_{k,d}(\mathbf{y})}=\sum_{k=1}^{\infty}\sigma_{k,d}^{2}\eta_{k,d}(\mathbf{x})\overline{\eta_{k,d}(\mathbf{y})}.\] From [37, p. 118] we get that the \(n\)th minimal worst case error is \[e(n,I_{2,d};\Lambda^{\rm all})=a_{n+1}(I_{2,d})=(\lambda_{n+1,d})^{1/2}=\sigma_{n+1,d},\] and it is achieved by the optimal algorithm \[S_{n,d}^{*}(f)=\sum_{k=1}^{n}\langle f,e_{k,d}\rangle_{H(K_{d})}\,e_{k,d},\] that is, \[e(n,I_{2,d};\Lambda^{\rm all})=\sup_{\|f\|_{H(K_{d})}\leq 1}\|f-S_{n,d}^{*}(f)\|_{L_{2}(\varrho_{d})}=(\lambda_{n+1,d})^{1/2}. \tag{2.7}\] We remark that \(\{e_{k,d}\}\) is an orthonormal basis in \(H(K_{d})\), \(\{\eta_{k,d}\}\) is an orthonormal system in \(L_{2}(\varrho_{d})\), and for \(f\in H(K_{d})\), \[\langle f,\lambda_{k,d}\,\eta_{k,d}\rangle_{H(K_{d})}=\langle f,\eta_{k,d}\rangle_{L_{2}(\varrho_{d})},\] and \[S_{n,d}^{*}(f)=\sum_{k=1}^{n}\langle f,\eta_{k,d}\rangle_{L_{2}(\varrho_{d})}\,\eta_{k,d}. \tag{2.8}\] We denote \[N_{\varrho_{d}}(m,\mathbf{x})=\sum_{k=1}^{m}|\eta_{k,d}(\mathbf{x})|^{2},\ \ \mathbf{x}\in D_{d},\] and \[N_{\varrho_{d}}(m)=\|N_{\varrho_{d}}(m,\cdot)\|_{\infty}.\] The function \((N_{\varrho_{d}}(m,\mathbf{x}))^{-1}\) is often called the Christoffel function in the literature (see [36] and references therein).
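As a quick sanity check of these quantities, the following minimal sketch (ours, not from the paper) evaluates \(N_{\varrho_{d}}(m,\mathbf{x})\) for the trigonometric system on the torus with Lebesgue measure, where \(|\eta_{k,d}(\mathbf{x})|\equiv 1\) and hence \(N_{\varrho_{d}}(m)=m\), a fact used later in the proof of Theorem 2.2.

```python
import numpy as np

# Our sanity check: on the torus with Lebesgue measure the singular
# functions eta_{k,d} are complex exponentials, so |eta_{k,d}(x)|^2 = 1
# for every x and N(m, x) = sum_{k=1}^{m} |eta_{k,d}(x)|^2 = m.
m = 7
x = np.linspace(0.0, 1.0, 200, endpoint=False)
freqs = np.arange(1, m + 1)                      # any m distinct frequencies
eta = np.exp(2j * np.pi * np.outer(freqs, x))    # shape (m, len(x))
N_mx = np.sum(np.abs(eta) ** 2, axis=0)          # N(m, x) at each grid point
assert np.allclose(N_mx, m)
print("N(m) =", N_mx.max())                      # prints 7.0, i.e. N(m) = m
```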
The number \((N_{\varrho_{d}}(m))^{1/2}\) is the exact constant of the Nikolskii inequality for \(V_{m}:={\rm span}\,\{\eta_{1,d},\eta_{2,d},\ldots,\eta_{m,d}\}\), i.e., \[N_{\varrho_{d}}(m)=\sup_{\begin{subarray}{c}f\in V_{m}\\ f\neq 0\end{subarray}}\|f\|_{\infty}^{2}/\|f\|_{L_{2}(\varrho_{d})}^{2}.\] ### Approximation of multivariate periodic functions Let \(\mathbb{T}\) denote the torus, i.e., \(\mathbb{T}=[0,1]\), where the endpoints of the interval are identified, and let \(\mathbb{T}^{d}=[0,1]^{d}\) stand for the \(d\)-dimensional torus. We equip \(\mathbb{T}^{d}\) with the Lebesgue measure \(\mathrm{d}\mathbf{x}\). Then \(L_{2}(\mathbb{T}^{d})\) is the space of all measurable \(1\)-periodic functions \(f(\mathbf{x})=f(x_{1},\ldots,x_{d})\) on \(\mathbb{T}^{d}\) for which \[\|f\|_{2}:=\|f\|_{L_{2}(\mathbb{T}^{d})}:=\Big{(}\int_{\mathbb{T}^{d}}|f(\mathbf{x})|^{2}\mathrm{d}\mathbf{x}\Big{)}^{1/2}<\infty.\] The system \(\{e^{2\pi\mathrm{i}\mathbf{k}\mathbf{x}}:\ \mathbf{k}\in\mathbb{Z}^{d}\}\) is an orthonormal basis in \(L_{2}(\mathbb{T}^{d})\), where \(\mathbf{k}\mathbf{x}=\sum_{j=1}^{d}k_{j}x_{j}\) and \(\mathrm{i}=\sqrt{-1}\). The Fourier coefficients of a function \(f\in L_{1}(\mathbb{T}^{d})\) are defined as \[\hat{f}(\mathbf{k})=\int_{\mathbb{T}^{d}}f(\mathbf{x})e^{-2\pi\mathrm{i}\mathbf{k}\mathbf{x}}\mathrm{d}\mathbf{x},\ \ \mathbf{k}=(k_{1},k_{2},\ldots,k_{d})\in\mathbb{Z}^{d}.\] Let \(\omega\) be a positive function on \(\mathbb{Z}^{d}\), i.e., \(\omega(\mathbf{k})=\omega(k_{1},\ldots,k_{d})>0\) for all \(\mathbf{k}\in\mathbb{Z}^{d}\). We define the smoothness space \(H^{\omega}(\mathbb{T}^{d})\) by \[H^{\omega}(\mathbb{T}^{d})=\Big{\{}f\in L_{2}(\mathbb{T}^{d}):\|f\|_{H^{\omega}(\mathbb{T}^{d})}=\Big{(}\sum_{\mathbf{k}\in\mathbb{Z}^{d}}|\hat{f}(\mathbf{k})|^{2}\omega(\mathbf{k})^{2}\Big{)}^{1/2}<\infty\Big{\}}.\] Obviously, \(H^{\omega}(\mathbb{T}^{d})\) is a Hilbert space with inner product \[\left\langle f,g\right\rangle_{H^{\omega}(\mathbb{T}^{d})}=\sum_{\mathbf{k}\in\mathbb{Z}^{d}}\hat{f}(\mathbf{k})\overline{\hat{g}(\mathbf{k})}\,\omega(\mathbf{k})^{2},\] and with orthonormal basis \(\{e_{\mathbf{k}}\}_{\mathbf{k}\in\mathbb{Z}^{d}}\), where \(e_{\mathbf{k}}(\mathbf{x})=\omega(\mathbf{k})^{-1}e^{2\pi\mathrm{i}\mathbf{k}\mathbf{x}}\). It follows from [6, Theorem 3.1] that \(H^{\omega}(\mathbb{T}^{d})\) is compactly embedded into \(L_{\infty}(\mathbb{T}^{d})\) or \(C(\mathbb{T}^{d})\) if and only if \[\sum_{\mathbf{k}\in\mathbb{Z}^{d}}\omega(\mathbf{k})^{-2}<\infty.\] In this case, \(H^{\omega}(\mathbb{T}^{d})\) is a reproducing kernel Hilbert space with reproducing kernel \[K_{d}^{\omega}(\mathbf{x},\mathbf{y})=\sum_{\mathbf{k}\in\mathbb{Z}^{d}}\omega(\mathbf{k})^{-2}e^{2\pi\mathrm{i}\mathbf{k}(\mathbf{x}-\mathbf{y})}, \tag{2.9}\] and \[\|K_{d}^{\omega}\|_{\infty}=\Big{(}\sum_{\mathbf{k}\in\mathbb{Z}^{d}}\omega(\mathbf{k})^{-2}\Big{)}^{1/2}<\infty.\] We consider the approximation problem \[\mathrm{APP}_{q,d}:H^{\omega}(\mathbb{T}^{d})\to L_{q}(\mathbb{T}^{d}),\ \ \mathrm{APP}_{q,d}(f)=f,\ 2\leq q\leq\infty.\] Note that if \(2<q\leq\infty\) and \[\sum_{\mathbf{k}\in\mathbb{Z}^{d}}\omega(\mathbf{k})^{-\frac{2q}{q-2}}<\infty,\] then the space \(H^{\omega}(\mathbb{T}^{d})\) is compactly embedded into \(L_{q}(\mathbb{T}^{d})\) (see [6, Proposition 4.12]).
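For a concrete feel for the embedding condition \(\sum_{\mathbf{k}}\omega(\mathbf{k})^{-2}<\infty\), the following small numerical check (our illustration; the univariate weight \(\omega(k)=\max(1,|k|)^{\alpha}\) is an assumed example) shows that the series converges precisely when \(\alpha>1/2\):

```python
import numpy as np

# Our illustration: partial sums of sum_k omega(k)^{-2} for the univariate
# weight omega(k) = max(1, |k|)^alpha. The full series over k in Z equals
# 1 + 2 * sum_{k>=1} k^{-2 alpha}, which converges iff alpha > 1/2.
for alpha in (0.4, 0.75, 2.0):
    k = np.arange(1, 200_001)
    partial = 1.0 + 2.0 * np.sum(k ** (-2.0 * alpha))
    verdict = "diverges" if alpha <= 0.5 else "converges"
    print(f"alpha={alpha}: partial sum up to 2e5 ~ {partial:.3f} ({verdict})")
```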
It is easily seen that \(\big{(}\omega(\mathbf{k})^{-2},e_{\mathbf{k}}\big{)}_{\mathbf{k}\in\mathbb{Z}^{d}}\) are the eigenpairs of the operator \[\widetilde{W}=\mathrm{APP}_{2,d}^{*}\,\mathrm{APP}_{2,d},\] where \(e_{\mathbf{k}}(\mathbf{x})=\omega(\mathbf{k})^{-1}e^{2\pi\mathrm{i}\mathbf{k}\mathbf{x}}\). Let \(\{\lambda_{k,d}\}_{k\in\mathbb{N}}\) be the nonincreasing rearrangement of the sequence \(\{\omega(\mathbf{k})^{-2}\}_{\mathbf{k}\in\mathbb{Z}^{d}}\), and let \(e_{k,d}\) be the eigenfunction with respect to the eigenvalue \(\lambda_{k,d}\) of \(\widetilde{W}\). Then \((\lambda_{k,d},e_{k,d})_{k\in\mathbb{N}}\) are the eigenpairs of the operator \(\widetilde{W}\) satisfying \[\lambda_{1,d}\geq\lambda_{2,d}\geq\cdots\geq\lambda_{k,d}\geq\cdots>0,\ \ \widetilde{W}e_{k,d}=\lambda_{k,d}\,e_{k,d},\ k\in\mathbb{N}.\] Since \(\{e_{k,d}\}_{k=1}^{\infty}\) is an orthonormal basis in \(H^{\omega}(\mathbb{T}^{d})\), we get for any \(f\in H^{\omega}(\mathbb{T}^{d})\), \[f=\sum_{k=1}^{\infty}\langle f,e_{k,d}\rangle_{{}_{H^{\omega}(\mathbb{T}^{d})}}\,e_{k,d}\quad\text{and}\quad\|f\|_{H^{\omega}(\mathbb{T}^{d})}=\Big{(}\sum_{k=1}^{\infty}|\langle f,e_{k,d}\rangle_{{}_{H^{\omega}(\mathbb{T}^{d})}}|^{2}\Big{)}^{1/2}.\] It follows from [6, Theorem 3.4 and Proposition 4.12] that \[e(n,\text{APP}_{\infty,d};\Lambda^{\text{all}})=a_{n+1}(\text{APP}_{\infty,d})=\Big{(}\sum_{k=n+1}^{\infty}\lambda_{k,d}\Big{)}^{1/2},\] and for \(2<q<\infty\), \[e(n,\text{APP}_{q,d};\Lambda^{\text{all}})=a_{n+1}(\text{APP}_{q,d})\leq\Big{(}\sum_{k=n+1}^{\infty}\lambda_{k,d}^{\frac{q}{q-2}}\Big{)}^{\frac{q-2}{2q}}.\] The initial error \(e(0,\text{APP}_{\infty,d})\) is given by \[e(0,\text{APP}_{\infty,d}):=e(0,\text{APP}_{\infty,d};\Lambda)=\Big{(}\sum_{k=1}^{\infty}\lambda_{k,d}\Big{)}^{1/2}.\] ### Notions of tractability In this paper, we consider the approximation problem \(\text{APP}=\{\text{APP}_{\infty,d}\}_{d\in\mathbb{N}}\). The information complexity can be studied using either the absolute error criterion (ABS) or the normalized error criterion (NOR). In the worst case setting for \(\star\in\{\text{ABS, NOR}\}\) and \(\Lambda\in\{\Lambda^{\text{all}},\Lambda^{\text{std}}\}\), we define the information complexity \(n^{\star}(\varepsilon,d;\Lambda)\) as \[n^{\star}(\varepsilon,d;\Lambda):=\inf\{n:e(n,\text{APP}_{\infty,d};\Lambda)\leq\varepsilon\,\text{CRI}_{d}\}, \tag{2.10}\] where \[\text{CRI}_{d}:=\begin{cases}1,&\text{for $\star$=ABS},\\ e(0,\text{APP}_{\infty,d}),&\text{for $\star$=NOR},\end{cases}\ \ =\ \ \begin{cases}1,&\text{for $\star$=ABS},\\ \Big{(}\sum_{k=1}^{\infty}\lambda_{k,d}\Big{)}^{1/2},&\text{for $\star$=NOR}.\end{cases}\] Since \(\Lambda^{\text{std}}\subset\Lambda^{\text{all}}\), we get \[e(n,\text{APP}_{\infty,d};\Lambda^{\text{all}})\leq e(n,\text{APP}_{\infty,d};\Lambda^{\text{std}}).\] It follows that for \(\star\in\{\text{ABS, NOR}\}\), \[n^{\star}(\varepsilon,d;\Lambda^{\text{all}})\leq n^{\star}(\varepsilon,d;\Lambda^{\text{std}}). \tag{2.11}\] In this subsection we recall the various tractability notions in the worst case setting. First we introduce all notions of algebraic tractability. Let \(\text{APP}=\{\text{APP}_{\infty,d}\}_{d\in\mathbb{N}}\), \(\star\in\{\text{ABS, NOR}\}\), and \(\Lambda\in\{\Lambda^{\text{all}},\Lambda^{\text{std}}\}\).
In the worst case setting for the class \(\Lambda\), and for error criterion \(\star\), we say that APP is \(\bullet\) Algebraically strongly polynomially tractable (ALG-SPT) if there exist \(C>0\) and a non-negative number \(p\) such that \[n^{\star}(\varepsilon,d;\Lambda)\leq C\varepsilon^{-p},\text{ for all } \varepsilon\in(0,1). \tag{2.12}\] The exponent ALG-\(p^{\star}(\Lambda)\) of ALG-SPT is defined as the infimum of \(p\) for which (2.12) holds; \(\bullet\) Algebraically polynomially tractable (ALG-PT) if there exist \(C>0\) and non-negative numbers \(p,q\) such that \[n^{\star}(\varepsilon,d;\Lambda)\leq Cd^{q}\varepsilon^{-p},\text{ for all }d \in\mathbb{N},\ \varepsilon\in(0,1);\] \(\bullet\) Algebraically quasi-polynomially tractable (ALG-QPT) if there exist \(C>0\) and a non-negative number \(t\) such that \[n^{\star}(\varepsilon,d;\Lambda)\leq C\exp(t(1+\ln d)(1+\ln\varepsilon^{-1})),\text{ for all }d\in\mathbb{N},\ \varepsilon\in(0,1). \tag{2.13}\] The exponent ALG-\(t^{\star}(\Lambda)\) of ALG-QPT is defined as the infimum of \(t\) for which (2.13) holds; \(\bullet\) Algebraically uniformly weakly tractable (ALG-UWT) if \[\lim_{\varepsilon^{-1}+d\to\infty}\frac{\ln n^{\star}(\varepsilon,d;\Lambda)}{ \varepsilon^{-\alpha}+d^{\beta}}=0,\text{ for all }\alpha,\beta>0;\] \(\bullet\) Algebraically weakly tractable (ALG-WT) if \[\lim_{\varepsilon^{-1}+d\to\infty}\frac{\ln n^{\star}(\varepsilon,d;\Lambda)}{ \varepsilon^{-1}+d}=0;\] \(\bullet\) Algebraically \((s,t)\)-weakly tractable (ALG-\((s,t)\)-WT) for fixed \(s,t>0\) if \[\lim_{\varepsilon^{-1}+d\to\infty}\frac{\ln n^{\star}(\varepsilon,d;\Lambda)}{ \varepsilon^{-s}+d^{t}}=0.\] Clearly, ALG-\((1,1)\)-WT is the same as ALG-WT. If APP is not ALG-WT, then APP is called intractable. If the \(n\)th minimal error is exponentially convergent, then we should study tractability with \(\varepsilon^{-1}\) being replaced by \((1+\ln\varepsilon^{-1})\), which is called exponential tractability. Recently, there have been many papers studying exponential tractability (see [5, 8, 15, 24, 30, 54]). In the definitions of ALG-SPT, ALG-PT, ALG-QPT, ALG-UWT, ALG-WT, and ALG-\((s,t)\)-WT, if we replace \(\varepsilon^{-1}\) by \((1+\ln\varepsilon^{-1})\), we get the definitions of _exponential strong polynomial tractability_ (EXP-SPT), _exponential polynomial tractability_ (EXP-PT), _exponential quasi-polynomial tractability_ (EXP-QPT), _exponential uniform weak tractability_ (EXP-UWT), _exponential weak tractability_ (EXP-WT), and _exponential \((s,t)\)-weak tractability_ (EXP-\((s,t)\)-WT), respectively. We now give the above notions of exponential tractability in detail. Let \(\text{APP}=\{\text{APP}_{\infty,d}\}_{d\in\mathbb{N}}\), \(\star\in\{\text{ABS, NOR}\}\), and \(\Lambda\in\{\Lambda^{\text{all}},\Lambda^{\text{std}}\}\). In the worst case setting for the class \(\Lambda\), and for error criterion \(\star\), we say that APP is \(\bullet\) Exponentially strongly polynomially tractable (EXP-SPT) if there exist \(C>0\) and a non-negative number \(p\) such that \[n^{\star}(\varepsilon,d;\Lambda)\leq C(\ln\varepsilon^{-1}+1)^{p},\text{ for all }\varepsilon\in(0,1). 
\tag{2.14}\] The exponent EXP-\(p^{\star}(\Lambda)\) of EXP-SPT is defined as the infimum of \(p\) for which (2.14) holds; \(\bullet\) Exponentially polynomially tractable (EXP-PT) if there exist \(C>0\) and non-negative numbers \(p,q\) such that \[n^{\star}(\varepsilon,d;\Lambda)\leq Cd^{q}(\ln\varepsilon^{-1}+1)^{p},\text{ for all }d\in\mathbb{N},\ \varepsilon\in(0,1);\] \(\bullet\) Exponentially quasi-polynomially tractable (EXP-QPT) if there exist \(C>0\) and a non-negative number \(t\) such that \[n^{\star}(\varepsilon,d;\Lambda)\leq C\exp(t(1+\ln d)(1+\ln(\ln\varepsilon^{-1}+1))),\text{ for all }d\in\mathbb{N},\ \varepsilon\in(0,1). \tag{2.15}\] The exponent \(\mathrm{EXP}\)-\(t^{\star}(\Lambda)\) of \(\mathrm{EXP}\)-QPT is defined as the infimum of \(t\) for which (2.15) holds; \(\bullet\) Exponentially uniformly weakly tractable (\(\mathrm{EXP}\)-UWT) if \[\lim_{\varepsilon^{-1}+d\to\infty}\frac{\ln n^{\star}(\varepsilon,d;\Lambda)}{(1+\ln\varepsilon^{-1})^{\alpha}+d^{\beta}}=0,\text{ for all }\alpha,\beta>0;\] \(\bullet\) Exponentially weakly tractable (\(\mathrm{EXP}\)-WT) if \[\lim_{\varepsilon^{-1}+d\to\infty}\frac{\ln n^{\star}(\varepsilon,d;\Lambda)}{1+\ln\varepsilon^{-1}+d}=0;\] \(\bullet\) Exponentially \((s,t)\)-weakly tractable (\(\mathrm{EXP}\)-\((s,t)\)-WT) for fixed \(s,t>0\) if \[\lim_{\varepsilon^{-1}+d\to\infty}\frac{\ln n^{\star}(\varepsilon,d;\Lambda)}{(1+\ln\varepsilon^{-1})^{s}+d^{t}}=0.\] ### Main results We state the main results of this paper in this subsection. There are many papers devoted to upper bounds of \(g_{n}(I_{q,d})\) (\(1\leq q\leq\infty\)) in terms of \(a_{n}(I_{2,d})\). The first upper bound on \(g_{n}(I_{2,d})\) was obtained by Wasilkowski and Wozniakowski in [50] by constructing Monte Carlo algorithms. Using refined Monte Carlo algorithms, the authors in [25] obtained upper bounds on \(g_{n}(I_{q,d})\) (\(1\leq q\leq\infty\)). Applying these upper estimates, the authors in [50, 25, 39] obtained some algebraic tractability results for \(\Lambda^{\mathrm{std}}\) for \(I_{q,d}\) (\(q=2\) or \(\infty\)). If nodes \(\mathrm{X}=(x^{1},\ldots,x^{n})\in D_{d}^{n}\) are drawn independently and identically distributed according to a probability measure, then the samples at the nodes \(\mathrm{X}\) are called random information (see [11, 12, 21]). Krieg and Ullrich in [22] obtained better upper bounds on \(g_{n}(I_{2,d})\) by applying random information and weighted least squares algorithms. Later, the authors in [20, 23, 34, 47] extended the results of [22]. The authors in [35] gave new and better upper bounds on \(g_{n}(I_{2,d})\) by applying the weighted least squares method and a new Weaver subsampling technique, and finally, the authors in [9] obtained sharp upper bounds on \(g_{n}(I_{2,d})\) by using an infinite-dimensional variant of the subsampling strategy. The authors in [18] determined the power of standard information for exponential tractability for \(L_{2}\)-approximation in the worst case setting. The authors in [43] used the weighted least squares method and the subsampling technique in [35] to obtain upper bounds on \(g_{n}(I_{\infty,d})\) in terms of \(a_{m}(I_{2,d})\) and \(N_{\varrho_{d}}\). In this paper we use the weighted least squares method and the subsampling technique in [9] to obtain improved upper bounds on \(g_{n}(I_{\infty,d})\) and \(g_{n}(\mathrm{APP}_{\infty,d})\). Our result for \(g_{n}(\mathrm{APP}_{\infty,d})\) is sharp. See the following theorems.
**Theorem 2.1**.: _There are absolute constants \(c_{1},c_{2}\in\mathbb{N}\) such that_ \[g_{c_{1}m}(I_{\infty,d})^{2}\leq c_{2}\max\left\{\frac{N_{\varrho_{d}}(m)}{m}\sum_{k\geq\lfloor\frac{m}{2}\rfloor}\sigma_{k,d}^{2},\sum_{k\geq\lfloor\frac{m}{4}\rfloor}\frac{N_{\varrho_{d}}(4k)\sigma_{k,d}^{2}}{k}\right\}.\] **Theorem 2.2**.: _There are absolute constants \(c_{1},c_{2}\in\mathbb{N}\) such that_ \[g_{c_{1}m}(\mathrm{APP}_{\infty,d})\leq c_{2}a_{m+1}(\mathrm{APP}_{\infty,d})=c_{2}\Big{(}\sum_{k=m+1}^{\infty}\lambda_{k,d}\Big{)}^{1/2}.\] _In other words,_ \[e(c_{1}m,\mathrm{APP}_{\infty,d};\Lambda^{\mathrm{std}})\leq c_{2}e(m,\mathrm{APP}_{\infty,d};\Lambda^{\mathrm{all}}). \tag{2.16}\] Based on Theorem 2.2, we obtain the following relation between the information complexities \(n^{\star}(\varepsilon,d;\Lambda^{\mathrm{std}})\) and \(n^{\star}(\varepsilon,d;\Lambda^{\mathrm{all}})\) for \(\star\in\{\mathrm{ABS},\,\mathrm{NOR}\}\). **Theorem 2.3**.: _For \(\star\in\{\mathrm{ABS},\,\mathrm{NOR}\}\), we have_ \[n^{\star}(\varepsilon,d;\Lambda^{\mathrm{std}})\leq 2c_{1}n^{\star}(\frac{\varepsilon}{c_{2}},d;\Lambda^{\mathrm{all}}), \tag{2.17}\] _where \(c_{1}\), \(c_{2}\) are the constants given in Theorem 2.2._ In the worst case setting, we study the approximation problem \(\mathrm{APP}=\{\mathrm{APP}_{\infty,d}\}_{d\in\mathbb{N}}\). We obtain the equivalences of various notions of algebraic and exponential tractability for \(\Lambda^{\mathrm{all}}\) and \(\Lambda^{\mathrm{std}}\) for the normalized or absolute error criterion without any condition. See the following theorem. **Theorem 2.4**.: _Consider the approximation problem \(\mathrm{APP}=\{\mathrm{APP}_{\infty,d}\}_{d\in\mathbb{N}}\) for the absolute or normalized error criterion in the worst case setting. Then_ \(\bullet\) _ALG-SPT, ALG-PT, ALG-QPT, ALG-WT, ALG-\((s,t)\)-WT, and ALG-UWT for \(\Lambda^{\mathrm{all}}\) are equivalent to ALG-SPT, ALG-PT, ALG-QPT, ALG-WT, ALG-\((s,t)\)-WT, and ALG-UWT for \(\Lambda^{\mathrm{std}}\), respectively;_ \(\bullet\) _EXP-SPT, EXP-PT, EXP-QPT, EXP-WT, EXP-\((s,t)\)-WT, and EXP-UWT for \(\Lambda^{\mathrm{all}}\) are equivalent to EXP-SPT, EXP-PT, EXP-QPT, EXP-WT, EXP-\((s,t)\)-WT, and EXP-UWT for \(\Lambda^{\mathrm{std}}\), respectively;_ \(\bullet\) _the exponents of SPT are the same for \(\Lambda^{\mathrm{all}}\) and \(\Lambda^{\mathrm{std}}\), i.e., for \(\star\in\{\mathrm{ABS},\mathrm{NOR}\}\),_ \[\mathrm{ALG}\text{-}p^{\star}(\Lambda^{\mathrm{all}})=\mathrm{ALG}\text{-}p^{\star}(\Lambda^{\mathrm{std}}),\quad\mathrm{EXP}\text{-}p^{\star}(\Lambda^{\mathrm{all}})=\mathrm{EXP}\text{-}p^{\star}(\Lambda^{\mathrm{std}}).\] ## 3. Proofs of Theorems 2.1-2.3 Theorem 2.1 can be proved in much the same way as [9, Theorem 2.1] and [43, Theorem 1]. For the convenience of the reader we give the proof. Let us keep the notations of Subsection 2.1. We define the probability density \[\rho_{m}(\mathbf{x})=\frac{1}{2}\left(\frac{1}{m}\sum_{k=1}^{m}\left|\eta_{k,d}(\mathbf{x})\right|^{2}+\frac{\sum_{k=m+1}^{\infty}\sigma_{k,d}^{2}\left|\eta_{k,d}(\mathbf{x})\right|^{2}}{\sum_{k=m+1}^{\infty}\sigma_{k,d}^{2}}\right)\] on \(D_{d}\). Let \(\mathbf{x}^{1},\ldots,\mathbf{x}^{n}\in D_{d}\) be points drawn independently and identically distributed according to this density.
We define the infinite-dimensional vectors \(y_{1},\ldots,y_{n}\) by \[\left(y_{i}\right)_{k}=\left\{\begin{array}{ll}\rho_{m}\left(\mathbf{x}^{i}\right)^{-1/2}\eta_{k,d}\left(\mathbf{x}^{i}\right)&\text{ if }1\leq k\leq m,\\ \rho_{m}\left(\mathbf{x}^{i}\right)^{-1/2}\gamma_{m}^{-1}\sigma_{k,d}\eta_{k,d}\left(\mathbf{x}^{i}\right)&\text{ if }m+1\leq k<\infty,\end{array}\right.\] where \[\gamma_{m}:=\max\left\{\sigma_{m+1,d},\left(\frac{1}{m}\sum_{k\geq m+1}\sigma_{k,d}^{2}\right)^{1/2}\right\}>0.\] Note that \(\rho_{m}(\mathbf{x}^{i})>0\) almost surely. It follows from these definitions that \(y_{i}\in\ell_{2}\) with \[\left\|y_{i}\right\|_{2}^{2}=\rho_{m}\left(\mathbf{x}^{i}\right)^{-1}\left(\sum_{k=1}^{m}\left|\eta_{k,d}\left(\mathbf{x}^{i}\right)\right|^{2}+\gamma_{m}^{-2}\sum_{k=m+1}^{\infty}\sigma_{k,d}^{2}\left|\eta_{k,d}\left(\mathbf{x}^{i}\right)\right|^{2}\right)\leq 2m,\] and \[\mathbb{E}\left(y_{i}y_{i}^{*}\right)=\mathrm{diag}\left(1,\ldots,1,\sigma_{m+1,d}^{2}/\gamma_{m}^{2},\sigma_{m+2,d}^{2}/\gamma_{m}^{2},\ldots\right)=:E,\] with \(\left\|E\right\|_{2\to 2}=1\), since \(\sigma_{k,d}^{2}/\gamma_{m}^{2}\leq 1\) for \(k\geq m+1\). Here, \(\mathrm{diag}(v)\) denotes a diagonal matrix with diagonal \(v\), and \(\left\|\cdot\right\|_{2\to 2}\) denotes the spectral norm of a matrix. In order to prove Theorem 2.1 we need the following lemmas. **Lemma 3.1**.: _(See [34, Theorem 1.1] and [35, Theorem 5.3]). Let \(n\geq 3\) and \(y_{1},\ldots,y_{n}\) be i.i.d. random sequences from \(\ell_{2}\) satisfying \(\left\|y_{i}\right\|_{2}^{2}\leq 2m\) almost surely and \(\left\|E\right\|_{2\to 2}\leq 1\) with \(E=\mathbb{E}\left(y_{i}y_{i}^{*}\right)\). Then for \(0\leq t\leq 1\)_ \[\mathbb{P}\left(\left\|\frac{1}{n}\sum_{i=1}^{n}y_{i}y_{i}^{*}-E\right\|_{2\to 2}>t\right)\leq 2^{3/4}n\exp\left(-\frac{nt^{2}}{42m}\right).\] Lemma 3.1 gives a concentration inequality for infinite matrices. By Lemma 3.1, we know that there exists a deterministic sample \(\mathbf{x}^{1},\ldots,\mathbf{x}^{n}\in D_{d}\) with \(n=\left\lfloor 10^{4}m\log(m+1)\right\rfloor\) such that the corresponding \(y_{1},\ldots,y_{n}\) satisfy \[\left\|\frac{1}{n}\sum_{i=1}^{n}y_{i}y_{i}^{*}-E\right\|_{2\to 2}\leq\frac{1}{2}.\] The following lemma gives an infinite-dimensional version of the subsampling theorem that might be of independent interest. **Lemma 3.2**.: _(See [9, Proposition 13]). There are absolute constants \(c_{1}\leq 43200,c_{2}\geq 50,0<c_{3}\leq 21600\) with the following properties. Let \(m\in\mathbb{N},\;n=\left\lfloor 10^{4}m\log(m+1)\right\rfloor\), and \(y_{1},\ldots,y_{n}\) be vectors from \(\ell_{2}\) satisfying \(\left\|y_{i}\right\|_{2}^{2}\leq 2m\) and_ \[\left\|\frac{1}{n}\sum_{i=1}^{n}y_{i}y_{i}^{*}-\left(\begin{array}{cc}I_{m}&0\\ 0&\Lambda\end{array}\right)\right\|_{2\to 2}\leq\frac{1}{2}\] _for some Hermitian matrix \(\Lambda\) with \(\left\|\Lambda\right\|_{2\to 2}\leq 1\), where \(I_{m}\in\mathbb{C}^{m\times m}\) denotes the identity matrix. Then, there is a subset \(J\subset\{1,\ldots,n\}\) with \(|J|\leq c_{1}m\) such that_ \[c_{2}\left(\begin{array}{cc}I_{m}&0\\ 0&0\end{array}\right)\leq\frac{1}{m}\sum_{i\in J}y_{i}y_{i}^{*}\leq c_{3}I.\] _We can choose \(c_{1}=43200\), \(c_{2}=50\), and \(c_{3}=21600\)._ **Lemma 3.3**.: _(See [43, Theorem 2.1]).
Let_ \[P_{m}(f):=\sum_{k=1}^{m}\langle f,\eta_{k,d}\rangle_{L_{2}(\varrho_{d})}\,\eta_{k,d}=\sum_{k=1}^{m}\langle f,e_{k,d}\rangle_{H(K_{d})}\,e_{k,d}.\] _Then we have_ \[\sup_{\|f\|_{H(K_{d})}\leq 1}\|f-P_{m}f\|_{\infty}\leq\sqrt{2\sum_{k\geq\lfloor m/4\rfloor}\frac{N_{\varrho_{d}}(4k)}{k}\sigma_{k,d}^{2}}.\] Proof of Theorem 2.1.: Let \(f\in H(K_{d})\) be such that \(\|f\|_{H(K_{d})}\leq 1\). According to Lemmas 3.1 and 3.2, we obtain points \(\mathbf{x}^{1},\ldots,\mathbf{x}^{n}\in D_{d}\) with \(n\leq 43200m\) such that the vectors \[\left(y_{i}\right)_{k}=\left\{\begin{array}{cc}\rho_{m}\left(\mathbf{x}^{i}\right)^{-1/2}\eta_{k,d}\left(\mathbf{x}^{i}\right),&\text{if }1\leq k\leq m,\\ \rho_{m}\left(\mathbf{x}^{i}\right)^{-1/2}\gamma_{m}^{-1}\sigma_{k,d}\eta_{k,d}\left(\mathbf{x}^{i}\right),&\text{if }m+1\leq k<\infty,\end{array}\right.\] satisfy \[\Big{(}\sum_{i=1}^{n}y_{i}y_{i}^{*}\Big{)}_{\leq m}\geq 50mI,\] and \[\Big{(}\sum_{i=1}^{n}y_{i}y_{i}^{*}\Big{)}_{>m}\leq 21600mI,\] where for an infinite matrix \(A\) we use the notations \(A_{\leq m}=(A_{k,l})_{k,l\leq m}\) and \(A_{>m}=(A_{k,l})_{k,l>m}\). For the above \(\mathbf{X}=(\mathbf{x}^{1},\dots,\mathbf{x}^{n})\in D_{d}^{n}\), we set \[G:=\left(\begin{array}{cccc}\widetilde{\eta}_{1,d}(\mathbf{x}^{1})&\widetilde{\eta}_{2,d}(\mathbf{x}^{1})&\cdots&\widetilde{\eta}_{m,d}(\mathbf{x}^{1})\\ \widetilde{\eta}_{1,d}(\mathbf{x}^{2})&\widetilde{\eta}_{2,d}(\mathbf{x}^{2})&\cdots&\widetilde{\eta}_{m,d}(\mathbf{x}^{2})\\ \vdots&\vdots&&\vdots\\ \widetilde{\eta}_{1,d}(\mathbf{x}^{n})&\widetilde{\eta}_{2,d}(\mathbf{x}^{n})&\cdots&\widetilde{\eta}_{m,d}(\mathbf{x}^{n})\end{array}\right)\in\mathbb{C}^{n\times m},\] where \(\widetilde{\eta}_{k,d}:=\frac{\eta_{k,d}}{\sqrt{\rho_{m}}}\). Then we have the identity \[G^{*}G=\Big{(}\sum_{i=1}^{n}y_{i}y_{i}^{*}\Big{)}_{\leq m}.\] It follows that the matrix \(G\) has full rank and the spectral norm of \(G^{+}\) is bounded by \((50m)^{-1/2}\), where \(G^{+}:=(G^{*}G)^{-1}G^{*}\in\mathbb{C}^{m\times n}\) is the Moore-Penrose inverse of the matrix \(G\). We define the weighted least squares estimator \[A_{n}(f):=\operatorname*{arg\,min}_{g\in V_{m}}\sum_{i=1}^{n}\frac{|f(\mathbf{x}^{i})-g(\mathbf{x}^{i})|^{2}}{\rho_{m}(\mathbf{x}^{i})},\] which, since \(G\) has full rank, has a unique solution in \(V_{m}=\operatorname{span}\{\eta_{1,d},\eta_{2,d},\dots,\eta_{m,d}\}\). For all \(f\in V_{m}\) we have \(A_{n}(f)=f\). Let \(N:H(K_{d})\to\mathbb{C}^{n}\) with \(Nf:=(\rho_{m}(\mathbf{x}^{i})^{-1/2}f(\mathbf{x}^{i}))_{1\leq i\leq n}\) be the information mapping. Then the algorithm \(A_{n}\) may be written as \[A_{n}(f)=\sum_{k=1}^{m}(G^{+}Nf)_{k}\eta_{k,d}.\] Now we estimate \(\|f-A_{n}f\|_{\infty}\).
Note that \[\|f-A_{n}f\|_{\infty} \leq\|f-P_{m}f\|_{\infty}+\|P_{m}f-A_{n}f\|_{\infty} \tag{3.1}\] \[\leq 2\max\Big{\{}\|f-P_{m}f\|_{\infty},\|P_{m}f-A_{n}f\|_{\infty}\Big{\}}.\] We have \[\|P_{m}f-A_{n}f\|_{\infty}^{2} =\|A_{n}(f-P_{m}f)\|_{\infty}^{2}\] \[=\sup_{\mathbf{x}\in D_{d}}|A_{n}(f-P_{m}f)(\mathbf{x})|^{2}\] \[=\sup_{\mathbf{x}\in D_{d}}\left|\sum_{k=1}^{m}(G^{+}N(f-P_{m}f))_{k}\eta_{k,d}(\mathbf{x})\right|^{2}\] \[\leq\sup_{\mathbf{x}\in D_{d}}\sum_{k=1}^{m}|\eta_{k,d}(\mathbf{x})|^{2}\cdot\|G^{+}N(f-P_{m}f)\|_{\ell_{2}^{m}}^{2}\] \[\leq N_{\varrho_{d}}(m)\cdot\|G^{+}\|_{2\to 2}^{2}\cdot\|N(f-P_{m}f)\|_{\ell_{2}^{n}}^{2} \tag{3.2}\] \[\leq\frac{1}{50m}N_{\varrho_{d}}(m)\cdot\|N(f-P_{m}f)\|_{\ell_{2}^{n}}^{2}.\] We set \[\Psi:=\big{(}\rho_{m}(\mathbf{x}^{i})^{-1/2}\sigma_{k,d}\eta_{k,d}(\mathbf{x}^{i})\big{)}_{k\geq m+1,i\leq n}=\big{(}\rho_{m}(\mathbf{x}^{i})^{-1/2}e_{k,d}(\mathbf{x}^{i})\big{)}_{k\geq m+1,i\leq n},\] and \[\zeta_{f}:=\big{(}\langle f,\sigma_{k,d}\eta_{k,d}\rangle_{H(K_{d})}\big{)}_{k\geq m+1}=\big{(}\langle f,e_{k,d}\rangle_{H(K_{d})}\big{)}_{k\geq m+1},\] where \(\{e_{k,d}\}_{k\geq 1}\) is an orthonormal basis in \(H(K_{d})\). Obviously, we have \[\|\zeta_{f}\|_{\ell_{2}}^{2}=\sum_{k\geq m+1}|\langle f,e_{k,d}\rangle_{H(K_{d})}|^{2}=\|f-P_{m}f\|_{H(K_{d})}^{2}\leq 1.\] Thus, we obtain \[N(f-P_{m}f)=\Psi\zeta_{f}\] and \[\Psi^{*}\Psi=\gamma_{m}^{2}\Big{(}\sum_{i=1}^{n}y_{i}y_{i}^{*}\Big{)}_{>m}.\] It follows from Lemma 3.2 that \[\|\Psi\|_{2\to 2}^{2}\leq 21600m\gamma_{m}^{2}.\] Hence, we get \[\|N(f-P_{m}f)\|_{\ell_{2}^{n}}^{2} \leq\|\Psi\|_{2\to 2}^{2}\|\zeta_{f}\|_{\ell_{2}}^{2}\leq 21600m\gamma_{m}^{2}\] \[\leq 21600m\max\bigg{\{}\sigma_{m+1,d}^{2},\frac{1}{m}\sum_{k\geq m+1}\sigma_{k,d}^{2}\bigg{\}}.\] It follows from (3.2) that \[\|P_{m}f-A_{n}f\|_{\infty}^{2} \leq N_{\varrho_{d}}(m)\cdot\frac{1}{50m}21600m\max\Big{\{}\sigma_{m+1,d}^{2},\frac{1}{m}\sum_{k\geq m+1}\sigma_{k,d}^{2}\Big{\}} \tag{3.3}\] \[\leq 864\frac{N_{\varrho_{d}}(m)}{m}\sum_{k\geq\lfloor m/2\rfloor}\sigma_{k,d}^{2},\] where in the last inequality we use \[\max\Big{\{}\sigma_{m+1,d}^{2},\frac{1}{m}\sum_{k\geq m+1}\sigma_{k,d}^{2}\Big{\}}\leq\frac{2}{m}\sum_{k\geq\lfloor m/2\rfloor}\sigma_{k,d}^{2}.\] By (3.1), (3.3) and Lemma 3.3, we obtain \[g_{c_{1}m}(I_{\infty,d})^{2} \leq\sup_{\|f\|_{H(K_{d})}\leq 1}\|f-A_{n}f\|_{\infty}^{2}\] \[\leq c\max\Big{\{}\frac{N_{\varrho_{d}}(m)}{m}\sum_{k\geq\lfloor\frac{m}{2}\rfloor}\sigma_{k,d}^{2},\sum_{k\geq\lfloor\frac{m}{4}\rfloor}\frac{N_{\varrho_{d}}(4k)\sigma_{k,d}^{2}}{k}\Big{\}}. \tag{3.4}\] This completes the proof of Theorem 2.1. Proof of Theorem 2.2.: In the case \(I_{\infty,d}=\operatorname{APP}_{\infty,d}\), we have \(N_{\varrho_{d}}(k)=k\), since \(|\eta_{k,d}(\mathbf{x})|\equiv 1\) for the trigonometric system. It follows from (3.4) that \[g_{8c_{1}m}(\operatorname{APP}_{\infty,d})^{2}\leq c\sum_{k\geq 2m}\sigma_{k,d}^{2}=ca_{2m}(\operatorname{APP}_{\infty,d})^{2}\leq ca_{m+1}(\operatorname{APP}_{\infty,d})^{2}.\] Theorem 2.2 is proved.
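The weighted least squares estimator \(A_{n}\) used in the proofs above is easy to experiment with numerically. The following is a minimal sketch (ours; uniform sampling, i.e., \(\rho_{m}\equiv 1\), rather than the optimal density, and a hypothetical target function) for \(d=1\) on the torus:

```python
import numpy as np

# Our minimal sketch of a least squares estimator on V_m for d = 1:
# uniform i.i.d. samples (rho_m = 1, the Lebesgue case), the Fourier
# span V_m, and a hypothetical smooth 1-periodic target f.
rng = np.random.default_rng(0)
m, n = 9, 200
freqs = np.arange(-(m // 2), m // 2 + 1)           # m Fourier modes
f = lambda x: 1.0 / (1.2 - np.cos(2 * np.pi * x))  # smooth periodic target

x = rng.uniform(0.0, 1.0, n)                       # i.i.d. nodes
G = np.exp(2j * np.pi * np.outer(x, freqs))        # n x m design matrix
coef, *_ = np.linalg.lstsq(G, f(x).astype(complex), rcond=None)

xx = np.linspace(0.0, 1.0, 2001)
approx = (np.exp(2j * np.pi * np.outer(xx, freqs)) @ coef).real
print("sup-norm error ~", np.max(np.abs(approx - f(xx))))
```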
Proof of Theorem 2.3.: By (2.16) we have \[e(n,\operatorname{APP}_{\infty,d};\Lambda^{\operatorname{std}})\leq c_{2}e(\lfloor\frac{n}{c_{1}}\rfloor,\operatorname{APP}_{\infty,d};\Lambda^{\operatorname{all}}).\] It follows that \[n^{\star}(\varepsilon,d;\Lambda^{\operatorname{std}}) =\min\big{\{}n\,:\,e(n,\operatorname{APP}_{\infty,d};\Lambda^{\operatorname{std}})\leq\varepsilon\text{CRI}_{d}\big{\}}\] \[\leq\min\big{\{}n:c_{2}e(\lfloor\frac{n}{c_{1}}\rfloor,\operatorname{APP}_{\infty,d};\Lambda^{\operatorname{all}})\leq\varepsilon\text{CRI}_{d}\big{\}}\] \[\leq\min\big{\{}c_{1}m+c_{1}:e(m,\operatorname{APP}_{\infty,d};\Lambda^{\operatorname{all}})\leq\frac{\varepsilon}{c_{2}}\text{CRI}_{d}\big{\}}.\] Hence, we have \[n^{\star}(\varepsilon,d;\Lambda^{\operatorname{std}})\leq c_{1}+c_{1}n^{\star}(\frac{\varepsilon}{c_{2}},d;\Lambda^{\operatorname{all}})\leq 2c_{1}n^{\star}(\frac{\varepsilon}{c_{2}},d;\Lambda^{\operatorname{all}}).\] Theorem 2.3 is proved. ## 4. Equivalences of tractability for \(\Lambda^{\operatorname{all}}\) and \(\Lambda^{\operatorname{std}}\) Consider the approximation problem \(\operatorname{APP}=\{\operatorname{APP}_{\infty,d}\}_{d\in\mathbb{N}}\) in the worst case setting for the absolute or normalized error criterion. Theorem 2.4 gives the equivalences of various notions of algebraic and exponential tractability for \(\Lambda^{\operatorname{all}}\) and \(\Lambda^{\operatorname{std}}\). Since the proofs of the equivalences for ALG-tractability and for EXP-tractability are similar, in this section we only give the proofs of the equivalences of ALG-PT (ALG-SPT), EXP-QPT, EXP-\((s,t)\)-WT, and EXP-UWT for \(\Lambda^{\operatorname{all}}\) and \(\Lambda^{\operatorname{std}}\). **Theorem 4.1**.: _Consider the problem \(\operatorname{APP}=\{\operatorname{APP}_{\infty,d}\}_{d\in\mathbb{N}}\) in the worst case setting for the absolute or normalized error criterion. Then,_ \(\bullet\) ALG-PT _for \(\Lambda^{\operatorname{all}}\) is equivalent to ALG-PT for \(\Lambda^{\operatorname{std}}\)._ \(\bullet\) ALG-SPT _for \(\Lambda^{\operatorname{all}}\) is equivalent to ALG-SPT for \(\Lambda^{\operatorname{std}}\). In this case, the exponents of ALG-SPT for \(\Lambda^{\operatorname{all}}\) and \(\Lambda^{\operatorname{std}}\) are the same._ Proof.: It follows from (2.11) that ALG-PT (ALG-SPT) for \(\Lambda^{\rm std}\) implies ALG-PT (ALG-SPT) for \(\Lambda^{\rm all}\). It suffices to show that ALG-PT (ALG-SPT) for \(\Lambda^{\rm all}\) implies ALG-PT (ALG-SPT) for \(\Lambda^{\rm std}\).
Furthermore, we get \[\mbox{ALG-}p^{\star}(\Lambda^{\rm std})\leq\mbox{ALG-}p^{\star}(\Lambda^{\rm all })\leq\mbox{ALG-}p^{\star}(\Lambda^{\rm std}),\] which means that the exponents of ALG-SPT for \(\Lambda^{\rm all}\) and \(\Lambda^{\rm std}\) are the same. This completes the proof of Theorem 4.1. **Theorem 4.2**.: _Consider the problem \(\mbox{\rm APP}=\{\mbox{\rm APP}_{\infty,d}\}_{d\in\mathbb{N}}\) in the worst case setting. Then, for the absolute or normalized error criterion \(\mbox{\rm EXP-QPT}\) for \(\Lambda^{\rm all}\) is equivalent to \(\mbox{\rm EXP-QPT}\) for \(\Lambda^{\rm std}\)._ Proof.: Again, it is enough to prove that \(\mbox{\rm EXP-QPT}\) for \(\Lambda^{\rm all}\) implies \(\mbox{\rm EXP-QPT}\) for \(\Lambda^{\rm std}\) for the absolute or normalized error criterion. Suppose that \(\mbox{\rm EXP-QPT}\) holds for \(\Lambda^{\rm all}\) for the absolute or normalized error criterion. Then there exist \(C\geq 1\) and non-negative \(t\) such that for \(\star\in\{\mbox{\rm ABS, NOR}\}\), \[n^{\star}(\varepsilon,d;\Lambda^{\rm all})\leq C\exp(t(1+\ln d)(1+\ln(\ln \varepsilon^{-1}+1))),\mbox{ for all }d\in\mathbb{N},\ \varepsilon\in(0,1). \tag{4.2}\] It follows from (2.17) and (4.2) that \[n^{\star}(\varepsilon,d;\Lambda^{\rm std}) \leq 2c_{1}n^{\star}(\frac{\varepsilon}{c_{2}},d;\Lambda^{\rm all})\] \[\leq 2c_{1}C\exp\big{(}t(1+\ln d)\big{(}1+\ln(\ln\varepsilon^{-1} +\ln c_{2}+1))\big{)}\] \[\leq 2c_{1}C\exp\big{(}t(1+\ln d)(1+\ln(\ln c_{2}+1)+\ln(\ln \varepsilon^{-1}+1))\big{)} \tag{4.3}\] \[\leq 2c_{1}C\exp\big{(}t^{*}(1+\ln d)(1+\ln(\ln\varepsilon^{-1}+1 ))\big{)},\] where \(t^{*}=(1+\ln(\ln c_{2}+1))t\), and in the third inequality we use the fact \[\ln(1+a+b)\leq\ln(1+a)+\ln(1+b),\quad a,b\geq 0.\] The inequality (4.3) implies that \(\mbox{\rm EXP-QPT}\) holds for \(\Lambda^{\rm std}\) for the absolute or normalized error criterion. Theorem 4.2 is proved. **Remark 4.3**.: _From (2.11) and (4.3) we obtain_ \[\mbox{\rm EXP-}t^{*}(\Lambda^{\rm all})\leq\mbox{\rm EXP-}t^{*}(\Lambda^{\rm std })\leq(1+\ln(\ln c_{2}+1))\mbox{\rm EXP-}t^{*}(\Lambda^{\rm all}).\] _Similarly, we get_ \[\mbox{\rm ALG-}t^{\star}(\Lambda^{\rm all})\leq\mbox{\rm ALG-}t^{\star}( \Lambda^{\rm std})\leq(1+\ln c_{2})\mbox{\rm ALG-}t^{\star}(\Lambda^{\rm all }).\] _Since \(c_{2}>1\), we cannot obtain that the exponents \(t^{*}(\Lambda^{\rm all})\) and \(t^{*}(\Lambda^{\rm std})\) of QPT are equal._ **Theorem 4.4**.: _Consider the problem \(\mathrm{APP}=\{\mathrm{APP}_{\infty,d}\}_{d\in\mathbb{N}}\) in the worst case setting for the absolute error criterion or normalized error criterion. Then for fixed \(s,t>0\), \(\mathrm{EXP}\)-\((s,t)\)-WT for \(\Lambda^{\mathrm{all}}\) is equivalent to \(\mathrm{EXP}\)-\((s,t)\)-WT for \(\Lambda^{\mathrm{std}}\). Specifically, \(\mathrm{EXP}\)-WT for \(\Lambda^{\mathrm{all}}\) is equivalent to \(\mathrm{EXP}\)-WT for \(\Lambda^{\mathrm{std}}\)._ Proof.: Again, it is enough to prove that \(\mathrm{EXP}\)-\((s,t)\)-WT for \(\Lambda^{\mathrm{all}}\) implies \(\mathrm{EXP}\)-\((s,t)\)-WT for \(\Lambda^{\mathrm{std}}\). Suppose that \(\mathrm{EXP}\)-\((s,t)\)-WT holds for \(\Lambda^{\mathrm{all}}\). Then we have \[\lim_{\varepsilon^{-1}+d\to\infty}\frac{\ln n^{\star}(\varepsilon,d;\Lambda^{ \mathrm{all}})}{(1+\ln\varepsilon^{-1})^{s}+d^{t}}=0. 
\tag{4.4}\] It follows from (2.17) that \[\frac{\ln n^{\star}(\varepsilon,d;\Lambda^{\mathrm{std}})}{(1+\ln\varepsilon^{-1})^{s}+d^{t}}\leq\frac{\ln\left(2c_{1}n^{\star}(\frac{\varepsilon}{c_{2}},d;\Lambda^{\mathrm{all}})\right)}{(1+\ln\varepsilon^{-1})^{s}+d^{t}}\leq\frac{\ln(2c_{1})}{(1+\ln\varepsilon^{-1})^{s}+d^{t}}+\frac{\ln n^{\star}(\varepsilon/c_{2},d;\Lambda^{\mathrm{all}})}{(1+\ln(\varepsilon/c_{2})^{-1})^{s}+d^{t}}\cdot G,\] where \[G :=\frac{(1+\ln(\varepsilon/c_{2})^{-1})^{s}+d^{t}}{(1+\ln\varepsilon^{-1})^{s}+d^{t}}\] \[\leq\frac{2^{s}(1+\ln\varepsilon^{-1})^{s}+2^{s}(\ln c_{2})^{s}+d^{t}}{(1+\ln\varepsilon^{-1})^{s}+d^{t}}\] \[\leq 2^{s}+\frac{2^{s}(\ln c_{2})^{s}}{(1+\ln\varepsilon^{-1})^{s}+d^{t}}\leq 2^{s}+2^{s}(\ln c_{2})^{s}.\] Since \(\varepsilon^{-1}+d\to\infty\) is equivalent to \((1+\ln(\varepsilon/c_{2})^{-1})^{s}+d^{t}\to\infty\), by (4.4) we get \[\lim_{\varepsilon^{-1}+d\to\infty}\frac{\ln(2c_{1})}{(1+\ln\varepsilon^{-1})^{s}+d^{t}}=0\quad\text{and}\quad\lim_{\varepsilon^{-1}+d\to\infty}\frac{\ln n^{\star}(\varepsilon/c_{2},d;\Lambda^{\mathrm{all}})}{(1+\ln(\varepsilon/c_{2})^{-1})^{s}+d^{t}}=0.\] We obtain \[\lim_{\varepsilon^{-1}+d\to\infty}\frac{\ln n^{\star}(\varepsilon,d;\Lambda^{\mathrm{std}})}{(1+\ln\varepsilon^{-1})^{s}+d^{t}}=0,\] which implies that \(\mathrm{EXP}\)-\((s,t)\)-WT holds for \(\Lambda^{\mathrm{std}}\). This completes the proof of Theorem 4.4. **Theorem 4.5**.: _Consider the problem \(\mathrm{APP}=\{\mathrm{APP}_{\infty,d}\}_{d\in\mathbb{N}}\) in the worst case setting for the absolute or normalized error criterion. Then, \(\mathrm{EXP}\)-\(\mathrm{UWT}\) for \(\Lambda^{\mathrm{all}}\) is equivalent to \(\mathrm{EXP}\)-\(\mathrm{UWT}\) for \(\Lambda^{\mathrm{std}}\)._ Proof.: By definition we know that \(\mathrm{APP}\) is \(\mathrm{EXP}\)-\(\mathrm{UWT}\) if and only if \(\mathrm{APP}\) is \(\mathrm{EXP}\)-\((s,t)\)-WT for all \(s,t>0\). Then Theorem 4.5 follows from Theorem 4.4 immediately. ## 5. Applications of Theorem 2.4 This section is devoted to applications of Theorem 2.4 to weighted Korobov spaces and Korobov spaces with exponential weight. First we claim that the information complexity of \(L_{2}\) approximation in the average case setting with the covariance kernel \(K_{d}^{\omega}\) given in (2.9) and the one of \(\mathrm{APP}=\{\mathrm{APP}_{\infty,d}\}_{d\in\mathbb{N}}\) are the same using \(\Lambda^{\mathrm{all}}\). Consider the approximation problem \(\tilde{I}=\{\tilde{I}_{d}\}_{d\in\mathbb{N}}\), \[\tilde{I}_{d}\ :\ C([0,1]^{d})\to L_{2}([0,1]^{d})\quad\text{with}\quad\tilde{I}_{d}(f)=f. \tag{5.1}\] The space \(C([0,1]^{d})\) of continuous real functions is equipped with a zero-mean Gaussian measure \(\mu_{d}\) whose covariance kernel is given by \(K_{d}^{\omega}\). We approximate \(\tilde{I}_{d}\,f\) by algorithms \(A_{n,d}f\) of the form (2.3) that use \(n\) continuous linear functionals \(L_{i},\ i=1,\ldots,n\) on \(C([0,1]^{d})\). The average case error for \(A_{n,d}\) is defined by \[e^{\mathrm{avg}}(A_{n,d})\ =\ \Big{[}\int_{C([0,1]^{d})}\big{\|}\tilde{I}_{d}\left(f\right)-A_{n,d}(f)\big{\|}_{L_{2}([0,1]^{d})}^{2}\mu_{d}(\mathrm{d}f)\Big{]}^{\frac{1}{2}}.\] The \(n\)th minimal average case error, for \(n\geq 1\), is defined by \[e^{\mathrm{avg}}(n,\tilde{I}_{d})=\inf_{A_{n,d}}e^{\mathrm{avg}}(A_{n,d}),\] where the infimum is taken over all algorithms of the form (2.3).
Let \(\{\lambda_{k,d}\}_{k\in\mathbb{N}}\) be the nonincreasing rearrangement of the sequence \(\{\omega(\mathbf{k})^{-2}\}_{\mathbf{k}\in\mathbb{Z}^{d}}\). Then the \(n\)th minimal average case error \(e^{\mathrm{avg}}(n,\tilde{I}_{d})\) is (see [37]) \[e^{\mathrm{avg}}(n,\tilde{I}_{d})=\Big{(}\sum_{k=n+1}^{\infty}\lambda_{k,d}\Big{)}^{1/2}.\] For \(n=0\), we use \(A_{0,d}=0\). We obtain the so-called initial error \[e^{\mathrm{avg}}(0,\tilde{I}_{d})=e^{\mathrm{avg}}(A_{0,d})=\Big{(}\sum_{k=1}^{\infty}\lambda_{k,d}\Big{)}^{1/2}.\] The information complexity for \(\tilde{I}_{d}\) in the average case setting can be studied using either the absolute error criterion (ABS) or the normalized error criterion (NOR). We define the information complexity \(n^{\mathrm{avg},\star}(\varepsilon,\tilde{I}_{d})\) for \(\star\in\{\mathrm{ABS},\,\mathrm{NOR}\}\) as \[n^{\mathrm{avg},\star}(\varepsilon,\tilde{I}_{d}):=\min\{n:\,e^{\mathrm{avg}}(n,\tilde{I}_{d})\leq\varepsilon\mathrm{CRI}_{d}\},\] where \[\mathrm{CRI}_{d}=\left\{\begin{array}{cl}1,&\text{for $\star$=ABS},\\ e^{\mathrm{avg}}(0,\tilde{I}_{d}),&\text{for $\star$=NOR}.\end{array}\right.\] We note that \[e^{\mathrm{avg}}(n,\tilde{I}_{d})=e(n,\mathrm{APP}_{\infty,d};\Lambda^{\mathrm{all}})\ \ \text{and}\ \ e^{\mathrm{avg}}(0,\tilde{I}_{d})=e(0,\mathrm{APP}_{\infty,d}).\] It follows that \[n^{\mathrm{avg},\star}(\varepsilon,\tilde{I}_{d})=n^{\star}(\varepsilon,\mathrm{APP}_{\infty,d};\Lambda^{\mathrm{all}}),\] which shows the Claim. This means that using \(\Lambda^{\mathrm{all}}\), ALG-tractability and EXP-tractability of various notions for \(\mathrm{APP}=\{\mathrm{APP}_{\infty,d}\}_{d\in\mathbb{N}}\) in the worst case setting and for \(\tilde{I}=\{\tilde{I}_{d}\}_{d\in\mathbb{N}}\) in the average case setting are the same. ### Weighted Korobov spaces \(H(K_{d,\mathbf{r},\mathbf{g}})\) Let \(\mathbf{r}=\{r_{k}\}_{k\in\mathbb{N}}\) and \(\mathbf{g}=\{g_{k}\}_{k\in\mathbb{N}}\) be two sequences satisfying \[1\geq g_{1}\geq g_{2}\geq\cdots\geq g_{k}\geq\cdots>0, \tag{5.2}\] and \[\frac{1}{2}<r_{1}\leq r_{2}\leq\cdots\leq r_{k}\leq\cdots. \tag{5.3}\] For \(d=1,2,\cdots\), we define the spaces \[H_{d,\mathbf{r},\mathbf{g}}=H_{1,r_{1},g_{1}}\otimes H_{1,r_{2},g_{2}}\otimes\cdots\otimes H_{1,r_{d},g_{d}}.\] Here \(H_{1,\alpha,\beta}\) is the Korobov space of univariate complex valued functions \(f\) defined on \([0,1]\) such that \[\|f\|^{2}_{H_{1,\alpha,\beta}}:=|\hat{f}(0)|^{2}+\beta^{-1}\sum_{h\in\mathbb{Z},h\neq 0}|h|^{2\alpha}|\hat{f}(h)|^{2}<\infty,\] where \(\beta\in(0,1]\) is a scaling parameter, \(\alpha>0\) is a smoothness parameter, \[\hat{f}(h)=\int_{0}^{1}f(x)e^{-2\pi\mathrm{i}hx}\mathrm{d}x\ \ \text{for}\ \ h\in\mathbb{Z}\] are the Fourier coefficients of \(f\), and \(\mathrm{i}=\sqrt{-1}\). If \(\alpha>\frac{1}{2}\), then \(H_{1,\alpha,\beta}\) consists of \(1\)-periodic functions and is a reproducing kernel Hilbert space with reproducing kernel \[R_{\alpha,\beta}(x,y):=1+2\beta\sum_{j=1}^{\infty}j^{-2\alpha}\cos(2\pi j(x-y)),\ \ x,y\in[0,1].\] If \(\alpha\) is an integer, then \(H_{1,\alpha,\beta}\) consists of \(1\)-periodic functions \(f\) such that \(f^{(\alpha-1)}\) is absolutely continuous, \(f^{(\alpha)}\) belongs to \(L_{2}([0,1])\), and \[\|f\|^{2}_{H_{1,\alpha,\beta}}=\big{|}\int_{[0,1]}f(x)\mathrm{d}x\big{|}^{2}+(2\pi)^{2\alpha}\beta^{-1}\int_{[0,1]}|f^{(\alpha)}(x)|^{2}\mathrm{d}x.\] See [37, Appendix A].
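A minimal numerical sketch (ours) of the univariate kernel \(R_{\alpha,\beta}\) via a truncated series; on the diagonal it sums to \(1+2\beta\zeta(2\alpha)\), which is finite exactly when \(\alpha>1/2\):

```python
import numpy as np

def korobov_kernel(x, y, alpha=2.0, beta=1.0, J=100_000):
    """Truncated series for R_{alpha,beta}(x, y) on [0, 1] (needs alpha > 1/2)."""
    j = np.arange(1, J + 1)
    return 1.0 + 2.0 * beta * np.sum(j ** (-2.0 * alpha)
                                     * np.cos(2 * np.pi * j * (x - y)))

# On the diagonal, R(x, x) = 1 + 2 * beta * zeta(2 * alpha); for alpha = 2
# and beta = 1 this is 1 + 2 * pi**4 / 90 ~ 3.1646.
print(korobov_kernel(0.3, 0.3))   # ~ 3.1646
print(korobov_kernel(0.3, 0.8))   # an off-diagonal value
```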
For \(d\geq 2\) and two sequences \(\mathbf{r}=\{r_{k}\}_{k\in\mathbb{N}}\) and \(\mathbf{g}=\{g_{k}\}_{k\in\mathbb{N}}\), the space \(H_{d,\mathbf{r},\mathbf{g}}\) is a Hilbert space with the inner product \[\langle f,g\rangle_{H_{d,\mathbf{r},\mathbf{g}}}=\sum_{\mathbf{h}\in\mathbb{Z}^{d}}\rho_{d,\mathbf{r},\mathbf{g}}(\mathbf{h})\hat{f}(\mathbf{h})\overline{\hat{g}(\mathbf{h})},\] where \[\rho_{d,\mathbf{r},\mathbf{g}}(\mathbf{h})=\prod_{j=1}^{d}\big{(}\delta_{0,h_{j}}+g_{j}^{-1}(1-\delta_{0,h_{j}})|h_{j}|^{2r_{j}}\big{)},\qquad\delta_{i,j}=\left\{\begin{array}{ll}1,&i=j,\\ 0,&i\neq j,\end{array}\right.\] and \[\hat{f}(\mathbf{h})=\int_{[0,1]^{d}}f(\mathbf{x})e^{-2\pi\mathrm{i}\mathbf{h}\mathbf{x}}\mathrm{d}\mathbf{x}\ \ \text{for}\ \ \mathbf{h}\in\mathbb{Z}^{d}\] are the Fourier coefficients of \(f\), where \(\mathbf{h}\mathbf{x}=h_{1}x_{1}+\cdots+h_{d}x_{d}\). If \(r_{*}:=\inf_{j}r_{j}>1/2\), then \(H_{d,\mathbf{r},\mathbf{g}}\) consists of \(1\)-periodic functions on \([0,1]^{d}\) and is a reproducing kernel Hilbert space with reproducing kernel \[K_{d,\mathbf{r},\mathbf{g}}(\mathbf{x},\mathbf{y}) =\prod_{k=1}^{d}R_{r_{k},g_{k}}(x_{k},y_{k})\] \[=\prod_{k=1}^{d}\Big{(}1+2g_{k}\sum_{j=1}^{\infty}j^{-2r_{k}}\cos(2\pi j(x_{k}-y_{k}))\Big{)},\ \ \mathbf{x},\mathbf{y}\in[0,1]^{d}.\] For integers \(r_{j}\), the inner product of \(H_{d,\mathbf{r},\mathbf{g}}\) can be expressed in terms of derivatives, see [37, Appendix A]. We introduce tractability results of the \(L_{2}\) approximation problem \(\tilde{I}=\{\tilde{I}_{d}\}_{d\in\mathbb{N}}\), \[\tilde{I}_{d}:\,C([0,1]^{d})\to L_{2}([0,1]^{d})\ \ \text{with}\ \ \tilde{I}_{d}(f)=f\] in the average case setting for \(\Lambda^{\text{all}}\). The space \(C([0,1]^{d})\) of continuous real functions is equipped with a zero-mean Gaussian measure \(\mu_{d}\) whose covariance kernel is given by \(K_{d,\mathbf{r},\mathbf{g}}\). For ALG-tractability of \(\tilde{I}\) for \(\Lambda^{\text{all}}\), the sufficient and necessary conditions for ALG-SPT, ALG-PT, and ALG-WT under NOR were given in [28], for ALG-SPT and ALG-PT under ABS in [56], for ALG-QPT under NOR in [16, 28, 52], for ALG-UWT under ABS or NOR in [53], and for ALG-\((s,t)\)-WT under NOR or ABS in [7]. We summarize these results as follows. **Theorem 5.1**.: _Consider the \(L_{2}\) approximation problem \(\tilde{I}=\{\tilde{I}_{d}\}_{d\in\mathbb{N}}\) in the average case setting with covariance kernel \(K_{d,\mathbf{r},\mathbf{g}}\) and weights \(\{g_{k}\}_{k\in\mathbb{N}}\) and smoothness \(\{r_{k}\}_{k\in\mathbb{N}}\) satisfying (5.2) and (5.3) for \(\Lambda^{\text{all}}\)._ 1. For ABS or NOR, ALG-SPT holds iff ALG-PT holds iff \[\liminf_{j\to\infty}\frac{\ln\frac{1}{g_{j}}}{\ln j}>1.\] 2. For NOR, ALG-QPT holds iff \[\sup_{d\in\mathbb{N}}\frac{1}{\ln_{+}d}\sum_{j=1}^{d}g_{j}\ln_{+}\frac{1}{g_{j}}<\infty,\] where \(\ln_{+}x:=\max(1,\ln x)\). 4. For ABS or NOR, ALG-UWT holds iff \[\liminf_{j\to\infty}\frac{\ln\frac{1}{g_{j}}}{\ln j}\geq 1.\] 5. For ABS or NOR, ALG-\((s,t)\)-WT with \(s>0\) and \(t>1\) always holds. 6. For ABS or NOR, ALG-\((s,1)\)-WT with \(s>0\) holds iff ALG-WT holds iff \[\lim_{j\to\infty}g_{j}=0.\] 7.
For ABS or NOR, ALG-\((s,t)\)-WT with \(s>0\) and \(0<t<1\) holds iff \[\lim_{j\to\infty}j^{1-t}g_{j}\ln_{+}\frac{1}{g_{j}}=0.\] We consider the \(L_{\infty}\) approximation problem \(\mathrm{APP}=\{\mathrm{APP}_{\infty,d}\}_{d\in\mathbb{N}}\), \[\mathrm{APP}_{\infty,d}:\,H(K_{d,\mathbf{r},\mathbf{g}})\to L_{\infty}([0,1]^{d})\ \ \text{with}\ \ \mathrm{APP}_{\infty,d}(f)=f\] in the worst case setting. According to the Claim, ALG-tractability of various notions for \(\mathrm{APP}=\{\mathrm{APP}_{\infty,d}\}_{d\in\mathbb{N}}\) in the worst case setting and the ones for \(\tilde{I}=\{\tilde{I}_{d}\}_{d\in\mathbb{N}}\) in the average case setting are the same. By Theorems 5.1 and 2.4, we obtain the following new results. **Theorem 5.2**.: _Consider the \(L_{\infty}\) approximation problem \(\mathrm{APP}=\{\mathrm{APP}_{\infty,d}\}_{d\in\mathbb{N}}\) defined over \(H(K_{d,\mathbf{r},\mathbf{g}})\) with weights \(\{g_{k}\}_{k\in\mathbb{N}}\) and smoothness \(\{r_{k}\}_{k\in\mathbb{N}}\) satisfying (5.2) and (5.3) in the worst case setting for \(\Lambda^{\mathrm{std}}\) and \(\Lambda^{\mathrm{all}}\)._ 1. For ABS or NOR, ALG-SPT holds iff ALG-PT holds iff \[\liminf_{j\to\infty}\frac{\ln\frac{1}{g_{j}}}{\ln j}>1.\] 2. For NOR, ALG-QPT holds iff \[\sup_{d\in\mathbb{N}}\frac{1}{\ln_{+}d}\sum_{j=1}^{d}g_{j}\ln_{+}\frac{1}{g_{j}}<\infty.\] 4. For ABS or NOR, ALG-UWT holds iff \[\liminf_{j\to\infty}\frac{\ln\frac{1}{g_{j}}}{\ln j}\geq 1.\] 5. For ABS or NOR, ALG-\((s,t)\)-WT with \(s>0\) and \(t>1\) always holds. 6. For ABS or NOR, ALG-\((s,1)\)-WT with \(s>0\) holds iff ALG-WT holds iff \[\lim_{j\to\infty}g_{j}=0.\] 7. For ABS or NOR, ALG-\((s,t)\)-WT with \(s>0\) and \(0<t<1\) holds iff \[\lim_{j\to\infty}j^{1-t}g_{j}\ln_{+}\frac{1}{g_{j}}=0.\] **Remark 5.3**.: _In [25, 26], the authors considered the \(L_{\infty}\) approximation problem \(\mathrm{APP}=\{\mathrm{APP}_{\infty,d}\}_{d\in\mathbb{N}}\) defined over the weighted Korobov spaces \(H(K_{d})\) in the worst case setting for \(\Lambda^{\mathrm{all}}\) and \(\Lambda^{\mathrm{std}}\) under ABS, where the reproducing kernel \(K_{d}\) can be written as_ \[K_{d}(\mathbf{x},\mathbf{y})=\sum_{\mathbf{h}\in\mathbb{Z}^{d}}\frac{\cos(2\pi\mathbf{h}\cdot(\mathbf{x}-\mathbf{y}))}{r_{\alpha}(\gamma_{d},\mathbf{h})},\] _\(\alpha>1\) is a smoothness parameter, \(\gamma_{d}=(\gamma_{d,1},\gamma_{d,2},\cdots,\gamma_{d,d})\) is a vector of positive weights satisfying \(1\geq\gamma_{d,1}\geq\gamma_{d,2}\geq\cdots\geq\gamma_{d,d}>0\), and_ \[r_{\alpha}(\gamma_{d},\mathbf{h})=\prod_{j=1}^{d}r_{\alpha}(\gamma_{d,j},h_{j})\ \ \text{with}\ \ r_{\alpha}(\gamma_{d,j},h_{j}):=\begin{cases}1,&h_{j}=0,\\ \gamma_{d,j}^{-1}|h_{j}|^{\alpha},&\text{otherwise.}\end{cases}\] _The authors obtained the sufficient and necessary conditions for ALG-SPT and ALG-PT for the above \(L_{\infty}\) approximation problem._ ### Korobov spaces with exponential weight Now we introduce Korobov kernels with exponential weight. Let \(\mathbf{a}=\{a_{i}\}_{i\in\mathbb{N}}\) and \(\mathbf{b}=\{b_{i}\}_{i\in\mathbb{N}}\) be sequences of positive weights satisfying \[0<a_{1}\leq a_{2}\leq\cdots\quad\text{and}\quad\beta_{*}:=\inf_{i\in\mathbb{N}}b_{i}>0.
\tag{5.4}\] For \(d=1\), \(H(K_{1,\alpha,\beta})\) is a reproducing kernel Hilbert space with reproducing kernel \[K_{1,\alpha,\beta}(x,y)=\sum_{h\in\mathbb{Z}}\omega^{\alpha|h|^{\beta}}\exp(2\pi\mathrm{i}h(x-y)),\ x,y\in[0,1],\ \ \omega\in(0,1),\ \alpha,\beta>0.\] Note that if \(\beta\geq 1\), then all functions in \(H(K_{1,\alpha,\beta})\) are analytic (see [14]). For \(d\geq 2\), the Korobov space \(H(K_{d,\mathbf{a},\mathbf{b}})\) with exponential weight consists of complex valued \(1\)-periodic continuous functions defined on \([0,1]^{d}\), and is a reproducing kernel Hilbert space with reproducing kernel \[K_{d,\mathbf{a},\mathbf{b}}(\mathbf{x},\mathbf{y}) =\prod_{k=1}^{d}K_{1,a_{k},b_{k}}(x_{k}-y_{k})\] \[=\sum_{\mathbf{h}\in\mathbb{Z}^{d}}\omega_{\mathbf{h}}\exp(2\pi\mathrm{i}\mathbf{h}\cdot(\mathbf{x}-\mathbf{y})),\ \mathbf{x},\mathbf{y}\in[0,1]^{d},\] where \(\omega_{\mathbf{h}}=\omega^{\sum_{k=1}^{d}a_{k}|h_{k}|^{b_{k}}}\) for all \(\mathbf{h}=(h_{1},h_{2},\cdots,h_{d})\in\mathbb{Z}^{d}\) for fixed \(\omega\in(0,1)\), and \(\mathbf{h}\cdot(\mathbf{x}-\mathbf{y})=\sum\limits_{k=1}^{d}h_{k}(x_{k}-y_{k})\). For \(f\in H(K_{d,\mathbf{a},\mathbf{b}})\), the norm of \(f\) in \(H(K_{d,\mathbf{a},\mathbf{b}})\) is given by \[\|f\|_{H(K_{d,\mathbf{a},\mathbf{b}})}=\Big{(}\sum_{\mathbf{h}\in\mathbb{Z}^{d}}\omega_{\mathbf{h}}^{-1}|\hat{f}(\mathbf{h})|^{2}\Big{)}^{\frac{1}{2}},\] where \[\hat{f}(\mathbf{h})=\int_{[0,1]^{d}}f(\mathbf{x})\exp(-2\pi\mathrm{i}\mathbf{h}\cdot\mathbf{x})\mathrm{d}\mathbf{x}\] are the Fourier coefficients of \(f\). We introduce previous tractability results. In [19], the authors considered the approximation problem \(\mathrm{APP}=\{\mathrm{APP}_{\infty,d}\}_{d\in\mathbb{N}}\), \[\mathrm{APP}_{\infty,d}:\,H(K_{d,\mathbf{a},\mathbf{b}})\to L_{\infty}([0,1]^{d})\ \ \text{with}\ \ \mathrm{APP}_{\infty,d}(f)=f\] in the worst case setting. They obtained the following results. **Theorem 5.4**.: _(See [19, Theorem 1]). Consider the \(L_{\infty}\) approximation problem \(\mathrm{APP}=\{\mathrm{APP}_{\infty,d}\}_{d\in\mathbb{N}}\) defined over \(H(K_{d,\mathbf{a},\mathbf{b}})\) with arbitrary sequences \(\mathbf{a}\) and \(\mathbf{b}\) satisfying (5.4) in the worst case setting. The following results hold for \(\Lambda^{\mathrm{all}}\) and \(\Lambda^{\mathrm{std}}\) under ABS or NOR._ 1. EXP-SPT _holds iff_ EXP-PT _holds iff_ \[\sum_{j=1}^{\infty}\frac{1}{b_{j}}<\infty\ \ \text{and}\ \ \liminf_{j\to\infty}\frac{\ln a_{j}}{j}>0.\] 2. EXP-\((s,1)\)-WT _for_ \(s\geq 1\) _holds iff_ EXP-WT _holds iff_ \(\lim_{j\to\infty}a_{j}=\infty\).
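To see why tractability is measured in \(1+\ln\varepsilon^{-1}\) here, a short sketch (ours, for \(d=1\) with assumed parameters) computes the eigenvalues \(\omega^{a|h|^{b}}\) and the resulting \(n\)th minimal errors, which decay geometrically in \(n\):

```python
import numpy as np

# Our sketch for d = 1 with assumed parameters: the eigenvalues are the
# sorted values omega^{a * |h|**b}, h in Z, and e(n) = (sum_{k>n} lam_k)^{1/2}
# decays geometrically, so ln(1/eps) is the natural scale (EXP-tractability).
omega, a, b = 0.5, 1.0, 1.0
h = np.arange(-500, 501)
lam = np.sort(omega ** (a * np.abs(h) ** b))[::-1]   # nonincreasing lambda_k
err = np.sqrt(np.cumsum(lam[::-1])[::-1])            # err[n] = e(n)
for n in (0, 5, 10, 20):
    print(n, err[n])   # roughly geometric decay in n
```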
For ALG-tractability of \(\tilde{I}\) for \(\Lambda^{\text{all}}\), the sufficient and necessary conditions for ALG-SPT, ALG-PT, ALG-UWT, and ALG-WT under NOR or ABS, and for ALG-QPT under NOR, were given in [30], for ALG-\((s,t)\)-WT with \(s>0\) and \(t\geq 1\) under NOR or ABS in [29], and for ALG-\((s,t)\)-WT with \(s>0\) and \(t\in(0,1)\) under ABS or NOR in [7]. We summarize these results as follows. **Theorem 5.5**.: _Consider the \(L_{2}\) approximation problem \(\tilde{I}=\{\tilde{I}_{d}\}_{d\in\mathbb{N}}\) in the average case setting with covariance kernel \(K_{d,\mathbf{a},\mathbf{b}}\) and sequences \(\mathbf{a}\) and \(\mathbf{b}\) satisfying (5.4) for \(\Lambda^{\text{all}}\)._ 1. For ABS or NOR, ALG-SPT holds iff ALG-PT holds iff \[\liminf_{j\to\infty}\frac{a_{j}}{\ln j}>\frac{1}{\ln\omega^{-1}}.\] 2. For NOR, ALG-QPT holds iff \[\sup_{d\in\mathbb{N}}\frac{1}{\ln_{+}d}\sum_{j=1}^{d}a_{j}\omega^{a_{j}}<\infty.\] 3. For ABS or NOR, ALG-UWT holds iff \[\liminf_{j\to\infty}\frac{a_{j}}{\ln j}\geq\frac{1}{\ln\omega^{-1}}.\] 4. For ABS or NOR, ALG-WT holds iff \[\lim_{j\to\infty}a_{j}=\infty.\] 5. For ABS or NOR, ALG-\((s,t)\)-WT with \(s>0\) and \(t>1\) always holds. 6. For ABS or NOR, ALG-\((s,1)\)-WT with \(s>0\) holds iff ALG-WT holds iff \[\lim_{j\to\infty}a_{j}=\infty.\] 7. For ABS or NOR, ALG-\((s,t)\)-WT with \(s>0\) and \(0<t<1\) holds iff \[\lim_{j\to\infty}j^{1-t}a_{j}\omega^{a_{j}}=0.\] For the EXP-tractability of \(\tilde{I}\) under ABS or NOR, the sufficient and necessary conditions for EXP-SPT, EXP-PT, EXP-UWT, and EXP-WT were given in [30], and for EXP-\((s,t)\)-WT with \(s,t>0\) and \((s,t)\neq(1,1)\) in [48]. We summarize these results as follows. **Theorem 5.6**.: _Consider the \(L_{2}\) approximation problem \(\tilde{I}=\{\tilde{I}_{d}\}_{d\in\mathbb{N}}\) in the average case setting with covariance kernel \(K_{d,\mathbf{a},\mathbf{b}}\) and sequences \(\mathbf{a}\) and \(\mathbf{b}\) satisfying (5.4) for \(\Lambda^{\text{all}}\) under ABS or NOR._ 1. EXP-SPT holds iff \[\sum_{j=1}^{\infty}\frac{1}{b_{j}}<\infty\;\text{ and }\;\liminf_{j\to\infty}\frac{\ln a_{j}}{j}>0.\] 2. EXP-UWT holds iff \[\lim_{j\to\infty}\frac{\ln a_{j}}{\ln j}=\infty.\] 3. EXP-\((s,t)\)-WT with \(s>0\) and \(t>1\) always holds. 4. EXP-\((s,1)\)-WT with \(s\geq 1\) holds iff EXP-WT holds iff \[\lim_{j\to\infty}a_{j}=\infty.\] 5. EXP-\((s,t)\)-WT with \(0<s<1\) and \(0<t\leq 1\) holds iff \[\lim_{j\to\infty}\frac{a_{j}}{j^{(1-s)/s}}=\infty.\] 6. EXP-\((1,t)\)-WT with \(t<1\) holds iff \[\lim_{j\to\infty}\frac{a_{j}}{\ln j}=\infty.\] 7. EXP-\((s,t)\)-WT with \(s>1\) and \(t<1\) holds iff \[\lim_{j\to\infty}j^{1-t}a_{j}\omega^{a_{j}}=0.\] According to the Claim, the various notions of ALG-tractability and EXP-tractability for \(\text{APP}=\{\text{APP}_{\infty,d}\}_{d\in\mathbb{N}}\) in the worst case setting coincide with those for \(\tilde{I}=\{\tilde{I}_{d}\}_{d\in\mathbb{N}}\) in the average case setting. By Theorems 5.4, 5.5, 5.6, and 2.4, we obtain the following new results. **Theorem 5.7**.: _Consider the \(L_{\infty}\) approximation problem \(\text{APP}=\{\text{APP}_{\infty,d}\}_{d\in\mathbb{N}}\) defined over \(H(K_{d,\mathbf{a},\mathbf{b}})\) with sequences \(\mathbf{a}\) and \(\mathbf{b}\) satisfying (5.4) in the worst case setting for \(\Lambda^{\text{std}}\) and \(\Lambda^{\text{all}}\)._ 1. For ABS or NOR, ALG-SPT holds iff ALG-PT holds iff \[\liminf_{j\to\infty}\frac{a_{j}}{\ln j}>\frac{1}{\ln\omega^{-1}}.\] 2.
For NOR, ALG-QPT holds iff \[\sup_{d\in\mathbb{N}}\frac{1}{\ln_{+}d}\sum_{j=1}^{d}a_{j}\omega^{a_{j}}<\infty.\] 3. For ABS or NOR, ALG-UWT holds iff \[\liminf_{j\to\infty}\frac{a_{j}}{\ln j}\geq\frac{1}{\ln\omega^{-1}}.\] 4. For ABS or NOR, ALG-\((s,t)\)-WT with \(s>0\) and \(t>1\) always holds. 5. For ABS or NOR, ALG-\((s,1)\)-WT with \(s>0\) holds iff ALG-WT holds iff \[\lim_{j\to\infty}a_{j}=\infty.\] 6. For ABS or NOR, ALG-\((s,t)\)-WT with \(s>0\) and \(0<t<1\) holds iff \[\lim_{j\to\infty}j^{1-t}a_{j}\omega^{a_{j}}=0.\] **Theorem 5.8**.: _Consider the \(L_{\infty}\) approximation problem \(\text{APP}=\{\text{APP}_{\infty,d}\}_{d\in\mathbb{N}}\) defined over \(H(K_{d,\mathbf{a},\mathbf{b}})\) with sequences \(\mathbf{a}\) and \(\mathbf{b}\) satisfying (5.4) in the worst case setting for \(\Lambda^{\text{std}}\) and \(\Lambda^{\text{all}}\) under ABS or NOR._ 1. EXP-UWT holds iff \[\lim_{j\to\infty}\frac{\ln a_{j}}{\ln j}=\infty.\] 2. EXP-\((s,t)\)-WT with \(s>0\) and \(t>1\) always holds. 3. EXP-\((s,t)\)-WT with \(0<s<1\) and \(0<t\leq 1\) holds iff \[\lim_{j\to\infty}\frac{a_{j}}{j^{(1-s)/s}}=\infty.\] 4. EXP-\((1,t)\)-WT with \(t<1\) holds iff \[\lim_{j\to\infty}\frac{a_{j}}{\ln j}=\infty.\] 5. EXP-\((s,t)\)-WT with \(s>1\) and \(t<1\) holds iff \[\lim_{j\to\infty}j^{1-t}a_{j}\omega^{a_{j}}=0.\] **Acknowledgment** This work was supported by the National Natural Science Foundation of China (Project no. 11671271).
2308.05590
Budget equations and astrophysical nonlinear mean-field dynamos
Solar, stellar and galactic large-scale magnetic fields originate from a combined action of non-uniform (differential) rotation and helical motions of plasma via mean-field dynamos. Usually, nonlinear mean-field dynamo theories take into account algebraic and dynamic quenching of the alpha effect and algebraic quenching of the turbulent magnetic diffusivity. However, the theories of the algebraic quenching do not take into account the effect of the modification of the source of turbulence by the growing large-scale magnetic field. This phenomenon is due to the dissipation of the strong large-scale magnetic field resulting in an increase of the total turbulent energy. This effect has been studied using the budget equation for the total turbulent energy (which takes into account the feedback of the generated large-scale magnetic field on the background turbulence) for (i) a forced turbulence, (ii) a shear-produced turbulence and (iii) a convective turbulence. As a result of this effect, the nonlinear dynamo number decreases with the increase of the large-scale magnetic field, so that the mean-field $\alpha\Omega$, $\alpha^2$ and $\alpha^2\Omega$ dynamo instabilities are always saturated by the strong large-scale magnetic field.
I. Rogachevskii, N. Kleeorin
2023-08-10T13:53:26Z
http://arxiv.org/abs/2308.05590v3
# Budget equations and astrophysical nonlinear mean-field dynamos ###### Abstract Solar, stellar and galactic large-scale magnetic fields originate from a combined action of non-uniform (differential) rotation and helical motions of plasma via mean-field dynamos. Usually, nonlinear mean-field dynamo theories take into account algebraic and dynamic quenching of the alpha effect and algebraic quenching of the turbulent magnetic diffusivity. However, these theories do not take into account a feedback of the mean magnetic field on the background turbulence (with a zero mean magnetic field). Our analysis using the budget equation for the total (kinetic plus magnetic) turbulent energy, which takes into account the feedback of the generated mean magnetic field on the background turbulence, has shown that the nonlinear dynamo number decreases with increase of the mean magnetic field for a forced turbulence, a shear-produced turbulence, and a convective turbulence. This implies that the mean-field \(\alpha\Omega\), \(\alpha^{2}\) and \(\alpha^{2}\Omega\) dynamo instabilities are always saturated. keywords: dynamo -- MHD -- turbulence -- Sun: interior -- Sun: activity -- galaxies: magnetic fields ## 1 Introduction Large-scale magnetic fields in the sun, stars and galaxies are believed to be generated by a joint action of a differential rotation and helical motions of plasma (see, e.g., Moffatt, 1978; Parker, 1979; Krause & Radler, 1980; Zeldovich et al., 1983; Ruzmaikin et al., 1988; Rudiger et al., 2013; Moffatt & Dormy, 2019; Rogachevskii, 2021; Shukurov & Subramanian, 2021). This mechanism can be described by the \(\alpha\Omega\) or \(\alpha^{2}\Omega\) mean-field dynamos. In particular, the effect of turbulence in the mean-field induction equation is determined by the turbulent electromotive force, \(\langle\mathbf{u}\times\mathbf{b}\rangle\), which can be written for a weak mean magnetic field \(\overline{\mathbf{B}}\) as \(\langle\mathbf{u}\times\mathbf{b}\rangle=\alpha_{\rm K}\,\overline{\mathbf{B}}+\mathbf{V}^{\rm(eff)}\times\overline{\mathbf{B}}-\eta_{T}\,(\mathbf{\nabla}\times\overline{\mathbf{B}})\), where \(\alpha_{\rm K}\) is the kinetic \(\alpha\) effect caused by helical motions of plasma, \(\eta_{T}\) is the turbulent magnetic diffusion coefficient, and \(\mathbf{V}^{\rm(eff)}\) is the effective pumping velocity caused by an inhomogeneity of turbulence. Here the angular brackets imply ensemble averaging, and \(\mathbf{u}\) and \(\mathbf{b}\) are fluctuations of velocity and magnetic fields, respectively. The threshold of the \(\alpha\Omega\) mean-field dynamo instability is described in terms of a dynamo number \(D_{\rm L}=\alpha_{\rm K}\,\delta\Omega\,L^{3}/\eta_{T}^{2}\), where \(\delta\Omega\) characterises the non-uniform (differential) rotation and \(L\) is the stellar radius or the thickness of the galactic disk. The mean-field dynamos are saturated by nonlinear effects. In particular, a feedback of the growing large-scale magnetic field on plasma motions is described by algebraic quenching of the \(\alpha\) effect, the turbulent magnetic diffusion, and the effective pumping velocity. This implies that the turbulent transport coefficients, \(\alpha_{\rm K}\big(\overline{B}\big)\), \(\eta_{T}\big(\overline{B}\big)\) and \(\mathbf{V}^{\rm(eff)}\big(\overline{B}\big)\), depend on the mean magnetic field \(\overline{\mathbf{B}}\) via algebraically decreasing functions.
The quantitative theories of the algebraic nonlinearities of the \(\alpha\) effect, the turbulent magnetic diffusion and the effective pumping velocity have been developed using the quasi-linear approach for small fluid and magnetic Reynolds numbers (Rudiger & Kichatinov, 1993; Kitchatinov et al., 1994; Rudiger et al., 2013) and the tau approach for large fluid and magnetic Reynolds numbers (Field et al., 1999; Rogachevskii & Kleeorin, 2000, 2001, 2004, 2006). In addition to the algebraic nonlinearity, there is also a dynamic nonlinearity caused by an evolution of the magnetic helicity density of a small-scale turbulent magnetic field during the nonlinear stage of the mean-field dynamo. In particular, the \(\alpha\) effect has contributions from the kinetic \(\alpha\) effect, \(\alpha_{\rm K}\), determined by the kinetic helicity, and a magnetic \(\alpha\) effect, \(\alpha_{\rm M}\), described by the current helicity of the small-scale turbulent magnetic field (Pouquet et al., 1976). The dynamics of the current helicity are determined by the evolution of the small-scale magnetic helicity density \(H_{\rm m}=\langle\mathbf{a}\cdot\mathbf{b}\rangle\), where \(\mathbf{b}=\mathbf{\nabla}\times\mathbf{a}\) and \(\mathbf{a}\) are fluctuations of the magnetic vector potential. The total magnetic helicity, i.e., the sum of the magnetic helicity densities of the large-scale and small-scale magnetic fields, \(H_{\rm M}+H_{\rm m}\), integrated over the volume, \(\int(H_{\rm M}+H_{\rm m})\,dr^{3}\), is conserved for very small microscopic magnetic diffusivity \(\eta\). Here \(H_{\rm M}=\overline{\mathbf{A}}\cdot\overline{\mathbf{B}}\) is the magnetic helicity density of the large-scale magnetic field \(\overline{\mathbf{B}}=\mathbf{\nabla}\times\overline{\mathbf{A}}\), and \(\overline{\mathbf{A}}\) is the mean magnetic vector potential. As the mean-field dynamo amplifies the mean magnetic field, the large-scale magnetic helicity density \(H_{\rm M}\) grows in time. Since the total magnetic helicity \(\int(H_{\rm M}+H_{\rm m})\,dr^{3}\) is conserved for very small magnetic diffusivity, the magnetic helicity density \(H_{\rm m}\) of the small-scale field changes during the dynamo action, and its evolution is determined by the dynamic equation (Kleeorin & Ruzmaikin, 1982; Zeldovich et al., 1983; Gruzinov & Diamond, 1994; Kleeorin et al., 1995; Kleeorin & Rogachevskii, 1999). In a nonlinear \(\alpha\Omega\) dynamo one can define a nonlinear dynamo number \(D_{\rm N}\left(\overline{B}\right)=\alpha\left(\overline{B}\right)\,\delta\Omega\,L^{3}/\eta_{T}^{2}\left(\overline{B}\right)\). If the nonlinear dynamo number \(D_{\rm N}\left(\overline{B}\right)\) decreases with the increase of the large-scale magnetic field, the mean-field dynamo instability is saturated by the nonlinear effects. However, if the \(\alpha\) effect and the turbulent magnetic diffusion are quenched as \((\overline{B}/\overline{B}_{\rm eq})^{-2}\) for strong mean magnetic fields, then, since \(D_{\rm N}\propto\alpha/\eta_{T}^{2}\), the nonlinear dynamo number \(D_{\rm N}\left(\overline{B}\right)\propto(\overline{B}/\overline{B}_{\rm eq})^{-2}/(\overline{B}/\overline{B}_{\rm eq})^{-4}=(\overline{B}/\overline{B}_{\rm eq})^{2}\) increases with the increase of the large-scale magnetic field, and the mean-field dynamo instability cannot be saturated for a strong mean magnetic field. Here \(\overline{B}_{\rm eq}=\left(\mu_{0}\,\overline{\rho}\,\langle\mathbf{u}^{2}\rangle\right)^{1/2}\) is the equipartition mean magnetic field and \(\mu_{0}\) is the magnetic permeability of the fluid. How is it possible to resolve this paradox?
The mean-field dynamo theories imply that there is a background helical turbulence with a zero mean magnetic field. Due to the combined effect of the differential rotation and helical motions in the background turbulence (described by the kinetic \(\alpha\) effect), a large-scale magnetic field is amplified by the mean-field dynamo instability. In a nonlinear dynamo stage, there is an additional feedback effect of the growing large-scale magnetic field on the background turbulence. However, this effect has not yet been taken into account in nonlinear mean-field dynamo theories. In the present study, we have taken into account the feedback of the mean magnetic field on the background turbulence using the budget equation for the total (kinetic plus magnetic) turbulent energy. Considering three different types of astrophysical turbulence: * a forced turbulence (e.g., caused by supernova explosions in galaxies); * a shear-produced turbulence (e.g., in the atmosphere of the Earth or other planets); and * a convective turbulence (e.g., in solar and stellar convective zones), we have demonstrated that the nonlinear dynamo number decreases with increasing mean magnetic field, however strong the field, for all three kinds of turbulence, resulting in saturation of the mean-field dynamo instability. ## 2 Budget equations Using the Navier-Stokes equation for velocity fluctuations, we derive the budget equation for the density of turbulent kinetic energy (TKE), \(E_{\rm K}=\overline{\rho}\,\langle\mathbf{u}^{2}\rangle/2\), as \[\frac{\partial E_{\rm K}}{\partial t}+{\rm div}\,\mathbf{\Phi}_{\rm K}=\Pi_{\rm K}-\varepsilon_{\rm K}, \tag{1}\] where \(\mathbf{\Phi}_{\rm K}=\left\langle\mathbf{u}\left(\rho\,\mathbf{u}^{2}/2+p\right)\right\rangle-\nu\,\overline{\rho}\,\mathbf{\nabla}E_{\rm K}\) is the flux of TKE, \(\varepsilon_{\rm K}=\nu\,\overline{\rho}\,\langle(\nabla_{j}u_{i})^{2}\rangle\) is the dissipation rate of TKE, and \[\Pi_{\rm K}=-\frac{1}{\mu_{0}}\left[\langle\mathbf{u}\cdot\left[\mathbf{b}\times\left(\mathbf{\nabla}\times\mathbf{b}\right)\right]\rangle-\langle\mathbf{u}\times\left(\mathbf{\nabla}\times\mathbf{b}\right)\rangle\cdot\overline{\mathbf{B}}+\left\langle\mathbf{u}\times\mathbf{b}\right\rangle\cdot\left(\mathbf{\nabla}\times\overline{\mathbf{B}}\right)\right]+\overline{\rho}\left[g\,F_{z}-\left\langle u_{i}u_{j}\right\rangle\,\nabla_{j}\overline{U}_{i}+\left\langle\mathbf{u}\cdot\mathbf{f}\right\rangle\right] \tag{2}\] is the production rate of TKE. Here \(\overline{\mathbf{U}}\) is the mean velocity, \(\nu\) is the kinematic viscosity and the angular brackets imply ensemble averaging, \(\mathbf{F}=\left\langle s\,\mathbf{u}\right\rangle\) is the turbulent flux of the entropy, \(s=\theta/\overline{T}+(\gamma^{-1}-1)p/\overline{P}\) are entropy fluctuations, \(\theta\) and \(\overline{T}\) are fluctuations and mean fluid temperature, \(\rho\) and \(\overline{\rho}\) are fluctuations and mean fluid density, \(p\) and \(\overline{P}\) are fluctuations and mean fluid pressure, \(\gamma=c_{\rm p}/c_{\rm v}\) is the ratio of specific heats, \(g\) is the acceleration due to the gravity and \(\overline{\rho}\,\mathbf{f}\) is the external steering force with a zero mean.
We consider three different cases, in which turbulence is produced by convection, by large-scale shear motions, or by an external steering force; see the last three terms in the RHS of Eq. (2). The first two terms in the RHS of Eq. (2) describe an energy exchange between the turbulent kinetic and magnetic energies (see below), and the third term in the RHS of Eq. (2) is due to the work of the Lorentz force in a nonuniform mean magnetic field. The estimate for the dissipation rate of the turbulent kinetic energy density in homogeneous isotropic and incompressible turbulence with a Kolmogorov spectrum is \(\varepsilon_{\rm K}=E_{\rm K}/\tau_{0}\), where \(\tau_{0}\) is the characteristic turbulent time at the integral scale. Using the induction equation for magnetic fluctuations, we derive the budget equation for the density of turbulent magnetic energy (TME), \(E_{\rm M}=\langle\mathbf{b}^{2}\rangle/2\mu_{0}\), as \[\frac{\partial E_{\rm M}}{\partial t}+{\rm div}\,\mathbf{\Phi}_{\rm M}=\Pi_{\rm M}-\varepsilon_{\rm M}, \tag{3}\] where \[\mathbf{\Phi}_{\rm M}=\frac{1}{\mu_{0}}\bigg[\langle\mathbf{b}\times\left(\mathbf{u}\times\mathbf{b}\right)\rangle+\langle\mathbf{u}\,b_{j}\rangle\,\overline{B}_{j}-\langle\mathbf{u}\cdot\mathbf{b}\rangle\,\overline{\mathbf{B}}+\langle\mathbf{b}^{2}\rangle\,\overline{\mathbf{U}}-\langle\mathbf{b}\,b_{j}\rangle\,\overline{U}_{j}-\eta\,\left\langle\mathbf{b}\times\left(\mathbf{\nabla}\times\mathbf{b}\right)\right\rangle\bigg] \tag{4}\] is the flux of TME, \(\varepsilon_{\rm M}=\eta\,\left\langle\left(\mathbf{\nabla}\times\mathbf{b}\right)^{2}\right\rangle/\mu_{0}\) is the dissipation rate of TME, and \[\Pi_{\rm M}=\frac{1}{\mu_{0}}\bigg[\langle\mathbf{u}\cdot\left[\mathbf{b}\times\left(\mathbf{\nabla}\times\mathbf{b}\right)\right]\rangle-\langle\mathbf{u}\times\left(\mathbf{\nabla}\times\mathbf{b}\right)\rangle\cdot\overline{\mathbf{B}}+\langle b_{i}\,b_{j}\rangle\,\nabla_{j}\overline{U}_{i}-\langle\mathbf{b}^{2}\rangle\,\left(\mathbf{\nabla}\cdot\overline{\mathbf{U}}\right)\bigg] \tag{5}\] is the production rate of TME. Here \(\eta\) is the magnetic diffusion due to the electrical conductivity of the fluid. The first two terms in the RHS of Eq. (5) describe an energy exchange between the turbulent magnetic and kinetic energies. The estimate for the dissipation rate of the turbulent magnetic energy density is \(\varepsilon_{\rm M}=E_{\rm M}/\tau_{0}\). The density of the total turbulent energy (TTE), \(E_{\rm T}=E_{\rm K}+E_{\rm M}\), is determined by the following budget equation: \[\frac{\partial E_{\rm T}}{\partial t}+{\rm div}\,\mathbf{\Phi}_{\rm T}=\Pi_{\rm T}-\varepsilon_{\rm T}, \tag{6}\] where \[\Pi_{\rm T}=\bigg[\Big(\langle b_{i}\,b_{j}\rangle-\mu_{0}\,\overline{\rho}\,\left\langle u_{i}u_{j}\right\rangle\Big)\,\nabla_{j}\overline{U}_{i}-\left\langle\mathbf{b}^{2}\right\rangle\,\big(\mathbf{\nabla}\cdot\overline{\mathbf{U}}\big)-\left\langle\mathbf{u}\times\mathbf{b}\right\rangle\cdot\big(\mathbf{\nabla}\times\overline{\mathbf{B}}\big)\bigg]\mu_{0}^{-1}+\overline{\rho}\,\Big(g\,F_{z}+\left\langle\mathbf{u}\cdot\mathbf{f}\right\rangle\Big).
\tag{7}\] is the production rate of \(E_{\rm T}\), \(\varepsilon_{\rm T}=\varepsilon_{\rm K}+\varepsilon_{\rm M}\) is the dissipation rate of \(E_{\rm T}\) and \(\mathbf{\Phi}_{\rm T}=\mathbf{\Phi}_{\rm K}+\mathbf{\Phi}_{\rm M}\) is the flux of \(E_{\rm T}\). To determine the production rate of TTE, we use the following second moments for magnetic fluctuations (Rogachevskii & Kleeorin, 2007), \[\left\langle b_{i}\,b_{j}\right\rangle=\frac{\overline{\mathbf{B}}^{2}}{2}\bigg[2q_{\rm p}\,\big(\overline{B}\big)\,\delta_{ij}-q_{\rm s}\,\big(\overline{B}\big)\,\Big(\delta_{ij}+\beta_{ij}\Big)\bigg], \tag{8}\] and velocity fluctuations, \[\overline{\rho}\,\left\langle u_{i}\,u_{j}\right\rangle = -\frac{\overline{\mathbf{B}}^{2}}{2\mu_{0}}\left[2q_{\rm p}\,\big(\overline{B}\big)\,\delta_{ij}-q_{\rm s}\,\big(\overline{B}\big)\,\Big(\delta_{ij}+\beta_{ij}\Big)\right]+\overline{\rho}\,\left\langle u_{i}\,u_{j}\right\rangle^{(0)}, \tag{9}\] where \(\beta_{ij}=\overline{B}_{i}\overline{B}_{j}/\overline{B}^{\,2}\). The tensor \(\left\langle u_{i}\,u_{j}\right\rangle^{(0)}\) for a background turbulence (with a zero mean magnetic field) in Eq. (9) has two contributions caused by background isotropic velocity fluctuations and tangling anisotropic velocity fluctuations due to the mean velocity shear (Elperin et al., 2002): \[\left\langle u_{i}\,u_{j}\right\rangle^{(0)}=\frac{1}{3}\left\langle\mathbf{u}^{2}\right\rangle^{(0)}\,\delta_{ij}-2\nu_{T}^{(0)}\,\left(\partial\overline{U}\right)_{ij}, \tag{10}\] where \(\left(\partial\overline{U}\right)_{ij}=(\nabla_{i}\overline{U}_{j}+\nabla_{j}\overline{U}_{i})/2\) and \(\nu_{T}^{(0)}=\tau_{0}\langle\mathbf{u}^{2}\rangle^{(0)}/3\) is the turbulent viscosity. For simplicity, in Eq. (8) we do not take into account a small-scale dynamo with a zero mean magnetic field. The nonlinear functions \(q_{\rm p}(\overline{B})\) and \(q_{\rm s}(\overline{B})\) entering Eqs. (8)-(9) are given by Eqs. (A1)-(A2) in Appendix A. The asymptotic formulae for the nonlinear functions \(q_{\rm p}(\overline{B})\) and \(q_{\rm s}(\overline{B})\) are as follows. For a very weak mean magnetic field, \(\overline{B}\ll\overline{B}_{\rm eq}/4{\rm Rm}^{1/4}\), the nonlinear functions are given by \[q_{\rm p}(\overline{B}) = \frac{2}{5}\,\left[\ln{\rm Rm}+\frac{4}{45}\right], \tag{11}\] \[q_{\rm s}(\overline{B}) = \frac{8}{15}\,\left[\ln{\rm Rm}+\frac{2}{15}\right], \tag{12}\] where \(\overline{B}_{\rm eq}^{2}=\mu_{0}\,\overline{\rho}\,\langle\mathbf{u}^{2}\rangle\). For \(\overline{B}_{\rm eq}/4{\rm Rm}^{1/4}\ll\overline{B}\ll\overline{B}_{\rm eq}/4\), these nonlinear functions are given by \[q_{\rm p}(\overline{B}) = \frac{16}{25}\,\left[5|\ln(\sqrt{2}\beta)|+1+4\beta^{2}\right], \tag{13}\] \[q_{\rm s}(\overline{B}) = \frac{32}{15}\,\left[|\ln(\sqrt{2}\beta)|+\frac{1}{30}+\frac{3}{2}\beta^{2}\right], \tag{14}\] and for \(\overline{B}\gg\overline{B}_{\rm eq}/4\) they are given by \[q_{\rm p}(\overline{B}) = \frac{4}{3\beta^{2}},\quad q_{\rm s}(\overline{B})=\frac{\pi\sqrt{2}}{3\beta^{3}}, \tag{15}\] where \(\beta=\sqrt{8}\,\overline{B}/\overline{B}_{\rm eq}\). Substituting Eqs. (8)-(10) into Eq.
(7), we obtain the production rate of TTE as \[\Pi_{\rm T}=\left[\frac{\overline{\mathbf{B}}^{2}}{2\mu_{0}}\Big(3q_{\rm p}\,\big(\overline{B}\big)-q_{\rm s}\,\big(\overline{B}\big)\Big)-\frac{\overline{\rho}\,\langle\mathbf{u}^{2}\rangle^{(0)}}{3}\right]\big(\mathbf{\nabla}\cdot\overline{\mathbf{U}}\big)+\left[2\nu_{T}\,\overline{\rho}\,\left(\partial\overline{U}\right)_{ij}-\frac{1}{\mu_{0}}\,q_{\rm s}\,\big(\overline{B}\big)\,\overline{B}_{i}\overline{B}_{j}\right]\,\left(\partial\overline{U}\right)_{ij}-\frac{1}{\mu_{0}}\,\mathbf{\mathcal{E}}\,\big(\overline{B}\big)\cdot(\mathbf{\nabla}\times\overline{\mathbf{B}})+\overline{\rho}\,\Big(g\,F_{z}+\left\langle\mathbf{u}\cdot\mathbf{f}\right\rangle\Big), \tag{16}\] where \(\mathbf{\mathcal{E}}\,\big(\overline{B}\big)=\left\langle\mathbf{u}\times\mathbf{b}\right\rangle\) is the turbulent nonlinear electromotive force. Using the steady state solution of Eq. (6), we estimate the total turbulent energy density as \(E_{\rm K}+E_{\rm M}\sim\tau_{0}\,\Pi_{\rm T}\). Equation (8) yields the density of turbulent magnetic energy \(E_{\rm M}=\langle\mathbf{b}^{2}\rangle/2\mu_{0}\) as \[E_{\rm M}=\big[3q_{\rm p}\,\big(\overline{B}\big)-2q_{\rm s}\,\big(\overline{B}\big)\big]\,\frac{\overline{\mathbf{B}}^{2}}{2\mu_{0}}. \tag{17}\] In the next sections, we apply the budget equations for the analysis of the nonlinear mean-field \(\alpha\Omega\), \(\alpha^{2}\) and \(\alpha^{2}\Omega\) dynamos. ## 3 Mean-field \(\alpha\Omega\) dynamo In this section, we consider the axisymmetric mean-field \(\alpha\Omega\) dynamo, so that the mean magnetic field can be decomposed as \[\overline{\mathbf{B}}=\overline{B}_{y}(t,x,z)\mathbf{e}_{y}+{\rm rot}[\overline{A}(t,x,z)\mathbf{e}_{y}], \tag{18}\] and the nonlinear mean-field induction equation reads \[\frac{\partial}{\partial t}\begin{pmatrix}\overline{A}\\ \overline{B}_{y}\end{pmatrix}=\hat{N}\begin{pmatrix}\overline{A}\\ \overline{B}_{y}\end{pmatrix}, \tag{19}\] where the operator \(\hat{N}\) is given by \[\hat{N} = \left(\begin{matrix}\eta_{T}^{(A)}\,\big(\overline{B}\big)\,\Delta&\alpha\,\big(\overline{B}\big)\\ R_{\alpha}R_{\omega}\,\hat{\Omega}&\nabla_{j}\,\eta_{T}^{(B)}\,\big(\overline{B}\big)\,\nabla_{j}\end{matrix}\right), \tag{20}\] and the operator \[\hat{\Omega}\,\overline{A}=\frac{\partial(\delta\Omega\,\sin\vartheta,\overline{A})}{\partial(z,\,x)} \tag{21}\] describes the differential rotation. Here \(\vartheta\) is the angle between \(\delta\mathbf{\Omega}\) and the vertical coordinate \(z\), and \(L\) is the characteristic scale (e.g., the radius of a star or the thickness of a galactic disk). The total \(\alpha\) effect is the sum of the kinetic \(\alpha\) effect, \(\alpha_{\rm K}(\overline{B})\), and the magnetic \(\alpha\) effect, \(\alpha_{\rm M}(\overline{B})\), \[\alpha\left(\overline{B}\right)=\alpha_{\rm K}\left(\overline{B}\right)+\alpha_{\rm M}\left(\overline{B}\right), \tag{22}\] where the magnetic \(\alpha\) effect is determined by the current helicity of the small-scale magnetic field, whose evolution is governed by the dynamic equation for the small-scale magnetic helicity density (see, e.g., Kleeorin & Rogachevskii, 1999; Kleeorin et al., 2000; Blackman & Field, 2000; Vishniac & Cho, 2001; Brandenburg & Subramanian, 2005; Kleeorin & Rogachevskii, 2022; Gopalakrishnan & Subramanian, 2023). Here \(\mathbf{b}=\mathbf{\nabla}\times\mathbf{a}\) are magnetic fluctuations and \(\mathbf{a}\) are fluctuations of the magnetic vector potential.
Taking into account turbulent fluxes of the small-scale magnetic helicity, it has been shown by numerical simulations that a nonlinear galactic dynamo governed by a dynamic equation for the magnetic helicity density \(H_{\rm m}\) of a small-scale field (the dynamical nonlinearity) saturates at a mean magnetic field comparable with the equipartition magnetic field (see, e.g., Kleeorin et al., 2000, 2002, 2003b,a; Blackman & Brandenburg, 2002; Brandenburg & Subramanian, 2005; Shukurov et al., 2006). Numerical simulations demonstrate that the dynamics of magnetic helicity plays a crucial role in the solar dynamo as well (see, e.g., Kleeorin et al., 2003b, 2016, 2020; Sokoloff et al., 2006; Zhang et al., 2006, 2012; Kapyla et al., 2010; Hubbard & Brandenburg, 2012; Del Sordo et al., 2013; Safullin et al., 2018; Rincon, 2021). Different forms of magnetic helicity fluxes have been suggested in various studies using phenomenological arguments (Kleeorin & Rogachevskii, 1999; Kleeorin et al., 2000, 2002; Vishniac & Cho, 2001; Subramanian & Brandenburg, 2004; Brandenburg & Subramanian, 2005). Recently, the turbulent magnetic helicity fluxes have been rigorously derived (Kleeorin & Rogachevskii, 2022; Gopalakrishnan & Subramanian, 2023). In particular, Kleeorin & Rogachevskii (2022) apply the mean-field theory, adopt the Coulomb gauge and consider a strongly density-stratified turbulence. They have found that the turbulent magnetic helicity fluxes depend on the mean magnetic field energy, and include non-gradient and gradient contributions. In addition, Gopalakrishnan & Subramanian (2023) have recently shown that contributions to the turbulent magnetic helicity fluxes from the third-order moments can be described using the turbulent diffusion approximation. The kinetic \(\alpha\) effect is given by \(\alpha_{\rm K}\left(\overline{B}\right)=\alpha_{\rm K}^{(0)}\,\phi_{\rm K}\left(\overline{B}\right)\) (Rogachevskii & Kleeorin, 2004), where for a forced turbulence \(\alpha_{\rm K}^{(0)}=-\tau_{0}\,H_{\rm u}/3\) and the algebraic quenching function \(\phi_{\rm K}\left(\overline{B}\right)\) of the kinetic \(\alpha\) effect has the following asymptotic behavior: \(\phi_{\rm K}=1\) when \(\overline{B}\ll\overline{B}_{\rm eq}/4\) and \(\phi_{\rm K}=(1/4)\left(\overline{B}/\overline{B}_{\rm eq}\right)^{-2}\) when \(\overline{B}\gg\overline{B}_{\rm eq}/4\). A similar asymptotic behavior also holds for the algebraic quenching of the magnetic \(\alpha\) effect. The turbulent magnetic diffusion of the toroidal mean magnetic field is given by (Rogachevskii & Kleeorin, 2004): \(\eta_{T}^{(B)}\left(\overline{B}\right)=\eta_{T}^{(0)}\,\phi_{\eta}^{(B)}\left(\overline{B}\right)\), where \(\eta_{T}^{(0)}=\tau_{0}\langle\mathbf{u}^{2}\rangle^{(0)}/3\), and the algebraic quenching function \(\phi_{\eta}^{(B)}\left(\overline{B}\right)\) of the toroidal mean magnetic field is \(\phi_{\eta}^{(B)}=1\) when \(\overline{B}\ll\overline{B}_{\rm eq}/4\) and \(\phi_{\eta}^{(B)}=\left(1/4\right)\left(\overline{B}/\overline{B}_{\rm eq}\right)^{-1}\) when \(\overline{B}\gg\overline{B}_{\rm eq}/4\). A similar asymptotic behavior also holds for the turbulent viscosity (Rogachevskii & Kleeorin, 2004).
The turbulent magnetic diffusion of the poloidal mean magnetic field behaves as (Rogachevskii & Kleeorin, 2004): \(\eta_{T}^{(A)}\left(\overline{B}\right)=\eta_{T}^{(0)}\,\phi_{\eta}^{(A)}\left(\overline{B}\right)\), where the algebraic quenching function \(\phi_{\eta}^{(A)}\left(\overline{B}\right)\) of the poloidal mean magnetic field is \(\phi_{\eta}^{(A)}=1\) when \(\overline{B}\ll\overline{B}_{\rm eq}/4\) and \(\phi_{\eta}^{(A)}=\left(1/8\right)\left(\overline{B}/\overline{B}_{\rm eq}\right)^{-2}\) when \(\overline{B}\gg\overline{B}_{\rm eq}/4\). Equations (19)-(21) are written in dimensionless variables: the coordinate is measured in the units of \(L\), the time \(t\) is measured in the units of the turbulent magnetic diffusion time \(L^{2}/\eta_{T}^{(0)}\); the mean magnetic field is measured in the units of \(\overline{B}_{*}\), where \(\overline{B}_{*}\equiv\sigma\,\overline{B}_{*}^{\rm eq}\), \(\sigma=\ell_{0}/\sqrt{2}L\), \(\overline{B}_{*}^{\rm eq}=u_{0}\,\sqrt{\mu_{0}\overline{\rho}_{*}}\), and the magnetic potential \(\overline{A}\) is measured in the units of \(R_{\alpha}L\overline{B}_{*}\). Here \(R_{\alpha}=\alpha_{*}L/\eta_{T}^{(0)}\), the fluid density \(\overline{\rho}\) is measured in the units \(\overline{\rho}_{*}\), the differential rotation \(\delta\Omega\) is measured in units of the maximal value of the angular velocity \(\Omega\), the \(\alpha\) effect is measured in units of the maximum value of the kinetic \(\alpha\) effect, \(\alpha_{*}\); the integral scale of the turbulent motions \(\ell_{0}=\tau_{0}\,u_{0}\) and the characteristic turbulent velocity \(u_{0}=\sqrt{\langle\mathbf{u}^{2}\rangle^{(0)}}\) at the scale \(\ell_{0}\) are measured in units of their maximum values in the turbulent region, and the turbulent magnetic diffusion coefficients are measured in units of their maximum values. The magnetic Reynolds number \({\rm Rm}=\ell_{0}\,u_{0}/\eta\) is defined using the maximal values of the integral scale \(\ell_{0}\) and the characteristic turbulent velocity \(u_{0}\). The dynamo number for the linear \(\alpha\Omega\) dynamo is defined as \(D_{\rm L}=R_{\alpha}R_{\omega}\), where \(R_{\omega}=\left(\delta\Omega\right)L^{2}/\eta_{T}^{(0)}\). Now we define the nonlinear dynamo number \(D_{\rm N}\left(\overline{B}\right)\) for the \(\alpha\Omega\) dynamo as \[D_{\rm N}\left(\overline{B}\right)=\frac{\alpha\left(\overline{B}\right)}{\eta_{T}^{(B)}\left(\overline{B}\right)}\,\frac{\delta\Omega\,L^{3}}{\eta_{T}^{(A)}\left(\overline{B}\right)}, \tag{23}\] where we take into account that the nonlinear turbulent magnetic diffusion coefficients of the poloidal and toroidal components of the mean magnetic field are different (Rogachevskii & Kleeorin, 2004). The ratio of energies of the toroidal and poloidal mean magnetic fields for the \(\alpha\Omega\) dynamo is of the order of \(D_{\rm L}^{2}/D_{\rm cr}\), where \(D_{\rm cr}\) is the threshold for the excitation of the \(\alpha\Omega\) dynamo. Next, we take into account the feedback of the mean magnetic field on the background turbulence using the budget equation for the total turbulent energy. In a shear-produced non-convective turbulence, the largest contributions to the production rate of TTE for a strong large-scale magnetic field are due to the terms \(-\mathbf{\mathcal{E}}\left(\overline{B}\right)\cdot(\mathbf{\nabla}\times\overline{B})/\mu_{0}\) and \(2\nu_{T}\left(\overline{B}\right)\,\overline{\rho}\,\left(\partial\overline{U}\right)_{ij}^{2}\equiv 2\nu_{T}\,\overline{\rho}\,S^{2}\) [see Eq.
(16)], where \(S^{2}=\left(\partial\overline{U}\right)_{ij}^{2}\). This implies that the turbulent kinetic energy density for a strong large-scale magnetic field is estimated as \[E_{\rm K}=\tau_{0}\left[2\nu_{T}\left(\overline{B}\right)\,\overline{\rho}\,S^{2}-\frac{1}{\mu_{0}}\,\mathbf{\mathcal{E}}\left(\overline{B}\right)\cdot(\mathbf{\nabla}\times\overline{B})\right]. \tag{24}\] Therefore, the turbulent kinetic energy density for strong mean magnetic fields behaves as \[E_{\rm K}\,\left(\overline{B}\right)\approx E_{\rm K}^{(0)}\,\left[1+\frac{D_{\rm cr}^{1/2}}{D_{\rm L}}\,\left(\frac{\ell_{0}}{L_{B}}\right)^{2}\left(\frac{\overline{B}}{\overline{B}_{\rm eq}}\right)^{2}\right], \tag{25}\] where \(E_{\rm K}^{(0)}=(2/3)\,\overline{\rho}\,\ell_{0}^{2}\,S^{2}\) and the characteristic scale of the mean magnetic field variations \(L_{B}\) is defined as \(L_{B}=\overline{B}/|\mathbf{\nabla}\times\overline{B}|\). We also take into account that for strong mean magnetic fields, the ratio of these production terms is \[-\frac{\tau_{0}}{E_{\rm K}^{(0)}}\,\mathbf{\mathcal{E}}\left(\overline{B}\right)\cdot\left(\mathbf{\nabla}\times\overline{B}\right)\propto\frac{D_{\rm cr}^{1/2}}{D_{\rm L}}\,\left(\frac{\ell_{0}}{L_{B}}\right)^{2}\left(\frac{\overline{B}}{\overline{B}_{\rm eq}}\right)^{2}. \tag{26}\] This yields the estimate for the ratio \(\eta_{T}^{(B)}\left(\overline{B}\right)/\eta_{T}^{(0)}\) for strong mean magnetic fields as \[\frac{\eta_{T}^{(B)}\left(\overline{B}\right)}{\eta_{T}^{(0)}} \approx \frac{1}{4}\left[1+\frac{D_{\rm cr}^{1/2}}{D_{\rm L}}\left(\frac{\ell_{0}}{L_{B}}\right)^{2}\left(\frac{\overline{B}}{\overline{B}_{\rm eq}}\right)^{2}\right]\left(\frac{\overline{B}}{\overline{B}_{\rm eq}}\right)^{-1}, \tag{27}\] where the ratio of the turbulent diffusion coefficients of the poloidal and toroidal fields \(\eta_{T}^{(A)}\left(\overline{B}\right)/\eta_{T}^{(B)}\left(\overline{B}\right)\) is given by \[\frac{\eta_{T}^{(A)}\left(\overline{B}\right)}{\eta_{T}^{(B)}\left(\overline{B}\right)}\approx\frac{1}{2}\left(\frac{\overline{B}}{\overline{B}_{\rm eq}}\right)^{-1}, \tag{28}\] and \(\eta_{T}^{(A,B)}\left(\overline{B}\right)=2\tau_{0}\,E_{\rm K}\left(\overline{B}\right)\phi_{\eta}^{(A,B)}/3\overline{\rho}\). Therefore, the ratio of the nonlinear and linear dynamo numbers \(D_{\rm N}\left(\overline{B}\right)/D_{\rm L}\) in a shear-produced non-convective turbulence for strong mean magnetic fields is estimated as \[\frac{D_{\rm N}\left(\overline{B}\right)}{D_{\rm L}} \approx 32\left[1+\frac{D_{\rm cr}^{1/2}}{D_{\rm L}}\,\left(\frac{\ell_{0}}{L_{B}}\right)^{2}\left(\frac{\overline{B}}{\overline{B}_{\rm eq}}\right)^{2}\right]^{-2}\times\frac{\alpha\left(\overline{B}\right)}{\alpha_{\rm K}^{(0)}}\left(\frac{\overline{B}}{\overline{B}_{\rm eq}}\right)^{3}, \tag{29}\] where the dependence of the total \(\alpha\) effect on the mean magnetic field, \(\alpha\left(\overline{B}\right)\), is caused by the algebraic and dynamic quenching. The algebraic quenching describes the feedback of the mean magnetic field on the plasma motions, while the dynamic quenching of the total \(\alpha\) effect is caused by the evolution of the magnetic \(\alpha\) effect related to the small-scale current and magnetic helicities.
In particular, the dynamic equation for the small-scale current helicity (which determines the evolution of the magnetic \(\alpha\) effect) in a steady state yields the total \(\alpha\) effect as \(\alpha\left(\overline{B}\right)\propto-{\rm div}\,\mathbf{F}_{\rm M}/\overline{B}^{2}\), where \(\mathbf{F}_{\rm M}\) is the magnetic helicity flux of the small-scale magnetic field. This implies that if \(\mathbf{F}_{\rm M}\) is not quenched with the growth of the mean magnetic field, the total \(\alpha\) effect for strong magnetic fields behaves as \(\alpha\left(\overline{B}\right)\propto\left(\overline{B}/\overline{B}_{\rm eq}\right)^{-2}\). In the case of the algebraic quenching of the magnetic helicity flux \(\mathbf{F}_{\rm M}\), the decrease of \(\alpha\left(\overline{B}\right)\) with the growth of the mean magnetic field is even stronger, i.e., \(\alpha\left(\overline{B}\right)/\alpha_{\rm K}^{(0)}\propto\left(\overline{B}/\overline{B}_{\rm eq}\right)^{-n}\) with \(n>2\). Equation (29) implies that the nonlinear dynamo number decreases with the increase of the mean magnetic field, however strong the field, for a shear-produced non-convective turbulence, resulting in saturation of the mean-field dynamo instability. In a convective turbulence, the largest contributions to the production rate of TTE for strong mean magnetic fields are due to the buoyancy term \(\overline{\rho}\,g\,F_{\rm z}\) and the term \(\eta_{T}^{(B)}\left(\overline{B}\right)\left(\mathbf{\nabla}\times\overline{B}\right)^{2}/\mu_{0}\) [see Eq. (16)]. This implies that the turbulent kinetic energy density is given by \[E_{\rm K}=\tau_{0}\left[\overline{\rho}\,g\,F_{\rm z}-\frac{1}{\mu_{0}}\,\mathbf{\mathcal{E}}\left(\overline{B}\right)\cdot\left(\mathbf{\nabla}\times\overline{B}\right)\right], \tag{30}\] where \(\tau_{0}=\ell_{0}\,[2E_{\rm K}/\overline{\rho}]^{-1/2}\). Thus, Eq. (30) can be rewritten as the following nonlinear equation: \[\tilde{E}_{\rm K}^{3/2}-\xi\left(\overline{B}\right)\,\tilde{E}_{\rm K}^{1/2}-1=0, \tag{31}\] where \(\tilde{E}_{\rm K}=E_{\rm K}/E_{\rm K}^{(0)}\), \[E_{\rm K}^{(0)}=\frac{\overline{\rho}}{2}\,\left(2g\,F_{\rm z}\,\ell_{0}\right)^{2/3}, \tag{32}\] \[\xi\left(\overline{B}\right)=\frac{2}{D_{\rm cr}^{1/2}}\left(\frac{\ell_{0}}{L_{B}}\right)^{2}\,\left(\frac{\overline{B}}{\overline{B}_{\rm eq}}\right)^{2}, \tag{33}\] and \(\overline{B}_{\rm eq}^{2}=2\mu_{0}E_{\rm K}^{(0)}\). The nonlinear equation (31) has the following asymptotic solutions: \(\tilde{E}_{\rm K}=1\) for \(\xi\left(\overline{B}\right)\tilde{E}_{\rm K}^{1/2}\ll 1\), and \(\tilde{E}_{\rm K}=\xi\left(\overline{B}\right)\) for \(\xi\left(\overline{B}\right)\tilde{E}_{\rm K}^{1/2}\gg 1\). Thus, an approximate solution of the nonlinear equation (31) can be constructed as a linear combination of these asymptotic solutions, i.e., the turbulent kinetic energy density for strong mean magnetic fields behaves as \[E_{\rm K}\approx E_{\rm K}^{(0)}\,\left[1+\frac{D_{\rm cr}^{1/2}}{D_{\rm L}}\,\left(\frac{\ell_{0}}{L_{B}}\right)^{2}\left(\frac{\overline{B}}{\overline{B}_{\rm eq}}\right)^{2}\right]. \tag{34}\] This implies that, in a convective turbulence, the resulting expressions for the turbulent diffusion coefficients and the dynamo number for strong mean magnetic fields are similar to Eqs. (27)-(29), respectively.
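As a quick numerical sanity check (ours, not from the paper) on the interpolation leading to Eq. (34), one can compare the exact positive root of Eq. (31) with the linear combination \(\tilde{E}_{\rm K}\approx 1+\xi\) of its two asymptotic solutions:

```python
import numpy as np

def E_tilde_exact(xi):
    # positive root t of t^3 - xi*t - 1 = 0, where t = (E_K/E_K^(0))^(1/2);
    # the normalised turbulent kinetic energy is E_tilde = t^2, cf. Eq. (31)
    roots = np.roots([1.0, 0.0, -xi, -1.0])
    real = roots[np.abs(roots.imag) < 1e-9].real
    return float(real[real > 0].max()) ** 2

for xi in (0.01, 0.1, 1.0, 10.0, 100.0):
    print(f"xi = {xi:6.2f}:  exact = {E_tilde_exact(xi):8.3f},  1 + xi = {1.0 + xi:8.3f}")
# the interpolation 1 + xi tracks the exact root to within ~15 per cent and
# reproduces both limits, E_tilde -> 1 as xi -> 0 and E_tilde -> xi as xi -> oo
```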
In a forced turbulence, the turbulent kinetic energy density for a strong mean magnetic field is given by \[E_{\rm K}=\tau_{0}\left[\overline{\rho}\,\left\langle\mathbf{u}\cdot\mathbf{f}\right\rangle-\frac{1}{\mu_{0}}\,\mathbf{\mathcal{E}}\left(\overline{B}\right)\cdot\left(\mathbf{\nabla}\times\overline{B}\right)\right], \tag{35}\] where we take into account that the largest contributions to the production rate of TTE in a non-convective forced turbulence for a strong mean magnetic field are due to the terms \(-\mathbf{\mathcal{E}}\left(\overline{B}\right)\cdot\left(\mathbf{\nabla}\times\overline{B}\right)/\mu_{0}\) and \(\overline{\rho}\,\left\langle\mathbf{u}\cdot\mathbf{f}\right\rangle\) [see Eq. (16)]. Therefore, the turbulent kinetic energy density for strong mean magnetic fields behaves as \[E_{\rm K}\approx E_{\rm K}^{(0)}\,\left[1+\frac{1}{8}\frac{D_{\rm cr}^{1/2}}{D_{\rm L}}\,\left(\frac{\ell_{0}}{L_{B}}\right)^{2}\left(\frac{\overline{B}}{\overline{B}_{\rm eq}}\right)\right], \tag{36}\] where \(E_{\rm K}^{(0)}=\overline{\rho}\,\tau_{0}\,\left\langle\mathbf{u}\cdot\mathbf{f}\right\rangle\). This yields the estimate for the ratio \(\eta_{T}^{(B)}\left(\overline{B}\right)/\eta_{T}^{(0)}\) as \[\frac{\eta_{T}^{(B)}\left(\overline{B}\right)}{\eta_{T}^{(0)}} \approx \frac{1}{4}\left[1+\frac{D_{\rm cr}^{1/2}}{8D_{\rm L}}\left(\frac{\ell_{0}}{L_{B}}\right)^{2}\left(\frac{\overline{B}}{\overline{B}_{\rm eq}}\right)\right]\left(\frac{\overline{B}}{\overline{B}_{\rm eq}}\right)^{-1}, \tag{37}\] where the ratio \(\eta_{T}^{(A)}\left(\overline{B}\right)/\eta_{T}^{(B)}\left(\overline{B}\right)\) is given by Eq. (28). Using Eq. (37), we determine the ratio of the nonlinear and linear dynamo numbers \(D_{\rm N}\left(\overline{B}\right)/D_{\rm L}\) in a non-convective forced turbulence for strong mean magnetic fields as \[\frac{D_{\rm N}\left(\overline{B}\right)}{D_{\rm L}} \approx 32\left[1+\frac{1}{8}\frac{D_{\rm cr}^{1/2}}{D_{\rm L}}\,\left(\frac{\ell_{0}}{L_{B}}\right)^{2}\left(\frac{\overline{B}}{\overline{B}_{\rm eq}}\right)\right]^{-2}\times\frac{\alpha\left(\overline{B}\right)}{\alpha_{\rm K}^{(0)}}\left(\frac{\overline{B}}{\overline{B}_{\rm eq}}\right)^{3}. \tag{38}\] Equations (29) and (38) imply that for the \(\alpha\Omega\) dynamo, the nonlinear dynamo number decreases with the increase of the mean magnetic field for a forced turbulence, a shear-produced turbulence, and a convective turbulence. This causes saturation of the mean-field \(\alpha\Omega\) dynamo instability for a strong mean magnetic field. ## 4 Mean-field \(\alpha^{2}\) dynamo In this section, we consider the mean-field \(\alpha^{2}\) dynamo. First, we discuss the long-standing question of when a one-dimensional kinematic \(\alpha^{2}\) dynamo can be oscillatory. The mean magnetic field \(\overline{\mathbf{B}}(t,z)=\mathbf{\nabla}\times\overline{\mathbf{A}}=(-\nabla_{z}\overline{A}_{y},\nabla_{z}\overline{A}_{x},0)\) is determined by the following equation \[\frac{\partial\Psi}{\partial t}=\hat{L}\Psi, \tag{39}\] where \(\overline{\mathbf{A}}\) is the mean magnetic vector potential in the Weyl gauge.
The linear operator \(\hat{L}\) and the function \(\Psi(t,z)\) are given by \[\hat{L} = \begin{pmatrix}\eta_{T}^{(0)}\nabla_{z}^{2}&-\alpha_{\rm K}^{(0)}\nabla_{z}\\ \alpha_{\rm K}^{(0)}\nabla_{z}&\eta_{T}^{(0)}\nabla_{z}^{2}\end{pmatrix},\quad\Psi=\begin{pmatrix}A_{x}\\ A_{y}\end{pmatrix}, \tag{40}\] where \(\eta_{T}^{(0)}\) is the turbulent magnetic diffusion coefficient, and \(\alpha_{\rm K}^{(0)}\) is the kinetic \(\alpha\) effect caused by the helical turbulent motions in plasma. When can a one-dimensional kinematic \(\alpha^{2}\) dynamo be oscillatory? First, if the linear operator \(\hat{L}\) is not self-adjoint, it has complex eigenvalues. This case corresponds to an oscillatory growing solution, i.e., the dynamo is oscillatory. On the other hand, any self-adjoint operator, \(\hat{M}\), defined by the following condition, \[\int\Psi^{*}\hat{M}\tilde{\Psi}\,dz=\int\tilde{\Psi}\hat{M}^{*}\Psi^{*}\,dz, \tag{41}\] has real eigenvalues, where the asterisk denotes complex conjugation. Now we determine conditions when the linear operator \(\hat{L}\) is not self-adjoint, i.e., it has complex eigenvalues. To this end, we determine the integrals \(\int\Psi^{*}\hat{L}\tilde{\Psi}\,dz\) and \(\int\tilde{\Psi}\hat{L}^{*}\Psi^{*}\,dz\) as: \[\int\Psi^{*}\hat{L}\tilde{\Psi}\,dz=\int\alpha_{\rm K}^{(0)}\left(A_{y}^{*}\nabla_{z}\tilde{A}_{x}-A_{x}^{*}\nabla_{z}\tilde{A}_{y}\right)\,dz-\int\eta_{T}^{(0)}\left[(\nabla_{z}A_{x}^{*})\,\nabla_{z}\tilde{A}_{x}+\left(\nabla_{z}A_{y}^{*}\right)\,\nabla_{z}\tilde{A}_{y}\right]\,dz+\left[\eta_{T}^{(0)}\left(A_{x}^{*}\,\nabla_{z}\tilde{A}_{x}+A_{y}^{*}\,\nabla_{z}\tilde{A}_{y}\right)\right]_{z=L_{\rm bott}}^{z=L_{\rm top}}, \tag{42}\] \[\int\tilde{\Psi}\hat{L}^{*}\Psi^{*}\,dz=\int\alpha_{\rm K}^{(0)}\left(A_{y}^{*}\nabla_{z}\tilde{A}_{x}-A_{x}^{*}\nabla_{z}\tilde{A}_{y}\right)\,dz-\int\eta_{T}^{(0)}\left[(\nabla_{z}A_{x}^{*})\,\nabla_{z}\tilde{A}_{x}+\left(\nabla_{z}A_{y}^{*}\right)\,\nabla_{z}\tilde{A}_{y}\right]\,dz+\left[\eta_{T}^{(0)}\left(\tilde{A}_{x}\,\nabla_{z}A_{x}^{*}+\tilde{A}_{y}\,\nabla_{z}A_{y}^{*}\right)+\alpha_{\rm K}^{(0)}\Big(A_{x}^{*}\,\tilde{A}_{y}-A_{y}^{*}\,\tilde{A}_{x}\Big)\right]_{z=L_{\rm bott}}^{z=L_{\rm top}}, \tag{43}\] where \(z=L_{\rm bott}\) and \(z=L_{\rm top}\) are the bottom and upper boundaries, respectively. When \(\eta_{T}^{(0)}\) and \(\alpha_{\rm K}^{(0)}\) vanish at the boundaries where the turbulence is very weak, the operator \(\hat{L}\) satisfies condition (41) and the \(\alpha^{2}\) dynamo is not oscillatory. On the other hand, when \(\alpha_{\rm K}^{(0)}\) vanishes only at one boundary, while it is non-zero at the other boundary, the operator \(\hat{L}\) does not satisfy condition (41), and the \(\alpha^{2}\) dynamo is oscillatory. The latter case has been considered in analytical studies by Shukurov et al. (1985) and Radler & Brauer (1987), and in a numerical study by Baryshnikova & Shukurov (1987). Brandenburg (2017) has recently considered the one-dimensional kinematic \(\alpha^{2}\) dynamo with different conditions at the two boundaries: \(\mathbf{A}=0\) at \(z=L_{\rm bott}\) and \(\nabla_{z}\mathbf{A}=0\) at \(z=L_{\rm top}\), so that the operator \(\hat{L}\) may not satisfy condition (41), and the \(\alpha^{2}\) dynamo may be oscillatory.
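To illustrate this boundary-condition dependence concretely, here is a minimal finite-difference sketch of ours (not from the paper). Writing \(W=A_{x}+{\rm i}A_{y}\) turns Eqs. (39)-(40) with constant \(\alpha_{\rm K}^{(0)}\) and \(\eta_{T}^{(0)}\) into the single complex equation \(\partial_{t}W=\eta_{T}^{(0)}\nabla_{z}^{2}W+{\rm i}\alpha_{\rm K}^{(0)}\nabla_{z}W\); we discretise it on \(z\in[0,1]\) with \(W=0\) at the bottom and either \(W=0\) or \(\nabla_{z}W=0\) at the top.

```python
import numpy as np

def alpha2_spectrum(N=400, eta=1.0, alpha=10.0, top="dirichlet"):
    # eigenvalues of the discretised operator L W = eta W'' + i alpha W'
    # on z in [0,1] with W(0) = 0 and either W(1) = 0 or W'(1) = 0
    h = 1.0 / N
    M = N - 1 if top == "dirichlet" else N       # unknown nodes W_1 .. W_M
    lower = eta / h**2 - 1j * alpha / (2 * h)    # couples W_j to W_{j-1}
    upper = eta / h**2 + 1j * alpha / (2 * h)    # couples W_j to W_{j+1}
    L = (np.diag([-2.0 * eta / h**2] * M)
         + np.diag([lower] * (M - 1), -1)
         + np.diag([upper] * (M - 1), 1))
    if top == "neumann":
        L[-1, -2] += upper    # ghost node W_{N+1} = W_{N-1} encodes W'(1) = 0
    return np.linalg.eigvals(L)

for top in ("dirichlet", "neumann"):
    ev = alpha2_spectrum(top=top)
    print(top, "leading eigenvalue:", np.round(ev[np.argmax(ev.real)], 3))
```

The Dirichlet case returns a purely real leading eigenvalue, close to \(\alpha^{2}/4\eta-\eta\pi^{2}\) for the chosen parameters (a steadily growing, non-oscillatory dynamo), while the mixed boundary conditions of Brandenburg (2017) yield a leading eigenvalue with a non-zero imaginary part (an oscillatory dynamo), in agreement with the boundary-term analysis of Eqs. (42)-(43).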
Now we consider the nonlinear axisymmetric mean-field \(\alpha^{2}\) dynamo, so that the nonlinear mean-field induction equation reads \[\frac{\partial}{\partial t}\begin{pmatrix}\overline{A}\\ \overline{B}_{y}\end{pmatrix}=\hat{N}\begin{pmatrix}\overline{A}\\ \overline{B}_{y}\end{pmatrix}, \tag{44}\] where the mean magnetic field is \(\overline{\mathbf{B}}=\overline{B}_{y}(t,x,z)\mathbf{e}_{y}+\mathrm{rot}[\overline{A}(t,x,z)\mathbf{e}_{y}]\), the operator \(\hat{N}\) is given by \[\hat{N} = \begin{pmatrix}\eta_{T}^{(A)}\left(\overline{B}\right)\Delta&\alpha\left(\overline{B}\right)\\ -R_{\alpha}^{2}\nabla_{j}\alpha\left(\overline{B}\right)\nabla_{j}&\nabla_{j}\eta_{T}^{(B)}\left(\overline{B}\right)\nabla_{j}\end{pmatrix}, \tag{45}\] and the total \(\alpha\) effect is given by \(\alpha\left(\overline{B}\right)=\alpha_{\rm K}\left(\overline{B}\right)+\alpha_{\rm M}\left(\overline{B}\right)\). Now we introduce the effective dynamo number \(D_{\rm N}^{(\alpha)}\left(\overline{B}\right)\) in the nonlinear \(\alpha^{2}\) dynamo defined as \(D_{\rm N}^{(\alpha)}\left(\overline{B}\right)=\alpha^{2}\left(\overline{B}\right)L^{2}/[\eta_{T}^{(B)}\left(\overline{B}\right)\eta_{T}^{(A)}\left(\overline{B}\right)]\). Similarly, the effective dynamo number for a linear \(\alpha^{2}\) dynamo is defined as \(D_{\rm L}^{(\alpha)}=R_{\alpha}^{2}\), where \(R_{\alpha}=\alpha_{*}L/\eta_{T}^{(0)}\), \(\alpha_{*}\) is the maximum value of the kinetic \(\alpha\) effect and \(L\) is the stellar radius or the thickness of the galactic disk. Since the poloidal and toroidal components of the mean magnetic field in the nonlinear \(\alpha^{2}\) mean-field dynamo are of the same order of magnitude, Eqs. (29) and (38) obtained in Section 3 for the \(\alpha\Omega\) mean-field dynamo can be used for the nonlinear \(\alpha^{2}\) mean-field dynamo, except that they should not contain the ratio \(D_{\rm cr}^{1/2}/D_{\rm L}\) (which is the ratio of energies of the poloidal and toroidal mean magnetic fields). Therefore, in a shear-produced non-convective turbulence and in a convective turbulence, the ratio \(D_{\rm N}^{(\alpha)}\left(\overline{B}\right)/D_{\rm L}^{(\alpha)}\) for strong mean magnetic fields is given by \[\frac{D_{\rm N}^{(\alpha)}}{D_{\rm L}^{(\alpha)}} \approx 32\left[1+\left(\frac{\ell_{0}}{L_{B}}\right)^{2}\left(\frac{\overline{B}}{\overline{B}_{\rm eq}}\right)^{2}\right]^{-2}\times\left(\frac{\alpha\left(\overline{B}\right)}{\alpha_{\rm K}^{(0)}}\right)^{2}\left(\frac{\overline{B}}{\overline{B}_{\rm eq}}\right)^{3}, \tag{46}\] while for a forced turbulence, the ratio \(D_{\rm N}^{(\alpha)}\left(\overline{B}\right)/D_{\rm L}^{(\alpha)}\) for strong mean magnetic fields is given by \[\frac{D_{\rm N}^{(\alpha)}\left(\overline{B}\right)}{D_{\rm L}^{(\alpha)}} \approx 32\left[1+\frac{1}{8}\left(\frac{\ell_{0}}{L_{B}}\right)^{2}\left(\frac{\overline{B}}{\overline{B}_{\rm eq}}\right)\right]^{-2}\times\left(\frac{\alpha\left(\overline{B}\right)}{\alpha_{\rm K}^{(0)}}\right)^{2}\left(\frac{\overline{B}}{\overline{B}_{\rm eq}}\right)^{3}. \tag{47}\] These equations take into account the feedback of the mean magnetic field on the background turbulence by means of the budget equation for the total turbulent energy. Thus, Eqs. (46)-(47) imply that for the \(\alpha^{2}\) dynamo, the nonlinear dynamo number decreases with the increase of the mean magnetic field for a forced turbulence, a shear-produced turbulence, and a convective turbulence. This causes saturation of the mean-field \(\alpha^{2}\) dynamo instability for a strong mean magnetic field.
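As a minimal numerical illustration (ours) that Eq. (46) indeed decreases for strong fields, take the strong-field algebraic quenching \(\alpha\left(\overline{B}\right)/\alpha_{\rm K}^{(0)}\simeq(1/4)\left(\overline{B}/\overline{B}_{\rm eq}\right)^{-2}\) quoted in Section 3 and set the geometric factor \((\ell_{0}/L_{B})^{2}\) to unity (an arbitrary illustrative choice):

```python
import numpy as np

b = np.logspace(0.5, 3.0, 200)       # B / B_eq in the strong-field regime
alpha_quench = 0.25 / b**2           # strong-field quenching of the alpha effect
ratio = 32.0 * (1.0 + b**2)**-2 * alpha_quench**2 * b**3   # Eq. (46) with (l0/L_B)^2 = 1
print(bool(np.all(np.diff(ratio) < 0)))   # True: D_N/D_L is strictly decreasing
```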
## 5 Mean-field \(\alpha^{2}\Omega\) dynamo In this section, we consider the axisymmetric mean-field \(\alpha^{2}\Omega\) dynamo, so that the nonlinear mean-field induction equation reads \[\frac{\partial}{\partial t}\begin{pmatrix}\overline{A}\\ \overline{B}_{y}\end{pmatrix}=\hat{N}\begin{pmatrix}\overline{A}\\ \overline{B}_{y}\end{pmatrix}, \tag{48}\] where the mean magnetic field is \(\overline{\mathbf{B}}=\overline{B}_{y}(t,x,z)\mathbf{e}_{y}+\mathrm{rot}[\overline{A}(t,x,z)\mathbf{e}_{y}]\), and the operator \(\hat{N}\) is \[\hat{N} = \begin{pmatrix}\eta_{T}^{(A)}\left(\overline{B}\right)\Delta&\alpha\left(\overline{B}\right)\\ R_{\alpha}\left[R_{\omega}\hat{\Omega}-R_{\alpha}\nabla_{j}\alpha\left(\overline{B}\right)\nabla_{j}\right]&\nabla_{j}\eta_{T}^{(B)}\left(\overline{B}\right)\nabla_{j}\end{pmatrix}, \tag{49}\] where \(R_{\alpha}=\alpha_{*}L/\eta_{T}^{(0)}\) and \(R_{\omega}=\left(\delta\Omega\right)L^{2}/\eta_{T}^{(0)}\). First, we consider a kinematic dynamo problem, assuming for simplicity that the kinetic \(\alpha\) effect is a constant, and the mean velocity is \(\overline{\mathbf{U}}=(0,Sz,0)\). We seek a solution of the linearised equation (48) as the real part of the following functions: \[\overline{A}=A_{0}\exp[\tilde{\gamma}t-\mathrm{i}\left(k_{x}x+k_{z}z\right)], \tag{50}\] \[\overline{B}_{\varphi}=B_{0}\exp[\tilde{\gamma}t-\mathrm{i}\left(k_{x}x+k_{z}z\right)], \tag{51}\] where \(\tilde{\gamma}=\gamma+\mathrm{i}\omega\). Equations (48)-(51) yield the growth rate of the dynamo instability and the frequency of the dynamo waves as \[\gamma = \frac{R_{\alpha}R_{\alpha}^{\mathrm{cr}}}{\sqrt{2}}\left[\left[1+\left(\frac{\zeta R_{\omega}}{R_{\alpha}R_{\alpha}^{\mathrm{cr}}}\right)^{2}\right]^{1/2}+1\right]^{1/2}-\left(R_{\alpha}^{\mathrm{cr}}\right)^{2}, \tag{52}\] \[\omega=-\mathrm{sgn}(R_{\omega})\,\frac{R_{\alpha}R_{\alpha}^{\mathrm{cr}}}{\sqrt{2}}\left[\left[1+\left(\frac{\zeta R_{\omega}}{R_{\alpha}R_{\alpha}^{\mathrm{cr}}}\right)^{2}\right]^{1/2}-1\right]^{1/2}, \tag{53}\] where \(\zeta^{2}=1-(k_{x}/R_{\alpha}^{\mathrm{cr}})^{2}\). Here we took into account that \((x+\mathrm{i}y)^{1/2}=\pm(X+\mathrm{i}Y)\), where \(X=2^{-1/2}\,[(x^{2}+y^{2})^{1/2}+x]^{1/2}\) and \(Y=\mathrm{sgn}(y)\,2^{-1/2}\,[(x^{2}+y^{2})^{1/2}-x]^{1/2}\). Here the threshold \(R_{\alpha}^{\mathrm{cr}}\) for the mean-field dynamo instability, defined by the conditions \(\gamma=0\) and \(R_{\omega}=0\), is given by \(R_{\alpha}^{\mathrm{cr}}=(k_{x}^{2}+k_{z}^{2})^{1/2}\). Equations (48)-(51) also yield the squared ratio of amplitudes \(|A_{0}/B_{0}|^{2}\), \[\left|\frac{A_{0}}{B_{0}}\right|^{2}=\left(R_{\alpha}R_{\alpha}^{\mathrm{cr}}\right)^{-2}\,\left(1+\zeta^{2}R_{\omega}^{2}\right)^{-1}, \tag{54}\] and the phase shift between the toroidal \(\overline{B}_{\varphi}\) and poloidal \(\overline{B}_{\mathrm{pol}}\) components of the mean magnetic field, \[\sin(2\delta)=-\zeta R_{\omega}\,\left[\left(R_{\alpha}R_{\alpha}^{\mathrm{cr}}\right)^{2}+\zeta^{2}R_{\omega}^{2}\right]^{-1/2}, \tag{55}\] where \(\overline{B}_{\mathrm{pol}}=R_{\alpha}R_{\alpha}^{\mathrm{cr}}\,\overline{A}\). Equation (54) yields the energy ratio of the poloidal \(\overline{B}_{\mathrm{pol}}\) and toroidal \(\overline{B}_{\varphi}\) mean magnetic field components as \[\frac{\overline{B}_{\mathrm{pol}}^{2}}{\overline{B}_{\varphi}^{2}}=\left(1+\zeta^{2}R_{\omega}^{2}\right)^{-1}.
\tag{56}\] Asymptotic formulas for the growth rate of the dynamo instability and the frequency of the dynamo waves for a weak differential rotation, \(\zeta R_{\omega}\ll R_{\alpha}R_{\alpha}^{\mathrm{cr}}\), are given by \[\gamma=R_{\alpha}R_{\alpha}^{\mathrm{cr}}\left[1+\frac{1}{8}\left(\frac{\zeta R_{\omega}}{R_{\alpha}R_{\alpha}^{\mathrm{cr}}}\right)^{2}\right]-\left(R_{\alpha}^{\mathrm{cr}}\right)^{2}, \tag{57}\] \[\omega=-\frac{\zeta R_{\omega}}{2}. \tag{58}\] In this case, the mean-field \(\alpha^{2}\) dynamo is slightly modified by a weak differential rotation, and the phase shift between the fields \(\overline{B}_{\varphi}\) and \(\overline{B}_{\mathrm{pol}}\) vanishes, while \(\overline{B}_{\mathrm{pol}}/\overline{B}_{\varphi}\sim 1\) [see Eqs. (55)-(56)]. In the opposite case, for a strong differential rotation, \(\zeta R_{\omega}\gg R_{\alpha}R_{\alpha}^{\mathrm{cr}}\), the growth rate of the dynamo instability and the frequency of the dynamo waves are given by \[\gamma=\left[\frac{1}{2}\,\zeta\,R_{\alpha}^{\mathrm{cr}}\,R_{\alpha}|R_{\omega}|\right]^{1/2}-\left(R_{\alpha}^{\mathrm{cr}}\right)^{2}, \tag{59}\] \[\omega=-\mathrm{sgn}(R_{\omega})\left[\frac{1}{2}\,\zeta\,R_{\alpha}^{\mathrm{cr}}\,R_{\alpha}|R_{\omega}|\right]^{1/2}. \tag{60}\] In this case, the mean-field \(\alpha\Omega\) dynamo is slightly modified by a weak \(\alpha^{2}\) effect, and the phase shift between the fields \(\overline{B}_{\varphi}\) and \(\overline{B}_{\mathrm{pol}}\) tends to \(-\pi/4\), while \(\overline{B}_{\mathrm{pol}}/\overline{B}_{\varphi}\ll 1\) [see Eqs. (55)-(56)]. The necessary condition for the dynamo (\(\gamma>0\)) in this case reads: * when \(R_{\alpha}/R_{\alpha}^{\mathrm{cr}}<\sqrt{2}\), the mean-field \(\alpha^{2}\Omega\) dynamo is excited when \[\zeta\,|D_{\mathrm{L}}|>2\left(R_{\alpha}^{\mathrm{cr}}\right)^{3}; \tag{61}\] * when \(R_{\alpha}/R_{\alpha}^{\mathrm{cr}}>\sqrt{2}\), the mean-field \(\alpha^{2}\Omega\) dynamo is excited for any differential rotation, \(R_{\omega}\). Here \(D_{\mathrm{L}}=R_{\alpha}\,R_{\omega}\). An analysis similar to that performed in Section 3 yields the ratio of the nonlinear and linear dynamo numbers \(D_{\mathrm{N}}\left(\overline{B}\right)/D_{\mathrm{L}}\) in the nonlinear \(\alpha^{2}\Omega\) dynamo for strong mean magnetic fields in a shear-produced and a convective turbulence as \[\frac{D_{\mathrm{N}}\left(\overline{B}\right)}{D_{\mathrm{L}}} \approx 32\left[1+\frac{D_{\mathrm{cr}}^{1/2}}{D_{\mathrm{L}}}\,\left(\frac{\ell_{0}}{L_{B}}\right)^{2}\left(\frac{\overline{B}}{\overline{B}_{\mathrm{eq}}}\right)^{2}\right]^{-2}\times\frac{\alpha\left(\overline{B}\right)}{\alpha_{\mathrm{K}}^{(0)}}\left(\frac{\overline{B}}{\overline{B}_{\mathrm{eq}}}\right)^{3}, \tag{62}\] and in a forced turbulence as \[\frac{D_{\mathrm{N}}\left(\overline{B}\right)}{D_{\mathrm{L}}} \approx 32\left[1+\frac{1}{8}\frac{D_{\mathrm{cr}}^{1/2}}{D_{\mathrm{L}}}\,\left(\frac{\ell_{0}}{L_{B}}\right)^{2}\left(\frac{\overline{B}}{\overline{B}_{\mathrm{eq}}}\right)\right]^{-2}\times\frac{\alpha\left(\overline{B}\right)}{\alpha_{\mathrm{K}}^{(0)}}\left(\frac{\overline{B}}{\overline{B}_{\mathrm{eq}}}\right)^{3}. \tag{63}\] Equations (62)-(63) show that for the \(\alpha^{2}\Omega\) dynamo, the nonlinear dynamo number decreases with the increase of the mean magnetic field for a forced turbulence, a shear-produced turbulence, and a convective turbulence. This implies that the nonlinear mean-field \(\alpha^{2}\Omega\) dynamo instability is always saturated for strong mean magnetic fields.
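Returning briefly to the kinematic dispersion relation, the asymptotic formula (59) is easy to check against the exact growth rate (52); the following sketch (ours, with arbitrary illustrative parameters \(R_{\alpha}=2\), \(R_{\alpha}^{\mathrm{cr}}=\zeta=1\), not taken from the paper) shows the two converging once \(\zeta R_{\omega}\gg R_{\alpha}R_{\alpha}^{\mathrm{cr}}\):

```python
import numpy as np

Ra, Rcr, zeta = 2.0, 1.0, 1.0        # illustrative values, not from the paper

def gamma_exact(Rw):
    # Eq. (52): exact kinematic growth rate of the alpha^2-Omega dynamo
    x = zeta * Rw / (Ra * Rcr)
    return Ra * Rcr / np.sqrt(2.0) * np.sqrt(np.sqrt(1.0 + x**2) + 1.0) - Rcr**2

def gamma_strong(Rw):
    # Eq. (59): strong-differential-rotation asymptote
    return np.sqrt(0.5 * zeta * Rcr * Ra * abs(Rw)) - Rcr**2

for Rw in (1.0, 10.0, 100.0, 1000.0):
    print(f"R_w = {Rw:7.1f}: exact = {gamma_exact(Rw):8.3f}, "
          f"asymptote = {gamma_strong(Rw):8.3f}")
```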
When \((\zeta R_{\omega})^{2}\ll 1\), the poloidal and toroidal mean magnetic fields are of the same order of magnitude, so that Eqs. (62)-(63) do not contain the factor \(D_{\rm cr}^{1/2}/D_{\rm L}\), which is the ratio of energies of the poloidal and toroidal mean magnetic fields. This is similar to the mean-field nonlinear \(\alpha^{2}\) dynamo. ## 6 Conclusions In the sun, stars and galaxies, the large-scale magnetic fields originate from the mean-field dynamo instabilities. The saturation of the dynamo-generated large-scale magnetic fields is caused by algebraic and dynamic nonlinearities. However, these nonlinearities do not take into account the feedback of the generated mean magnetic field on the background turbulence. This nonlinear effect can be taken into account by means of the budget equation for the total turbulent energy. Using this approach and considering various origins of turbulence (e.g., a forced turbulence, a shear-produced turbulence and a convective turbulence), we have demonstrated that the mean-field \(\alpha\Omega\), \(\alpha^{2}\) and \(\alpha^{2}\Omega\) dynamo instabilities are always saturated, however strong the mean magnetic field. This is because the feedback of the generated mean magnetic field on the background turbulence, in combination with the algebraic and dynamic nonlinearities, results in a decrease of the nonlinear dynamo number with the increase of the mean magnetic field. These results have very important applications for astrophysical magnetic fields. ## Acknowledgments This work was partially supported by the Russian Science Foundation (grant 21-72-20067). We acknowledge the discussions with participants of the Nordita Scientific Program on "Towards a comprehensive model of the galactic magnetic field", Stockholm (April 2023), which is partly supported by NordForsk. ## Data Availability There are no new data associated with this article.
2307.15106
Towards non-perturbative BV-theory via derived differential geometry
We propose a global geometric framework which allows one to encode a natural non-perturbative generalisation of usual Batalin-Vilkovisky (BV-)theory. Namely, we construct a concrete model of derived differential geometry, whose geometric objects are formal derived smooth stacks, i.e. stacks on formal derived smooth manifolds, together with a notion of differential geometry on them. This provides a working language to study generalised geometric spaces that are smooth, infinite-dimensional, higher and derived at the same time. Such a formalism is obtained by combining Schreiber's differential cohesion with the machinery of Toën-Vezzosi's homotopical algebraic geometry applied to the theory of derived manifolds of Spivak and Carchedi-Steffens. We investigate two classes of examples of non-perturbative classical BV-theories in the context of derived differential cohesion: scalar field theory and Yang-Mills theory.
Luigi Alfonsi, Charles A. S. Young
2023-07-27T17:53:30Z
http://arxiv.org/abs/2307.15106v2
# Towards non-perturbative BV-theory via derived differential geometry

###### Abstract

We propose a global geometric framework which allows one to encode a natural non-perturbative generalisation of usual Batalin-Vilkovisky (BV-)theory. Namely, we construct a concrete model of derived differential cohesive geometry, whose geometric objects are formal derived smooth stacks, i.e. stacks on formal derived smooth manifolds, together with a notion of differential geometry on them. This provides a working language to study generalised geometric spaces that are smooth, infinite-dimensional, higher and derived at the same time. Such a formalism is obtained by combining Schreiber's differential cohesion with the machinery of Toën-Vezzosi's homotopical algebraic geometry applied to the theory of derived manifolds of Spivak and Carchedi-Steffens. We investigate two classes of examples of non-perturbative classical BV-theories in the context of derived differential cohesion: scalar field theory and Yang-Mills theory.

**Keywords**: Batalin-Vilkovisky formalism, higher structures, Yang-Mills theory, derived geometry, higher stacks, homotopical algebra

**MSC 2020**: 81Txx, 14A30, 18N40

###### Contents

* 0 Introduction
  * 0.1 Goals of this paper
  * 0.2 Overview of main results
* 1 Lightning review of smooth stacks
  * 1.1 Smooth sets
  * 1.2 Smooth stacks
* 2 Zoology of formal smooth stacks
  * 2.1 \(\mathcal{C}^{\infty}\)-algebras as a Lawvere theory
  * 2.2 \(\mathcal{C}^{\infty}\)-varieties and formal smooth manifolds
  * 2.3 Definition of formal smooth stacks
* 3 Formal derived smooth stacks
  * 3.1 Homotopy \(\mathcal{C}^{\infty}\)-algebras
  * 3.2 Formal derived smooth manifolds
  * 3.3 Definition of formal derived smooth stacks
  * 3.4 Discussion of formal derived smooth sets
    * 3.4.1 Derived affine \(\mathcal{C}^{\infty}\)-schemes
    * 3.4.2 Formal derived diffeological spaces
  * 3.5 Derived mapping stacks and bundles
  * 3.6 Derived de Rham cohomology
    * 3.6.1 Quasi-coherent \((\infty,1)\)-sheaves of modules
    * 3.6.2 Derived de Rham algebra
* 4 Derived differential cohesive geometry
  * 4.1 Derived cohesion
  * 4.2 Derived differential cohesion
  * 4.3 Formal moduli problems from derived infinitesimal cohesion
  * 4.4 \(L_{\infty}\)-algebroids as formal derived smooth stacks
  * 4.5 Derived jet bundles
* 5 Global aspects of classical BV-theory
  * 5.1 Review of BV-theory via \(L_{\infty}\)-algebras
  * 5.2 Global scalar field theory
  * 5.3 Global BRST-BV formalism
    * 5.3.1 Global BRST formalism
    * 5.3.2 Global Yang-Mills theory
* 6 Outlook

## Introduction

**BV-theory.** Batalin-Vilkovisky (BV-)theory [1] is an extremely powerful and successful mathematical framework for perturbatively formalising and quantising classical field theories, including theories with gauge symmetries. BV-theory has been applied to a wide range of physical systems and has deep connections to various areas of mathematics, including homological algebra, Poisson geometry, and symplectic geometry. See [12] for an overview. Essentially, classical BV-theory replaces the problem of determining the critical locus of the action functional - i.e. the space of solutions of the field equations - with the problem of constructing the derived critical locus of the action functional [13].
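In symbols (a standard schematic presentation of this idea, with \(X\) denoting the space of fields): the classical critical locus is the intersection of the graph of \(\mathrm{d}S\) with the zero section of the cotangent bundle, and its derived refinement replaces this fibre product with a homotopy fibre product,

\[\mathrm{Crit}(S)\;=\;\Gamma_{\mathrm{d}S}\times_{T^{\ast}X}X,\qquad\mathrm{dCrit}(S)\;=\;\Gamma_{\mathrm{d}S}\times^{h}_{T^{\ast}X}X,\]

where \(\Gamma_{\mathrm{d}S}\hookrightarrow T^{\ast}X\) is the graph of the \(1\)-form \(\mathrm{d}S\) and \(X\hookrightarrow T^{\ast}X\) is the zero section.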
In the literature, various different approaches to BV-theory emerge in the settings of several broader programmes, including the \(NQP\)-_manifolds_ approach, among others. A recurring theme across these settings is that generalised spaces are characterised by the ways in which they can be probed by simple test objects. For example, in higher smooth geometry our test spaces are ordinary smooth manifolds and the smooth structure of our
spaces - namely, smooth stacks - is determined by the simplicial set of ways every smooth manifold can probe them. Similarly, formal smooth stacks are defined by using infinitesimally thickened manifolds as test spaces. The success of smooth stacks is multifaceted. First of all, just like smooth sheaves (also known as smooth sets) they generalise smooth manifolds by including infinite-dimensional smooth spaces. Secondly, they "categorify" smooth manifolds by relaxing the gluing conditions. The result is that spaces can be glued together by higher gauge transformations. The archetypal example of a smooth stack is \(\mathbf{Bun}_{G}(M)\), the stack of principal \(G\)-bundles on a fixed ordinary smooth manifold \(M\). At any test manifold \(U\), the space of sections \(\mathrm{Hom}(U,\,\mathbf{Bun}_{G}(M))\) is a groupoid whose objects are \(U\)-parametrised families of \(G\)-bundles on \(M\) and whose morphisms are \(U\)-parametrised families of gauge transformations. The theory of smooth stacks has been systematised by the notion of differential cohesive \((\infty,1)\)-topos developed by [DCCT] (see also [30]). Most often, the intersection of two smooth sub-manifolds is not a smooth manifold. The only exception is when the two sub-manifolds are transverse. As a reflection of this property of smooth manifolds, the limits in the category of smooth stacks (despite existing) do not behave well from an intersection theory point of view. However, in mathematical physics it is of primary importance to construct a well-defined space of solutions of the equations of motion (also known as the phase space), which can be precisely understood as the intersection between the section induced by the first variation of the action functional and a zero-section. Derived manifolds were introduced by [14] to solve the problem of arbitrary intersections of smooth manifolds. Therefore, it is reasonable to expect that, by replacing smooth manifolds with derived manifolds, we can construct a notion of derived stacks which behaves nicely from an intersection theory standpoint.

Figure 1: Probing a formal smooth stack by (a) infinitesimally thickened points and (b) ordinary smooth manifolds.

Usual BV-theory is perturbatively quantised by a certain deformation of the complex of functions on the formal moduli problem (see [13, 14]). However, in the context of stacks, there exists a proposed quantisation procedure which is completely distinct from BV-theory: higher geometric quantisation.

**Higher geometric quantisation.** _Higher geometric quantisation_ [16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29] is a mathematical framework for constructing a quantum theory from a classical one which generalises ordinary geometric quantisation. See [24] for an introduction to the field. Recall that ordinary geometric quantisation is a well-established method for constructing a global-geometric quantisation of the phase space of a classical mechanical system, seen as a symplectic manifold \((M,\omega)\). This is achieved by the construction of a prequantum \(U(1)\)-bundle on the symplectic manifold \((M,\omega)\), i.e. a principal \(U(1)\)-bundle \(P\twoheadrightarrow M\) whose curvature is \(\operatorname{curv}(P)=\omega\in\Omega^{2}_{\mathrm{cl}}(M)\). The Hilbert space of the system is then constructed as the space of polarised sections of the associated bundle \(P\times_{U(1)}\mathbb{C}\).
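For completeness (a classical fact of prequantisation theory, recalled here rather than taken from the text above): such a prequantum bundle exists if and only if the symplectic form has integral periods, i.e.

\[[\omega]\;\in\;\mathrm{im}\big(H^{2}(M;\mathbb{Z})\longrightarrow H^{2}(M;\mathbb{R})\big),\]

with the normalisation of \(\operatorname{curv}(P)\) fixed accordingly; the first Chern class of \(P\) then provides an integral refinement of \([\omega]\).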
That being the case, higher geometric quantisation generalises ordinary geometric quantisation in two directions:

* the ordinary prequantum \(U(1)\)-bundle can be generalised to a bundle \(n\)-gerbe;
* the ordinary phase space can be generalised to a symplectic higher stack, as first introduced by [21] and further developed by [10, 22, 11].

Higher geometric quantisation does, however, suffer from the difficulty that it is not clear, in general, how to polarise sections of the prequantum bundle and consequently how to obtain a fully fledged Hilbert space. In this sense, higher geometric _pre_quantisation is quite successful, but the quantisation step itself is less understood. Nonetheless, higher geometric quantisation reminds us of the crucial lesson that quantisation is ultimately a global-geometric process. In contrast, BV-theory is perturbative, since the classical phase space is quantised in a series expansion around a fixed solution, but it has a good understanding of what the quantisation step should look like, at least locally. In this sense, one could argue that the strengths and limitations of the two formalisms are complementary.

### Goals of this paper

This paper is intended as a first step towards the following two main objectives. The first one concerns the development of a global-geometric framework for BV-theory and the second concerns its non-perturbative quantisation. This is closely related to the intriguing work by [10] in the context of derived algebraic geometry.

Figure 2: Intuitive picture of the two main generalisations of smooth geometry: formal smooth geometry and derived smooth geometry. In the former, we allow points to be infinitesimally extended, i.e. formally thickened. In the latter, points can be enhanced to a geometric object whose algebra of functions is simplicial.

**Goal I: global classical BV-theory.** The usual approaches to BV-theory are intrinsically perturbative, even just at the classical level. As we argued, the reason is that the formalism of usual BV-theory studies a classical field theory in terms of its infinitesimal deformations around a fixed solution of its equations of motion. In other words, the formalism of usual BV-theory does not know anything about the global geometry of the configuration space of the field away from the fixed solution. However, quantisation is known to be a global process, which depends on the global geometry of the phase space of a field theory. This fundamental issue is reified, in Yang-Mills field theory, as follows. A Yang-Mills field configuration is the datum \((P,\nabla_{A})\) of a principal \(G\)-bundle \(P\twoheadrightarrow M\) on the spacetime manifold with a connection \(\nabla_{A}\). However, pointed formal moduli problems can only encode infinitesimal deformations of some fixed \((P,\nabla_{A})\) and the Lie algebra of their infinitesimal gauge transformations. This makes usual BV-theory structurally blind to the global-geometric properties of gauge fields, as already observed by [10]. As an archetypal example, recall that the electromagnetic field has gauge group \(U(1)\), so that its infinitesimal gauge transformations are indistinguishable from the ones of a theory with gauge group \(\mathbb{R}\).
However, the global geometry of the electromagnetic field is described by principal \(U(1)\)-bundles with connection, which come with fundamental global-geometric features - such as magnetic charges, encoded by the Chern classes of the bundles, and Aharonov-Bohm effects - that a gauge theory with gauge group \(\mathbb{R}\) would not show. The first goal is, then, to develop a framework which generalises the formal moduli problems of BV-theory beyond infinitesimal deformation theory. To do that, we want to apply Toën-Vezzosi's derived geometry [1, 2] to Carchedi-Steffens' derived manifolds [13] to construct _formal derived smooth stacks_. These geometric objects must generalise the traditional notion of manifold in the following ways:

* **formal**: allows infinitesimally thickened geometric objects, e.g. formal disks;
* **derived**: allows a (categorified) generalisation of intersections, e.g. non-transversal intersections;
* **smooth**: allows smooth geometric objects, e.g. smooth manifolds and diffeological spaces;
* **stack**: allows a (categorified) generalisation of gluing, e.g. gauge transformations.

Our proposed framework of formal derived smooth stacks will be rooted in the formalism of Schreiber's differential cohesion [4], which has been applied to formalise many higher geometric structures underlying theoretical physics.
### Overview of main results

Here, we will provide a brief overview of all the main results of this paper section by section.

**Model of formal derived smooth stacks.** In section 3, we introduce the fundamental geometric object which we are going to consider in this paper: the formal derived smooth stack. To define formal derived smooth stacks, first we must introduce formal derived smooth manifolds, which will be our probing spaces. In this respect, [10] tells us that there is a canonical equivalence of \((\infty,1)\)-categories \(\mathbf{dMfd}\simeq\mathbf{sC}^{\infty}\mathbf{Alg}^{\mathrm{op}}_{\mathrm{fp}}\) between the \((\infty,1)\)-category \(\mathbf{dMfd}\) of derived manifolds and the opposite \((\infty,1)\)-category \(\mathbf{sC}^{\infty}\mathbf{Alg}_{\mathrm{fp}}\) of homotopically finitely presented \(\mathcal{C}^{\infty}\)-algebras. However, to achieve our goals, we will need to slightly generalise the notion of derived manifold. In analogy with the discussion of [11] in the context of algebraic geometry, we define the \((\infty,1)\)-category of formal derived smooth manifolds by \[\mathbf{dFMfd}\,\coloneqq\,\mathbf{sC}^{\infty}\mathbf{Alg}^{\mathrm{op}}_{\mathrm{afp}}, \tag{0.2.1}\] where \(\mathbf{sC}^{\infty}\mathbf{Alg}_{\mathrm{afp}}\) is the \((\infty,1)\)-category of almost finitely presented \(\mathcal{C}^{\infty}\)-algebras. We then define the notion of formally etale morphisms of formal derived smooth manifolds and, thus, we equip the \((\infty,1)\)-category \(\mathbf{dFMfd}\) with the structure of an etale \((\infty,1)\)-site. Finally, we define the \((\infty,1)\)-category \(\mathbf{dFSmoothStack}\) of formal derived smooth stacks as the \((\infty,1)\)-category of stacks on the site \(\mathbf{dFMfd}\). More technically, we will see that there is a certain simplicial model category \([\mathbf{dFMfd}^{\mathrm{op}},\mathbf{sSet}]^{\circ}_{\mathrm{proj,loc}}\) whose homotopy coherent nerve presents the \((\infty,1)\)-category of formal derived smooth stacks, i.e. \[\mathbf{dFSmoothStack}\,\coloneqq\,\mathbf{N}_{hc}([\mathbf{dFMfd}^{\mathrm{op}},\mathbf{sSet}]^{\circ}_{\mathrm{proj,loc}}). \tag{0.2.2}\] The relation between formal derived smooth stacks and usual smooth stacks will be clarified by the following proposition.

Figure 3: The two main quantisation procedures and their potential relation.

**Proposition 3.21** (Relation with usual smooth stacks).: There exists an adjunction \((i\dashv t_{0})\) of \((\infty,1)\)-functors embedding the \((\infty,1)\)-category of smooth stacks into the \((\infty,1)\)-category of formal derived smooth stacks, \[i\,:\,\mathbf{SmoothStack}\;\rightleftarrows\;\mathbf{dFSmoothStack}\,:\,t_{0}, \tag{0.2.3}\] where \(i\) is fully faithful and \(t_{0}\) preserves finite products.
The relation of formal derived smooth stacks with smooth stacks and other relevant classes of smooth spaces is summed up in figure 4. Since the functor \(t_{0}\) preserves finite products, we have the following equivalence of smooth stacks: \[t_{0}\big{(}i(X)\times_{i(Z)}^{h}i(Y)\big{)}\;\stackrel{{\simeq}}{{\longrightarrow}}\;X\times_{Z}Y, \tag{0.2.4}\] for any smooth stacks \(X\), \(Y\) and \(Z\).

**Differential forms on formal derived smooth stacks.** In the last part of section 3, we define the \((\infty,1)\)-category \(\operatorname{QCoh}(X)\) of quasi-coherent sheaves of modules on a formal derived smooth stack \(X\in\mathbf{dFSmoothStack}\). In particular, we provide the definition of the cotangent complex \(\mathbb{L}_{X}\in\operatorname{QCoh}(X)\) of a formal derived smooth stack \(X\) in a sense which is compatible with its formal derived smooth structure. Then, we construct the complex of \(p\)-forms on a formal derived smooth stack \(X\) by \[\operatorname{A}^{p}(X)\,\coloneqq\,\operatorname{R\Gamma}(X,\wedge_{\mathbb{O}_{X}}^{p}\mathbb{L}_{X}), \tag{0.2.5}\] and the complex of closed \(p\)-forms on a formal derived smooth stack \(X\) by \[\operatorname{A}^{p}_{\text{cl}}(X)\;\coloneqq\;\bigg{(}\prod_{n\geq p}\operatorname{A}^{n}(X)[-n]\bigg{)}[p]. \tag{0.2.6}\]

Figure 4: A summary family tree of stacks in formal derived smooth geometry.

This implies that an \(n\)-cocycle in \(\mathrm{A}^{p}_{\mathrm{cl}}(X)\) is given by a formal sum \((\omega_{i})=(\omega_{p}+\omega_{p+1}+\dots)\), where each form \(\omega_{i}\in\mathrm{A}^{i}(X)\) is an element of degree \(n+p-i\), satisfying the equations \[Q\omega_{p}=0,\qquad\mathrm{d}_{\mathrm{dR}}\omega_{i}+Q\omega_{i+1}=0, \tag{0.2.7}\] for every \(i\geq p\). Finally, we construct the formal derived smooth stack \(\boldsymbol{\mathcal{A}}^{p}(n)\) as the moduli stack of \(n\)-shifted differential \(p\)-forms and \(\boldsymbol{\mathcal{A}}^{p}_{\mathrm{cl}}(n)\) as the moduli stack of closed \(n\)-shifted differential \(p\)-forms.

**Derived differential cohesion.** In section 4 we show that the formalism of differential cohesion introduced by Schreiber in [DCCT] extends very naturally to the derived smooth setting. First of all, we show that the \((\infty,1)\)-topos of formal derived smooth stacks is cohesive. Roughly speaking, a cohesive structure provides an \((\infty,1)\)-topos with the properties required for a geometry to take place in it and for its objects to be fully-fledged spaces.

**Theorem 4.2** (Cohesive \((\infty,1)\)-topos of formal derived smooth stacks).: The \((\infty,1)\)-topos of formal derived smooth stacks **dFSmoothStack** is cohesive.

We will see that the structure of derived cohesion induces the following triplet of adjoint endofunctors: \[(\textstyle\int\dashv\flat\dashv\sharp)\,:\ \mathbf{dFSmoothStack}\longrightarrow\mathbf{dFSmoothStack}, \tag{0.2.8}\] where we respectively have: 1. _shape modality_ \(\int\), 2. _flat modality_ \(\flat\), 3. _sharp modality_ \(\sharp\). More specifically, to equip an \((\infty,1)\)-topos with a notion of differential geometry, a cohesive structure is not enough: we need differential cohesion. With the following theorem, we show that formal derived smooth stacks come naturally equipped also with a differential cohesive structure, which we will call derived differential cohesion.
**Theorem 4.7** (Differential cohesive \((\infty,1)\)-topos of formal derived smooth stacks).: The cohesive \((\infty,1)\)-topos **dFSmoothStack** of formal derived smooth stacks is naturally equipped with a differential cohesive structure.

Such a structure, which we will call derived differential cohesive structure, induces the following triplet of adjoint endofunctors: \[(\Re\dashv\Im\dashv\&)\,:\ \mathbf{dFSmoothStack}\longrightarrow\mathbf{dFSmoothStack}, \tag{0.2.9}\] where we respectively have: 1. _infinitesimal reduction modality_ \(\Re\), 2. _infinitesimal shape modality_ \(\Im\), 3. _infinitesimal flat modality_ \(\&\). Differential cohesive geometry underpins the definition of the de Rham space \(\Im(X)\) of any formal derived smooth stack \(X\) by the infinitesimal shape modality. This could be interpreted as an infinitesimal version of the path \(\infty\)-groupoid of \(X\) and its role will be pivotal. In fact, we can define the formal disk \(\mathbb{D}_{X,x}\) at the point \(x:*\to X\) of a formal derived smooth stack \(X\in\mathbf{dFSmoothStack}\) by the homotopy pullback of formal derived smooth stacks \[\begin{CD}\mathbb{D}_{X,x}@>>>X\\@VVV@VV{\mathrm{i}_{X}}V\\ \ast@>{\mathrm{i}_{X}\circ x}>>\Im(X),\end{CD} \tag{0.2.10}\] where \(\mathrm{i}_{X}:X\longrightarrow\Im(X)\) is a natural map. The definition of formal disk entails the geometry of jets of formal derived smooth stacks.

**Relation with formal moduli problems.** In the second half of section 4 we study the relation of formal derived smooth stacks with formal moduli problems. We introduce the simplicial category \(\mathsf{dgArt}_{\mathbb{R}}^{\leq 0}\) of dg-Artinian algebras, then we construct the \((\infty,1)\)-category of formal moduli problems by the \((\infty,1)\)-category of pre-stacks \[\mathbf{FMP}\,\coloneqq\,\mathbf{N}_{hc}([\mathsf{dgArt}_{\mathbb{R}}^{\leq 0},\mathsf{sSet}]_{\mathrm{proj}}^{\circ}), \tag{0.2.11}\] with its natural structure of \((\infty,1)\)-topos of pre-stacks. The following proposition characterises the \((\infty,1)\)-category of formal moduli problems as a cohesive \((\infty,1)\)-topos which is, in particular, infinitesimally cohesive in the sense of [DCCT, Definition 4.1.21]. This, roughly, means that the objects of \(\mathbf{FMP}\) are infinitesimally thickened simplicial sets of points.

**Proposition 4.43** (Infinitesimal cohesive \((\infty,1)\)-topos of formal moduli problems).: The \((\infty,1)\)-topos \(\mathbf{FMP}\) of formal moduli problems has a natural infinitesimally cohesive structure in the sense of [DCCT, Definition 4.1.21].

Figure 5: The pointed formal moduli problem underlying the BV-complex can be seen as the infinitesimal neighborhood of a fixed solution in a formal derived smooth stack corresponding to a given classical field theory.

Moreover, we will show that the \((\infty,1)\)-topos of formal moduli problems is related to the one of formal derived smooth stacks by morphisms of \((\infty,1)\)-topoi of the following form: \[\text{Smooth stacks}\;\longrightarrow\;\text{Formal derived smooth stacks}\;\longrightarrow\;\text{Formal moduli problems},\] presenting formal derived smooth stacks as a refinement of usual smooth stacks.
We will make this relation precise in terms of a structure known as relative cohesion, which induces the following triple of adjoint endofunctors \[(\textstyle\int^{\mathrm{rel}}\dashv\flat^{\mathrm{rel}}\dashv\sharp^{\mathrm{rel}})\,:\ \mathbf{dFSmoothStack}\longrightarrow\mathbf{dFSmoothStack}.\]

## Lightning review of smooth stacks

In this section we will provide a brief review of the theory of smooth stacks - which are sometimes known as differentiable stacks in the literature.

### Smooth sets

Let \(\mathsf{Mfd}\) be the ordinary category whose objects are smooth manifolds and whose morphisms are smooth maps between them. We stress that in all the rest of this paper sans serif will be used to denote ordinary categories. Now, we can provide the category \(\mathsf{Mfd}\) with the structure of a site by assigning to each smooth manifold \(M\in\mathsf{Mfd}\) a collection of covering families, i.e. a collection of families of morphisms \(\{U_{i}\to M\}_{i\in I}\) satisfying some conditions.

**Definition 1.1** (Covering of a smooth manifold).: We define a _covering family of a smooth manifold_ \(M\) as a set of injective local diffeomorphisms \[\{U_{i}\xhookrightarrow{\phi_{i}}M\}_{i\in I} \tag{1.1.1}\] such that they induce a surjective local diffeomorphism \[\coprod_{i\in I}U_{i}\xrightarrow{(\phi_{i})_{i\in I}}M. \tag{1.1.2}\]

The site structure on \(\mathsf{Mfd}\) given by the choice of covering families above is known as the etale site.

**Definition 1.2** (Smooth sets).: _Smooth sets_ are defined as sheaves on the site of smooth manifolds \(\mathsf{Mfd}\). The category of smooth sets is, then, defined by \[\mathsf{SmoothSet}\;\coloneqq\;\mathsf{Sh}(\mathsf{Mfd}). \tag{1.1.3}\]

The usual gluing axiom of sheaves can be seen in the following light. Let \(\{U_{i}\to M\}_{i\in I}\) be a covering family and notice that \(M\) can be rewritten as the colimit of the diagram of manifolds \[M\;\simeq\;\mathrm{colim}\bigg{(}\;\coprod_{i,j\in I}U_{i}\times_{M}U_{j}\;\rightrightarrows\;\coprod_{i\in I}U_{i}\;\bigg{)}. \tag{1.1.4}\] Then \(X\), to be a sheaf, must have a set of sections on \(M\) given by the limit of the diagram \[X(M)\;\simeq\;\mathrm{lim}\bigg{(}\;\prod_{i,j\in I}X(U_{i}\times_{M}U_{j})\;\leftleftarrows\;\prod_{i\in I}X(U_{i})\;\bigg{)}. \tag{1.1.5}\]

**Example 1.3** (Yoneda embedding of smooth manifolds).: A smooth manifold is the simplest example of a smooth set.
Let \(M\in\mathsf{Mfd}\) be a smooth manifold, then it naturally Yoneda-embeds into a smooth set of the form \[M\,:\,\mathsf{Mfd}^{\mathrm{op}}\;\longrightarrow\;\mathsf{Set},\qquad U\;\longmapsto\;\mathrm{Hom}_{\mathsf{Mfd}}(U,M), \tag{1.1.6}\] where \(\mathrm{Hom}_{\mathsf{Mfd}}(U,M)\) is the set of smooth maps from \(U\) to \(M\). Thus, we have the full and faithful embedding of categories \[\mathsf{Mfd}\;\hookrightarrow\;\mathsf{SmoothSet}. \tag{1.1.7}\] (In what follows, we shall sometimes make use of this embedding without comment.) The notion of smooth set is a categorically well-behaved generalisation of smooth manifold which, crucially, allows us to encode infinite-dimensional smooth spaces. A relevant example is the smooth space \([M,N]\) of functions from a smooth manifold \(M\) to \(N\).

**Example 1.4** (Mapping space).: Let \(M,N\in\mathsf{Mfd}\) be a pair of smooth manifolds. We can define the mapping space \([M,N]\in\mathsf{SmoothSet}\) by the smooth set \[\begin{split}[M,N]\,:\,\mathsf{Mfd}^{\mathrm{op}}&\longrightarrow\,\mathsf{Set}\\ U&\longmapsto\,\mathrm{Hom}_{\mathsf{Mfd}}(U\times M,N),\end{split} \tag{1.1.8}\] functorially, on elements \(U\in\mathsf{Mfd}\) of the site.

**Example 1.5** (Moduli space of differential forms).: It is possible to define a smooth set \(\boldsymbol{\Omega}^{1}\in\mathsf{SmoothSet}\), which we can call the moduli space of differential forms, by \[\begin{split}\boldsymbol{\Omega}^{1}\,:\,\mathsf{Mfd}^{\mathrm{op}}&\longrightarrow\,\mathsf{Set}\\ U&\longmapsto\,\Omega^{1}(U),\end{split} \tag{1.1.9}\] and by sending morphisms \(f:U\to U^{\prime}\) to pullbacks \(f^{*}:\Omega^{1}(U^{\prime})\to\Omega^{1}(U)\). This remarkably abstract moduli space of differential forms is very useful in practice, because it allows us to work with differential forms on general smooth sets, including mapping spaces.

**Definition 1.6** (Differential forms on a smooth set).: We define the _set of differential \(1\)-forms_ on a given smooth set \(X\in\mathsf{SmoothSet}\) by the following hom-set of smooth sets: \[\Omega^{1}(X)\;\coloneqq\;\mathrm{Hom}(X,\boldsymbol{\Omega}^{1}), \tag{1.1.10}\] where \(\boldsymbol{\Omega}^{1}\in\mathsf{SmoothSet}\) is the moduli space of differential forms.

**Remark 1.7** (de Rham differential).: There exists a canonical morphism of smooth sets \[\mathrm{d}_{\mathrm{dR}}\,:\,\mathbb{R}\,\longrightarrow\,\boldsymbol{\Omega}^{1}, \tag{1.1.11}\] which is given by the differential \(\mathrm{d}:\mathcal{C}^{\infty}(U,\mathbb{R})\to\Omega^{1}(U)\) of functions on each smooth manifold \(U\) in the site. This particularly exotic morphism of smooth sets \(\mathrm{d}_{\mathrm{dR}}\in\mathrm{Hom}(\mathbb{R},\boldsymbol{\Omega}^{1})\) is known as the de Rham differential.

**Remark 1.8** (Pullback of differential forms).: Given a morphism \(f:X\longrightarrow Y\) of smooth sets \(X,Y\in\mathsf{SmoothSet}\), we have a morphism of sets \(f^{*}:\Omega^{p}(Y)\longrightarrow\Omega^{p}(X)\) such that the following square commutes (1.1.12)

**Remark 1.9** (Variational calculus on smooth sets).: The power of smooth sets is their capacity to provide a well-defined formalism for variational calculus. For example, we can consider the mapping space \([M,\mathbb{R}]\) for a given smooth manifold \(M\).
This can be thought of as the infinite-dimensional smooth space of smooth functions on the manifold \(M\), and there is no issue in working with differential forms on such a large space: differential \(1\)-forms are simply given by \(\Omega^{1}([M,\mathbb{R}])\coloneqq\operatorname{Hom}([M,\mathbb{R}],\mathbf{\Omega}^{1})\), as above. Similarly, a smooth functional on such a space will be given by a morphism of smooth sets \[S\,:\,[M,\mathbb{R}]\,\longrightarrow\,\mathbb{R} \tag{1.1.13}\] to the real line. The so-called first variation of this functional is immediately given by the following composition: \[\mathrm{d}_{\mathrm{dR}}S\,:\,[M,\mathbb{R}]\,\xrightarrow{S}\,\,\mathbb{R}\,\xrightarrow{\mathrm{d}_{\mathrm{dR}}}\,\mathbf{\Omega}^{1}, \tag{1.1.14}\] which means that we have obtained a perfectly legitimate 1-form \(\mathrm{d}_{\mathrm{dR}}S\in\Omega^{1}([M,\mathbb{R}])\) on the infinite-dimensional mapping space \([M,\mathbb{R}]\) of smooth functions on \(M\). We can now define the functor which forgets the smooth structure of smooth sets, i.e. which sends any smooth set to its underlying bare set.

**Definition 1.10** (Global section functor).: We define the _global section functor_ by \[\varGamma(-)\coloneqq\operatorname{Hom}_{\mathsf{SmoothSet}}(\,*\,,-)\,:\,\mathsf{SmoothSet}\,\longrightarrow\,\mathsf{Set}. \tag{1.1.15}\]

The global section functor will allow us to define an important class of smooth sets: diffeological spaces. Diffeological spaces were first introduced by [10, 11] and then reformulated by [13]. A diffeological space is a powerful generalisation of a smooth manifold which, in particular, provides a natural setting to study infinite-dimensional smooth spaces. Useful examples of diffeological spaces will be the space of smooth sections of a fibre bundle and the infinite-jet bundle of a fibre bundle. Diffeological spaces behave well under categorical constructions and they embed into a sub-category, said to be concrete, of the topos of smooth sets [14, 15].

**Definition 1.11** (Diffeological space).: A _diffeological space_ \(X\) is defined as a concrete smooth set, i.e. such that for any smooth manifold \(U\in\mathsf{Mfd}\) the natural map \[X(U)\,\hookrightarrow\,\operatorname{Hom}_{\mathsf{Set}}(\varGamma U,\,\varGamma X), \tag{1.1.16}\] is a monomorphism of sets.

**Example 1.12** (Examples of diffeological spaces).: A smooth manifold \(M\in\mathsf{Mfd}\hookrightarrow\mathsf{SmoothSet}\), Yoneda-embedded in smooth sets, is a diffeological space. If we consider another smooth manifold \(N\in\mathsf{Mfd}\), then the mapping space \([M,N]\) is also a diffeological space. This is because, given any section \(f\in[M,N](U)\simeq\operatorname{Hom}_{\mathsf{Mfd}}(M\times U,N)\), we can embed it into a map \(\varGamma U\to\varGamma[M,N]\simeq\operatorname{Hom}_{\mathsf{Mfd}}(M,N)\) which sends any point \(u\in\varGamma U\) to \(f(-,u)\in\operatorname{Hom}_{\mathsf{Mfd}}(M,N)\).

### Smooth stacks

The category \(\mathsf{sSet}\) of _simplicial sets_ can be seen as the functor category \([\Delta^{\mathrm{op}},\mathsf{Set}]\), where \(\Delta\) is the simplex category - i.e. the category whose objects are non-empty finite ordinals and whose morphisms are order-preserving maps - and \(\mathsf{Set}\) is the category of sets. The category \(\mathsf{sSet}\) of simplicial sets is naturally a simplicial category, i.e. a category enriched over \(\mathsf{sSet}\) itself. In the rest of the paper we will keep using sans serif to denote simplicial categories.
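To fix ideas, here is a minimal computational sketch (ours, not the paper's; the encoding of simplices of the standard \(n\)-simplex as weakly increasing tuples is an assumption of the illustration) of the combinatorics packaged by the functor category \([\Delta^{\mathrm{op}},\mathsf{Set}]\): face maps and the simplicial identity they satisfy.

```python
# A minimal sketch: an m-simplex of the standard n-simplex Delta^n is a
# weakly order-preserving map [m] -> [n], encoded as a weakly increasing tuple.
from itertools import combinations_with_replacement

def simplices(n, m):
    """All m-simplices of the standard n-simplex Delta^n."""
    return list(combinations_with_replacement(range(n + 1), m + 1))

def face(i, s):
    """The i-th face map d_i: delete the i-th vertex of the simplex s."""
    return s[:i] + s[i + 1:]

# Check the simplicial identity d_i d_j = d_{j-1} d_i for i < j
# on all 3-simplices of Delta^2.
for s in simplices(2, 3):
    for j in range(1, 4):
        for i in range(j):
            assert face(i, face(j, s)) == face(j - 1, face(i, s))
print("simplicial identity d_i d_j = d_{j-1} d_i (i < j) verified on Delta^2")
```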
Moreover, we will denote by \(\mathsf{sSet}_{\mathsf{Quillen}}\) the simplicial category of simplicial sets equipped with the Quillen model structure [16], whose weak equivalences are weak homotopy equivalences of simplicial sets and whose fibrations are Kan fibrations. Let \(\mathsf{W}\) be the set of weak homotopy equivalences of simplicial sets. Then, by simplicial localisation, one can define the category of Kan complexes \[\mathsf{KanCplx}\,\coloneqq\,L_{\mathsf{W}}\mathsf{sSet}_{\mathsf{Quillen}}. \tag{1.2.1}\] It can be shown that the full subcategory \(\mathsf{sSet}^{\circ}_{\mathsf{Quillen}}\) of fibrant-cofibrant objects of \(\mathsf{sSet}_{\mathsf{Quillen}}\) is equivalent to the simplicial category of Kan complexes, i.e. \[\mathsf{KanCplx}\;\simeq\;\mathsf{sSet}^{\circ}_{\mathsf{Quillen}}. \tag{1.2.2}\] Moreover, we can make this simplicial category into a fully fledged \((\infty,1)\)-category. Essentially, an \((\infty,1)\)-category is a simplicial set which satisfies an extra condition, known as the weak Kan condition (which requires all the inner horns of the simplicial set to have fillers). By a standard construction [10, Section 1.1.5], applying the homotopy-coherent nerve functor \(\mathbf{N}_{hc}\) to our simplicial category yields the \((\infty,1)\)-category of \(\infty\)-groupoids, i.e. \[\infty\mathbf{Grpd}\;\coloneqq\;\mathbf{N}_{hc}(\mathsf{sSet}^{\circ}_{\mathsf{Quillen}}). \tag{1.2.3}\] In the rest of the paper, we will use bold roman font to denote \((\infty,1)\)-categories. Now, given any category \(\mathsf{C}\), consider the simplicial functor category \(\mathsf{sPreSh}(\mathsf{C})\coloneqq[\mathsf{C}^{\mathrm{op}},\mathsf{sSet}]\), known as the category of simplicial pre-sheaves on \(\mathsf{C}\). If \(\mathsf{C}\) has the structure of a _site_ with _enough points_, there exists a model structure \(\mathsf{sPreSh}(\mathsf{C})_{\mathsf{proj},\mathsf{loc}}\) which is known as the _projective local model structure_ [11] and whose set of local weak equivalences \(\mathsf{W}\) is the set of natural transformations which are stalk-wise weak homotopy equivalences of simplicial sets. Then, we can define the simplicial category of _stacks_ on \(\mathsf{C}\) by simplicial localisation \[\mathsf{St}(\mathsf{C})\;\coloneqq\;L_{\mathsf{W}}\mathsf{sPreSh}(\mathsf{C}). \tag{1.2.4}\] Moreover, the projective local model structure has the property that the full subcategory \(\mathsf{sPreSh}(\mathsf{C})^{\circ}_{\mathsf{proj},\mathsf{loc}}\) of fibrant-cofibrant objects of the simplicial model category \(\mathsf{sPreSh}(\mathsf{C})_{\mathsf{proj},\mathsf{loc}}\) is equivalent to the simplicial category of stacks, i.e. we have \[\mathsf{St}(\mathsf{C})\;\simeq\;\mathsf{sPreSh}(\mathsf{C})^{\circ}_{\mathsf{proj},\mathsf{loc}}. \tag{1.2.5}\] Thus, the \((\infty,1)\)-category of stacks on the site \(\mathsf{C}\) can be defined by the homotopy-coherent nerve of this simplicial category, i.e. by \[\mathbf{St}(\mathsf{C})\;\coloneqq\;\mathbf{N}_{hc}(\mathsf{sPreSh}(\mathsf{C})^{\circ}_{\mathsf{proj},\mathsf{loc}}). \tag{1.2.6}\] Let us now specialise our discussion to smooth geometry. The category \(\mathsf{Mfd}\) of smooth manifolds, whose objects are smooth manifolds and whose morphisms are smooth maps between them, has a natural site structure where covering families \(\{U_{i}\to M\}_{i\in I}\) are good open covers of smooth manifolds.
Then, _smooth stacks_ [DCCT] - also known as _differentiable stacks_ - can be defined as stacks on the site of smooth manifolds \(\mathsf{Mfd}\) and thus they live in the simplicial category \[\mathsf{SmoothStack}\;\coloneqq\;\mathsf{St}(\mathsf{Mfd})\;\simeq\;\mathsf{sPreSh}(\mathsf{Mfd})^{\circ}_{\mathsf{proj},\mathsf{loc}}. \tag{1.2.7}\] Given a covering family \(\{U_{i}\to U\}_{i\in I}\), it is possible to construct a simplicial object known as the Čech nerve of the smooth manifold \(U\) by \[\check{C}(U)_{\bullet}\;=\;\bigg{(}\;\cdots\;\coprod_{i,j,k\in I}U_{i}\times_{U}U_{j}\times_{U}U_{k}\;\Rrightarrow\;\coprod_{i,j\in I}U_{i}\times_{U}U_{j}\;\rightrightarrows\;\coprod_{i\in I}U_{i}\;\bigg{)}, \tag{1.2.8}\] whose colimit is the original smooth manifold \(U\simeq\operatorname{colim}_{[n]\in\Delta^{\mathrm{op}}}\check{C}(U)_{n}\). By unravelling the definition of a smooth stack, more concretely, one has that a smooth stack is a simplicially enriched functor \(X:\mathsf{Mfd}^{\mathrm{op}}\longrightarrow\mathsf{sSet}\) satisfying the following properties:

* _object-wise fibrancy_: for any \(U\in\mathsf{Mfd}\), the simplicial set \(X(U)\) is Kan-fibrant;
* _pre-stack condition_: for any diffeomorphism \(U\xrightarrow{\simeq}U^{\prime}\) in \(\mathsf{Mfd}\), the induced morphism \(X(U^{\prime})\longrightarrow X(U)\) is an equivalence of simplicial sets;
* _descent condition_: for any Čech nerve \(\check{C}(U)_{\bullet}\to U\), the natural morphism \[X(U)\;\longrightarrow\;\lim_{[n]\in\Delta}\bigg{(}\prod_{i_{1},\dots,i_{n}\in I}X(U_{i_{1}}\!\times_{U}\!\dots\!\times_{U}\!U_{i_{n}})\,\bigg{)}\] (1.2.9) is an equivalence of simplicial sets.

**Example 1.13** (Quotient stack).: Let \(M\) be a smooth manifold and \(G\) a Lie group. A typical example of smooth stack is given by the quotient stack \([M/G]\in\mathbf{SmoothStack}\), which is constructed as follows. The \(\infty\)-groupoid \([M/G](U)\) of sections on a smooth manifold \(U\) is such that \(0\)-simplices are couples \((p:P\to U,f:P\to M)\), where \(p\) is a \(G\)-bundle and \(f\) is a \(G\)-equivariant map, and higher simplices are given by automorphisms and composition of those. On a Cartesian space \(U\simeq\mathbb{R}^{n}\), its simplicial set of sections takes the simpler form \[[M/G](U)\;\simeq\;\mathrm{cosk}_{2}\bigg{(}\;\mathrm{Hom}(U,G^{\times 2}\!\times\!M)\;\Rrightarrow\;\mathrm{Hom}(U,G\!\times\!M)\;\rightrightarrows\;\mathrm{Hom}(U,M)\;\bigg{)},\] where the two face maps \(\partial_{0},\partial_{1}\) are induced by the \(G\)-action on \(M\) and by the projection \(G\times M\to M\).
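To make the simplicial object in Example 1.13 concrete, here is a small finite sketch (ours; a finite group and finite set stand in for \(G\) and \(M\), and all names are illustrative): the nerve of the action groupoid \(G\times M\rightrightarrows M\), with \(n\)-simplices \(G^{\times n}\times M\) and face maps given by acting, composing and forgetting.

```python
from itertools import product

# Z/3 acting on a 3-element set by cyclic rotation (toy stand-ins for G and M).
G = [0, 1, 2]
M = ["a", "b", "c"]
act = {(g, m): M[(M.index(m) + g) % 3] for g, m in product(G, M)}

def d(i, s):
    """i-th face of an n-simplex (g_1, ..., g_n, m) of the action groupoid
    nerve: d_0 acts on the base point, d_n forgets the last arrow, and the
    inner faces compose two consecutive arrows."""
    *gs, m = s
    n = len(gs)
    if i == 0:
        return tuple(gs[1:]) + (act[(gs[0], m)],)
    if i == n:
        return tuple(gs[:-1]) + (m,)
    return tuple(gs[:i - 1] + [(gs[i - 1] + gs[i]) % 3] + gs[i + 1:]) + (m,)

# Check the simplicial identity d_i d_j = d_{j-1} d_i (i < j) on all 2-simplices.
for g1, g2, m in product(G, G, M):
    s = (g1, g2, m)
    for j in range(1, 3):
        for i in range(j):
            assert d(i, d(j, s)) == d(j - 1, d(i, s))
print("simplicial identities hold for the action groupoid nerve")
```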
## Zoology of formal smooth stacks

### \(\mathcal{C}^{\infty}\)-algebras as a Lawvere theory

In this subsection we will introduce the notion of \(\mathcal{C}^{\infty}\)-algebra, in the context of Lawvere theories. First, we will provide a brief review of the notion of a Lawvere theory and of algebra over a given Lawvere theory. An algebra over some Lawvere theory is, fundamentally, a generalisation of a ring, given by a set equipped with a set of \(n\)-ary operations.

**Definition 2.1** (Lawvere theory).: A _Lawvere theory_ (or _algebraic theory_) is a category \(\mathsf{T}\) with finite products, whose set of objects is \(\{T^{n}\}_{n\in\mathbb{N}}\) for a fixed object \(T\in\mathsf{T}\).

One can interpret the hom-set \(\operatorname{Hom}_{\mathsf{T}}(T^{n},T)\) as the set of abstract \(n\)-ary operations of the Lawvere theory \(\mathsf{T}\).

**Definition 2.2** (\(\mathsf{T}\)-algebra).: An _algebra over a Lawvere theory_ is a product-preserving functor \[A\,:\,\mathsf{T}\,\longrightarrow\,\mathsf{Set}. \tag{2.1.1}\]

**Definition 2.3** (Category of \(\mathsf{T}\)-algebras).: We call \(\mathsf{TAlg}\) the category whose objects are all the algebras over the Lawvere theory \(\mathsf{T}\), i.e. product-preserving functors \(A:\mathsf{T}\longrightarrow\mathsf{Set}\), and whose morphisms are natural transformations between these.

**Definition 2.4** (Forgetful functor of a \(\mathsf{T}\)-algebra).: We call \(U_{\mathsf{T}}:\mathsf{TAlg}\to\mathsf{Set}\) the functor which sends any \(\mathsf{T}\)-algebra \(A\) to its underlying set, i.e. \[U_{\mathsf{T}}(A)\,\coloneqq\,A(T). \tag{2.1.2}\]

Notice that, since a \(\mathsf{T}\)-algebra \(A\) is a product-preserving functor, any abstract \(n\)-ary operation \(\alpha_{n}\in\operatorname{Hom}_{\mathsf{T}}(T^{n},T)\) will give rise to a morphism of sets \[A(\alpha_{n})\,:\,A(T)^{\times n}\,\longrightarrow\,A(T), \tag{2.1.3}\] which can be interpreted as an \(n\)-ary bracket on our particular \(\mathsf{T}\)-algebra. For any Lawvere theory \(\mathsf{T}\), it is possible to show that there exists a left adjoint \(F_{\mathsf{T}}\dashv U_{\mathsf{T}}\) to the forgetful functor. In other words, we have an adjunction \[(F_{\mathsf{T}}\dashv U_{\mathsf{T}})\,:\;\mathsf{Set}\;\rightleftarrows\;\mathsf{TAlg}, \tag{2.1.4}\] where \(F_{\mathsf{T}}\) is the free \(\mathsf{T}\)-algebra functor. Now we have all the ingredients to introduce the notion of \(\mathcal{C}^{\infty}\)-algebra in the context of Lawvere theories.
The Lawvere theory underlying \(\mathcal{C}^{\infty}\)-algebras will be a natural generalisation of the Lawvere theory underlying the \(S\)-rings from example 2.6.

**Definition 2.7** (Lawvere theory of smooth Cartesian spaces).: We define \(\mathsf{T}=\mathsf{CartSp}\) as the category whose objects are Cartesian spaces \(\{\mathbb{R}^{n}\}_{n\in\mathbb{N}}\) and whose morphisms are smooth maps between these.

We can now provide the definition of \(\mathcal{C}^{\infty}\)-algebra as an algebra over the Lawvere theory of smooth Cartesian spaces.

**Definition 2.8** (\(\mathcal{C}^{\infty}\)-algebra).: Let \(\mathsf{T}=\mathsf{CartSp}\). Then, we call \(\mathsf{C}^{\infty}\mathsf{Alg}\coloneqq\mathsf{TAlg}\) the _category of \(\mathcal{C}^{\infty}\)-algebras_ and an object \(A\in\mathsf{C}^{\infty}\mathsf{Alg}\) a _\(\mathcal{C}^{\infty}\)-algebra_.

Notice that, given a \(\mathcal{C}^{\infty}\)-algebra \(A\), its underlying set \(U_{\mathsf{CartSp}}(A)=A(\mathbb{R})\) has a natural ring structure. In fact, addition and multiplication \(+,\,\cdot\,:\mathbb{R}\times\mathbb{R}\to\mathbb{R}\), negation \(-:\mathbb{R}\to\mathbb{R}\), zero element \(0:\mathbb{R}^{0}\hookrightarrow\mathbb{R}\) and unit \(1:\mathbb{R}^{0}\hookrightarrow\mathbb{R}\) are all smooth maps in the category \(\mathsf{CartSp}\) of Cartesian spaces. Since \(A\) is a functor which preserves products, the functions \(A(+),A(\,\cdot\,),A(0),A(1),A(-)\) satisfy the axioms of a ring structure on the set \(A(\mathbb{R})\).

**Remark 2.9** (Limits and filtered colimits).: The category \(\mathsf{C}^{\infty}\mathsf{Alg}\) has all limits and all filtered colimits. They can be computed object-wise in \(\mathsf{CartSp}\) by taking the corresponding limits and filtered colimits in \(\mathsf{Set}\).

**Definition 2.10** (\(\mathcal{C}^{\infty}\)-tensor product).: The _\(\mathcal{C}^{\infty}\)-tensor product_ in the category \(\mathsf{C}^{\infty}\mathsf{Alg}\) is defined to be the pushout \[A\,\widehat{\otimes}_{B}\,C\,\coloneqq\,A\sqcup_{B}C, \tag{2.1.5}\] for any \(\mathcal{C}^{\infty}\)-algebras \(A,B,C\in\mathsf{C}^{\infty}\mathsf{Alg}\).

The following is the archetypal example of a \(\mathcal{C}^{\infty}\)-algebra. Given a smooth manifold \(M\in\mathsf{Mfd}\), we can construct a \(\mathcal{C}^{\infty}\)-algebra of functions on \(M\) by the functor \[\mathcal{C}^{\infty}(M)\,:\,\mathbb{R}^{n}\,\mapsto\,\mathcal{C}^{\infty}(M,\mathbb{R}^{n}). \tag{2.1.6}\] We can construct a contravariant functor by sending any smooth manifold \(M\) to its \(\mathcal{C}^{\infty}\)-algebra of functions \(\mathcal{C}^{\infty}(M)\) and any smooth map \(f:M\to N\) to its pullback \(f^{*}:\mathcal{C}^{\infty}(N)\to\mathcal{C}^{\infty}(M)\).

**Lemma 2.11** (Smooth manifolds as \(\mathcal{C}^{\infty}\)-algebras [10]).: The contravariant functor \[\mathcal{C}^{\infty}(-)\,:\,\mathsf{Mfd}^{\mathrm{op}}\;\hookrightarrow\;\mathsf{C}^{\infty}\mathsf{Alg},\qquad M\;\longmapsto\;\mathcal{C}^{\infty}(M), \tag{2.1.7}\] is a full and faithful embedding.

**Proposition 2.13** (Transversal pullbacks).: Let \(f:\Sigma\to M\) and \(g:\Sigma^{\prime}\to M\) be transversal smooth maps and let the square \[\begin{CD}\Sigma\times_{M}\Sigma^{\prime}@>>>\Sigma^{\prime}\\@VVV@VV{g}V\\ \Sigma@>{f}>>M\end{CD} \tag{2.1.8}\] be a pullback in \(\mathsf{Mfd}\). Then, the square \[\begin{CD}\mathcal{C}^{\infty}(M)@>{f^{*}}>>\mathcal{C}^{\infty}(\Sigma)\\@V{g^{*}}VV@VVV\\ \mathcal{C}^{\infty}(\Sigma^{\prime})@>>>\mathcal{C}^{\infty}(\Sigma\times_{M}\Sigma^{\prime})\end{CD} \tag{2.1.9}\] is a pushout in \(\mathcal{C}^{\infty}\mathsf{Alg}\). In other words, we have an isomorphism of \(\mathcal{C}^{\infty}\)-algebras \[\mathcal{C}^{\infty}(\Sigma\times_{M}\Sigma^{\prime})\;=\;\mathcal{C}^{\infty}(\Sigma)\,\widehat{\otimes}_{\mathcal{C}^{\infty}(M)}\,\mathcal{C}^{\infty}(\Sigma^{\prime}).
\tag{2.1.10}\] If we choose \(M=*\,\) to be the point and the smooth maps \(f,g\) to be the terminal maps to the point in the category of smooth manifolds (any two such maps are transversal), we immediately have the following corollary.

**Corollary 2.14** (\(\mathcal{C}^{\infty}\)-algebra of functions on product manifolds).: For any pair of manifolds \(M,N\in\mathsf{Mfd}\), we have an isomorphism of \(\mathcal{C}^{\infty}\)-algebras \[\mathcal{C}^{\infty}(M)\,\widehat{\otimes}_{\mathbb{R}}\,\mathcal{C}^{\infty}(N)\;=\;\mathcal{C}^{\infty}(M\times N). \tag{2.1.11}\]

Notice that the \(\mathcal{C}^{\infty}\)-tensor product \(A\,\widehat{\otimes}_{\,\mathbb{R}}B\) is much smaller than the usual tensor product \(A(\mathbb{R})\otimes_{\mathbb{R}}B(\mathbb{R})\) of the underlying \(\mathbb{R}\)-algebras.

**Definition 2.15** (Ideal of a \(\mathcal{C}^{\infty}\)-algebra).: An _ideal \(\mathcal{I}\) of a \(\mathcal{C}^{\infty}\)-algebra \(A\)_ is defined as an ideal of the underlying ring \(A(\mathbb{R})\).

As shown in [11, 2], given an ideal \(\mathcal{I}\) of a \(\mathcal{C}^{\infty}\)-algebra \(A\), there is a canonical \(\mathcal{C}^{\infty}\)-algebra \(A/\mathcal{I}\) whose underlying ring is precisely the quotient ring \(A(\mathbb{R})/\mathcal{I}\).

**Definition 2.16** (Finitely generated and finitely presented \(\mathcal{C}^{\infty}\)-algebras).: By following [11, Chapter I], we define:

* a _finitely generated \(\mathcal{C}^{\infty}\)-algebra_ as a \(\mathcal{C}^{\infty}\)-algebra of the form \(A\cong\mathcal{C}^{\infty}(\mathbb{R}^{n})/\mathcal{I}\), for some Cartesian space \(\mathbb{R}^{n}\) and an ideal \(\mathcal{I}\subset\mathcal{C}^{\infty}(\mathbb{R}^{n})\);
* a _finitely presented \(\mathcal{C}^{\infty}\)-algebra_ as a \(\mathcal{C}^{\infty}\)-algebra of the form \(A\cong\mathcal{C}^{\infty}(\mathbb{R}^{n})/\mathcal{I}\), for some Cartesian space \(\mathbb{R}^{n}\) and a finitely generated ideal \(\mathcal{I}\subset\mathcal{C}^{\infty}(\mathbb{R}^{n})\).

We denote by \(\mathsf{C}^{\infty}\mathsf{Alg}_{\mathrm{fg}}\) and \(\mathsf{C}^{\infty}\mathsf{Alg}_{\mathrm{fp}}\) the full subcategories of \(\mathsf{C}^{\infty}\mathsf{Alg}\) on those objects which are respectively finitely generated and finitely presented \(\mathcal{C}^{\infty}\)-algebras.

The archetypal example of a finitely presented \(\mathcal{C}^{\infty}\)-algebra is again the \(\mathcal{C}^{\infty}\)-algebra \(\mathcal{C}^{\infty}(M)\) of functions on any smooth manifold \(M\in\mathsf{Mfd}\). This is because any smooth manifold can be embedded in \(\mathbb{R}^{N}\) for \(N\) large enough.

**Example 2.17** (Smooth manifold as finitely presented \(\mathcal{C}^{\infty}\)-algebra).: Consider a circle \(S^{1}\). Its \(\mathcal{C}^{\infty}\)-algebra of functions is \(\mathcal{C}^{\infty}(S^{1})=\mathcal{C}^{\infty}(\mathbb{R}^{2})/(x^{2}+y^{2}-1)\), which is finitely presented.

**Example 2.18** (Local Artinian \(\mathbb{R}\)-algebra).: Another crucial example is provided by local Artinian \(\mathbb{R}\)-algebras, also known as Weil algebras in the context of differential geometry. Recall that a local Artinian algebra is a finite-dimensional commutative \(\mathbb{R}\)-algebra \(W\) with a maximal ideal \(\mathfrak{m}_{W}\subset W\) such that \(W/\mathfrak{m}_{W}\cong\mathbb{R}\) and \(\mathfrak{m}_{W}^{N}=0\) for some \(N\) large enough. By [12, Proposition 1.5], any local Artinian \(\mathbb{R}\)-algebra can be uniquely lifted to a \(\mathcal{C}^{\infty}\)-algebra, which is always finitely presented.
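Before the next example, it may help to see the arithmetic of the simplest Weil algebra in action. The following sketch (ours, not the paper's; pure Python with illustrative names) implements the dual numbers \(W^{1}_{2}=\mathcal{C}^{\infty}(\mathbb{R})/(x^{2})\), anticipating the truncated Taylor series algebras of Example 2.19: evaluating a polynomial on \(x+\varepsilon\) with \(\varepsilon^{2}=0\) records exactly its first-order Taylor data.

```python
# A minimal sketch of the Weil algebra W = C^oo(R)/(x^2): dual numbers
# a + b*eps with eps^2 = 0.
from dataclasses import dataclass

@dataclass(frozen=True)
class Dual:
    a: float  # value component
    b: float  # infinitesimal component (coefficient of eps)

    def __add__(self, other):
        return Dual(self.a + other.a, self.b + other.b)

    def __mul__(self, other):
        # (a + b eps)(c + d eps) = ac + (ad + bc) eps, since eps^2 = 0
        return Dual(self.a * other.a, self.a * other.b + self.b * other.a)

def f(x):
    return x * x * x + x * x  # f(t) = t^3 + t^2, so f'(t) = 3t^2 + 2t

eps = Dual(0.0, 1.0)
x = Dual(2.0, 0.0)
print(f(x + eps))  # Dual(a=12.0, b=16.0): f(2) = 12 and f'(2) = 16
```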
**Example 2.19** (Algebra of truncated Taylor series as finitely presented \(\mathcal{C}^{\infty}\)-algebra).: The local Artinian algebra \(W^{n}_{k}=\mathcal{C}^{\infty}(\mathbb{R}^{n})/(x_{1},\dots,x_{n})^{k}\) of \(k\)-truncated Taylor series in \(n\) variables comes with a canonical \(\mathcal{C}^{\infty}\)-algebra structure.

**Remark 2.20** (Reduced \(\mathcal{C}^{\infty}\)-algebras).: Let \(\mathsf{C}^{\infty}\mathsf{Alg}^{\mathrm{red}}\) be the full sub-category of \(\mathsf{C}^{\infty}\mathsf{Alg}\) on those \(\mathcal{C}^{\infty}\)-algebras whose underlying \(\mathbb{R}\)-algebra is reduced in the usual sense, i.e. it has no non-zero nilpotent elements. Then, we have an adjunction \[\mathsf{C}^{\infty}\mathsf{Alg}^{\mathrm{red}}\ \underset{\;(-)^{\mathrm{red}}\;}{\overset{\;\iota^{\mathrm{red}}\;}{\rightleftarrows}}\ \mathsf{C}^{\infty}\mathsf{Alg}, \tag{2.1.12}\] where \(\iota^{\mathrm{red}}\) is the natural embedding and \((-)^{\mathrm{red}}\) is the functor which sends a \(\mathcal{C}^{\infty}\)-algebra \(A\) to the reduced \(\mathcal{C}^{\infty}\)-algebra \(A^{\mathrm{red}}\coloneqq A/\mathfrak{m}_{A}\), where we called \(\mathfrak{m}_{A}\) the nilradical of the underlying \(\mathbb{R}\)-algebra.

**Example 2.21** (Examples of reduction).: Consider a local Artinian algebra \(W\), then we have \(W^{\mathrm{red}}=\mathbb{R}\). If \(M\) is a smooth manifold, then we have \(\mathcal{C}^{\infty}(M)^{\mathrm{red}}=\mathcal{C}^{\infty}(M)\). Moreover, for a \(\mathcal{C}^{\infty}\)-tensor product of the form \(\mathcal{C}^{\infty}(M)\mathbin{\widehat{\otimes}}W\), we have \((\mathcal{C}^{\infty}(M)\mathbin{\widehat{\otimes}}W)^{\mathrm{red}}=\mathcal{C}^{\infty}(M)\).

**Remark 2.22** (Smooth manifolds embed into reduced \(\mathcal{C}^{\infty}\)-algebras).: Notice from the previous example that the \(\mathcal{C}^{\infty}\)-algebra \(\mathcal{C}^{\infty}(M)\) of functions on an ordinary smooth manifold \(M\) always lies in \(\mathsf{C}^{\infty}\mathsf{Alg}^{\mathrm{red}}\). More precisely, the embedding of smooth manifolds into \(\mathcal{C}^{\infty}\)-algebras factors as \(\mathcal{C}^{\infty}(-):\mathsf{Mfd}^{\mathrm{op}}\hookrightarrow\mathsf{C}^{\infty}\mathsf{Alg}^{\mathrm{red}}_{\mathrm{fp}}\hookrightarrow\mathsf{C}^{\infty}\mathsf{Alg}^{\mathrm{red}}\hookrightarrow\mathsf{C}^{\infty}\mathsf{Alg}\), where we called \(\mathsf{C}^{\infty}\mathsf{Alg}^{\mathrm{red}}_{\mathrm{fp}}\) the category of reduced finitely presented \(\mathcal{C}^{\infty}\)-algebras.

### \(\mathcal{C}^{\infty}\)-varieties and formal smooth manifolds

As we have seen in the previous subsection, we have a fully faithful embedding \(\mathsf{Mfd}\hookrightarrow\mathsf{C}^{\infty}\mathsf{Alg}^{\mathrm{op}}_{\mathsf{fg}}\) of smooth manifolds into the opposite category of finitely generated \(\mathcal{C}^{\infty}\)-algebras. Thus, in a certain sense, we may interpret the category \(\mathsf{C}^{\infty}\mathsf{Alg}^{\mathrm{op}}_{\mathsf{fg}}\) as a category of generalised smooth spaces of some sort. Such an intuition, for instance, underlies the formalisation of analysis by [11].

**Definition 2.23** (\(\mathcal{C}^{\infty}\)-variety [11]).: We define a _\(\mathcal{C}^{\infty}\)-variety_ as an element of the opposite category of finitely generated \(\mathcal{C}^{\infty}\)-algebras, i.e.
of the category \[\mathsf{C}^{\infty}\mathsf{Var}\ \coloneqq\ \mathsf{C}^{\infty}\mathsf{Alg}^{\mathrm{op}}_{\mathsf{fg}}. \tag{2.2.1}\] We use the notation \(X=\mathrm{Spec}(A)\) for the \(\mathcal{C}^{\infty}\)-variety corresponding to the finitely generated \(\mathcal{C}^{\infty}\)-algebra \(A\in\mathsf{C}^{\infty}\mathsf{Alg}\). Conversely, we may use the notation \(\mathcal{O}(X)\) for the finitely generated \(\mathcal{C}^{\infty}\)-algebra corresponding to the \(\mathcal{C}^{\infty}\)-variety \(X\in\mathsf{C}^{\infty}\mathsf{Var}\). Let us look at a few simple examples of such geometric objects, which go beyond the notion of smooth manifolds. First, we can consider infinitesimally thickened points, i.e. formal disks.

**Example 2.24** (Thickened point).: Consider the local Artinian algebra of \(k\)-truncated Taylor series \(W^{n}_{k}=\mathcal{C}^{\infty}(\mathbb{R}^{n})/(x_{1},\dots,x_{n})^{k}\) with its canonical \(\mathcal{C}^{\infty}\)-algebra structure. Then we have an infinitesimally thickened point given by \(D^{n}_{k}=\mathrm{Spec}(W^{n}_{k})\). This example can be directly generalised to construct infinitesimally thickened smooth manifolds.

**Example 2.25** (Thickened circle).: Consider the thickened circle given by \(S^{1}\times\mathrm{Spec}\,W\), where \(S^{1}\) is a circle and \(W=\mathcal{C}^{\infty}(\mathbb{R})/(z^{2})\). Dually, this can be constructed by the \(\mathcal{C}^{\infty}\)-tensor product of the corresponding \(\mathcal{C}^{\infty}\)-algebras \[\frac{\mathcal{C}^{\infty}(\mathbb{R}^{2})}{(x^{2}+y^{2}-1)}\mathbin{\widehat{\otimes}}\frac{\mathcal{C}^{\infty}(\mathbb{R})}{(z^{2})}\ =\ \frac{\mathcal{C}^{\infty}(\mathbb{R}^{3})}{(x^{2}+y^{2}-1,z^{2})}. \tag{2.2.2}\] Thus, it can be expressed as \(S^{1}\times\operatorname{Spec}W=\operatorname{Spec}(\mathcal{C}^{\infty}(\mathbb{R}^{3})/(x^{2}+y^{2}-1,z^{2}))\). Now, the category \(\mathsf{C}^{\infty}\mathsf{Var}\) of \(\mathcal{C}^{\infty}\)-varieties that we have presented here does not have an internal hom-functor, in general. However, exponentials by infinitesimally thickened points do exist, as the following lemma shows.

**Lemma 2.26** (Exponential by a thickened point).: Let \(D=\operatorname{Spec}W\), where \(W\) is a local Artinian algebra, and let \(Y\) be any \(\mathcal{C}^{\infty}\)-variety. Then there exists an endofunctor of \(\mathcal{C}^{\infty}\)-varieties \[(-)^{D}\,:\,Y\,\longmapsto\,Y^{D}, \tag{2.2.3}\] which is the right adjoint of the functor \((-)\times D\) given by taking the product with \(D\). In other words, \(Y^{D}\) is a \(\mathcal{C}^{\infty}\)-variety which satisfies the property \[\operatorname{Hom}_{\mathsf{C}^{\infty}\mathsf{Var}}(X,\,Y^{D})\;\simeq\;\operatorname{Hom}_{\mathsf{C}^{\infty}\mathsf{Var}}(X\times D,\,Y) \tag{2.2.4}\] for any \(\mathcal{C}^{\infty}\)-variety \(X\in\mathsf{C}^{\infty}\mathsf{Var}\).

Proof.: We deploy an argument similar to [13, Theorem 1.13]. First we have to verify that \(\mathbb{R}^{D}\) exists.
So, for any \(\mathcal{C}^{\infty}\)-variety \(X\in\mathsf{C}^{\infty}\mathsf{Var}\) we have the equivalences \[\begin{split}\operatorname{Hom}_{\mathsf{C}^{\infty}\mathsf{Var}}(X\times D,\mathbb{R})&\;\simeq\;(\mathcal{O}(X)\,\widehat{\otimes}\,W)(\mathbb{R})\\ &\;\simeq\;\mathcal{O}(X)(\mathbb{R}^{\dim(W)})\\ &\;\simeq\;\operatorname{Hom}_{\mathsf{C}^{\infty}\mathsf{Var}}(X,\mathbb{R}^{\dim(W)}),\end{split} \tag{2.2.5}\] where in the penultimate step we used the fact that any smooth function \(g\in\mathcal{O}(X)\,\widehat{\otimes}\,W\) can be expanded as \((g_{1},\ldots,g_{\dim(W)})\) with each \(g_{i}\in\mathcal{O}(X)\). Thus we have \(\mathbb{R}^{D}\simeq\mathbb{R}^{\dim(W)}\), which exists. By the same argument, we have an equivalence \(\operatorname{Hom}_{\mathsf{C}^{\infty}\mathsf{Var}}(X\times D,\mathbb{R}^{k})\simeq\operatorname{Hom}_{\mathsf{C}^{\infty}\mathsf{Var}}(X,\mathbb{R}^{k\dim(W)})\) for any natural number \(k\) and \(\mathcal{C}^{\infty}\)-variety \(X\). This implies that \((\mathbb{R}^{0})^{D}\simeq\mathbb{R}^{0}\) exists and that \((\mathbb{R}^{k})^{D}\simeq(\mathbb{R}^{D})^{k}\) exists for any \(k>0\). Now, given a smooth map \(f:\mathbb{R}^{n}\to\mathbb{R}^{m}\), the new map \(f^{D}:(\mathbb{R}^{n})^{D}\to(\mathbb{R}^{m})^{D}\) is given by the equivalence \(\operatorname{Hom}_{\mathsf{C}^{\infty}\mathsf{Var}}(X,f^{D})\simeq\operatorname{Hom}_{\mathsf{C}^{\infty}\mathsf{Var}}(X\times D,f)\) for any \(\mathcal{C}^{\infty}\)-variety \(X\). Now, let us fix a generic \(\mathcal{C}^{\infty}\)-variety \(Y=\operatorname{Spec}(A)\), where \(A\cong\mathcal{C}^{\infty}(\mathbb{R}^{n})/(f_{1},\ldots,f_{m})\) is a finitely generated \(\mathcal{C}^{\infty}\)-algebra with \(f_{i}\in\mathcal{C}^{\infty}(\mathbb{R}^{n})\). We must show that there exists a \(\mathcal{C}^{\infty}\)-variety \(Y^{D}\) such that the equivalence (2.2.4) holds. Since \(A\) is a quotient, \(Y=\operatorname{Spec}(A)\) is equivalently defined by the pullback square (2.2.6). On the one hand, since the functor \(\operatorname{Hom}_{\mathsf{C}^{\infty}\mathsf{Var}}(X\times D,-)\) preserves pullbacks for any \(\mathcal{C}^{\infty}\)-variety \(X\), we have a pullback square of sets \[\begin{CD}\operatorname{Hom}_{\mathsf{C}^{\infty}\mathsf{Var}}(X\times D,Y) @>>> \operatorname{Hom}_{\mathsf{C}^{\infty}\mathsf{Var}}(X,(\mathbb{R}^{0})^{D})\\ @VVV @VVV\\ \operatorname{Hom}_{\mathsf{C}^{\infty}\mathsf{Var}}(X,(\mathbb{R}^{n})^{D}) @>{\operatorname{Hom}_{\mathsf{C}^{\infty}\mathsf{Var}}(X,(f_{1}^{D},\ldots,f_{m}^{D}))}>> \operatorname{Hom}_{\mathsf{C}^{\infty}\mathsf{Var}}(X,(\mathbb{R}^{m})^{D})\end{CD}\] for any \(\mathcal{C}^{\infty}\)-variety \(X\). On the other hand, we have the pullback square of \(\mathcal{C}^{\infty}\)-varieties (2.2.7). Thus, the \(\mathcal{C}^{\infty}\)-variety \(Y^{D}\) exists and it is indeed given by \(Y^{D}\simeq(\mathbb{R}^{n})^{D}\times_{(\mathbb{R}^{m})^{D}}(\mathbb{R}^{0})^{D}\). Notice that, for any \(\mathcal{C}^{\infty}\)-variety \(Y\) and any \(D=\operatorname{Spec}W\) with \(W\) a local Artinian algebra, there is a natural morphism \(\operatorname{ev}_{0}:Y^{D}\to Y\) from the \(D\)-exponential to the original \(Y\). This is induced by the canonical inclusion \(*\to D\) of the canonical point of \(D\).
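For instance (a standard special case of the lemma, made explicit here), let \(D=\operatorname{Spec}(\mathcal{C}^{\infty}(\mathbb{R})/(x^{2}))\) be the thickened point corresponding to the dual numbers, so that \(\dim(W)=2\). The computation above then gives \[(\mathbb{R}^{n})^{D}\;\simeq\;\mathbb{R}^{n\dim(W)}\;=\;\mathbb{R}^{2n}\;\cong\;T\mathbb{R}^{n},\] so the exponential by \(D\) recovers the tangent bundle, and the natural morphism \(\operatorname{ev}_{0}:(\mathbb{R}^{n})^{D}\to\mathbb{R}^{n}\) is identified with the bundle projection \(T\mathbb{R}^{n}\to\mathbb{R}^{n}\).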
**Definition 2.27** (Formally etale map).: We say that a morphism \(f:X\to Y\) of \(\mathcal{C}^{\infty}\)-varieties is _formally etale_ if we have a pullback diagram (2.2.8) for any thickened point \(D=\operatorname{Spec}W\), where \(W\) is a local Artinian algebra.

**Corollary 2.28** (Formally etale maps generalise local diffeomorphisms).: Let \(M\) and \(N\) be ordinary smooth manifolds, seen as \(\mathcal{C}^{\infty}\)-varieties. Then any formally etale map \(f:M\to N\) is equivalently a local diffeomorphism in the ordinary differential-geometric sense.

Proof.: To see this, notice that by setting \(D=\operatorname{Spec}(\mathcal{C}^{\infty}(\mathbb{R})/(x^{2}))\), the thickened point corresponding to the local Artinian algebra of dual numbers, the pullback square (2.2.8) becomes precisely (2.2.9), making \(f\) into a local diffeomorphism. Conversely, a local diffeomorphism \(f\) induces a diffeomorphism \(U_{x}\xrightarrow{\simeq}V_{f(x)}\) of open neighbourhoods respectively of \(x\) and of its image, for any point \(x\in M\). Thus we have the diagram (2.2.10), which implies that the square on the front is a pullback.

In the spirit of interpreting \(\mathcal{C}^{\infty}\)-varieties as formal generalisations of ordinary smooth manifolds, we can equip their category with a coverage which is compatible with the coverage of \(\mathsf{Mfd}\) from the previous section. Thus we define a coverage as follows.

**Theorem 2.29** (Covering family of a \(\mathcal{C}^{\infty}\)-variety).: We may declare a covering family of a \(\mathcal{C}^{\infty}\)-variety \(X\) to be a set of formally etale monomorphisms \[\{U_{i}\xhookrightarrow{\phi_{i}}X\}_{i\in I} \tag{2.2.11}\] such that they induce the formally etale epimorphism \[\coprod_{i\in I}U_{i}\xrightarrow{(\phi_{i})_{i\in I}}X. \tag{2.2.12}\]

Proof.: First, we show that formally etale morphisms are stable under pullback. Consider a pullback diagram of \(\mathcal{C}^{\infty}\)-varieties of the form (2.2.13), where we assume that the bottom morphism is a formally etale map. As previously noticed, the functor \((-)^{D}\) preserves pullbacks, thus we have a bigger diagram (2.2.14), where both the front and the back square are pullbacks. Moreover, the bottom morphism being formally etale implies that the right square is a pullback too. Then, by applying the pasting law for pullbacks, we obtain that the left square is a pullback and thus that the top morphism is formally etale. Therefore, formally etale maps are stable under pullbacks. Now, consider a covering family \(\{U_{i}\xhookrightarrow{\phi_{i}}X\}_{i\in I}\) as above and a morphism \(Y\to X\). We can form the pullback square (2.2.15). Since monomorphisms and formally etale morphisms are stable under pullbacks, each \(\psi_{i}:Y\times_{X}U_{i}\to Y\) is a formally etale monomorphism. Moreover, we have that \(\coprod_{i\in I}Y\times_{X}U_{i}\xrightarrow{(\psi_{i})_{i\in I}}Y\) is a formally etale epimorphism.

The following definition is a specialisation of the general one provided by [10].

**Definition 2.30** (Formal smooth manifolds).: We define a _formal smooth manifold_ \(M\) as a \(\mathcal{C}^{\infty}\)-variety such that there exists a family \(\{\mathbb{R}^{n}\times\operatorname{Spec}W\xhookrightarrow{\phi_{i}}M\}_{i\in I}\) of formally etale monomorphisms, where \(W\) is Artinian, with the property that the induced map \[\bigsqcup_{i\in I}\mathbb{R}^{n}\times\operatorname{Spec}W\xrightarrow{(\phi_{i})_{i\in I}}M \tag{2.2.16}\] is a formally etale epimorphism. We denote by \(\mathsf{FMfd}\) the category of formal smooth manifolds, i.e. the full subcategory of \(\mathsf{C}^{\infty}\mathsf{Var}\) whose objects are all the formal smooth manifolds, and we denote its embedding into the latter by \[\iota^{\mathsf{FMfd}}:\ \mathsf{FMfd}\,\lhook\joinrel\longrightarrow\,\mathsf{C}^{\infty}\mathsf{Var}.
\tag{2.2.17}\] In other words, a \(\mathcal{C}^{\infty}\)-variety is a formal smooth manifold if it admits a covering by thickened charts of the form \(\mathbb{R}^{n}\times\operatorname{Spec}W\) for some \(n\in\mathbb{N}\) and local Artinian algebra \(W\in\operatorname{Art}_{\mathbb{R}}\).

**Remark 2.31** (Covering family of a formal smooth manifold).: Notice that we can naturally make the category \(\mathsf{FMfd}\) of formal smooth manifolds into a site by restricting the covering families of the site \(\mathsf{C}^{\infty}\mathsf{Var}\) of \(\mathcal{C}^{\infty}\)-varieties we constructed in theorem 2.29.

**Example 2.32** (Thickened circle).: Consider the thickened circle from the previous subsection \[S^{1}\times\operatorname{Spec}W\ =\ \operatorname{Spec}\big(\mathcal{C}^{\infty}(\mathbb{R}^{3})/(x^{2}+y^{2}-1,z^{2})\big). \tag{2.2.18}\] Notice that it can be covered by a covering \(\{\mathbb{R}\times\operatorname{Spec}W\xhookrightarrow{(\psi_{i},\operatorname{id})}S^{1}\times\operatorname{Spec}W\}_{i=0,1}\), where the set \(\{\mathbb{R}\xhookrightarrow{\psi_{i}}S^{1}\}_{i=0,1}\) is just a covering of the underlying circle as a smooth manifold.

**Construction 2.33** (Reduction of formal smooth manifolds).: By reduction and co-reduction of the adjunction of remark 2.20, we can obtain the adjunction of categories \[\mathsf{Mfd}\ \underset{(-)^{\mathrm{red}}}{\overset{\iota}{\rightleftarrows}}\ \mathsf{FMfd}. \tag{2.2.19}\] In particular, this is an adjunction of ordinary sites, since by construction of formal smooth manifolds both functors send covering families to covering families on the nose.

### Definition of formal smooth stacks

In this subsection, we will generalise smooth sets and smooth stacks, respectively, to formal smooth sets and formal smooth stacks. Let us start from the definition of formal smooth sets, which are roughly ordinary sheaves on formal smooth manifolds. Geometrically, they provide a rich class of generalisations of smooth manifolds. In particular, they allow us to formalise a large variety of infinite-dimensional smooth spaces, such as smooth mapping spaces and smooth spaces of sections of a bundle.

**Definition 2.34** (Formal smooth sets).: _Formal smooth sets_ are defined as sheaves on the site of formal smooth manifolds \(\mathsf{FMfd}\). The category of formal smooth sets is, then, defined by \[\mathsf{FSmoothSet}\ \coloneqq\ \mathsf{Sh}(\mathsf{FMfd}). \tag{2.3.1}\] This definition is equivalent1 to the original one provided by [10]. Since this is a category of sheaves on a site, it is naturally a topos, which is known as _Cahiers topos_ after the reference. Footnote 1: In [10] formal smooth sets were defined as sheaves on the site \(\mathsf{FCartSp}\) of formal Cartesian spaces, i.e. spaces of the form \(\mathbb{R}^{n}\times\mathrm{Spec}\,W\), where \(\mathbb{R}^{n}\) is a Cartesian space and \(W\) is a local Artinian algebra. However, \(\mathsf{FCartSp}\) is by construction a dense sub-site of \(\mathsf{FMfd}\). This implies a natural equivalence \(\mathsf{Sh}(\mathsf{FCartSp})\simeq\mathsf{Sh}(\mathsf{FMfd})\), which makes the definition in the reference equivalent to definition 2.34.

**Definition 2.35** (Formal smooth stacks).: _Formal smooth stacks_ are defined as stacks on the site of formal smooth manifolds \(\mathsf{FMfd}\).
The \((\infty,1)\)-category of formal smooth stacks is, then, defined by \[\begin{split}\mathbf{FSmoothStack}&\;\coloneqq\;\mathsf{St}(\mathsf{FMfd})\\ &=\;\mathbf{N}_{hc}(\mathsf{sPreSh}(\mathsf{FMfd})^{\circ}_{\mathsf{proj,loc}}).\end{split} \tag{2.3.2}\]

**Construction 2.36** (Diagram of sites).: By combining the adjunctions of remark 2.20 and (2.2.19) with the functors of remark 2.22 and (2.2.17), we have the following commuting diagram of ordinary sites: (2.3.3)

Given the diagram of sites presented in construction 2.36, we can extend the notions of formal smooth sets and formal smooth stacks, which we defined above.

**Definition 2.37** (Extended smooth sets and stacks).: Let us give the following definitions: * We define the \(1\)-category of _extended smooth sets_ as the \(1\)-category of sheaves on the site of reduced \(\mathcal{C}^{\infty}\)-varieties, i.e. \[\mathsf{SmoothSet}^{+}\;\coloneqq\;\mathsf{Sh}(\mathsf{C}^{\infty}\mathsf{Var}^{\mathrm{red}}). \tag{2.3.4}\] * We define the \(1\)-category of _extended formal smooth sets_ as the \(1\)-category of sheaves on the site of \(\mathcal{C}^{\infty}\)-varieties, i.e. \[\mathsf{FSmoothSet}^{+}\;\coloneqq\;\mathsf{Sh}(\mathsf{C}^{\infty}\mathsf{Var}). \tag{2.3.5}\] * We define the \((\infty,1)\)-category of _extended smooth stacks_ as the \((\infty,1)\)-category of stacks on the site of reduced \(\mathcal{C}^{\infty}\)-varieties, i.e. \[\mathbf{SmoothStack}^{+}\;\coloneqq\;\mathsf{St}(\mathsf{C}^{\infty}\mathsf{Var}^{\mathrm{red}}). \tag{2.3.6}\] * We define the \((\infty,1)\)-category of _extended formal smooth stacks_ as the \((\infty,1)\)-category of stacks on the site of \(\mathcal{C}^{\infty}\)-varieties, i.e. \[\mathbf{FSmoothStack}^{+}\;\coloneqq\;\mathsf{St}(\mathsf{C}^{\infty}\mathsf{Var}). \tag{2.3.7}\]

**Remark 2.38** (Embeddings).: Since the definitions 2.37 are given on the sites of the diagram of construction 2.36, we can obtain a diagram of \((\infty,1)\)-categories (2.3.8)

## 3 Formal derived smooth stacks

In this section we will propose a definition for the notion of formal derived smooth stack. Our construction of formal derived smooth stacks is related to [14] and to the research program by [1, 2]. In the previous two sections we considered at most stacks on ordinary sites, such as smooth stacks on the site of smooth manifolds. In principle, it is possible to generalise the construction of stacks to the case where the site \(\mathsf{C}\) itself is a simplicial category, usually presenting some \((\infty,1)\)-category. Consider a simplicially-enriched category \(\mathsf{C}\) equipped with the structure of a simplicial-site, i.e. such that its homotopy category \(\operatorname{Ho}(\mathsf{C})\) has the structure of a site. Recall that, given two simplicially-enriched categories \(\mathsf{C}\) and \(\mathsf{D}\), the functor category \([\mathsf{C}^{\mathrm{op}},\mathsf{D}]\) is naturally a simplicially-enriched category. In particular, we can define the simplicial-category of presheaves \([\mathsf{C}^{\mathrm{op}},\mathsf{sSet}]\) on the simplicial-site \(\mathsf{C}\). By [1, Theorem 3.4.1], for suitable simplicial-sites, there is still a notion of local projective simplicial model category structure \([\mathsf{C}^{\mathrm{op}},\mathsf{sSet}]_{\mathsf{proj},\mathsf{loc}}\) that allows us to define the simplicial-category of _derived stacks_ on \(\mathsf{C}\) by \[\mathsf{St}(\mathsf{C})\;\simeq\;[\mathsf{C}^{\mathrm{op}},\mathsf{sSet}]_{\mathsf{proj},\mathsf{loc}}^{\circ}.
\tag{3.0.1}\] Finally, by applying the homotopy coherent nerve functor on such a simplicial category, it is possible to obtain the \((\infty,1)\)-category of derived stacks on \(\mathsf{C}\), i.e. \[\mathbf{St}(\mathsf{C})\;\coloneqq\;\mathbf{N}_{hc}([\mathsf{C}^{\mathrm{op}},\mathsf{sSet}]_{\mathsf{proj},\mathsf{loc}}^{\circ}). \tag{3.0.2}\] In this section, we will introduce the \((\infty,1)\)-site of formal derived smooth manifolds, equip it with the structure of a site, and construct derived stacks on it: these will form the \((\infty,1)\)-category of formal derived smooth stacks.

### Homotopy \(\mathcal{C}^{\infty}\)-algebras

Let \(\mathsf{T}\) be a generic Lawvere theory, as we reviewed at the beginning of section 2. As suggested first by [14], we can consider the simplicial category \([\Delta^{\mathrm{op}},\mathsf{TAlg}]\) of simplicial \(\mathsf{T}\)-algebras, where \(\Delta\) is the simplex category. By [14, Section 2.4] this can be equipped with a natural model structure, known as the projective model structure. The following model category is called the category of strict simplicial \(\mathsf{T}\)-algebras: \[\mathsf{sTAlg}\;\coloneqq\;[\Delta^{\mathrm{op}},\mathsf{TAlg}]_{\mathrm{proj}}, \tag{3.1.1}\] where weak equivalences and fibrations are given object-wise. In fact, the fibrant-cofibrant simplicial \(\mathsf{T}\)-algebras according to this model structure are known as strict simplicial \(\mathsf{T}\)-algebras in the literature. By following [1], there is a Quillen equivalence \(\mathsf{sTAlg}\simeq_{\mathrm{Qu}}[\mathsf{T},\mathsf{sSet}]_{\mathrm{proj,loc}}\) between the model category above and the local projective model structure on the simplicial category of pre-cosheaves on \(\mathsf{T}\). Fibrant-cofibrant objects in the latter model category are known in the literature as homotopy \(\mathsf{T}\)-algebras and they are given as follows.

**Definition 3.1** (Homotopy \(\mathsf{T}\)-algebra).: A _homotopy algebra over a Lawvere theory_ \(\mathsf{T}\) is a functor \[A\,:\;\mathsf{T}\;\longrightarrow\;\mathsf{sSet} \tag{3.1.2}\] valued in Kan complexes, such that for any object \(\mathbb{R}^{n}\in\mathsf{T}\) the canonical morphism \[\big(A(\mathrm{prod}_{1}),\dots,A(\mathrm{prod}_{n})\big):\,A(\mathbb{R}^{n})\;\stackrel{{\simeq}}{{\longrightarrow}}\;A(\mathbb{R})^{n}, \tag{3.1.3}\] induced by the projections \(\mathrm{prod}_{i}:\mathbb{R}^{n}\to\mathbb{R}\), is a weak equivalence of simplicial sets. By the Quillen equivalence above, any homotopy \(\mathsf{T}\)-algebra is equivalent to a strict simplicial \(\mathsf{T}\)-algebra and both the model categories provide a model for the same \((\infty,1)\)-category, which we will denote by \(\mathbf{sTAlg}\). This \((\infty,1)\)-category \(\mathbf{sTAlg}\) of homotopy \(\mathsf{T}\)-algebras can be constructed by applying the homotopy-coherent nerve to the simplicial category of fibrant-cofibrant objects, namely by \[\mathbf{sTAlg}\;=\;\mathbf{N}_{hc}([\Delta^{\mathrm{op}},\mathsf{TAlg}]_{\mathrm{proj}}^{\circ}). \tag{3.1.4}\] Now, we can specify \(\mathsf{T}=\mathsf{CartSp}\) to be the Lawvere theory of \(\mathcal{C}^{\infty}\)-algebras, as in section 2. Thus, a _homotopy \(\mathcal{C}^{\infty}\)-algebra_ is going to be defined as a homotopy algebra over the Lawvere theory \(\mathsf{CartSp}\). Accordingly, we can define the model category of homotopy \(\mathcal{C}^{\infty}\)-algebras by \[\mathsf{sC}^{\infty}\mathsf{Alg}\;=\;[\Delta^{\mathrm{op}},\mathsf{C}^{\infty}\mathsf{Alg}]_{\mathrm{proj}}.
\tag{3.1.5}\] A fibrant-cofibrant element of the model category \(\mathsf{sC}^{\infty}\mathsf{Alg}\) is precisely a homotopy \(\mathcal{C}^{\infty}\)-algebra. The corresponding \((\infty,1)\)-category of homotopy \(\mathcal{C}^{\infty}\)-algebras is given by the homotopy coherent nerve of the simplicial category \([\Delta^{\mathrm{op}},\mathsf{C}^{\infty}\mathsf{Alg}]_{\mathrm{proj}}^{\circ}\) of fibrant-cofibrant objects.

**Definition 3.2** (\((\infty,1)\)-category of homotopy \(\mathcal{C}^{\infty}\)-algebras).: The _\((\infty,1)\)-category of homotopy \(\mathcal{C}^{\infty}\)-algebras_ is defined by \[\mathbf{sC}^{\infty}\mathbf{Alg}\;=\;\mathbf{N}_{hc}([\Delta^{\mathrm{op}},\mathsf{C}^{\infty}\mathsf{Alg}]_{\mathrm{proj}}^{\circ}). \tag{3.1.6}\] Crucially, the \((\infty,1)\)-category of homotopy \(\mathcal{C}^{\infty}\)-algebras can be naturally equipped with a \(\mathcal{C}^{\infty}\)-version of a derived tensor product, which is going to be very relevant for geometric reasons. Recall that homotopy pushouts exist; see e.g. [11].

**Definition 3.3** (Derived \(\mathcal{C}^{\infty}\)-tensor product).: We define the _derived \(\mathcal{C}^{\infty}\)-tensor product_ in the category \(\mathbf{sC}^{\infty}\mathbf{Alg}\) by the homotopy pushout \[A\,\widehat{\otimes}_{C}^{\mathrm{L}}\,B\;\simeq\;A\,{\sqcup}_{C}^{h}\,B \tag{3.1.7}\] for any homotopy \(\mathcal{C}^{\infty}\)-algebras \(A,B,C\in\mathbf{sC}^{\infty}\mathbf{Alg}\). It is known that an ordinary \(\mathcal{C}^{\infty}\)-algebra \(A\) is finitely presented precisely if its co-Yoneda embedding \(\operatorname{Hom}(A,-):\mathsf{C}^{\infty}\mathsf{Alg}\longrightarrow\mathsf{Set}\) preserves filtered colimits (see e.g. [1]). In [13], homotopically finitely presented \(\mathcal{C}^{\infty}\)-algebras are defined by generalising this statement to homotopy \(\mathcal{C}^{\infty}\)-algebras as follows.

**Definition 3.4** (Homotopically finitely presented \(\mathcal{C}^{\infty}\)-algebra).: A _homotopically finitely presented \(\mathcal{C}^{\infty}\)-algebra_ is defined as a homotopy \(\mathcal{C}^{\infty}\)-algebra \(A\in\mathbf{sC}^{\infty}\mathbf{Alg}\) such that it is a compact object in the \((\infty,1)\)-category \(\mathbf{sC}^{\infty}\mathbf{Alg}\), i.e. such that the co-Yoneda \((\infty,1)\)-functor \(\operatorname{Hom}(A,-):\mathbf{sC}^{\infty}\mathbf{Alg}\longrightarrow\infty\mathbf{Grpd}\) preserves filtered \((\infty,1)\)-colimits. The \((\infty,1)\)-category of homotopically finitely presented \(\mathcal{C}^{\infty}\)-algebras \(\mathbf{sC}^{\infty}\mathbf{Alg}_{\mathrm{fp}}\hookrightarrow\mathbf{sC}^{\infty}\mathbf{Alg}\) is defined as the full subcategory on those objects which are homotopically finitely presented \(\mathcal{C}^{\infty}\)-algebras. In analogy with [1], in the rest of the paper we will denote by \(\mathsf{sC}^{\infty}\mathsf{Alg}_{\mathrm{fp}}\hookrightarrow\mathsf{sC}^{\infty}\mathsf{Alg}\) the model sub-category on those objects whose derived co-Yoneda functor preserves filtered homotopy colimits, so that we have \(\mathbf{sC}^{\infty}\mathbf{Alg}_{\mathrm{fp}}\simeq\mathbf{N}_{hc}(\mathsf{sC}^{\infty}\mathsf{Alg}_{\mathrm{fp}}^{\circ})\). Now, as stressed by [11], being finitely presented is quite a stringent condition on a homotopy \(\mathcal{C}^{\infty}\)-algebra.
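For instance, as will be recalled in the next subsection, Weil algebras are finitely presented as ordinary \(\mathcal{C}^{\infty}\)-algebras in the sense of definition 2.16, and yet they fail in general to be homotopically finitely presented in the sense of definition 3.4. This failure is what motivates the weaker notion introduced next.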
In analogy with the discussion in [1, Section 2], we can introduce a weaker notion of being finitely presented for homotopy \(\mathcal{C}^{\infty}\)-algebras as follows.

**Definition 3.5** (Almost finitely presented \(\mathcal{C}^{\infty}\)-algebra).: An _almost finitely presented \(\mathcal{C}^{\infty}\)-algebra_ is defined as a homotopy \(\mathcal{C}^{\infty}\)-algebra \(A\in\mathbf{sC}^{\infty}\mathbf{Alg}\) such that \(\pi_{0}A\) is finitely generated as an ordinary \(\mathcal{C}^{\infty}\)-algebra and the \(\pi_{i}A\) are finitely presented modules over \(\pi_{0}A\) for all \(i>0\). The \((\infty,1)\)-category of almost finitely presented \(\mathcal{C}^{\infty}\)-algebras \(\mathbf{sC}^{\infty}\mathbf{Alg}_{\mathrm{afp}}\hookrightarrow\mathbf{sC}^{\infty}\mathbf{Alg}\) is defined as the full sub-category on those objects which are almost finitely presented \(\mathcal{C}^{\infty}\)-algebras.

**Proposition 3.6** (Finitely presented \(\mathcal{C}^{\infty}\)-algebras are almost finitely presented).: We have the following full sub-\((\infty,1)\)-categories of homotopy \(\mathcal{C}^{\infty}\)-algebras: \[\mathbf{sC}^{\infty}\mathbf{Alg}_{\mathrm{fp}}\,\hookrightarrow\,\mathbf{sC}^{\infty}\mathbf{Alg}_{\mathrm{afp}}\,\hookrightarrow\,\mathbf{sC}^{\infty}\mathbf{Alg}. \tag{3.1.8}\]

Proof.: By [11, Section 2.1], any homotopically finitely presented \(\mathcal{C}^{\infty}\)-algebra is equivalent to the retract of a strict finite cell object \(\{X_{i}\}_{i=0,\dots,n}\), characterised by \(X_{i+1}\simeq B\,\widehat{\otimes}_{A}^{\mathrm{L}}\,X_{i}\) for \(0\leq i<n\), where \(A,B\) and \(X_{i>0}\) are all finitely generated free algebras, namely of the form \(\mathcal{C}^{\infty}(\mathbb{R}^{n})\), and \(X_{0}=\emptyset\). Thus, it is sufficient to show that any such \(X_{n}\) is almost finitely presented, which we will do by induction. First, if \(n=0\), the result is trivial. Second, if we assume the result for strict finite cell objects of length \(n-1\), then, for any length-\(n\) strict finite cell object, there is a spectral sequence converging to the homotopy groups of \(X_{n}\) given by \(E_{p\bullet}^{2}=\operatorname{Tor}_{p}^{A}(B,\pi_{\bullet}X_{n-1})\). Since the \(\pi_{\bullet}X_{n-1}\) are finitely presented by the inductive hypothesis, the \(\pi_{\bullet}X_{n}\) are finitely presented as well.

### Formal derived smooth manifolds

In this subsection, we will introduce the \((\infty,1)\)-category of formal derived smooth manifolds and explore some of its entailments. A formal derived smooth manifold is a slight generalisation of the notion of derived manifold à la Spivak [15] and Carchedi-Steffens [13]. Other relevant references on derived manifolds include [1, 2, 3, 14, 15, 16, 17, 18]. Moreover, during the final stage of the preparation of this paper, the systematic foundational work of [14] on the geometry of derived \(\mathcal{C}^{\infty}\)-schemes appeared. Derived manifolds are a categorification of smooth manifolds which is designed to generalise, crucially, the ordinary concept of intersection of smooth manifolds. In contrast to its ordinary counterpart, this _derived_ intersection always comes with a natural smooth structure. Let us investigate the core issue with intersections of smooth manifolds in more detail. Let \(M\) be an ordinary smooth manifold and \(\Sigma,\Sigma^{\prime}\subset M\) two smooth submanifolds of \(M\).
One would be tempted to categorically define the intersection of these submanifolds by the pullback \(\Sigma\cap\Sigma^{\prime}=\Sigma\times_{M}\Sigma^{\prime}\). However, this definition generally fails, since the intersection may not be a smooth manifold. More precisely, this happens if the embeddings \(\Sigma,\Sigma^{\prime}\hookrightarrow M\) are not transversal. To have a concrete example in mind, the reader can look at figure 6. Let us now explore an interesting example in more detail.

**Example 3.7** (Intersection is not locally homeomorphic to a Cartesian space).: Consider the ordinary smooth manifolds \(\Sigma,\Sigma^{\prime}\,=\,\mathbb{R}^{2}\), and \(M\,=\,\mathbb{R}^{3}\), together with embeddings \(e_{\Sigma}:\Sigma\hookrightarrow\mathbb{R}^{3}\) and \(e_{\Sigma^{\prime}}:\Sigma^{\prime}\hookrightarrow\mathbb{R}^{3}\) given by the maps \[e_{\Sigma}\,:\,(x,y)\mapsto(x,y,x^{2}y^{2}),\qquad e_{\Sigma^{\prime}}\,:\,(x,y)\mapsto(x,y,0).\] As a set, the intersection of these two submanifolds is \(\{(x,y,0)\in\mathbb{R}^{3}\,|\,x^{2}y^{2}=0\}\), which is precisely the union of the line \(\{(x,0,0)\in\mathbb{R}^{3}\,|\,x\in\mathbb{R}\}\) and the line \(\{(0,y,0)\in\mathbb{R}^{3}\,|\,y\in\mathbb{R}\}\). This cross-shaped subset of \(\mathbb{R}^{3}\) is clearly not locally homeomorphic to \(\mathbb{R}\) and, therefore, it does not admit the structure of an ordinary smooth manifold.

To make sense of arbitrary intersections of smooth manifolds, we need to introduce the concept of a _derived manifold_. We will exploit the following proposition by [14, Corollary 5.4].

**Proposition 3.8** (Derived manifolds [14]).: There is a natural equivalence of \((\infty,1)\)-categories \[\mathbf{dMfd}\;\simeq\;\mathbf{sC}^{\infty}\mathbf{Alg}^{\mathrm{op}}_{\mathrm{fp}} \tag{3.2.1}\] between the \((\infty,1)\)-category \(\mathbf{dMfd}\) of derived manifolds, and the opposite of the \((\infty,1)\)-category \(\mathbf{sC}^{\infty}\mathbf{Alg}_{\mathrm{fp}}\) of homotopically finitely presented homotopy \(\mathcal{C}^{\infty}\)-algebras.

In this paper we will regard the equivalence (3.2.1) as an effective definition of derived manifolds. However, we will need a slight generalisation of the notion of derived manifold. In fact, as stressed by [13], being homotopically finitely presented is a much more stringent notion than being finitely presented in the ordinary sense. In fact, in general, ordinary finitely presented \(\mathcal{C}^{\infty}\)-algebras \(A\in\mathsf{C}^{\infty}\mathsf{Alg}_{\mathrm{fp}}\) such as Weil algebras do not embed into \(\mathbf{sC}^{\infty}\mathbf{Alg}_{\mathrm{fp}}\). For this reason, in analogy with the discussion of [13, Section 2] in the context of algebraic geometry, we give the following definition.

**Definition 3.9** (Formal derived smooth manifolds).: We define the \((\infty,1)\)-category of _formal derived smooth manifolds_ by \[\mathbf{dFMfd}\;\coloneqq\;\mathbf{sC}^{\infty}\mathbf{Alg}^{\mathrm{op}}_{\mathrm{afp}}, \tag{3.2.2}\]

Figure 6: Example of non-transverse intersection of smooth submanifolds \(\Sigma,\Sigma^{\prime}\subset M\).

where \(\mathbf{sC}^{\infty}\mathbf{Alg}_{\mathrm{afp}}^{\mathrm{op}}\) is the opposite of the \((\infty,1)\)-category of almost finitely presented \(\mathcal{C}^{\infty}\)-algebras.
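For instance (an immediate check of definition 3.5, spelled out here), any Weil algebra \(W\), regarded as a constant homotopy \(\mathcal{C}^{\infty}\)-algebra, is almost finitely presented: \(\pi_{0}W=W\) is a finitely generated ordinary \(\mathcal{C}^{\infty}\)-algebra and \(\pi_{i}W=0\) for all \(i>0\). Hence the infinitesimally thickened points \(\operatorname{Spec}W\) of section 2.2 are examples of formal derived smooth manifolds, which is precisely what the adjective "formal" is meant to capture.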
**Remark 3.10** (Intuitive picture of formal derived smooth manifolds).: At an intuitive level, a formal derived smooth manifold \(U\in\mathbf{dFMfd}\) is a geometric object whose algebra of smooth functions is, by definition, an almost finitely presented homotopy \(\mathcal{C}^{\infty}\)-algebra modelled by some simplicial object \[\mathcal{O}(U)\ =\ \Big(\ \cdots\ \mathcal{O}(U)_{2}\ \Rrightarrow\ \mathcal{O}(U)_{1}\ \rightrightarrows\ \mathcal{O}(U)_{0}\ \Big)\] of ordinary \(\mathcal{C}^{\infty}\)-algebras, where the arrows are the face maps of the simplicial object.

A notational warning: whenever it is clear from the context, we will tend to omit the symbol of the embedding \(i:\mathsf{Mfd}\hookrightarrow\mathbf{dFMfd}\) and simply write \(\Sigma\times_{M}^{h}\Sigma^{\prime}\) to mean the derived intersection \(i(\Sigma)\times_{i(M)}^{h}i(\Sigma^{\prime})\) of ordinary smooth manifolds. Now, since the \((\infty,1)\)-category of formal derived smooth manifolds satisfies the equivalence \(\mathbf{dFMfd}\simeq\mathbf{sC}^{\infty}\mathbf{Alg}^{\mathrm{op}}_{\mathrm{afp}}\), we have that the homotopy \(\mathcal{C}^{\infty}\)-algebra \(\mathcal{O}\big(\Sigma\times_{M}^{h}\Sigma^{\prime}\big)\) of smooth functions on a homotopy pullback \(\Sigma\times_{M}^{h}\Sigma^{\prime}\) is given by the derived \(\mathcal{C}^{\infty}\)-tensor product of the corresponding ordinary \(\mathcal{C}^{\infty}\)-algebras, i.e. \[\mathcal{O}\big(\Sigma\times_{M}^{h}\Sigma^{\prime}\big)\;\simeq\;\mathcal{C}^{\infty}(\Sigma)\,\widehat{\otimes}^{\mathrm{L}}_{\mathcal{C}^{\infty}(M)}\,\mathcal{C}^{\infty}(\Sigma^{\prime}). \tag{3.2.8}\]

**Construction 3.14** (Computing the derived intersection of smooth manifolds).: Equivalence (3.2.8) suggests a practical way to compute the derived intersection of given smooth manifolds. In fact, we can consider a cofibrant replacement \(Q\mathcal{C}^{\infty}(\Sigma)\longrightarrow\mathcal{C}^{\infty}(\Sigma)\) in the co-slice category \(\mathsf{sC}^{\infty}\mathsf{Alg}_{\mathcal{C}^{\infty}(M)}\) of homotopy \(\mathcal{C}^{\infty}\)-algebras under \(\mathcal{C}^{\infty}(M)\), with respect to its model structure. By replacing \(\mathcal{C}^{\infty}(\Sigma)\) with a cofibrant replacement \(Q\mathcal{C}^{\infty}(\Sigma)\) in equation (3.2.8), we can compute the derived \(\mathcal{C}^{\infty}\)-tensor product as an ordinary \(\mathcal{C}^{\infty}\)-tensor product, namely we have \[\mathcal{O}\big(\Sigma\times_{M}^{h}\Sigma^{\prime}\big)\;\simeq\;Q\mathcal{C}^{\infty}(\Sigma)\,\widehat{\otimes}_{\mathcal{C}^{\infty}(M)}\,\mathcal{C}^{\infty}(\Sigma^{\prime}). \tag{3.2.9}\] In principle, we may exploit the Bar construction \(\mathrm{Bar}(\mathcal{C}^{\infty}(M),\mathcal{C}^{\infty}(\Sigma))\longrightarrow\mathcal{C}^{\infty}(\Sigma)\) to produce a suitable cofibrant replacement, but other methods may be available depending on the amount of structure. The simplicial \(\mathcal{C}^{\infty}\)-algebra obtained by this \(\mathcal{C}^{\infty}\)-tensor product will be an explicit model of the desired homotopy \(\mathcal{C}^{\infty}\)-algebra.

**Example 3.15** (Back to previous example).: We look back at example 3.7.
Let us exploit the fact that \(e_{\Sigma}\) and \(e_{\Sigma^{\prime}}\) are sections of the vector bundle \(\pi:\mathbb{R}^{3}\to\mathbb{R}^{2}\), given by the obvious projection \((x,y,z)\mapsto(x,y)\). We want to compute the derived \(\mathcal{C}^{\infty}\)-tensor product \[\begin{split}\mathcal{O}\big(\mathbb{R}^{2}\times_{\mathbb{R}^{3}}^{h}\mathbb{R}^{2}\big)&\;\simeq\;\mathcal{C}^{\infty}(\mathbb{R}^{2})\,\widehat{\otimes}^{\mathrm{L}}_{\mathcal{C}^{\infty}(\mathbb{R}^{3})}\,\mathcal{C}^{\infty}(\mathbb{R}^{2})\\ &\;\simeq\;Q\mathcal{C}^{\infty}(\mathbb{R}^{2})\,\widehat{\otimes}_{\mathcal{C}^{\infty}(\mathbb{R}^{3})}\,\mathcal{C}^{\infty}(\mathbb{R}^{2}),\end{split} \tag{3.2.10}\] by using some cofibrant replacement \(Q\mathcal{C}^{\infty}(\mathbb{R}^{2})\longrightarrow\mathcal{C}^{\infty}(\mathbb{R}^{2})\) in the simplicial co-slice model category \(\mathsf{sC}^{\infty}\mathsf{Alg}_{\mathcal{C}^{\infty}(\mathbb{R}^{3})}\) of homotopy \(\mathcal{C}^{\infty}\)-algebras under \(\mathcal{C}^{\infty}(\mathbb{R}^{3})\). Such a homotopy \(\mathcal{C}^{\infty}\)-algebra must be a simplicial resolution of the ordinary \(\mathcal{C}^{\infty}\)-algebra \(\mathcal{C}^{\infty}(\mathbb{R}^{2})\). Now, let us consider the \(\mathbb{R}\)-algebra \(B\coloneqq\mathcal{C}^{\infty}(\mathbb{R}^{2},\mathbb{R})\oplus\Gamma(\mathbb{R}^{2},\mathbb{R}^{3})\), where \(\mathcal{C}^{\infty}(\mathbb{R}^{2},\mathbb{R})\) and \(\Gamma(\mathbb{R}^{2},\mathbb{R}^{3})\) are respectively the vector space of functions on \(\mathbb{R}^{2}\) and that of sections of the bundle \(\pi:\mathbb{R}^{3}\to\mathbb{R}^{2}\), and where the product is given by \((f,\phi)\cdot(f^{\prime},\phi^{\prime})=(ff^{\prime},f\phi^{\prime}+f^{\prime}\phi)\) for any \(f,f^{\prime}\in\mathcal{C}^{\infty}(\mathbb{R}^{2})\) and \(\phi,\phi^{\prime}\in\Gamma(\mathbb{R}^{2},\mathbb{R}^{3})\). We can canonically equip the \(\mathbb{R}\)-algebra \(B\) with the structure of a \(\mathcal{C}^{\infty}\)-algebra by the pre-cosheaf \(\widehat{B}:\mathbb{R}^{k}\mapsto\mathrm{Hom}_{\mathsf{Alg}}(\mathcal{C}^{\infty}(\mathbb{R}^{k})^{\mathrm{alg}},B)\) on Cartesian spaces. Let us then try with the following: \[Q\mathcal{C}^{\infty}(\mathbb{R}^{2})_{n}\;=\;\begin{cases}\mathcal{C}^{\infty}(\mathbb{R}^{3}),&n=0\\ \mathcal{C}^{\infty}(\mathbb{R}^{3})\,\widehat{\otimes}_{\mathcal{C}^{\infty}(\mathbb{R}^{2})}\,\widehat{B},&n>0,\end{cases} \tag{3.2.11}\] where we used the fact that there is a pullback map \(\pi^{*}:\mathcal{C}^{\infty}(\mathbb{R}^{2})\to\mathcal{C}^{\infty}(\mathbb{R}^{3})\). So, the simplicial \(\mathcal{C}^{\infty}\)-algebra \(Q\mathcal{C}^{\infty}(\mathbb{R}^{2})\) will be truncated at \(n=1\), which, more precisely, means that it is \(1\)-skeletal. In fact, we construct a simplicial \(\mathcal{C}^{\infty}(\mathbb{R}^{3})\)-algebra \[Q\mathcal{C}^{\infty}(\mathbb{R}^{2})\;\simeq\;\mathrm{sk}_{1}\Big(\;\mathcal{C}^{\infty}(\mathbb{R}^{3})\,\widehat{\otimes}_{\mathcal{C}^{\infty}(\mathbb{R}^{2})}\,\widehat{B}\;\rightrightarrows\;\mathcal{C}^{\infty}(\mathbb{R}^{3})\;\Big),\] with face maps given by the morphisms \[\partial_{0}(1\otimes(f,\phi))=f,\qquad\partial_{1}(1\otimes(f,\phi))=f+(z-x^{2}y^{2})\phi, \tag{3.2.12}\] for any pair \(f\in\mathcal{C}^{\infty}(\mathbb{R}^{2})\) and \(\phi\in\Gamma(\mathbb{R}^{2},\mathbb{R}^{3})\).
To see that this is indeed a cofibrant replacement of \(\mathcal{C}^{\infty}(\mathbb{R}^{2})\), notice that we have \(\pi_{0}Q\mathcal{C}^{\infty}(\mathbb{R}^{2})=\mathcal{C}^{\infty}(\mathbb{R}^{3})/(z-x^{2}y^{2})\cong\mathcal{C}^{\infty}(\mathbb{R}^{2})\) and \(\pi_{i}Q\mathcal{C}^{\infty}(\mathbb{R}^{2})\cong 0\) for \(i>0\). Now we can compute the ordinary \(\mathcal{C}^{\infty}\)-tensor product \[\mathcal{O}\big(\mathbb{R}^{2}\times_{\mathbb{R}^{3}}^{h}\mathbb{R}^{2}\big)\ \simeq\ Q\mathcal{C}^{\infty}(\mathbb{R}^{2})\,\widehat{\otimes}_{\mathcal{C}^{\infty}(\mathbb{R}^{3})}\,\mathcal{C}^{\infty}(\mathbb{R}^{2}), \tag{3.2.13}\] which is given by the \(1\)-skeletal simplicial \(\mathcal{C}^{\infty}\)-algebra \[\mathcal{O}\big(\mathbb{R}^{2}\times_{\mathbb{R}^{3}}^{h}\mathbb{R}^{2}\big)\ \simeq\ \mathrm{sk}_{1}\Big(\ \widehat{B}\ \rightrightarrows\ \mathcal{C}^{\infty}(\mathbb{R}^{2})\ \Big), \tag{3.2.14}\] with face maps given by the morphisms \[\partial_{0}(f,\phi)=f,\qquad\partial_{1}(f,\phi)=f+x^{2}y^{2}\phi. \tag{3.2.15}\] This provides a model for the homotopy \(\mathcal{C}^{\infty}\)-algebra of functions on the derived intersection \(\mathbb{R}^{2}\times_{\mathbb{R}^{3}}^{h}\mathbb{R}^{2}\) of the smooth manifolds in the example. Let us see how this simple example can be generalised to a relevant class of examples: the derived intersection of the graph of a section of a vector bundle with that of the zero-section.

**Example 3.16** (Derived zero locus of a section of a vector bundle).: Let \(\Sigma,M\) be again ordinary smooth manifolds and let \(\pi_{\Sigma}:M\to\Sigma\) be an ordinary vector bundle. Let us also fix a section \(e_{\Sigma}:\Sigma\hookrightarrow M\) of such a vector bundle. The derived intersection \(\Sigma\times_{M}^{h}\Sigma\) of \(e_{\Sigma}:\Sigma\hookrightarrow M\) with the zero-section \(0:\Sigma\hookrightarrow M\) is also known as the derived zero locus of \(e_{\Sigma}\). To explicitly find such a derived intersection it is convenient to deploy the notion of dg-\(\mathcal{C}^{\infty}\)-algebra, see e.g. [11, 12]. A dg-\(\mathcal{C}^{\infty}\)-algebra \(K_{\bullet}\) is a dg-commutative \(\mathbb{R}\)-algebra where \(K_{0}\) is equipped with a compatible \(\mathcal{C}^{\infty}\)-algebra structure. Maps of dg-\(\mathcal{C}^{\infty}\)-algebras are maps of dg-commutative \(\mathbb{R}\)-algebras which respect the \(\mathcal{C}^{\infty}\)-structure in degree \(0\) and, similarly to \(\mathbb{R}\)-algebras, the category of non-positively graded dg-\(\mathcal{C}^{\infty}\)-algebras is naturally simplicially enriched. Now, there exists a non-positively graded dg-\(\mathcal{C}^{\infty}\)-algebra \(K_{-n}=\wedge_{\mathcal{C}^{\infty}(\Sigma)}^{n}\Gamma(\Sigma,M^{\vee})\) with differential \(\mathrm{d}_{K}=\langle e_{\Sigma},-\rangle\) given by contraction with the section \(e_{\Sigma}\). By the construction in [12], we can consider the following simplicial \(\mathcal{C}^{\infty}\)-algebra: \[\mathcal{O}\big(\Sigma\times_{M}^{h}\Sigma\big)\,:\,\mathbb{R}^{k}\,\longmapsto\,\mathrm{Hom}_{\mathsf{dgC}^{\infty}\mathsf{Alg}^{\leq 0}}(\mathcal{C}^{\infty}(\mathbb{R}^{k}),K_{\bullet}), \tag{3.2.16}\] which provides a model for the homotopy \(\mathcal{C}^{\infty}\)-algebra of functions on the derived zero locus.
In fact, by forgetting the \(\mathcal{C}^{\infty}\)-structure, the underlying simplicial set of such a homotopy \(\mathcal{C}^{\infty}\)-algebra is the \((\dim M-\dim\Sigma)\)-skeletal simplicial set obtained from the complex \(K_{\bullet}\) by the usual Dold-Kan correspondence, i.e. \[\mathcal{O}\big(\Sigma\times_{M}^{h}\Sigma\big)\;\simeq\;\mathrm{DK}(K_{\bullet})\] as simplicial sets, where \(\mathrm{DK}\) denotes the Dold-Kan functor.

Figure 7: Morally speaking, we can picture the formal derived smooth manifold in the example above as a smooth "cloud" around the bare set of the intersection.

**Construction 3.19** (Etale \((\infty,1)\)-site of formal derived smooth manifolds).: Now, by following [12, 13], the \((\infty,1)\)-category \(\mathbf{dFMfd}\) of formal derived smooth manifolds can be naturally equipped with the structure of an \(\acute{e}\)tale \((\infty,1)\)-site, whose coverage is provided by the assignment of \(\acute{e}\)tale covers \(\{U_{i}\xrightarrow{\phi_{i}}M\}_{i\in I}\) to any formal derived smooth manifold \(M\). Such \(\acute{e}\)tale covers are collections of morphisms such that: 1. each \(U_{i}\xrightarrow{\phi_{i}}M\) is a formally \(\acute{e}\)tale map, 2. there exists a finite subset \(I^{\prime}\subset I\) such that the truncation \(\{t_{0}U_{i}\xrightarrow{t_{0}\phi_{i}}t_{0}M\}_{i\in I^{\prime}}\) is a covering in the ordinary site of \(\mathcal{C}^{\infty}\)-varieties.

**Construction 3.20** (Simplicial category of formal derived smooth stacks).: Now, given the definition of the \((\infty,1)\)-site of formal derived smooth manifolds, we can apply the general discussion above about derived stacks to our case of interest. By [12, Theorem 3.4.1] there exists a local projective model structure on the simplicial-category of pre-stacks \([\mathsf{sC}^{\infty}\mathsf{Alg}_{\mathsf{afp}},\mathsf{sSet}]\) induced by the definition of formally \(\acute{e}\)tale maps of formal derived smooth manifolds. Thus, by localisation of such a simplicial model structure, one can obtain the simplicial category of formal derived smooth stacks, i.e. \[\mathsf{dFSmoothStack}\;\coloneqq\;[\mathsf{sC}^{\infty}\mathsf{Alg}_{\mathsf{afp}},\mathsf{sSet}]^{\circ}_{\mathrm{proj,loc}}. \tag{3.3.4}\] To describe a formal derived smooth stack concretely, we need to introduce a certain refinement of an \(\acute{e}\)tale cover, namely we need the definition of an \(\acute{e}\)tale hypercover.
**Definition 3.21** (\(\acute{e}\)tale hypercover of a formal derived smooth manifold).: An \(\acute{e}\)tale hypercover \(H(U)_{\bullet}\to U\) of a formal derived smooth manifold \(U\) is a simplicial object \(H(U)_{\bullet}\) in the \(\acute{e}\)tale \((\infty,1)\)-site **dFMfd** such that \(H(U)_{0}\to U\) is an \(\acute{e}\)tale cover and all natural morphisms \[H(U)_{n}\;\longrightarrow\;(\mathrm{cosk}_{n-1}\circ\mathrm{tr}_{n-1}H(U)_{ \bullet})_{n} \tag{3.3.5}\] for \(n>0\) are \(\acute{e}\)tale covers. In the definition above, \(\mathrm{tr}_{n}\) and \(\mathrm{cosk}_{n}\) are respectively the \(n\)-truncation functor and the \(n\)-coskeleton functor on simplicial objects. Thus, \(H(U)_{\bullet}\to U\) being an \(\acute{e}\)tale hypercover means that, for each \(n\geq 0\), one has the equivalence of the form \[H(U)_{n}\;\simeq\;\coprod_{i\in I_{n}}U_{i}^{n} \tag{3.3.6}\] where \(U_{i}^{n}\) are formal derived smooth manifolds such that the following are all \(\acute{e}\)tale covers: \[\begin{split}\{U_{i}^{0}\,\rightarrow\,U\}_{i\in I_{0}}\\ \left\{U_{i}^{1}\,\rightarrow\!\coprod_{j_{1},j_{2}\in I_{0}}\!U_{ j_{1}}^{0}\!\times_{U}\!U_{j_{2}}^{0}\right\}_{i\in I_{1}}\\ \left\{U_{i}^{2}\,\rightarrow\!\coprod_{j_{1},j_{2},j_{3}\in I_{ 1}}\!U_{j_{1}}^{1}\!\times_{U}\!U_{j_{2}}^{1}\!\times_{U}\!U_{j_{3}}^{1} \right\}_{i\in I_{2}}\\ \vdots\end{split} \tag{3.3.7}\] Now, we have all the ingredients to unravel the definition of formal derived smooth stacks in concrete terms. **Remark 3.22** (Formal derived smooth stack in concrete terms).: A formal derived smooth stack \(X\in\mathsf{dFSmoothStack}\) is modelled by a fibrant object in the simplicial model category \([\mathsf{dFMfd}^{\mathrm{op}},\mathsf{sSet}]_{\mathrm{proj,loc}}\). Thus, by the general argument in [10, 11], we have that a formal derived smooth stack \(X\) is concretely given by a simplicial functor \(X:\mathsf{dFMfd}^{\mathrm{op}}\longrightarrow\mathsf{sSet}\) such that the following conditions are satisfied: 1. _object-wise fibrancy_: for any \(U\in\mathsf{dFMfd}\), the simplicial set \(X(U)\) is Kan-fibrant; 2. _pre-stack condition_: for any equivalence \(U\xrightarrow{\simeq}U^{\prime}\) in \(\mathsf{dFMfd}\), the induced morphism \(X(U^{\prime})\longrightarrow X(U)\) is an equivalence of simplicial sets; 3. _descent condition_: for any etale hypercover \(H(U)_{\bullet}\to U\) in \(\mathsf{dFMfd}\), the natural morphism \[X(U)\;\longrightarrow\;\underset{[n]\in\Delta}{\mathrm{Rlim}}\left(\,\prod_{i \in I_{n}}X(U_{i}^{n})\right)\] (3.3.8) is an equivalence of simplicial sets. Notice that this last condition provides an interesting generalisation of the gluing conditions of ordinary sheaves. Moreover, from the perspective of applications, it provides a recipe to construct a formal derived smooth stack by gluing together simpler spaces of sections. Finally, we can take the homotopy-coherent nerve of the simplicial-category of formal derived smooth stacks to obtain its \((\infty,1)\)-categorical version, as previously discussed at the beginning of this section at equality (3.0.2). **Definition 3.23** (\((\infty,1)\)-category of formal derived smooth stacks).: We define the \((\infty,1)\)_-category of formal derived smooth stacks_ by \[\mathsf{dFSmoothStack}\;\coloneqq\;\mathbf{N}_{hc}([\mathsf{dFMfd}^{\mathrm{op }},\mathsf{sSet}]_{\mathrm{proj,loc}}^{\circ}), \tag{3.3.9}\] i.e. 
by the \((\infty,1)\)-category of stacks on the \(\acute{e}\)tale \((\infty,1)\)-site presented by \(\mathsf{dFMfd}=\mathsf{sC}^{\infty}\mathsf{Alg}_{\mathrm{afp}}^{\mathrm{op}}\) of formal derived smooth manifolds. As we will see in section 4 below, the \((\infty,1)\)-category \(\mathbf{dFSmoothStack}\) comes equipped with a very rich structure: it is a differential cohesive \((\infty,1)\)-topos in the sense of [DCCT].

**Proposition 3.24** (Relation with usual smooth stacks).: There exists an adjunction \((i\dashv t_{0})\) of \((\infty,1)\)-functors between the \((\infty,1)\)-category of smooth stacks and the \((\infty,1)\)-category of formal derived smooth stacks (3.3.10) where \(i\) is fully faithful and \(t_{0}\) preserves finite products.

Proof.: The logic of the proof is the following: first, we must show that we have an adjunction between the corresponding \((\infty,1)\)-categories of pre-stacks and, then, that this restricts to an adjunction of the \((\infty,1)\)-categories of stacks. A simplicial functor \(f:\mathsf{C}\rightarrow\mathsf{D}\) gives rise to an adjunction \((f_{!}\dashv f^{*})\) between the corresponding simplicial-functor categories \([\mathsf{C}^{\mathrm{op}},\mathsf{sSet}]\) and \([\mathsf{D}^{\mathrm{op}},\mathsf{sSet}]\), where the pullback functor \(f^{*}=(-)\circ f\) is just the pre-composition with \(f\) and \(f_{!}\) is the left Kan extension of \(f\). (See e.g. [1].) In our case of interest, the embedding \(\iota^{\mathsf{Mfd}}:\mathsf{Mfd}\hookrightarrow\mathsf{dFMfd}\) induces an adjunction of functors between \([\mathsf{dFMfd}^{\mathrm{op}},\mathsf{sSet}]\) and \([\mathsf{Mfd}^{\mathrm{op}},\mathsf{sSet}]\). Thus, by [10, Section 2.3.1] we have a Quillen adjunction of simplicial-functors (3.3.11) (Recall that, in the projective model structure, fibrations and weak equivalences are computed object-wise). This simplicial Quillen adjunction provides a model of an \((\infty,1)\)-adjunction of prestacks. Now, to see that this restricts to stacks, we need to show that these simplicial-functors send locally fibrant/cofibrant objects (i.e. fibrant/cofibrant objects in the local projective model structure) to other locally fibrant/cofibrant objects. However, by the properties of Quillen adjunctions, it is sufficient to check this for the right adjoint functor. So, given any \(X\in[\mathsf{dFMfd}^{\mathrm{op}},\mathsf{sSet}]^{\circ}_{\mathrm{proj,loc}}\), its image is \(\iota^{\mathsf{Mfd}*}X=X\circ\iota^{\mathsf{Mfd}}\). For any manifold \(U\in\mathsf{Mfd}\), a Cech nerve \(\check{C}(U)_{\bullet}\to U\) precisely embeds into an \(\acute{e}\)tale hypercover, thus \(\iota^{\mathsf{Mfd}*}X\) satisfying descent on ordinary smooth manifolds is an immediate consequence of \(X\) satisfying descent on formal derived smooth manifolds. Therefore, there is a Quillen adjunction of simplicial-functors (3.3.12) This simplicial Quillen adjunction provides a model of an \((\infty,1)\)-adjunction of stacks. Now, since the functor \(\iota^{\mathsf{Mfd}}\) is fully faithful, we have that \(\iota^{\mathsf{Mfd}}_{!}\) is also fully faithful. Finally, \(\iota^{\mathsf{Mfd}*}\) preserves finite products, since finite limits are computed object-wise, so we have \((X\times^{h}Y)(\iota^{\mathsf{Mfd}}U)\simeq X(\iota^{\mathsf{Mfd}}U)\times Y(\iota^{\mathsf{Mfd}}U)\) for any smooth manifold \(U\) and formal derived smooth stacks \(X,Y\).
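In particular, since \(i\) is fully faithful, the unit of the adjunction \((i\dashv t_{0})\) induces a natural equivalence \(t_{0}\,i\,X\simeq X\) for any smooth stack \(X\in\mathbf{SmoothStack}\): the underived-truncation recovers any non-derived stack from its derived-extension. We record this simple consequence here, as it will be used implicitly below.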
**Definition 3.25** (Derived-extension and underived-truncation functor).: In the diagram right above we defined the following functors: * the _derived-extension functor_ \(i\coloneqq\iota^{\mathsf{Mfd}}_{!}\) in the diagram above, * the _underived-truncation functor_ \(t_{0}\coloneqq\iota^{\mathsf{Mfd}*}\) in the diagram above. More concretely, the underived-truncation functor \(t_{0}\) sends any formal derived smooth stack \(X\in\mathbf{dFSmoothStack}\) to the smooth stack \(t_{0}X\in\mathbf{SmoothStack}\) given by the composition \[t_{0}X\,:\,\mathsf{Mfd}^{\mathrm{op}}\,\xrightarrow{\;\iota^{\mathsf{Mfd}}\;}\,\mathsf{dFMfd}^{\mathrm{op}}\,\xrightarrow{\;X\;}\,\mathsf{sSet}. \tag{3.3.13}\]

**Remark 3.26** (Derived-extension functor does not preserve limits).: As we noticed, the derived-extension functor \(i\) preserves finite products. However, crucially, it does not generally preserve pullbacks or other limits.

**Remark 3.27** (Homotopy pullback of non-derived stacks).: Let \(f:X\to Z\) and \(g:Y\to Z\) be morphisms of smooth stacks. We can consider the formal derived smooth stack given by the \((\infty,1)\)-pullback (3.3.14) Since, as we just remarked, the \((\infty,1)\)-functor \(i\) does not generally preserve limits, there is therefore a natural morphism of formal derived smooth stacks \[i(X)\times^{h}_{i(Z)}i(Y)\,\longrightarrow\,i(X\times_{Z}Y), \tag{3.3.15}\] which is generally not an equivalence. However, the underived-truncation of such a morphism \[t_{0}\big(i(X)\times^{h}_{i(Z)}i(Y)\big)\,\xrightarrow{\simeq}\,t_{0}i(X\times_{Z}Y)\,\simeq\,X\times_{Z}Y \tag{3.3.16}\] is an equivalence of smooth stacks.

**Example 3.28** (Derived-extension of a quotient smooth stack).: Let us consider a simple smooth stack: a quotient stack \([M/G]\in\mathbf{SmoothStack}\), where \(M\) is an ordinary smooth manifold and \(G\) a Lie group. Recall that, on a smooth manifold \(U\simeq\mathbb{R}^{n}\) diffeomorphic to a Cartesian space, its simplicial set of sections is given by \[[M/G](U)\;\simeq\;\operatorname{csk}_{2}\Big(\;\operatorname{Hom}(U,G^{\times 2}\times M)\;\Rrightarrow\;\operatorname{Hom}(U,G\times M)\;\rightrightarrows\;\operatorname{Hom}(U,M)\;\Big),\] where the face maps, on \(1\)-simplices, are \(\partial_{0}:(g,f)\mapsto f\) and \(\partial_{1}:(g,f)\mapsto g\cdot f\) and, on \(2\)-simplices, are given respectively by the group multiplication and by bare projections, as usual. This means that \(1\)-simplices from a \(0\)-simplex \(f\in\operatorname{Hom}(U,M)\) to a \(0\)-simplex \(f^{\prime}\in\operatorname{Hom}(U,M)\) are of the form \(f^{\prime}=g\cdot f\) for some \(g\in\operatorname{Hom}(U,G)\). How does this picture of \(1\)-simplices generalise when we consider the space of sections of the derived-extension \(i[M/G]\in\mathbf{dFSmoothStack}\) of our quotient stack? Let now \(U\) be a formal derived smooth manifold. By unravelling its definition, the simplicial set of sections of our formal derived smooth stack is of the form2 Footnote 2: From now on we will denote by \(\operatorname{\mathbb{R}Hom}(X,Y)\) the \((\infty,1)\)-categorical hom-space between formal derived smooth stacks \(X\) and \(Y\). Notice that such a notation is different from the one deployed so far for non-derived stacks.
\[i[M/G](U)\;\simeq\;\left(\;\cdots\;\begin{array}{c}\operatorname{\mathbb{R}Hom}(U,G^{\times 2}\times M)_{0}\\ \times\,\operatorname{\mathbb{R}Hom}(U,M)_{2}\end{array}\;\Rrightarrow\;\begin{array}{c}\operatorname{\mathbb{R}Hom}(U,G\times M)_{0}\\ \times\,\operatorname{\mathbb{R}Hom}(U,M)_{1}\end{array}\;\rightrightarrows\;\operatorname{\mathbb{R}Hom}(U,M)_{0}\;\right).\] So, a \(1\)-simplex is a triplet \((g,f,f_{1})\), where \((g,f)\in\operatorname{\mathbb{R}Hom}(U,G\times M)_{0}\) and \(f_{1}\in\operatorname{\mathbb{R}Hom}(U,M)_{1}\) is a homotopy of the form \(f^{\prime}\stackrel{{f_{1}}}{{\longleftarrow}}g\cdot f\), where this compact notation means that the homotopy \(f_{1}\) has boundaries \(\partial_{0}f_{1}=g\cdot f\) and \(\partial_{1}f_{1}=f^{\prime}\). This means that a \(1\)-simplex \((g,f,f_{1})\) goes from \(f\) to \(f^{\prime}\), where the latter is no longer equal on the nose to \(g\cdot f\), but only homotopic to it by \(f_{1}\). An analogous story holds for \(2\)-simplices, where homotopies of homotopies will appear, and so on for higher simplices. This example will be propaedeutic to the study of more complicated stacks in section 5.

### Discussion of formal derived smooth sets

In the previous subsection we constructed formal derived smooth stacks. In analogy with non-derived smooth stacks, we may wonder if there is any possible notion of formal derived smooth set. We should remark, however, that there is no meaningful notion of sheaf on formal derived smooth manifolds, so that the idea of defining formal derived smooth sets this way seems hopeless. Having said that, in this subsection we will propose a working definition of formal derived smooth sets based on a different principle: a formal derived smooth set will be defined as a formal derived smooth stack which is the derived enhancement of an ordinary smooth set. Recall from Remark 2.38 that there is a natural embedding \(\mathbf{N}(\mathsf{SmoothSet})\hookrightarrow\mathbf{SmoothStack}\) of smooth sets into smooth stacks. Moreover, by [11, Section 5.6], such an embedding has a left adjoint functor \(\tau_{0}\), which is known as _\(0\)-truncation_ of smooth stacks. So, by putting everything together, we have the following diagram of coreflective and reflective embeddings of \((\infty,1)\)-categories: \[\mathbf{dFSmoothStack}\ \underset{i}{\overset{t_{0}}{\rightleftarrows}}\ \mathbf{SmoothStack}\ \underset{\iota}{\overset{\tau_{0}}{\rightleftarrows}}\ \mathbf{N}(\mathsf{SmoothSet}). \tag{3.4.1}\] We have now all the ingredients to provide the definition of formal derived smooth sets.

**Definition 3.29** (Formal derived smooth set).: A _formal derived smooth set_ \(X\) is a formal derived smooth stack \(X\in\mathbf{dFSmoothStack}\) such that its underived-truncation \(t_{0}X\) is in the essential image of the natural embedding \(\mathbf{N}(\mathsf{SmoothSet})\hookrightarrow\mathbf{SmoothStack}\). Thus, we define the \((\infty,1)\)-category of formal derived smooth sets by the pullback \[\mathbf{dFSmoothSet}\,\coloneqq\,\mathbf{dFSmoothStack}\times^{h}_{\mathbf{SmoothStack}}\mathbf{N}(\mathsf{SmoothSet}) \tag{3.4.2}\] in the \((\infty,1)\)-category of \((\infty,1)\)-categories. In other words, a formal derived smooth set is a formal derived smooth stack \(X\) whose underived-truncation \(t_{0}X\) is, in particular, a \(0\)-truncated smooth stack, or equivalently just an ordinary smooth set.
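Conversely, a simple non-example: using the equivalence \(t_{0}\,i\simeq\mathrm{id}\) noted after proposition 3.24, the derived-extension \(i[*/G]\) of the classifying stack of a non-trivial Lie group \(G\) is not a formal derived smooth set, since its underived-truncation \(t_{0}\,i[*/G]\simeq[*/G]\) retains non-trivial \(1\)-simplices coming from the \(G\)-action and is therefore not \(0\)-truncated.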
Now, we have the following square of reflective embeddings:
\[\begin{CD} \mathbf{dFSmoothStack} @>{t_{0}}>> \mathbf{SmoothStack}\\ @VVV @VV{\tau_{0}}V\\ \mathbf{dFSmoothSet} @>{t_{0}}>> \mathbf{N}(\mathbf{SmoothSet}). \end{CD} \tag{3.4.3}\]
Reflective sub-categories are stable under pullback along cocartesian fibrations, as shown for example in [Kerodon, Proposition 6.2.2.17]. But any left fibration is a cocartesian fibration, as seen in [1, Example 3.3], so the functor \(\tau_{0}\) on the right is a cocartesian fibration. This implies that \(\mathbf{dFSmoothSet}\hookrightarrow\mathbf{dFSmoothStack}\) is itself a reflective sub-category. Let us now look at a few relevant examples of formal derived smooth sets, which will be useful later in dealing with physics.

**Example 3.30** (Formal derived smooth manifold).: The simplest, but also the archetypal, class of examples of formal derived smooth sets is provided by formal derived smooth manifolds themselves. Let \(M\in\mathsf{dFMfd}\) be a formal derived smooth manifold. It naturally Yoneda-embeds into a formal derived smooth set of the form
\[\begin{split} M\,:\,\mathsf{dFMfd}^{\mathrm{op}}&\,\longrightarrow\,\mathsf{sSet}\\ U&\,\longmapsto\,\operatorname{\mathbb{R}Hom}_{\mathsf{dFMfd}}(U,M),\end{split} \tag{3.4.4}\]
where \(\operatorname{\mathbb{R}Hom}_{\mathsf{dFMfd}}(U,M)=\operatorname{\mathbb{R}Hom}_{\mathbf{sC}^{\infty}\mathbf{Alg}_{\mathrm{afp}}}\bigl(\mathcal{O}(M),\mathcal{O}(U)\bigr)\) and \(\mathcal{O}(M),\mathcal{O}(U)\) are respectively the homotopy \(\mathcal{C}^{\infty}\)-algebras of functions on \(M\) and \(U\). Thus, we have an embedding of \((\infty,1)\)-categories \(\mathsf{dFMfd}\hookrightarrow\mathbf{dFSmoothSet}\).

We can now explicitly show that the natural embedding of smooth manifolds into derived smooth manifolds is compatible with the embedding of smooth sets into formal derived smooth sets, i.e. with the derived-extension functor.

**Example 3.31** (Ordinary smooth manifolds).: Recall that a smooth manifold \(M\in\mathsf{Mfd}\) Yoneda-embeds into smooth sets as the functor \(M:U\mapsto\operatorname{Hom}_{\mathsf{Mfd}}(U,M)\) on the site of smooth manifolds \(U\in\mathsf{Mfd}\). The derived-extension functor embeds this smooth set into the following formal derived smooth set:
\[\begin{split} i(M)\,:\,\mathsf{dFMfd}^{\mathrm{op}}&\longrightarrow\,\mathsf{sSet}\\ U&\longmapsto\,\operatorname{\mathbb{R}Hom}_{\mathsf{dFMfd}}(U,\iota^{\mathsf{Mfd}}(M)),\end{split} \tag{3.4.5}\]
where \(\iota^{\mathsf{Mfd}}:\mathsf{Mfd}\hookrightarrow\mathsf{dFMfd}\) is the natural embedding of smooth manifolds into formal derived smooth manifolds.

The following is the first non-obvious class of examples which we can study in the context of formal derived smooth sets.

**Example 3.32** (Formal derived mapping space).: A more interesting class of examples of formal derived smooth sets is provided by mapping spaces. Let \(M,N\in\mathsf{Mfd}\) be a pair of ordinary smooth manifolds. We can define a formal derived smooth set \([iM,iN]\in\mathbf{dFSmoothSet}\) by
\[\begin{split}[iM,iN]\,:\,\mathsf{dFMfd}^{\mathrm{op}}&\longrightarrow\,\mathsf{sSet}\\ U&\longmapsto\,\operatorname{\mathbb{R}Hom}_{\mathsf{dFMfd}}(U\times\iota^{\mathsf{Mfd}}(M),\,\iota^{\mathsf{Mfd}}(N)),\end{split} \tag{3.4.6}\]
functorially on elements \(U\in\mathsf{dFMfd}\) of the site. This is the natural derived enhancement of the ordinary mapping space of two ordinary smooth manifolds.
To see that this is indeed a formal derived smooth set, it is enough to notice that we have the equivalences of simplicial sets \([iM,iN](*)\simeq\operatorname{\mathbb{R}Hom}_{\mathsf{dFMfd}}(\iota^{\mathsf{Mfd}}(M),\,\iota^{\mathsf{Mfd}}(N))\simeq\operatorname{Hom}_{\mathsf{Mfd}}(M,N)\). Let, more generally, \(M,N\in\mathsf{dFMfd}\) be a pair of formal derived smooth manifolds. Then we can construct their formal derived mapping stack \([M,N]\,:\,U\mapsto\,\operatorname{\mathbb{R}Hom}_{\mathsf{dFMfd}}(U\times M,N)\). However, notice that this is in general not a formal derived smooth set, contrary to what one may have expected. To see this, one can pick \(U=*\), so that \([M,N](*)\simeq\operatorname{\mathbb{R}Hom}_{\mathsf{dFMfd}}(M,N)\) is generally not a constant simplicial set.

#### Derived affine \(\mathcal{C}^{\infty}\)-schemes

We will now introduce a fundamental and very concrete class of examples of formal derived smooth sets: derived affine \(\mathcal{C}^{\infty}\)-schemes. These geometric objects are defined similarly to the derived affine schemes of derived algebraic geometry, but, instead of corresponding to derived commutative algebras, they correspond to homotopy \(\mathcal{C}^{\infty}\)-algebras.

**Remark 3.33** (Searching for formal derived smooth pro-manifolds).: The ind-category of an \((\infty,1)\)-category \(\mathbf{C}\) is defined by \(\operatorname{Ind}(\mathbf{C})\simeq[\mathbf{C}^{\mathrm{op}},\infty\mathbf{Grpd}]_{\mathrm{acc},\mathrm{lex}}\), where we denote by \([-,-]_{\mathrm{acc},\mathrm{lex}}\) the \((\infty,1)\)-category of functors which are accessible and left-exact (see for instance [13, Section 5.3]). In the case of formal derived smooth manifolds, [12, Theorem 3.10] tells us that the \((\infty,1)\)-category \(\mathbf{sC}^{\infty}\mathbf{Alg}\) of homotopy \(\mathcal{C}^{\infty}\)-algebras is compactly generated and, in particular, that there is an equivalence
\[\operatorname{Ind}\bigl(\mathbf{sC}^{\infty}\mathbf{Alg}_{\mathrm{fp}}\bigr)\;\simeq\;\mathbf{sC}^{\infty}\mathbf{Alg} \tag{3.4.7}\]
between the ind-\((\infty,1)\)-category of finitely presented homotopy \(\mathcal{C}^{\infty}\)-algebras and the \((\infty,1)\)-category of homotopy \(\mathcal{C}^{\infty}\)-algebras. The pro-\((\infty,1)\)-category \(\operatorname{Pro}(\mathbf{C})\) of any given \((\infty,1)\)-category \(\mathbf{C}\) is defined by the equivalence \(\operatorname{Pro}(\mathbf{C})\simeq\operatorname{Ind}(\mathbf{C}^{\mathrm{op}})^{\mathrm{op}}\). Thus, from the equivalence (3.4.7), we immediately obtain the following equivalences:
\[\operatorname{Pro}(\mathbf{dMfd})\;\simeq\;\operatorname{Ind}\bigl(\mathbf{sC}^{\infty}\mathbf{Alg}_{\mathrm{fp}}\bigr)^{\mathrm{op}}\;\simeq\;\mathbf{sC}^{\infty}\mathbf{Alg}^{\mathrm{op}}, \tag{3.4.8}\]
where \(\mathbf{dMfd}\simeq\mathbf{sC}^{\infty}\mathbf{Alg}_{\mathrm{fp}}^{\mathrm{op}}\) is the \((\infty,1)\)-category of derived manifolds in the sense of [10]. Thus, there is a natural notion of pro-object in the \((\infty,1)\)-category of derived manifolds, which can be seen as the opposite of a general homotopy \(\mathcal{C}^{\infty}\)-algebra. This provides a motivation for the definition of derived affine \(\mathcal{C}^{\infty}\)-schemes: they can be seen as derived pro-manifolds.
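For instance (a simple illustrative example, not drawn from the references above), the tower of projections \(\cdots\to\mathbb{R}^{3}\to\mathbb{R}^{2}\to\mathbb{R}\) defines a pro-object \(\mathbb{R}^{\infty}\coloneqq\varprojlim_{n}\mathbb{R}^{n}\) of \(\mathbf{dMfd}\) which, under the equivalence (3.4.8), corresponds to the filtered colimit of homotopy \(\mathcal{C}^{\infty}\)-algebras
\[\mathcal{O}(\mathbb{R}^{\infty})\;\coloneqq\;\varinjlim_{n}\,\mathcal{C}^{\infty}(\mathbb{R}^{n})\;\in\;\mathbf{sC}^{\infty}\mathbf{Alg}.\]
This homotopy \(\mathcal{C}^{\infty}\)-algebra is not finitely presented, so \(\mathbb{R}^{\infty}\) is not itself a derived manifold; it is nonetheless an example of the derived affine \(\mathcal{C}^{\infty}\)-schemes defined right below.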
**Definition 3.34** (Derived affine \(\mathcal{C}^{\infty}\)-scheme).: We define the \((\infty,1)\)-category of _derived affine \(\mathcal{C}^{\infty}\)-schemes_ as the opposite \((\infty,1)\)-category of homotopy \(\mathcal{C}^{\infty}\)-algebras, i.e. by
\[\mathbf{dC}^{\infty}\mathbf{Aff}\;\coloneqq\;\mathbf{sC}^{\infty}\mathbf{Alg}^{\mathrm{op}}. \tag{3.4.9}\]
An alternative nomenclature for such spaces would be _derived pro-manifolds_, in the light of the discussion in Remark 3.33 above.

**Lemma 3.35** (Derived affine \(C^{\infty}\)-schemes are formal derived smooth stacks).: There is a natural embedding of derived affine \(C^{\infty}\)-schemes into formal derived smooth stacks. If we denote by \(\operatorname{RSpec}(A)\in\mathbf{dC}^{\infty}\mathbf{Aff}\) the derived affine \(\mathcal{C}^{\infty}\)-scheme whose homotopy \(\mathcal{C}^{\infty}\)-algebra is \(A\in\mathbf{sC}^{\infty}\mathbf{Alg}\), its embedding into formal derived smooth stacks is given by
\[\begin{split}\operatorname{RSpec}(A)\,:\,\mathsf{dFMfd}^{\mathrm{op}}&\;\longrightarrow\;\mathsf{sSet}\\ U&\;\longmapsto\;\mathrm{RHom}_{\mathbf{sC}^{\infty}\mathbf{Alg}}(A,\,\mathcal{O}(U)). \end{split} \tag{3.4.10}\]

Proof.: Recall that we have an embedding \(\mathsf{dFMfd}\simeq\mathbf{sC}^{\infty}\mathbf{Alg}_{\mathrm{afp}}^{\mathrm{op}}\hookrightarrow\mathbf{sC}^{\infty}\mathbf{Alg}^{\mathrm{op}}\). We can construct a functor \(\mathcal{O}:\mathbf{dFSmoothStack}\longrightarrow\mathbf{sC}^{\infty}\mathbf{Alg}^{\mathrm{op}}\) by Yoneda extension of such an embedding. More concretely, we can write any formal derived smooth stack \(X\in\mathbf{dFSmoothStack}\) as a colimit of representables and construct the limit of homotopy \(\mathcal{C}^{\infty}\)-algebras
\[\mathcal{O}(X)\;\simeq\;\operatorname*{\mathbb{R}lim}_{U\to X}\;\mathcal{O}(U)\quad\text{for}\quad X\;\simeq\;\operatorname*{\mathbb{L}colim}_{U\to X}\;U, \tag{3.4.11}\]
where \(\mathcal{O}(U)\) is the usual homotopically finitely presented \(\mathcal{C}^{\infty}\)-algebra of functions on the formal derived smooth manifold \(U\). Since limits become colimits in the opposite category, by construction the \((\infty,1)\)-functor \(\mathcal{O}\) preserves colimits. Notice that both \(\mathbf{dFSmoothStack}\) and \(\mathbf{sC}^{\infty}\mathbf{Alg}\) are presentable \((\infty,1)\)-categories, the former since it is an \((\infty,1)\)-topos and the latter by [10, Proposition 3.6]. Therefore, by the adjoint \((\infty,1)\)-functor theorem, the \((\infty,1)\)-functor \(\mathcal{O}\) has a right adjoint \(\operatorname{RSpec}:\mathbf{sC}^{\infty}\mathbf{Alg}^{\mathrm{op}}\longrightarrow\mathbf{dFSmoothStack}\).
In fact, for any \(X\in\mathbf{dFSmoothStack}\) and \(A\in\mathbf{sC}^{\infty}\mathbf{Alg}\) we have the following chain of equivalences:
\[\begin{split}\mathrm{RHom}(X,\operatorname{RSpec}A)&\;\simeq\;\mathrm{RHom}\bigl(\operatorname*{\mathbb{L}colim}_{U\to X}U,\,\operatorname{RSpec}A\bigr)\\ &\;\simeq\;\operatorname*{\mathbb{R}lim}_{U\to X}\,\mathrm{RHom}(U,\,\operatorname{RSpec}A)\\ &\;\simeq\;\operatorname*{\mathbb{R}lim}_{U\to X}\,\mathrm{RHom}_{\mathbf{sC}^{\infty}\mathbf{Alg}}(A,\,\mathcal{O}(U))\\ &\;\simeq\;\mathrm{RHom}_{\mathbf{sC}^{\infty}\mathbf{Alg}}\bigl(A,\,\operatorname*{\mathbb{R}lim}_{U\to X}\mathcal{O}(U)\bigr)\\ &\;\simeq\;\mathrm{RHom}_{\mathbf{sC}^{\infty}\mathbf{Alg}^{\mathrm{op}}}(\mathcal{O}(X),\,A). \end{split} \tag{3.4.12}\]
Now, a necessary and sufficient condition for \(\operatorname{RSpec}\) to be a fully faithful \((\infty,1)\)-functor is that the counit is an equivalence, which means that the morphism \(\mathcal{O}(\operatorname{RSpec}A)\,\xrightarrow{\;\simeq\;}\,A\) must be an equivalence for any homotopy \(\mathcal{C}^{\infty}\)-algebra \(A\). Notice that we have the equivalences
\[\mathcal{O}(\operatorname{RSpec}A)\;\simeq\;\operatorname*{\mathbb{R}lim}_{U\to\operatorname{RSpec}A}\mathcal{O}(U)\;\simeq\;\operatorname*{\mathbb{R}lim}_{A\to\mathcal{O}(U)}\mathcal{O}(U)\;\simeq\;A, \tag{3.4.13}\]
where in the second equivalence we used the fact that \(\operatorname{RSpec}\) is the right adjoint to \(\mathcal{O}\). This shows that the \((\infty,1)\)-functor \(\operatorname{RSpec}\) is indeed fully faithful.

The relevance of derived affine \(\mathcal{C}^{\infty}\)-schemes will mostly be a consequence of the fact that they constitute a particularly tractable class of formal derived smooth sets which generalise formal derived smooth manifolds.

**Remark 3.36** (Formal derived smooth manifolds are derived affine \(C^{\infty}\)-schemes).: There is a natural (coreflective) embedding \(\mathbf{dFMfd}\simeq\mathbf{sC}^{\infty}\mathbf{Alg}^{\mathrm{op}}_{\mathrm{afp}}\hookrightarrow\mathbf{sC}^{\infty}\mathbf{Alg}^{\mathrm{op}}\), since any derived smooth manifold \(M\) is immediately equivalent to the spectrum of its homotopy \(\mathcal{C}^{\infty}\)-algebra of functions, i.e. \(M\simeq\operatorname{RSpec}\mathcal{O}(M)\). This embedding allows us to naturally embed formal derived smooth manifolds into derived affine \(\mathcal{C}^{\infty}\)-schemes. Thus, by combining this fact with Lemma 3.35, we obtain the following inclusions of \((\infty,1)\)-categories:
\[\mathbf{dFMfd}\;\hookrightarrow\;\mathbf{dC}^{\infty}\mathbf{Aff}\;\hookrightarrow\;\mathbf{dFSmoothSet}\;\hookrightarrow\;\mathbf{dFSmoothStack}, \tag{3.4.14}\]
where, as before, \(\mathbf{dFMfd}\) is the \((\infty,1)\)-category of formal derived smooth manifolds, \(\mathbf{dC}^{\infty}\mathbf{Aff}\) is the \((\infty,1)\)-category of derived affine \(\mathcal{C}^{\infty}\)-schemes and \(\mathbf{dFSmoothSet}\) is the \((\infty,1)\)-category of formal derived smooth sets. By the construction above, the \((\infty,1)\)-functor \(\operatorname{RSpec}:\mathbf{sC}^{\infty}\mathbf{Alg}^{\mathrm{op}}\longrightarrow\mathbf{dFSmoothStack}\) preserves limits. Thus, we have the following corollary.

**Corollary 3.37** (Pullbacks of derived affine \(\mathcal{C}^{\infty}\)-schemes).: We have the following equivalence of formal derived smooth stacks:
\[\operatorname{RSpec}A\times^{h}_{\operatorname{RSpec}C}\operatorname{RSpec}B\;\simeq\;\operatorname{RSpec}\bigl(A\,\widehat{\otimes}^{\mathbb{L}}_{C}\,B\bigr), \tag{3.4.15}\]
for any given homotopy \(\mathcal{C}^{\infty}\)-algebras \(A,B,C\in\mathbf{sC}^{\infty}\mathbf{Alg}\).
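For instance (anticipating Example 3.70 below, and spelled out here only as an illustration of Corollary 3.37), the derived vanishing locus of a smooth function \(f:\mathbb{R}^{n}\to\mathbb{R}^{k}\) is a pullback of derived affine \(\mathcal{C}^{\infty}\)-schemes, so that its homotopy \(\mathcal{C}^{\infty}\)-algebra of functions is computed by a derived tensor product:
\[\operatorname{RSpec}\mathcal{C}^{\infty}(\mathbb{R}^{n})\times^{h}_{\operatorname{RSpec}\mathcal{C}^{\infty}(\mathbb{R}^{k})}\operatorname{RSpec}(\mathbb{R})\;\simeq\;\operatorname{RSpec}\bigl(\mathcal{C}^{\infty}(\mathbb{R}^{n})\,\widehat{\otimes}^{\mathbb{L}}_{\mathcal{C}^{\infty}(\mathbb{R}^{k})}\,\mathbb{R}\bigr),\]
where \(\mathcal{C}^{\infty}(\mathbb{R}^{k})\) maps to \(\mathcal{C}^{\infty}(\mathbb{R}^{n})\) by precomposition with \(f\) and to \(\mathbb{R}\) by evaluation at \(0\in\mathbb{R}^{k}\).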
**Remark 3.38** (Underived-truncation of derived affine \(\mathcal{C}^{\infty}\)-schemes).: Notice that the underived-truncation functor sends a derived affine \(\mathcal{C}^{\infty}\)-scheme \(\operatorname{RSpec}(R)\in\mathbf{dC}^{\infty}\mathbf{Aff}\), corresponding to a simplicial \(\mathcal{C}^{\infty}\)-algebra \(R\in\mathbf{sC}^{\infty}\mathbf{Alg}\), to the ordinary affine \(\mathcal{C}^{\infty}\)-scheme
\[t_{0}\operatorname{RSpec}(R)\,\simeq\,\operatorname{Spec}(\pi_{0}R), \tag{3.4.16}\]
corresponding to the ordinary \(\mathcal{C}^{\infty}\)-algebra \(\pi_{0}R\in\mathsf{C}^{\infty}\mathsf{Alg}\).

**Remark 3.39** (Derived-extension of affine \(\mathcal{C}^{\infty}\)-schemes).: Notice that the derived-extension functor \(i\) sends an ordinary affine \(\mathcal{C}^{\infty}\)-scheme \(\operatorname{Spec}(R)\), corresponding to the ordinary \(\mathcal{C}^{\infty}\)-algebra \(R\in\mathsf{C}^{\infty}\mathsf{Alg}\), to the derived affine \(\mathcal{C}^{\infty}\)-scheme
\[i\operatorname{Spec}(R)\,\simeq\,\operatorname{RSpec}(\iota(R)) \tag{3.4.17}\]
in \(\mathbf{dC}^{\infty}\mathbf{Aff}\), which corresponds to the homotopy \(\mathcal{C}^{\infty}\)-algebra \(\iota(R)\in\mathbf{sC}^{\infty}\mathbf{Alg}\). More generally, these last remarks provide a good intuition for the role played by the underived-truncation and derived-extension of formal derived smooth stacks.

#### Formal derived diffeological spaces

In this subsection we will define and explore the derived version of a diffeological space, which we will call a formal derived diffeological space. Recall from Definition 1.11 that an ordinary diffeological space is a concrete smooth set, i.e. a concrete sheaf on the site of ordinary smooth manifolds.

**Definition 3.40** (Formal derived diffeological space).: The \((\infty,1)\)_-category of formal derived diffeological spaces_ is defined by the pullback of \((\infty,1)\)-categories
\[\mathbf{dFDiffSp}\;\coloneqq\;\mathbf{dFSmoothStack}\times^{h}_{\mathbf{SmoothStack}}\mathbf{N}(\mathrm{DiffSp}). \tag{3.4.18}\]
An element of such an \((\infty,1)\)-category will be called a _formal derived diffeological space_. In other words, we have a pullback diagram
\[\begin{CD} \mathbf{dFDiffSp} @>>> \mathbf{N}(\mathrm{DiffSp})\\ @VVV @VVV\\ \mathbf{dFSmoothStack} @>{t_{0}}>> \mathbf{SmoothStack}, \end{CD} \tag{3.4.19}\]
which, since monomorphisms are stable under pullback by [13, Proposition 6.5.1.16], makes \(\mathbf{dFDiffSp}\hookrightarrow\mathbf{dFSmoothSet}\) a full and faithful reflective sub-\((\infty,1)\)-category.

**Lemma 3.41** (Derived affine \(\mathcal{C}^{\infty}\)-schemes are formal derived diffeological spaces).: The \((\infty,1)\)-category \(\mathbf{dC}^{\infty}\mathbf{Aff}\) of derived affine \(\mathcal{C}^{\infty}\)-schemes is a full and faithful sub-\((\infty,1)\)-category of the \((\infty,1)\)-category \(\mathbf{dFDiffSp}\) of formal derived diffeological spaces.

Proof.: Derived affine \(\mathcal{C}^{\infty}\)-schemes form a full and faithful sub-\((\infty,1)\)-category of formal derived smooth stacks. Therefore, it is enough to show that every object of \(\mathbf{dC}^{\infty}\mathbf{Aff}\) is an object of \(\mathbf{dFDiffSp}\). Consider a derived affine \(\mathcal{C}^{\infty}\)-scheme \(\operatorname{RSpec}(R)\in\mathbf{dC}^{\infty}\mathbf{Aff}\), for any given homotopy \(\mathcal{C}^{\infty}\)-algebra \(R\in\mathbf{sC}^{\infty}\mathbf{Alg}\). Its underived-truncation is, as a sheaf on the site of smooth manifolds, the ordinary \(\mathcal{C}^{\infty}\)-scheme \(t_{0}\operatorname{RSpec}(R)\simeq\operatorname{Spec}(R^{\mathrm{red}})\) with \(R^{\mathrm{red}}=\pi_{0}(R)/\mathfrak{m}_{\pi_{0}(R)}\).
Thus, it is enough to show that \(\operatorname{Spec}(R^{\mathrm{red}})\) is an ordinary diffeological space, i.e. that it is a concrete sheaf on the site of smooth manifolds: namely, that for any ordinary smooth manifold \(U\in\mathsf{Mfd}\) there is an injective map of sets
\[\operatorname{Hom}_{\mathsf{C}^{\infty}\mathsf{Alg}}(R^{\mathrm{red}},\mathcal{C}^{\infty}(U))\;\longrightarrow\;\operatorname{Hom}_{\mathsf{Set}}\bigl(\varGamma(U),\,\varGamma(\operatorname{Spec}R^{\mathrm{red}})\bigr), \tag{3.4.20}\]
where \(\varGamma(\operatorname{Spec}(R^{\mathrm{red}}))=\operatorname{Hom}_{\mathsf{C}^{\infty}\mathsf{Alg}}(R^{\mathrm{red}},\,\mathbb{R})\) is the underlying set of points of the reduced scheme and \(\varGamma(U)=\operatorname{Hom}_{\mathsf{C}^{\infty}\mathsf{Alg}}(\mathcal{C}^{\infty}(U),\mathbb{R})\) is the underlying set of points of the smooth manifold. Such a map is given by sending every element \(f\in\operatorname{Hom}_{\mathsf{C}^{\infty}\mathsf{Alg}}(R^{\mathrm{red}},\,\mathcal{C}^{\infty}(U))\) to the precomposition function \((-)\circ f:\varGamma(U)\to\varGamma(\operatorname{Spec}(R^{\mathrm{red}}))\), which sends points of the smooth manifold \(U\) to their image in the underlying set of points of the smooth set \(\operatorname{Spec}(R^{\mathrm{red}})\). This function is, in fact, injective, since both \(\mathcal{C}^{\infty}(U)\) and \(R^{\mathrm{red}}\) are reduced \(\mathcal{C}^{\infty}\)-algebras.

**Remark 3.42** (Embeddings of \((\infty,1)\)-categories of derived spaces).: To sum up, we have the following full and faithful inclusions of \((\infty,1)\)-categories:
\[\mathbf{dFMfd}\;\hookrightarrow\;\mathbf{dC}^{\infty}\mathbf{Aff}\;\hookrightarrow\;\mathbf{dFDiffSp}\;\hookrightarrow\;\mathbf{dFSmoothSet}\;\hookrightarrow\;\mathbf{dFSmoothStack}. \tag{3.4.21}\]

### Bundles of formal derived smooth stacks

Now, we will introduce the notion of fibre bundle of formal derived smooth stacks. The following two definitions are specific cases of the general definitions appearing in [21, Section 4].

**Definition 3.44** (Fiber bundle).: A _bundle_ is a morphism \(E\xrightarrow{p}X\). A _fiber bundle_ is a morphism \(E\xrightarrow{p}X\) such that there exist an effective epimorphism \(Y\twoheadrightarrow X\) and a formal derived smooth stack \(F\) fitting in a pullback diagram of the form
\[\begin{CD} Y\times F @>>> E\\ @VVV @VV{p}V\\ Y @>>> X \end{CD} \tag{3.5.2}\]
in the \((\infty,1)\)-category \(\mathbf{dFSmoothStack}\) of formal derived smooth stacks. We say that the fiber bundle \(E\to X\) locally trivialises with respect to \(Y\) and we call \(F\) the fiber of the bundle.
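The simplest instance of the definition above (spelled out only for illustration) is the trivial bundle: for any formal derived smooth stacks \(X\) and \(F\), the projection \(\mathrm{pr}_{1}:X\times F\to X\) is a fiber bundle with fiber \(F\), which locally trivialises with respect to the identity \(\mathrm{id}:X\twoheadrightarrow X\), the pullback square (3.5.2) being simply
\[\begin{CD} X\times F @= X\times F\\ @V{\mathrm{pr}_{1}}VV @VV{\mathrm{pr}_{1}}V\\ X @= X. \end{CD}\]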
**Definition 3.45** (\(\infty\)-groupoid of sections).: The \(\infty\)-_groupoid of sections_ of a bundle \(E\xrightarrow{p}X\) is defined as the homotopy fiber
\[\Gamma(X,E)\;\coloneqq\;\mathrm{RHom}(X,E)\times^{h}_{\mathrm{RHom}(X,X)}\{\operatorname{id}_{X}\} \tag{3.5.3}\]
of the \(\infty\)-groupoid of all morphisms \(X\to E\) over those which cover the identity on \(X\).

Notice that, if \(E\to X\) is a fibre bundle of ordinary smooth manifolds, then by Yoneda embedding \(\Gamma(X,E)\) as defined above reduces to the usual set of smooth sections.

**Remark 3.46** (On the slice category).: Notice that the \(\infty\)-groupoid of sections of a bundle \(E\xrightarrow{p}X\) can be equivalently expressed as the \(\infty\)-groupoid
\[\Gamma(X,E)\;\simeq\;\mathrm{RHom}_{/X}(\operatorname{id}_{X},p), \tag{3.5.4}\]
where \(\mathrm{RHom}_{/X}(-,-)\) is the hom-\(\infty\)-groupoid of the slice \((\infty,1)\)-category \(\mathbf{dFSmoothStack}_{/X}\).

### Derived de Rham cohomology

In this section we will define a notion of quasi-coherent \((\infty,1)\)-sheaves of modules on formal derived smooth stacks. In particular, we will introduce the notions of tangent and cotangent complex of a formal derived smooth stack, which will be instrumental to the construction of derived differential forms. Moreover, this discussion will be a crucial premise for [1], in preparation.

#### Quasi-coherent \((\infty,1)\)-sheaves of modules

Our strategy in this subsection will be to use the notion of homotopy \(\mathcal{C}^{\infty}\)-algebra \(\mathcal{O}(X)\) of functions on a formal derived smooth stack \(X\) to construct the \((\infty,1)\)-category of quasi-coherent sheaves of modules \(\operatorname{QCoh}(X)\) on \(X\). First, recall that the definition of a module for a homotopy \(\mathcal{C}^{\infty}\)-algebra appears in [19] and is exactly the following.

**Definition 3.47** (Module for a homotopy \(\mathcal{C}^{\infty}\)-algebra).: A _module for a homotopy \(\mathcal{C}^{\infty}\)-algebra_ \(R\in\mathbf{sC}^{\infty}\mathbf{Alg}\) is a module for the underlying derived commutative algebra \(R^{\operatorname{alg}}\in\mathbf{scAlg}_{\mathbb{R}}\). Here \(\mathbf{scAlg}_{\mathbb{R}}\) is the \((\infty,1)\)-category of derived commutative \(\mathbb{R}\)-algebras, i.e. simplicial commutative \(\mathbb{R}\)-algebras with the classical simplicial algebra model structure.

In the following, let \(\mathbf{(\infty,1)Cat}\) be the \((\infty,1)\)-category of \((\infty,1)\)-categories. For any given simplicial commutative \(\mathbb{R}\)-algebra \(A\in\mathsf{scAlg}_{\mathbb{R}}\), let \(\mathrm{N}A\in\mathsf{dgAlg}_{\mathbb{R}}\) be the dg-commutative algebra given by the normalised chain complex functor \(\mathrm{N}:\mathsf{scAlg}_{\mathbb{R}}\longrightarrow\mathsf{dgAlg}_{\mathbb{R}}\) and let \(\mathrm{N}A\text{-}\mathsf{Mod}\) be the category of \(\mathrm{N}A\)-dg-modules, which is naturally simplicially-enriched. Moreover, let \(\mathsf{W}_{\mathrm{qi}}\) be the set of quasi-isomorphisms in the category \(\mathrm{N}A\text{-}\mathsf{Mod}\).
Thus we can define the \((\infty,1)\)-functor
\[\begin{split}\operatorname{QCoh}\,:\,\mathsf{dFMfd}&\longrightarrow\mathbf{(\infty,1)Cat}\\ M&\longmapsto L_{\mathsf{W}_{\mathrm{qi}}}\bigl(\mathrm{N}\mathcal{O}(M)^{\mathrm{alg}}\text{-}\mathsf{Mod}\bigr),\end{split} \tag{3.6.1}\]
which sends any formal derived smooth manifold \(M\in\mathsf{dFMfd}\) to the \((\infty,1)\)-category obtained by simplicial localisation of the simplicial category of dg-modules over the underlying algebra of its homotopy \(\mathcal{C}^{\infty}\)-algebra of functions. Let us now provide a definition of quasi-coherent sheaves on a general formal derived smooth stack. First, we must recall that any stack can be canonically written as a colimit of representables (see for instance [10]) by
\[X\;\simeq\;\operatorname*{\mathbb{L}colim}_{U\to X}U. \tag{3.6.2}\]

**Definition 3.48** (Quasi-coherent sheaves of modules).: Given any formal derived smooth stack \(X\in\mathbf{dFSmoothStack}\), the \((\infty,1)\)-category of _quasi-coherent \((\infty,1)\)-sheaves_ on \(X\) is given by the homotopy limit
\[\operatorname{QCoh}(X)\;\simeq\;\operatorname*{\mathbb{R}lim}_{U\to X}\operatorname{QCoh}(U)\quad\in\mathbf{(\infty,1)Cat}, \tag{3.6.3}\]
where \(U\in\mathsf{dFMfd}\) runs over all formal derived smooth manifolds.

**Definition 3.49** (Complex of sections of a quasi-coherent \((\infty,1)\)-sheaf).: The _dg-vector space of global sections of a quasi-coherent \((\infty,1)\)-sheaf of modules \(\mathbb{M}_{X}\in\operatorname{QCoh}(X)\)_ is given by the functor
\[\begin{split}\mathrm{R}\Gamma(X,-)\,:\,\operatorname{QCoh}(X)&\longrightarrow\,\mathsf{dgVec}_{\mathbb{R}}\\ \mathbb{M}_{X}&\longmapsto\,\mathrm{R}\Gamma(X,\mathbb{M}_{X}),\end{split} \tag{3.6.4}\]
which is defined as the base-change morphism \(\operatorname{QCoh}(X)\to\operatorname{QCoh}(\mathbb{R}^{0})\simeq\mathsf{dgVec}_{\mathbb{R}}\) along the unique terminal morphism \(X\to\mathbb{R}^{0}\) to the point, where \(\mathsf{dgVec}_{\mathbb{R}}\) is the \((\infty,1)\)-category of dg-vector spaces.

**Definition 3.50** (Quasi-coherent sheaf cohomology).: We define the _quasi-coherent \((\infty,1)\)-sheaf cohomology_ \(\mathrm{H}^{n}(X,\mathbb{M}_{X})\) of any \(\mathbb{M}_{X}\in\operatorname{QCoh}(X)\) on a given formal derived smooth stack \(X\in\mathbf{dFSmoothStack}\) by the cohomology of the dg-vector space of its sections, i.e. by
\[\mathrm{H}^{n}(X,\mathbb{M}_{X})\;\coloneqq\;\mathrm{H}^{n}\bigl(\mathrm{R}\Gamma(X,\mathbb{M}_{X})\bigr). \tag{3.6.5}\]
Recall that the Dold-Kan correspondence provides a Quillen equivalence between simplicial \(\mathbb{R}\)-vector spaces and non-negatively graded chain complexes of \(\mathbb{R}\)-vector spaces. We denote by \(|-|:\mathsf{dgVec}_{\mathbb{R}}\to\mathsf{sSet}\) the induced functor, which applies the Dold-Kan construction to the non-positive truncation of a dg-vector space and takes the underlying simplicial set. In particular, for any quasi-coherent \((\infty,1)\)-sheaf \(\mathbb{M}_{X}\in\operatorname{QCoh}(X)\) we obtain an \(\infty\)-groupoid of \(n\)-shifted sections
\[\bigl|\mathrm{R}\Gamma(X,\mathbb{M}_{X})[n]\bigr|\;\in\;\mathsf{sSet}, \tag{3.6.6}\]
which will be used repeatedly in the rest of this section.

**Example 3.52** (Structure sheaf).: Given a formal derived smooth stack \(X\in\mathbf{dFSmoothStack}\), its _structure sheaf_ \(\mathbb{O}_{X}\in\operatorname{QCoh}(X)\) is defined by the homotopy limit
\[\mathbb{O}_{X}\;\coloneqq\;\operatorname*{\mathbb{R}lim}_{U\to X}\operatorname{N}\mathcal{O}(U)^{\operatorname{alg}}, \tag{3.6.7}\]
where, clearly, \(\operatorname{N}\mathcal{O}(U)^{\operatorname{alg}}\) is an object of \(\operatorname{N}\mathcal{O}(U)^{\operatorname{alg}}\text{-}\mathsf{Mod}\).

In analogy with [10], we want to define a cotangent complex for formal derived smooth stacks which is compatible with their smooth structure. In fact, even if in our definition a module for a \(\mathcal{C}^{\infty}\)-algebra is just a module for the underlying \(\mathbb{R}\)-algebra, we will introduce a cotangent module whose definition relies non-trivially on the smooth structure of \(\mathcal{C}^{\infty}\)-algebras. We remark that, in the spirit of [10], such a cotangent module is not the one given by the usual Kähler differentials of algebraic geometry.

**Definition 3.53** (Cotangent module of a formal derived smooth manifold).: Let \(U\in\mathsf{dFMfd}\) be a formal derived smooth manifold. The _cotangent module_ \(\Omega^{1}_{\mathcal{O}(U)}\in\mathcal{O}(U)\text{-}\mathsf{Mod}\) is defined as the \(\operatorname{N}\mathcal{O}(U)^{\operatorname{alg}}\)-dg-module generated by elements of the form \(\operatorname{d}_{\operatorname{dR}}f\), where \(f\in\operatorname{N}\mathcal{O}(U)^{\operatorname{alg}}\) is any homogeneous element, such that the following conditions hold:

1. the degree of \(\operatorname{d}_{\operatorname{dR}}f\) is the same as the degree of \(f\);
2. Leibniz's rule holds, i.e. \(\operatorname{d}_{\operatorname{dR}}(f_{1}f_{2})=(\operatorname{d}_{\operatorname{dR}}f_{1})f_{2}+(-1)^{|f_{1}|}f_{1}(\operatorname{d}_{\operatorname{dR}}f_{2})\);
3. for any \(f_{1},\dots,f_{n}\in\operatorname{N}\mathcal{O}(U)^{\operatorname{alg}}\) and any smooth map \(\phi:\mathbb{R}^{n}\to\mathbb{R}\), we have
\[\operatorname{d}_{\operatorname{dR}}\bigl(\operatorname{N}\mathcal{O}(U,\phi)(f_{1},\dots,f_{n})\bigr)\,=\,\sum_{i=1}^{n}\operatorname{N}\mathcal{O}\Bigl(U,\frac{\partial\phi}{\partial x^{i}}\Bigr)(f_{1},\dots,f_{n})\cdot\operatorname{d}_{\operatorname{dR}}f_{i}, \tag{3.6.8}\]
where \(\mathcal{O}(U,\phi):\mathcal{O}(U,\mathbb{R})^{n}\to\mathcal{O}(U,\mathbb{R})\) is the image of the smooth map \(\phi\) under the \(\mathcal{C}^{\infty}\)-algebra structure of \(\mathcal{O}(U)\).
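For instance (the classical motivating example, recalled here only as a consistency check), take \(U=\mathbb{R}\), so that \(\operatorname{N}\mathcal{O}(U)^{\operatorname{alg}}=\mathcal{C}^{\infty}(\mathbb{R})\) is concentrated in degree \(0\). Condition 3 applied to the smooth map \(\phi=\exp:\mathbb{R}\to\mathbb{R}\) and to the coordinate function \(x\) forces the \(\mathcal{C}^{\infty}\)-chain rule
\[\operatorname{d}_{\operatorname{dR}}(e^{x})\,=\,e^{x}\,\operatorname{d}_{\operatorname{dR}}x,\]
an identity which fails for the module of purely algebraic Kähler differentials of \(\mathcal{C}^{\infty}(\mathbb{R})\) regarded as a bare commutative \(\mathbb{R}\)-algebra. This is precisely the sense in which the cotangent module depends non-trivially on the smooth structure, and not only on the underlying \(\mathbb{R}\)-algebra.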
By following [1], we can define the cotangent complex \(\mathbb{L}_{M}\in\operatorname{QCoh}(M)\) of a formal derived smooth manifold \(M\in\mathsf{dFMfd}\) by deriving the functor on the slice category \(\mathbf{sC}^{\infty}\mathbf{Alg}_{\mathcal{O}(M)}\)
\[\Omega^{1}_{(-)}\widehat{\otimes}_{(-)}\mathcal{O}(M)\,:\,U\,\longmapsto\,\Omega^{1}_{\mathcal{O}(U)}\,\widehat{\otimes}_{\mathcal{O}(U)}\,\mathcal{O}(M), \tag{3.6.9}\]
where \(\widehat{\otimes}\) is the \(\mathcal{C}^{\infty}\)-tensor product of homotopy \(\mathcal{C}^{\infty}\)-algebras, and evaluating it at \(M\). More precisely, we can define the cotangent complex \(\mathbb{L}_{M}\coloneqq\mathds{L}\bigl(\Omega^{1}_{(-)}\widehat{\otimes}_{(-)}\mathcal{O}(M)\bigr)(M)\). In other words, we have \(\mathbb{L}_{M}=\Omega^{1}_{Q\mathcal{O}(M)}\widehat{\otimes}_{Q\mathcal{O}(M)}\mathcal{O}(M)\), where \(Q\mathcal{O}(M)\) is a cofibrant replacement of the original homotopy \(\mathcal{C}^{\infty}\)-algebra \(\mathcal{O}(M)\).

**Definition 3.54** (Cotangent complex).: The _cotangent complex_ \(\mathbb{L}_{X}\in\operatorname{QCoh}(X)\) of a formal derived smooth stack \(X\in\mathbf{dFSmoothStack}\) is defined by the homotopy limit
\[\mathbb{L}_{X}\;\coloneqq\;\operatorname*{\mathbb{R}lim}_{U\to X}\mathbb{L}_{U}, \tag{3.6.10}\]
where \(\mathbb{L}_{U}\) is the cotangent complex of the formal derived smooth manifold \(U\in\mathsf{dFMfd}\) which we introduced right above.

**Definition 3.55** (Relative cotangent complex).: Any morphism \(f:X\to Y\) of stacks induces a morphism \(f_{!}:f^{*}\mathbb{L}_{Y}\to\mathbb{L}_{X}\) of quasi-coherent \((\infty,1)\)-sheaves. The _relative cotangent complex_ \(\mathbb{L}_{f}\in\operatorname{QCoh}(X)\) is defined as the homotopy cofibre of such a map, i.e. by the homotopy pushout
\[\begin{CD} f^{*}\mathbb{L}_{Y} @>{f_{!}}>> \mathbb{L}_{X}\\ @VVV @VVV\\ 0 @>>> \mathbb{L}_{f}. \end{CD} \tag{3.6.11}\]

**Definition 3.56** (Tangent complex).: Whenever the cotangent complex \(\mathbb{L}_{X}\) of a formal derived smooth stack \(X\in\mathbf{dFSmoothStack}\) is a perfect complex, we can define the _tangent complex_ of \(X\) by
\[\mathbb{T}_{X}\,\coloneqq\,\mathbb{L}_{X}^{\vee}, \tag{3.6.12}\]
where \(\mathbb{L}_{X}^{\vee}\coloneqq[\mathbb{L}_{X},\mathbb{O}_{X}]\in\operatorname{QCoh}(X)\) is the dual quasi-coherent sheaf of the cotangent complex. The \(\infty\)-groupoid of \(n\)-shifted vector fields on \(X\in\mathbf{dFSmoothStack}\) is given by the \(\infty\)-groupoid of \(n\)-shifted sections of \(\mathbb{T}_{X}\), i.e. by \(\mathfrak{X}(X,n)\,\coloneqq\,\bigl|\mathrm{R}\Gamma(X,\mathbb{T}_{X})[n]\bigr|\).

#### Derived de Rham algebra

In this subsection we will provide a definition of differential forms on a formal derived smooth stack. By using the fact that a module for a homotopy \(\mathcal{C}^{\infty}\)-algebra is defined as a module for the underlying derived commutative algebra, we will translate the formulation of [11, 12] into our framework. Moreover, we will introduce the notion of the formal derived smooth stack of differential forms.

**Definition 3.57** (Complex of \(p\)-forms).: We define the _complex of \(p\)-forms_ on a formal derived smooth stack \(X\in\mathbf{dFSmoothStack}\) by the dg-vector space of sections
\[\mathrm{A}^{p}(X)\,\coloneqq\,\mathrm{R}\Gamma(X,\wedge_{\mathbb{O}_{X}}^{p}\mathbb{L}_{X}). \tag{3.6.13}\]
We denote by \(\mathrm{A}^{p}(X)_{n}\) the degree-\(n\) component (\(n\in\mathbb{Z}\)) of the dg-vector space \(\mathrm{A}^{p}(X)\) and by \(Q:\mathrm{A}^{p}(X)_{n}\to\mathrm{A}^{p}(X)_{n+1}\) its differential.
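As a consistency check (a sketch, under the natural assumption that for an ordinary smooth manifold \(M\) the cotangent complex of \(i(M)\) is the ordinary cotangent bundle \(\Omega^{1}_{M}\) concentrated in degree \(0\), since \(M\) has neither derived nor formal directions), the complex of \(p\)-forms reduces to ordinary \(p\)-forms placed in degree \(0\):
\[\mathrm{A}^{p}(iM)\;\simeq\;\Omega^{p}(M)[0],\qquad\text{so that}\qquad\mathrm{H}^{n}\bigl(\mathrm{A}^{p}(iM)\bigr)\;\simeq\;\begin{cases}\Omega^{p}(M)&\text{if }n=0,\\ 0&\text{otherwise.}\end{cases}\]
In this sense the definitions of this subsection recover the ordinary de Rham theory of smooth manifolds in the underived case.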
**Remark 3.58** (Homotopy between \(p\)-forms).: A homotopy from an element \(\alpha\) to an element \(\beta\) of \(\mathrm{A}^{p}(X)_{n}\) is given by an element \(\gamma\in\mathrm{A}^{p}(X)_{n-1}\) such that
\[\beta-\alpha\,=\,Q\gamma. \tag{3.6.14}\]

**Definition 3.59** (\(n\)-degree differential \(p\)-form).: An _\(n\)-degree differential \(p\)-form_ on a formal derived smooth stack \(X\in\mathbf{dFSmoothStack}\) is defined as a cohomology class in
\[\Omega^{p}(X)_{n}\,\coloneqq\,\mathrm{H}^{n}(\mathrm{A}^{p}(X)). \tag{3.6.15}\]

Notice that, in general, we obtain a bi-complex \(\mathrm{A}^{p}(X)_{n}\), with \((p,n)\in\mathbb{N}\times\mathbb{Z}\), of the form
\[\begin{CD} \vdots @. \vdots @. \vdots\\ @V{Q}VV @V{Q}VV @V{Q}VV\\ \mathrm{A}^{0}(X)_{-2} @>{\mathrm{d}_{\mathrm{dR}}}>> \mathrm{A}^{1}(X)_{-2} @>{\mathrm{d}_{\mathrm{dR}}}>> \mathrm{A}^{2}(X)_{-2} @>{\mathrm{d}_{\mathrm{dR}}}>> \cdots\\ @V{Q}VV @V{Q}VV @V{Q}VV\\ \mathrm{A}^{0}(X)_{-1} @>{\mathrm{d}_{\mathrm{dR}}}>> \mathrm{A}^{1}(X)_{-1} @>{\mathrm{d}_{\mathrm{dR}}}>> \mathrm{A}^{2}(X)_{-1} @>{\mathrm{d}_{\mathrm{dR}}}>> \cdots\\ @V{Q}VV @V{Q}VV @V{Q}VV\\ \mathrm{A}^{0}(X)_{0} @>{\mathrm{d}_{\mathrm{dR}}}>> \mathrm{A}^{1}(X)_{0} @>{\mathrm{d}_{\mathrm{dR}}}>> \mathrm{A}^{2}(X)_{0} @>{\mathrm{d}_{\mathrm{dR}}}>> \cdots\\ @V{Q}VV @V{Q}VV @V{Q}VV\\ \vdots @. \vdots @. \vdots \end{CD} \tag{3.6.16}\]
where the following relations between the de Rham and internal differentials are satisfied:
\[\mathrm{d}^{2}_{\mathrm{dR}}\;=\;Q^{2}\;=\;\mathrm{d}_{\mathrm{dR}}\circ Q+Q\circ\mathrm{d}_{\mathrm{dR}}\;=\;0. \tag{3.6.17}\]
We will now introduce the technology which will allow us to deal with closed differential forms on formal derived smooth stacks.

**Definition 3.60** (Total de Rham dg-algebra).: The _total de Rham algebra_ is the dg-algebra whose underlying dg-vector space is defined by the totalisation
\[\mathrm{DR}(X)\;\coloneqq\;\prod_{n\in\mathbb{N}}\mathrm{A}^{n}(X)[-n], \tag{3.6.18}\]
with total differential \(\mathrm{d}_{\mathrm{dR}}+Q\), where \(\mathrm{d}_{\mathrm{dR}}\) is the de Rham differential and \(Q\) is the internal differential of each dg-vector space \(\mathrm{A}^{p}(X)\).

**Definition 3.61** (Complex of closed \(p\)-forms).: Consider the following filtration of the total de Rham algebra:
\[F^{p}\mathrm{DR}(X)\;=\;\prod_{n\geq p}\mathrm{A}^{n}(X)[-n]\;\subset\;\mathrm{DR}(X). \tag{3.6.19}\]
The _complex of closed \(p\)-forms_ is defined for any \(p\in\mathbb{N}\) by the following dg-vector space:
\[\mathrm{A}^{p}_{\mathrm{cl}}(X)\;\coloneqq\;F^{p}\mathrm{DR}(X)[p]. \tag{3.6.20}\]

**Remark 3.62** (Homotopy between closed \(p\)-forms).: A homotopy from an element \((\alpha_{i})\) to an element \((\beta_{i})\) in \(\mathrm{A}^{p}_{\mathrm{cl}}(X)_{n}\) is given by an element \((\gamma_{i})\in\mathrm{A}^{p}_{\mathrm{cl}}(X)_{n-1}\) such that
\[\beta_{i}-\alpha_{i}\,=\,\mathrm{d}_{\mathrm{dR}}\gamma_{i-1}+Q\gamma_{i}. \tag{3.6.21}\]

**Definition 3.63** (Closed form).: An _\(n\)-shifted closed \(p\)-form_ on a formal derived smooth stack \(X\) is defined as an \(n\)-cocycle \((\omega_{i})\in\mathrm{Z}^{n}\mathrm{A}^{p}_{\mathrm{cl}}(X)\) of the dg-vector space of closed \(p\)-forms on \(X\), i.e. as an element \((\omega_{i})\in\mathrm{A}^{p}_{\mathrm{cl}}(X)\) such that \((\mathrm{d}_{\mathrm{dR}}+Q)(\omega_{i})=0\).
In other words, an \(n\)-cocycle in \(\mathrm{A}^{p}_{\mathrm{cl}}(X)\) is given by a formal sum \((\omega_{i})=(\omega_{p}+\omega_{p+1}+\dots)\), where each form \(\omega_{i}\in\mathrm{A}^{i}(X)\) is an element of degree \(n+p-i\), satisfying the equations
\[\begin{split} Q\omega_{p}&\,=\,0,\\ \mathrm{d}_{\mathrm{dR}}\omega_{i}+Q\omega_{i+1}&\,=\,0,\end{split} \tag{3.6.22}\]
for every \(i\geq p\). This embodies the idea that the underlying \(p\)-form \(\omega_{p}\in\mathrm{A}^{p}(X)\) is de Rham-closed up to homotopy, where the homotopy is given by a choice of higher forms \(\omega_{i}\) with \(i>p\).

**Definition 3.64** (\(n\)-degree closed differential \(p\)-form).: An _\(n\)-degree closed \(p\)-form_ is defined as a cohomology class in
\[\Omega^{p}_{\mathrm{cl}}(X)_{n}\,\coloneqq\,\mathrm{H}^{n}(\mathrm{A}^{p}_{\mathrm{cl}}(X)). \tag{3.6.23}\]

**Definition 3.65** (\(\infty\)-groupoid of differential forms).: We define the _\(\infty\)-groupoid of differential \(p\)-forms_ \(\mathcal{A}^{p}(X,n)\) and _of closed differential \(p\)-forms_ \(\mathcal{A}^{p}_{\mathrm{cl}}(X,n)\) by
\[\begin{split}\mathcal{A}^{p}(X,n)&\;\simeq\;|\mathrm{A}^{p}(X)[n]|,\\ \mathcal{A}^{p}_{\mathrm{cl}}(X,n)&\;\simeq\;\bigl|\mathrm{A}^{p}_{\mathrm{cl}}(X)[n]\bigr|,\end{split} \tag{3.6.24}\]
where the functor \(|-|:\mathsf{dgVec}_{\mathbb{R}}\to\mathsf{sSet}\), as before, is the Dold-Kan correspondence functor applied to the non-positive truncation of the argument.

**Remark 3.66** (Differential forms from \(\infty\)-groupoid of differential forms).: Notice that the \(\infty\)-groupoid of differential \(p\)-forms \(\mathcal{A}^{p}(X,n)\) and that of closed differential \(p\)-forms \(\mathcal{A}^{p}_{\mathrm{cl}}(X,n)\) have the following sets of connected components:
\[\begin{split}\pi_{0}\mathcal{A}^{p}(X,n)&\;\simeq\;\mathrm{H}^{n}(\mathrm{A}^{p}(X))\;\;\eqqcolon\;\Omega^{p}(X)_{n},\\ \pi_{0}\mathcal{A}^{p}_{\mathrm{cl}}(X,n)&\;\simeq\;\mathrm{H}^{n}(\mathrm{A}^{p}_{\mathrm{cl}}(X))\;\eqqcolon\;\Omega^{p}_{\mathrm{cl}}(X)_{n}.\end{split} \tag{3.6.25}\]

As we discussed in section 2.3, in ordinary smooth geometry it is possible to construct a smooth set \(\Omega^{p}\) such that the hom-set \(\operatorname{Hom}(M,\Omega^{p})\) in the category of smooth sets, from a smooth manifold \(M\) to \(\Omega^{p}\), is exactly the set of differential forms \(\Omega^{p}(M)\in\mathsf{Set}\). This (formal) smooth set \(\Omega^{p}\) is also known as the moduli space of differential \(p\)-forms. We will now construct something analogous for formal derived smooth stacks.

**Proposition 3.67** (Derived stack of differential forms).: There exist formal derived smooth stacks \(\boldsymbol{\mathcal{A}}^{p}(n)\) and \(\boldsymbol{\mathcal{A}}^{p}_{\mathrm{cl}}(n)\) satisfying respectively the universal properties
\[\begin{split}\mathrm{RHom}\bigl(X,\boldsymbol{\mathcal{A}}^{p}(n)\bigr)&\;\simeq\;\mathcal{A}^{p}(X,n),\\ \mathrm{RHom}\bigl(X,\boldsymbol{\mathcal{A}}^{p}_{\mathrm{cl}}(n)\bigr)&\;\simeq\;\mathcal{A}^{p}_{\mathrm{cl}}(X,n),\end{split} \tag{3.6.26}\]
where \(X\) is any formal derived smooth stack and \(\mathrm{RHom}(-,-)\) is the hom-\(\infty\)-groupoid of the \((\infty,1)\)-category \(\mathbf{dFSmoothStack}\).

Proof.: First, notice that we can immediately define a pre-stack \(\boldsymbol{\mathcal{A}}^{p}(n):U\mapsto\mathcal{A}^{p}(U,n)\) on the \((\infty,1)\)-category \(\mathsf{dFMfd}\) of formal derived smooth manifolds.
The fact that this satisfies descent with respect to the \((\infty,1)\)-étale site structure of \(\mathsf{dFMfd}\) is a consequence of the fact that the functor \(U\mapsto\wedge_{\mathbb{O}_{U}}^{p}\mathbb{L}_{U}\), with \(U\in\mathsf{dFMfd}\), satisfies descent, as \(\wedge_{\mathbb{O}_{U}}^{p}\mathbb{L}_{U}\in\operatorname{QCoh}(U)\) is a quasi-coherent \((\infty,1)\)-sheaf on any \(U\). We have the following chain of equivalences:
\[\begin{split}\mathrm{RHom}\bigl(X,\boldsymbol{\mathcal{A}}^{p}(n)\bigr)&\;\simeq\;\mathrm{RHom}\bigl(\operatorname*{\mathbb{L}colim}_{U\to X}U,\,\boldsymbol{\mathcal{A}}^{p}(n)\bigr)\\ &\;\simeq\;\operatorname*{\mathbb{R}lim}_{U\to X}\,\mathrm{RHom}\bigl(U,\,\boldsymbol{\mathcal{A}}^{p}(n)\bigr)\\ &\;\simeq\;\operatorname*{\mathbb{R}lim}_{U\to X}\,\mathcal{A}^{p}(U,n)\\ &\;\simeq\;\mathcal{A}^{p}(X,n).\end{split} \tag{3.6.27}\]
Moreover, by a completely analogous argument, the pre-stack \(\boldsymbol{\mathcal{A}}^{p}_{\mathrm{cl}}(n)\) also satisfies descent.

**Definition 3.68** (Derived stack of differential forms).: We call \(\boldsymbol{\mathcal{A}}^{p}(n)\) the _formal derived smooth stack of differential \(p\)-forms_ and \(\boldsymbol{\mathcal{A}}^{p}_{\mathrm{cl}}(n)\) the one _of closed differential \(p\)-forms_. Moreover, we will write \(\boldsymbol{\mathcal{A}}^{p}\coloneqq\boldsymbol{\mathcal{A}}^{p}(0)\) and \(\boldsymbol{\mathcal{A}}^{p}_{\mathrm{cl}}\coloneqq\boldsymbol{\mathcal{A}}^{p}_{\mathrm{cl}}(0)\) for the \(0\)-shifted cases.

**Corollary 3.69** (Differential forms from the homotopy category).: By putting together Remark 3.66 and Proposition 3.67, we have the following equivalences of sets:
\[\begin{split}\operatorname{Hom}_{\mathrm{Ho}}\bigl(X,\boldsymbol{\mathcal{A}}^{p}(n)\bigr)&\;\simeq\;\pi_{0}\mathcal{A}^{p}(X,n)\;\simeq\;\Omega^{p}(X)_{n},\\ \operatorname{Hom}_{\mathrm{Ho}}\bigl(X,\boldsymbol{\mathcal{A}}^{p}_{\mathrm{cl}}(n)\bigr)&\;\simeq\;\pi_{0}\mathcal{A}^{p}_{\mathrm{cl}}(X,n)\;\simeq\;\Omega^{p}_{\mathrm{cl}}(X)_{n},\end{split} \tag{3.6.28}\]
where \(\operatorname{Hom}_{\mathrm{Ho}}(-,-)\) is the hom-set of the homotopy category \(\mathrm{Ho}(\mathbf{dFSmoothStack})\) of formal derived smooth stacks. Therefore, a morphism \(\xi:X\to\boldsymbol{\mathcal{A}}^{p}(n)\) in the homotopy category \(\mathrm{Ho}(\mathbf{dFSmoothStack})\) is equivalently an \(n\)-shifted \(p\)-form \(\xi\in\Omega^{p}(X)_{n}\). Similarly for \(\boldsymbol{\mathcal{A}}^{p}_{\mathrm{cl}}(n)\).

**Example 3.70** (Derived zero locus).: The affine derived zero locus \(\mathds{R}f^{-1}(0)\in\mathsf{dFMfd}\) of a smooth function \(f:\mathbb{R}^{n}\to\mathbb{R}^{k}\) is a formal derived smooth manifold defined by a homotopy pullback of the form
\[\begin{CD} \mathds{R}f^{-1}(0) @>>> \mathbb{R}^{n}\\ @VVV @VV{(\mathrm{id},0)}V\\ \mathbb{R}^{n} @>{(\mathrm{id},f)}>> \mathbb{R}^{n}\times\mathbb{R}^{k} \end{CD} \tag{3.6.29}\]
in the \((\infty,1)\)-category of derived manifolds, where \(\mathrm{id}:\mathbb{R}^{n}\to\mathbb{R}^{n}\) is the identity. For more details about its algebro-geometric version see [21]. The tangent complex is given by \(\mathbb{T}_{\mathds{R}f^{-1}(0)}=\bigl(T_{\mathbb{R}^{n}}[0]\xrightarrow{f^{*}}f^{*}T_{\mathbb{R}^{k}}[-1]\bigr)\), concentrated in cohomological degrees \(0\) and \(1\). In degree \(1\) we have the sheaf \(f^{*}T_{\mathbb{R}^{k}}\simeq\mathcal{C}_{\mathbb{R}^{n}}^{\infty}(-,\mathbb{R}^{k})\). Analogously, the cotangent complex is \(\mathbb{L}_{\mathds{R}f^{-1}(0)}=\bigl(f^{*}\Omega_{\mathbb{R}^{k}}^{1}[1]\xrightarrow{f_{*}}\Omega_{\mathbb{R}^{n}}^{1}[0]\bigr)\), concentrated in cohomological degrees \(-1\) and \(0\).
In degree \(-1\) we have the sheaf \(f^{*}\Omega^{1}_{\mathbb{R}^{k}}\simeq\mathcal{C}_{\mathbb{R}^{n}}^{\infty}(-,(\mathbb{R}^{k})^{\vee})\). Thus, by unravelling its definition, the complex of \(0\)-forms is the following:
\[\begin{split}\mathrm{A}^{0}(\mathds{R}f^{-1}(0))&=\mathrm{R}\Gamma(\mathds{R}f^{-1}(0),\mathbb{O}_{\mathds{R}f^{-1}(0)})\\ &=\;\mathcal{C}^{\infty}(\mathbb{R}^{n})\otimes_{\mathbb{R}}\wedge^{*}(\mathbb{R}^{k})^{\vee},\end{split} \tag{3.6.30}\]
where the differential is given by \(Qx^{i}=0\) and \(Qx^{+}_{j}=f_{j}(x)\), with \(\{x^{i}\}_{i=1,\dots,n}\) the global coordinates of \(\mathbb{R}^{n}\) in degree \(0\) and \(\{x^{+}_{j}\}_{j=1,\dots,k}\) the generators of the exterior algebra \(\wedge^{*}(\mathbb{R}^{k})^{\vee}\) in degree \(-1\). By unravelling its definition, we can explicitly see that the complex of \(1\)-forms is the following:
\[\begin{split}\mathrm{A}^{1}(\mathds{R}f^{-1}(0))&=\mathrm{R}\Gamma(\mathds{R}f^{-1}(0),\mathbb{L}_{\mathds{R}f^{-1}(0)})\\ &=\;\bigoplus_{i=1}^{n}\mathrm{A}^{0}(\mathds{R}f^{-1}(0))[\mathrm{d}x^{i}]\oplus\bigoplus_{j=1}^{k}\mathrm{A}^{0}(\mathds{R}f^{-1}(0))[\mathrm{d}x^{+}_{j}],\end{split} \tag{3.6.31}\]
with the graded-commutation relations given by the equations
\[\mathrm{d}x^{i}\wedge\mathrm{d}x^{j}=-\mathrm{d}x^{j}\wedge\mathrm{d}x^{i},\quad\mathrm{d}x^{i}\wedge\mathrm{d}x^{+}_{j}=\mathrm{d}x^{+}_{j}\wedge\mathrm{d}x^{i},\quad\mathrm{d}x^{+}_{i}\wedge\mathrm{d}x^{+}_{j}=\mathrm{d}x^{+}_{j}\wedge\mathrm{d}x^{+}_{i}. \tag{3.6.32}\]
Similarly, one obtains all the differential \(p\)-forms.

## 4 Derived differential cohesive geometry

In the previous section, we constructed the \((\infty,1)\)-category \(\mathbf{dFSmoothStack}\) of formal derived smooth stacks. In this section, we show that the formalism of differential cohesion, introduced by Schreiber [DCCT] in the setting of formal smooth stacks, extends very naturally to our present setting of formal derived smooth stacks, and many statements and constructions carry over. Since it is known that an \((\infty,1)\)-category of stacks is an \((\infty,1)\)-topos (see e.g. [10, 11]), the \((\infty,1)\)-category \(\mathbf{dFSmoothStack}\) is, in particular, an \((\infty,1)\)-topos.

In subsection 4.1, we show that the \((\infty,1)\)-topos of formal derived smooth stacks comes naturally equipped with a cohesive structure. Roughly speaking, a cohesive structure provides an \((\infty,1)\)-topos with the properties required for geometry to take place in it and for its objects to be fully-fledged spaces. In subsection 4.2, we prove that the \((\infty,1)\)-topos of formal derived smooth stacks also comes naturally equipped with a differential cohesive structure. In subsection 4.3, we show that the formal moduli problems appearing in BV-theory naturally arise in the context of differential cohesion. In the last two subsections, we explore some consequences of such a structure, including generalisations of the notions of \(L_{\infty}\)-algebroids and jet bundles.

### Derived cohesion

After showing that formal derived smooth stacks constitute an \((\infty,1)\)-topos, we will investigate its natural cohesive structure. The notion of cohesive topos originated in Lawvere's seminal work [13, 14, 15, 16]. The definition of cohesive \((\infty,1)\)-topos is given in [4, Section 4.1]. To facilitate the reading for the broadest possible audience, we will now provide an informal picture of a cohesive \((\infty,1)\)-topos.
For a detailed and comprehensive discussion of cohesion, we refer the reader to the main reference. The concept of a cohesive topos provides a unifying framework for studying a range of structures, including smooth manifolds, algebraic varieties and, more generally, spaces that admit some notion of local chart. At its core, a cohesive topos is a category of sheaves over a site satisfying certain axioms, which ensure that it has enough structure to capture the basic features of smooth and topological spaces.

Every ordinary topos of sheaves \(\mathsf{Sh}(\mathsf{C})\) on some site \(\mathsf{C}\) comes naturally equipped with a global section functor \(\varGamma:\mathsf{Sh}(\mathsf{C})\to\mathsf{Set}\), which sends a sheaf \(X\) to its set of sections \(\varGamma(X)\coloneqq\operatorname{Hom}(*,X)\) at the point (i.e. at the terminal object, which always exists). The global sections functor \(\varGamma\) naturally fits into a geometric morphism, which is given by the adjunction \(\mathrm{Disc}\dashv\varGamma\), where the functor \(\mathrm{Disc}:\mathsf{Set}\to\mathsf{Sh}(\mathsf{C})\) embeds sets into the corresponding locally constant sheaves. Roughly, a cohesive structure is a lift of the geometric morphism \(\mathrm{Disc}\dashv\varGamma\) to a quadruple of adjoint functors \((\varPi\dashv\mathrm{Disc}\dashv\varGamma\dashv\mathrm{coDisc})\) of the form
\[\mathsf{Sh}(\mathsf{C})\;\;\begin{array}{c}\xrightarrow{\;\;\varPi\;\;}\\[-4pt]\xleftarrow{\;\;\mathrm{Disc}\;\;}\\[-4pt]\xrightarrow{\;\;\varGamma\;\;}\\[-4pt]\xleftarrow{\;\;\mathrm{coDisc}\;\;}\end{array}\;\;\mathsf{Set}, \tag{4.1.1}\]
where \(\mathrm{Disc}\) and \(\mathrm{coDisc}\) must be fully faithful and \(\varPi\) must preserve finite products. As constructed and explained in plenty of detail in [4], this construction can be generalised to an \((\infty,1)\)-topos of stacks if we replace the ordinary category of sets with the \((\infty,1)\)-category of \(\infty\)-groupoids. The following concrete definition is precisely [4, Remark 4.1.9].

**Definition 4.1** (Cohesive \((\infty,1)\)-topos).: A _cohesive structure_ on an \((\infty,1)\)-topos \(\mathbf{H}\) is the datum of a quadruple of adjoint \((\infty,1)\)-functors of the form
\[\mathbf{H}\;\;\begin{array}{c}\xrightarrow{\;\;\varPi\;\;}\\[-4pt]\xleftarrow{\;\;\mathrm{Disc}\;\;}\\[-4pt]\xrightarrow{\;\;\varGamma\;\;}\\[-4pt]\xleftarrow{\;\;\mathrm{coDisc}\;\;}\end{array}\;\;\infty\mathbf{Grpd} \tag{4.1.2}\]
such that:

1. the \((\infty,1)\)-functor \(\varGamma\) is the global section functor,
2. the \((\infty,1)\)-functors \(\mathrm{Disc}\) and \(\mathrm{coDisc}\) are fully faithful,
3. the \((\infty,1)\)-functor \(\varPi\) preserves finite products.

We are now ready to show that the \((\infty,1)\)-topos \(\mathbf{dFSmoothStack}\) of formal derived smooth stacks comes equipped with a natural cohesive structure.

**Theorem 4.2** (Cohesive \((\infty,1)\)-topos of formal derived smooth stacks).: The \((\infty,1)\)-topos of formal derived smooth stacks \(\mathbf{dFSmoothStack}\) is cohesive.

Proof.: Recall that \(\mathbb{R}^{0}\) is the terminal object in the category of formal derived smooth manifolds and that its \(\mathcal{C}^{\infty}\)-algebra \(\mathbb{R}=\mathcal{C}^{\infty}(\mathbb{R}^{0})\) is the initial object in the category of homotopy \(\mathcal{C}^{\infty}\)-algebras. Let \(*_{\mathbb{R}}\coloneqq\{\mathbb{R}\}\) be the full sub-category of \(\mathsf{sC}^{\infty}\mathsf{Alg}_{\mathrm{afp}}\) whose only object is the initial object \(\mathbb{R}\). Then we have the coreflective inclusion
\[\iota^{\mathbb{R}}\,:\,*_{\mathbb{R}}\;\hookrightarrow\;\mathsf{sC}^{\infty}\mathsf{Alg}_{\mathrm{afp}}. \tag{4.1.3}\]
By left and right Kan extension, and by essential uniqueness of adjoint functors, we obtain the quadruple of simplicially-enriched adjoint functors
\[\bigl(\mathbb{R}_{!}\;\dashv\;\iota^{\mathbb{R}}_{!}\;\dashv\;\iota^{\mathbb{R}*}\;\dashv\;\iota^{\mathbb{R}}_{*}\bigr)\,:\;[\mathsf{sC}^{\infty}\mathsf{Alg}_{\mathrm{afp}},\,\mathsf{sSet}]\;\longrightarrow\;[*_{\mathbb{R}},\,\mathsf{sSet}]\,\simeq\,\mathsf{sSet}. \tag{4.1.4}\]
Now, we can immediately induce the quadruple of simplicial Quillen adjoint functors for the global projective model structure, i.e.
(4.1.5) Since \(\iota^{\mathbb{R}}\) is the natural embedding of the algebra \(\mathbb{R}\) into the simplicial category of almost finitely presented \(\mathcal{C}^{\infty}\)-algebras, it is continuous and cocontinuous with respect to the etale site structure. Therefore, the adjoint triple \((\iota^{\mathbb{R}}_{!}\dashv\iota^{\mathbb{R}*}\dashv\iota^{\mathbb{R}}_{*})\) restricts and corestricts to the corresponding simplicial model category of formal derived smooth stacks \([\mathsf{sC}^{\infty}\mathsf{Alg}_{\text{afp}},\,\mathsf{sSet}]^{\circ}_{\text{proj},\text{loc}}\). Thus, it only remains to show that the functor \(\mathbb{R}_{!}\) maps locally fibrant objects to Kan-fibrant simplicial sets. By construction, we have \(\mathbb{R}_{!}X=\operatorname{\mathsf{L}colim}_{A}X(A)\), where all \(X(A)\in\mathsf{sSet}^{\circ}_{\text{Quillen}}\) are Kan-fibrant for any \(X\in[\mathsf{sC}^{\infty}\mathsf{Alg}_{\text{afp}},\,\mathsf{sSet}]^{\circ}_{\text{proj},\text{loc}}\), which implies that \(\mathbb{R}_{!}X\) is always Kan-fibrant. Thus we have the following quadruple of adjoint \((\infty,1)\)-functors: (4.1.6) Finally, notice that the terminal functor \(\mathbb{R}^{\text{op}}:\mathsf{dFMfd}\to*_{\mathbb{R}}\) preserves finite products, since any finite product of formal derived smooth manifolds \(\prod_{i}^{h}M_{i}\) is sent to \(\prod_{i}^{h}\mathbb{R}^{0}\simeq\mathbb{R}^{0}\). Thus, the functor \(\mathbb{R}_{!}\) preserves finite products. Moreover, since the inclusion of the terminal object \((\iota^{\mathbb{R}})^{\text{op}}:*_{\mathbb{R}}\to\mathsf{dFMfd}\) is clearly fully faithful, the functor \(\iota^{\mathbb{R}}_{!}\) is also fully faithful; the same argument applies to \(\iota^{\mathbb{R}}_{*}\). **Remark 4.3** (Global section functor factors through \(t_{0}\)).: Notice that the point \(*\simeq\mathbb{R}^{0}\in\mathsf{dFSmoothStack}\) lies in the essential image of \(\mathsf{Mfd}\). This immediately implies that the global section functor \(\varGamma(-)=\mathbb{R}\text{Hom}(\mathbb{R}^{0},-)\) will factor through the underived-truncation \(t_{0}\). **Example 4.4** (Global sections of a formal derived smooth set).: By the remark above, the global sections \(\varGamma(X)\) of a formal derived smooth set \(X\in\mathsf{dFSmoothSet}\) are nothing but a set \(\varGamma(X)=\mathbb{R}\text{Hom}(\mathbb{R}^{0},X)\simeq\text{Hom}(\mathbb{R}^{0},t_{0}X)\). Just like any quadruple of adjoint \((\infty,1)\)-functors, the derived cohesive structure presented by diagram (4.1.6) gives naturally rise to a triplet of adjoint \((\infty,1)\)-endofunctors. 
**Definition 4.5** (Modalities of derived cohesion).: We define the following endofunctors:
\[(\textstyle\int\,\dashv\,\flat\,\dashv\,\sharp)\,:\;\mathbf{dFSmoothStack}\,\longrightarrow\,\mathbf{dFSmoothStack}, \tag{4.1.7}\]
where we respectively call
1. _shape modality_ \(\int\coloneqq\mathrm{Disc}\circ\varPi\),
2. _flat modality_ \(\flat\coloneqq\mathrm{Disc}\circ\varGamma\),
3. _sharp modality_ \(\sharp\coloneqq\mathrm{coDisc}\circ\varGamma\).

### Derived differential cohesion

In this subsection, we prove that the \((\infty,1)\)-topos of formal derived smooth stacks is naturally equipped with a differential cohesive structure. The key ingredient is the reduction of homotopy \(\mathcal{C}^{\infty}\)-algebras.

**Definition 4.6** (Reduction of a homotopy \(\mathcal{C}^{\infty}\)-algebra).: We define the _reduction functor_ by
\[\begin{split}(-)^{\mathrm{red}}\,:\;\mathsf{sC}^{\infty}\mathsf{Alg}&\longrightarrow\,\mathsf{C}^{\infty}\mathsf{Alg}^{\mathrm{red}}\\ R&\longmapsto\,R^{\mathrm{red}}\coloneqq\pi_{0}R/\mathfrak{m}_{\pi_{0}R},\end{split} \tag{4.2.1}\]
where \(\mathsf{C}^{\infty}\mathsf{Alg}^{\mathrm{red}}\) is the category of reduced ordinary \(\mathcal{C}^{\infty}\)-algebras (see e.g. [10]) and where \(\mathfrak{m}_{\pi_{0}R}\subset\pi_{0}R\) is the nilradical of \(\pi_{0}R\), i.e. the ideal consisting of the nilpotent elements of \(\pi_{0}R\) regarded as an \(\mathbb{R}\)-algebra. Recall from example 2.18 that the quotient \(R^{\mathrm{red}}=\pi_{0}R/\mathfrak{m}_{\pi_{0}R}\) of a \(\mathcal{C}^{\infty}\)-algebra by any of its ideals is canonically a \(\mathcal{C}^{\infty}\)-algebra. Now, we can see that we have a simplicial Quillen adjunction \((-)^{\mathrm{red}}\dashv\iota^{\mathrm{red}}\), where \(\iota^{\mathrm{red}}:\mathsf{C}^{\infty}\mathsf{Alg}^{\mathrm{red}}\hookrightarrow\mathsf{s}\mathsf{C}^{\infty}\mathsf{Alg}\) is the natural embedding (in fact, \((-)^{\mathrm{red}}\) automatically preserves cofibrant objects and \(\iota^{\mathrm{red}}\) fibrant objects). Now, we can restrict everything to almost finitely presented algebras and obtain the following simplicial Quillen adjunction: \[(-)^{\mathrm{red}}\,:\;\mathsf{s}\mathsf{C}^{\infty}\mathsf{Alg}_{\mathrm{afp}}\;\rightleftarrows\;\mathsf{C}^{\infty}\mathsf{Alg}^{\mathrm{red}}_{\mathrm{fg}}\,:\,\iota^{\mathrm{red}}. \tag{4.2.3}\] Since it is a simplicial Quillen adjunction, it gives naturally rise to a reflective embedding of \((\infty,1)\)-categories \(\mathbf{N}(\mathsf{C}^{\infty}\mathsf{Alg}^{\mathrm{red}}_{\mathrm{fg}})\hookrightarrow\mathbf{sC}^{\infty}\mathsf{Alg}_{\mathrm{afp}}\). **Construction 4.7** (Diagram of sites).: We can extend the diagram of ordinary sites from remark 2.36 to include the \((\infty,1)\)-category of formal derived smooth manifolds. Thus, by putting everything together, we have the following diagram of \((\infty,1)\)-sites: (4.2.4) The diagram of \((\infty,1)\)-sites we constructed above encodes all the relations between the relevant sites in the context of derived smooth geometry and it is going to be the main ingredient in the proofs of the following theorems of this subsection. **Theorem 4.8** (Differential cohesive \((\infty,1)\)-topos of formal derived smooth stacks).: The cohesive \((\infty,1)\)-topos \(\mathbf{dFSmoothStack}\) of formal derived smooth stacks is naturally equipped with a differential cohesive structure, i.e. with a quadruplet of adjoint \((\infty,1)\)-functors (4.2.5) such that the functor \(\hat{\imath}\) is fully faithful and preserves finite products. 
Proof.: Recall that we have an equivalence \(\mathsf{sC}^{\infty}\mathsf{Alg}^{\mathrm{op}}_{\mathrm{afp}}\simeq\mathsf{dFMfd}\) between the opposite category of almost finitely presented \(\mathcal{C}^{\infty}\)-algebras and the category of formal derived smooth manifolds. By left and right Kan extension, the reflective embedding (4.2.3) of simplicial sites induces the quadruple of Quillen adjoint functors (4.2.6) which encodes a quadruple of adjoint \((\infty,1)\)-functors between the corresponding \((\infty,1)\)-categories of pre-stacks. We must now prove that these functors send, in particular, stacks to stacks. By [10, Section 4.8], the functor \(\iota^{\mathrm{red}}\) is continuous and cocontinuous, which implies that the adjunction \((\iota^{\mathrm{red}}_{!}\dashv\iota^{\mathrm{red}\,*}\dashv\iota^{\mathrm{red}}_{*})\) restricts to an adjunction of stacks. So, we are left to show that \((-)^{\mathrm{red}}_{*}X\in[\mathsf{C}^{\infty}\mathsf{Alg}^{\mathrm{red}}_{\mathrm{fg}}\,,\mathsf{sSet}]_{\mathrm{proj}}\) satisfies descent for any formal derived smooth stack \(X\). By adjunction, it is sufficient to see that the natural morphism \(\operatorname{Lcolim}_{n}\iota^{\mathrm{red}}_{*}H(U)_{n}\to\iota^{\mathrm{red}}_{*}U\) is an equivalence, but this is verified by noticing that \(\iota^{\mathrm{red}}_{*}\) preserves colimits of representable objects and etale epimorphisms between them. Thus we have constructed functors of \((\infty,1)\)-categories (4.2.7) where \(\mathbf{SmoothStack^{+}}=\mathbf{N}_{hc}([\mathsf{C}^{\infty}\mathsf{Alg}^{\mathrm{red}}_{\mathrm{fg}}\,,\mathsf{sSet}]^{\circ}_{\mathrm{proj},\mathrm{loc}})\) is by definition the \((\infty,1)\)-topos of stacks on the ordinary etale site of reduced \(\mathcal{C}^{\infty}\)-varieties \(\mathsf{C}^{\infty}\mathsf{Var}_{\mathrm{red}}=(\mathsf{C}^{\infty}\mathsf{Alg}^{\mathrm{red}}_{\mathrm{fg}})^{\mathrm{op}}\), which we constructed in definition 2.37. Now, we are left to show that \(\iota^{\mathrm{red}}_{!}\) is fully faithful and preserves finite products. As for the first property, \(\iota^{\mathrm{red}}_{!}\) and \(\iota^{\mathrm{red}}_{*}\) are both fully faithful, since \(\iota^{\mathrm{red}}\) fully faithful implies that \(\mathrm{id}\to\iota^{\mathrm{red}\,*}\iota^{\mathrm{red}}_{!}\) and \(\iota^{\mathrm{red}\,*}\iota^{\mathrm{red}}_{*}\to\mathrm{id}\) are object-wise equivalences. As for the second one, it is sufficient to show that for any formal derived smooth stack \(X\) and formal derived smooth manifold \(U\) the functor \[X\;\longmapsto\;\operatorname{\mathsf{Lcolim}}\bigl{(}\iota^{\mathrm{red}}\mathord{\downarrow}\mathcal{O}(U)\,\to\,\mathsf{C}^{\infty}\mathsf{Alg}^{\mathrm{red}}_{\mathrm{fg}}\,\stackrel{{X}}{{\longrightarrow}}\,\mathsf{sSet}\bigr{)}\] preserves finite products, which is the case if the comma category \(\iota^{\mathrm{red}}\mathord{\downarrow}\mathcal{O}(U)\) has finite coproducts. This is equivalent to \(U\mathord{\downarrow}(\iota^{\mathrm{red}})^{\mathrm{op}}\) having finite products, which, since \((\iota^{\mathrm{red}})^{\mathrm{op}}\) preserves finite products, is true. 
Therefore, if we redefine the functors by \(\hat{\imath}\coloneqq\iota^{\mathrm{red}}_{!}\), \(\varPi^{\mathrm{dif}}\coloneqq\iota^{\mathrm{red}\,\ast}\simeq(-)^{\mathrm{red}}_{!}\), \(\operatorname{Disc}^{\mathrm{dif}}\coloneqq\iota^{\mathrm{red}}_{*}\simeq(-)^{\mathrm{red}\,\ast}\) and \(\Gamma^{\mathrm{dif}}\coloneqq(-)^{\mathrm{red}}_{*}\), we have the conclusion. In the terminology of [DCCT, Definition 4.2.1], the quadruplet of morphisms in diagram (4.2.5) characterises the cohesive \((\infty,1)\)-topos \(\mathbf{dFSmoothStack}\) as an infinitesimal cohesive neighbourhood of the cohesive \((\infty,1)\)-topos \(\mathbf{SmoothStack^{+}}\). Intuitively, this tells us that, in some sense, any object of \(\mathbf{dFSmoothStack}\) can be thought of as an infinitesimal extension of some object in \(\mathbf{SmoothStack^{+}}\). Such a structure is, indeed, differential cohesion on \(\mathbf{dFSmoothStack}\). **Remark 4.9** (Interpretation of reduced and co-reduced objects).: In analogy with non-derived differential cohesion, we could call the functor \(\hat{\imath}\) the inclusion of _reduced objects_ and \(\operatorname{Disc}^{\mathrm{dif}}\) the inclusion of _co-reduced objects_. * The reduced objects are, intuitively, the ones whose infinitesimal and derived behaviour is determined by their ordinary, non-infinitesimal behaviour; * on the other hand, the co-reduced objects are the ones lacking any infinitesimal and derived behaviour. Finally, the functor \(\varPi^{\,\mathrm{dif}}\) can be thought of as the functor which contracts away the infinitesimal and derived extension of a formal derived smooth stack. **Remark 4.10** (Extending smooth stacks into formal derived smooth stacks).: Notice that we have a diagram of \((\infty,1)\)-categories (4.2.8) where \(\iota^{\mathsf{Mfd}}\) is the full and faithful embedding of smooth manifolds into reduced finitely generated \(\mathcal{C}^{\infty}\)-algebras. Notice that such an embedding does not come with a natural adjoint. In fact, we can always see a smooth manifold as a \(\mathcal{C}^{\infty}\)-variety, but there is no standard way to make a \(\mathcal{C}^{\infty}\)-variety into a smooth manifold. Thus, the diagram above gives rise to a diagram of \((\infty,1)\)-categories of the form We have that the derived-extension functor is equivalently the composition \(i=\iota_{!}^{\mathrm{red}}\circ\iota_{!}^{\mathsf{Mfd}}\) and the underived-truncation functor is \(t_{0}=\iota^{\mathsf{Mfd}*}\circ\iota^{\mathrm{red}*}\) constructed in proposition 3.24. **Lemma 4.11** (Underived-truncation and derived-extension of affine \(\mathcal{C}^{\infty}\)-schemes).: We have the following results about affine \(\mathcal{C}^{\infty}\)-schemes. * For any given ordinary reduced affine \(\mathcal{C}^{\infty}\)-scheme \(\operatorname{Spec}(R)\in\mathsf{C}^{\infty}\mathsf{Aff}\) corresponding to the ordinary reduced \(\mathcal{C}^{\infty}\)-algebra \(R\in\mathsf{C}^{\infty}\mathsf{Alg}^{\mathrm{red}}\) we have an equivalence \[\hat{\imath}\operatorname{Spec}(R)\;\simeq\;\operatorname{\mathsf{RSpec}}(\iota^{\mathrm{red}}(R))\] (4.2.9) in \(\mathbf{dC}^{\infty}\!\mathsf{Aff}\), which corresponds to the homotopy \(\mathcal{C}^{\infty}\)-algebra \(\iota^{\mathrm{red}}(R)\in\mathsf{s}\mathsf{C}^{\infty}\mathsf{Alg}\). 
* For any given homotopy \(\mathcal{C}^{\infty}\)-algebra \(R\in\mathsf{s}\mathsf{C}^{\infty}\mathsf{Alg}\), we have the equivalence of ordinary smooth sets \[\varPi^{\,\mathrm{dif}}\operatorname{\mathsf{RSpec}}(R)\;\simeq\;\operatorname{Spec}(R^{\mathrm{red}}),\] (4.2.10) where \(\operatorname{Spec}(R^{\mathrm{red}})\in\mathsf{C}^{\infty}\mathsf{Aff}\) is the ordinary affine \(\mathcal{C}^{\infty}\)-scheme corresponding to the reduced \(\mathcal{C}^{\infty}\)-algebra \(R^{\mathrm{red}}\in\mathsf{C}^{\infty}\mathsf{Alg}^{\mathrm{red}}\). Proof.: For any reduced finitely generated \(\mathcal{C}^{\infty}\)-algebra \(A\in\mathsf{C}^{\infty}\mathsf{Alg}^{\mathrm{red}}_{\mathrm{fg}}\), we have the equivalences \[(\varPi^{\,\mathrm{dif}}\operatorname{\mathsf{RSpec}}R)(A) \;\simeq\;(\operatorname{\mathsf{RSpec}}R)(\iota^{\mathrm{red}}A) \tag{4.2.11}\] \[\;\simeq\;\operatorname{Hom}_{\mathsf{s}\mathsf{C}^{\infty}\mathsf{Alg}}(R,\iota^{\mathrm{red}}A)\] \[\;\simeq\;(\operatorname{Spec}R^{\mathrm{red}})(A)\] where in the penultimate line we used the adjunction \((-)^{\mathrm{red}}\dashv\iota^{\mathrm{red}}\). Thus, the conclusion. Just like any quadruple of adjoint \((\infty,1)\)-functors, the derived differential cohesive structure presented by diagram (4.2.5) gives naturally rise to a triplet of adjoint \((\infty,1)\)-endofunctors. **Definition 4.12** (Modalities of derived differential cohesion).: We define the following endofunctors: \[(\Re\,\dashv\,\Im\,\dashv\,\&)\,:\,\mathbf{dFSmoothStack}\,\longrightarrow\,\mathbf{dFSmoothStack}, \tag{4.2.12}\] where we respectively call 1. _infinitesimal reduction modality_ \(\Re\coloneqq\hat{\imath}\circ\varPi^{\,\mathrm{dif}}\), 2. _infinitesimal shape modality_ \(\Im\coloneqq\mathrm{Disc}^{\mathrm{dif}}\circ\varPi^{\,\mathrm{dif}}\), 3. _infinitesimal flat modality_ \(\&\coloneqq\mathrm{Disc}^{\mathrm{dif}}\circ\varGamma^{\,\mathrm{dif}}\). The modalities of derived differential cohesion will constitute our fundamental toolbox in dealing with the geometry of formal derived smooth stacks. **Remark 4.13** (Infinitesimal reduction counit).: Since there is an adjunction \((\hat{\imath}\dashv\varPi^{\,\mathrm{dif}})\), there will be an adjunction counit \(\mathfrak{r}:\Re\to\mathrm{id}\), which, at any \(X\in\mathbf{dFSmoothStack}\), will give rise to the canonical morphism \[\mathfrak{r}_{X}:\,\Re(X)\,\longrightarrow\,X. \tag{4.2.13}\] We will call this _infinitesimal reduction counit_, for short. Since by construction we have \(\varPi^{\,\mathrm{dif}}\circ\hat{\imath}\simeq\mathrm{id}\), it is possible to see that the infinitesimal reduction modality is an idempotent comonad, i.e. we have an equivalence \[\Re\,\stackrel{{\simeq}}{{\longrightarrow}}\,\Re\Re. \tag{4.2.14}\] Let us show the geometric meaning of the infinitesimal reduction counit more concretely. The following corollary will provide a concrete characterisation of the infinitesimal reduction on the relevant class of examples of derived affine \(\mathcal{C}^{\infty}\)-schemes. **Corollary 4.14** (Infinitesimal reduction of derived affine \(\mathcal{C}^{\infty}\)-schemes).: For any given homotopy \(\mathcal{C}^{\infty}\)-algebra \(R\in\mathsf{sC}^{\infty}\mathsf{Alg}\), by lemma 4.11 we directly obtain the equivalence \[\Re(\mathrm{RSpec}\,R)\;\simeq\;\mathrm{RSpec}(R^{\mathrm{red}}). \tag{4.2.15}\] Roughly speaking, we can see that the infinitesimal reduction modality is the functor which contracts away the formal derived directions of a formal derived smooth stack. 
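To see corollary 4.14 at work, let us record two elementary computations; they follow immediately from the definitions and are included here only as a sanity check. For the infinitesimally thickened affine space corresponding to the Weil algebra of dual numbers we find
\[\Re\big{(}\mathrm{RSpec}(\mathcal{C}^{\infty}(\mathbb{R}^{n})\otimes_{\mathbb{R}}\mathbb{R}[\epsilon]/(\epsilon^{2}))\big{)}\;\simeq\;\mathrm{RSpec}\big{(}\mathcal{C}^{\infty}(\mathbb{R}^{n})\big{)}\;\simeq\;\mathbb{R}^{n},\]
since the nilradical of \(\mathcal{C}^{\infty}(\mathbb{R}^{n})\otimes_{\mathbb{R}}\mathbb{R}[\epsilon]/(\epsilon^{2})\) is generated by \(\epsilon\). Similarly, for the derived zero locus \(\mathds{R}f^{-1}(0)\) of \(f(x)=x^{2}\) on \(\mathbb{R}\), one has \(\pi_{0}\mathcal{O}(\mathds{R}f^{-1}(0))\cong\mathcal{C}^{\infty}(\mathbb{R})/(x^{2})\), whose nilradical is \((x)/(x^{2})\) by Hadamard's lemma, so that
\[\Re\big{(}\mathds{R}f^{-1}(0)\big{)}\;\simeq\;\mathrm{RSpec}\big{(}\mathcal{C}^{\infty}(\mathbb{R})/(x)\big{)}\;\simeq\;\ast\,,\]
i.e. the reduction modality contracts both the infinitesimal and the derived thickening of the point.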
**Definition 4.15** (Infinitesimal object).: We say that \(X\) is an _infinitesimal object_ if \(\Re(X)\simeq*\). Notice that the infinitesimal reduction counit of an infinitesimal object \(X\in\mathbf{dFSmoothStack}\) becomes the embedding of the canonical point \(\mathfrak{r}_{X}:*\to X\). **Definition 4.16** (Reduced object).: We say that \(X\) is a _reduced object_ if \(\Re(X)\simeq X\). Notice that the infinitesimal reduction counit of a reduced object \(X\in\mathbf{dFSmoothStack}\) becomes the identity \(\mathfrak{r}_{X}:X\stackrel{{\simeq}}{{\longrightarrow}}X\). **Remark 4.17** (Infinitesimal shape unit).: Since there is an adjunction \((\varPi^{\,\mathrm{dif}}\dashv\mathrm{Disc}^{\mathrm{dif}})\), there will be an adjunction unit \(\mathfrak{i}:\mathrm{id}\to\Im\), which, at any \(X\in\mathbf{dFSmoothStack}\), will give rise to the canonical morphism \[\mathfrak{i}_{X}:\,X\,\longrightarrow\,\Im(X). \tag{4.2.16}\] We will call this _infinitesimal shape unit_, for short. Similarly to \(\Re\), the infinitesimal shape modality is an idempotent monad, i.e. we have an equivalence \[\Im\Im\ \xrightarrow{\simeq}\Im. \tag{4.2.17}\] Let us show the geometric meaning of the infinitesimal shape unit more concretely. Let us consider a formal derived smooth stack \(X\in\mathbf{dFSmoothStack}\). Then we can see that the infinitesimal shape modality will send it to the formal derived smooth stack \[\begin{split}\Im(X)\,:\;\mathsf{dFMfd}^{\mathrm{op}}&\longrightarrow\;\mathsf{sSet}\\ U&\longmapsto\;X(\Re(U)),\end{split} \tag{4.2.18}\] where \(\Re(U)\) is the reduction of the formal derived smooth manifold \(U\). Moreover, the infinitesimal shape unit \(\mathfrak{i}_{X}:X\to\Im(X)\) of \(X\) will be concretely given by the natural map of simplicial sets \[\mathfrak{i}_{X}(U)\,:\;X(U)\;\longrightarrow\;X(\Re(U)), \tag{4.2.19}\] induced by precomposition with the counit \(\mathfrak{r}_{U}:\Re(U)\to U\), on each formal derived smooth manifold \(U\in\mathbf{dFMfd}\) in our site. **Definition 4.18** (de Rham space).: The _de Rham space_ of a formal derived smooth stack \(X\in\mathbf{dFSmoothStack}\) is defined by the formal derived smooth stack \(\Im(X)\in\mathbf{dFSmoothStack}\). **Remark 4.19** (\(\mathscr{D}\)-modules).: The \((\infty,1)\)-category of \(\mathscr{D}\)-modules on a formal derived smooth stack \(X\in\mathbf{dFSmoothStack}\) can be defined precisely by \(\mathscr{D}(X)\coloneqq\mathrm{QCoh}(\Im(X))\), i.e. by the \((\infty,1)\)-category of quasi-coherent sheaves on its de Rham space \(\Im(X)\). **Remark 4.20** (Infinitesimal flat counit).: Since there is an adjunction \((\mathrm{Disc}^{\mathrm{dif}}\dashv\Gamma^{\mathrm{dif}})\), there will be an adjunction counit \(\mathfrak{e}:\&\to\mathrm{id}\), which, at any \(X\in\mathbf{dFSmoothStack}\), will give rise to the canonical morphism \[\mathfrak{e}_{X}:\,\&(X)\,\longrightarrow\,X. \tag{4.2.20}\] We will call this _infinitesimal flat counit_, for short. Similarly to \(\Re\) and \(\Im\), the infinitesimal flat modality is an idempotent comonad, i.e. we have an equivalence \[\&\;\xrightarrow{\simeq}\&\&. \tag{4.2.21}\] **Remark 4.21** (Analogy with derived algebraic geometry).: The adjoint \((\infty,1)\)-functors \((\Re\dashv\Im)\) from derived differential cohesion as described above can be thought of as a smooth version of the adjunction constructed by [17, Section 2] in the context of derived algebraic geometry. 
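As a quick illustration of the infinitesimal shape modality, consider \(X=\mathbb{R}\) and probe its de Rham space by the fat point \(D\coloneqq\mathrm{RSpec}(\mathbb{R}[\epsilon]/(\epsilon^{2}))\). One computes
\[\mathbb{R}(D)\;\simeq\;\mathrm{Hom}_{\mathsf{C}^{\infty}\mathsf{Alg}}\big{(}\mathcal{C}^{\infty}(\mathbb{R}),\,\mathbb{R}[\epsilon]/(\epsilon^{2})\big{)}\;\cong\;\{a+b\epsilon\}\;\cong\;\mathbb{R}^{2},\qquad\Im(\mathbb{R})(D)\;\simeq\;\mathbb{R}(\Re(D))\;\simeq\;\mathbb{R}(\ast)\;\cong\;\mathbb{R},\]
and the infinitesimal shape unit \(\mathfrak{i}_{\mathbb{R}}(D)\) forgets the nilpotent coefficient \(b\). In other words, in \(\Im(\mathbb{R})\) infinitesimally close points become identified, which is the usual heuristic picture of the de Rham space.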
**Remark 4.22** (Modalities of derived cohesion and derived differential cohesion).: The modalities of derived cohesion and those of derived differential cohesion fit together into the diagram \[\begin{array}{ccccc}\Re&\dashv&\Im&\dashv&\&\\ &&\vee&&\vee\\ \int&\dashv&\flat&\dashv&\sharp\end{array}\] where the relations \(\int<\Im\) and \(\flat<\&\) respectively mean that we have equivalences \(\int\Im(X)\simeq\int X\) and \(\flat\&(X)\simeq\flat X\) for any formal derived smooth stack \(X\in\mathbf{dFSmoothStack}\). In the rest of this subsection, we will provide generalisations of the formal geometric objects constructed in [16] to formal derived smooth stacks. **Remark 4.23** (Points on a formal derived smooth stack).: Notice that the point \(*\simeq\mathrm{RSpec}(\mathbb{R})\) is the terminal object in \(\mathbf{dFSmoothStack}\). Thus, the Hom-space of morphisms \(*\to X\) from the point into any formal derived smooth stack \(X\in\mathbf{dFSmoothStack}\) is nothing but the simplicial set \(\varGamma(X)=X(*)\in\mathsf{sSet}^{\circ}_{\mathrm{Quillen}}\). Therefore, we can equivalently give a point \(x:*\to X\) on the formal derived smooth stack \(X\in\mathbf{dFSmoothStack}\) as an element \(x\in\varGamma(X)=X(*)\). We can now provide a well-defined notion of formal neighbourhood of a formal derived smooth stack at any of its points. **Definition 4.24** (Formal disk).: The _formal disk_ \(\mathbb{D}_{X,x}\) at the point \(x:*\to X\) of the formal derived smooth stack \(X\in\mathsf{dFSmoothStack}\) is defined by the homotopy pullback (4.2.22) in the \((\infty,1)\)-category \(\mathsf{dFSmoothStack}\) of formal derived smooth stacks. In other words, the formal disk \(\mathbb{D}_{X,x}\) is the fibre at the point \(x:*\to\Im(X)\) of the bundle provided by the canonical morphism \(\mathfrak{i}_{X}:X\longrightarrow\Im(X)\) from \(X\) to its de Rham space. **Remark 4.25** (Formal disk is infinitesimal).: Notice that the formal disk is an infinitesimal object; in fact, we have the equivalence \(\Re(\mathbb{D}_{X,x})\simeq*\). Before we proceed further, let us provide an extremely simple example of a formal disk, namely a formal disk on the real line. **Example 4.26** (Formal disk \(\mathbb{D}_{\mathbb{R},0}\) on \(\mathbb{R}\)).: Recall that the real line can be Yoneda-embedded into a formal derived smooth stack by the functor \(\mathbb{R}(U)\simeq\mathcal{O}(U,\,\mathbb{R})\), on any \(U\in\mathsf{dFMfd}\). Let us consider the formal disk \(\mathbb{D}_{\mathbb{R},0}\stackrel{{\iota_{0}}}{{\hookrightarrow}}\mathbb{R}\) defined as above at the zero point \(0\in\mathbb{R}\) on the real line. Thus sections are given by the pullback of simplicial sets \[\mathbb{D}_{\mathbb{R},0}(U)\;\simeq\;\mathcal{O}(U,\,\mathbb{R})\times_{\mathcal{O}(\varPi^{\text{dif}}U,\,\mathbb{R})}^{h}\{0\} \tag{4.2.23}\] on any formal derived smooth manifold \(U\in\mathsf{dFMfd}\). This means that, for example: * if \(U\) is in the essential image of an ordinary smooth manifold, i.e. if \(\Re(U)\simeq U\), then we have that the space of sections is just \(\mathbb{D}_{\mathbb{R},0}(U)\simeq\{0\}\); * on the other hand, if \(U\) is a derived thickened point, i.e. if \(\Re(U)\simeq*\), then we have that the space of sections \(\mathbb{D}_{\mathbb{R},0}(U)\) is given by the simplicial set of nilpotent elements of \(\mathcal{O}(U)\). Notice that the infinitesimal derived behaviour of \(\mathbb{D}_{\mathbb{R},0}\) is seen only by formal derived probing spaces \(U\): if we try to probe \(\mathbb{D}_{\mathbb{R},0}\) with ordinary smooth manifolds, we do not see anything but a point. 
This shows why we can think of \(\mathbb{D}_{\mathbb{R},0}\) as a derived thickened point. Now, we can introduce the notion of formal disk bundle, i.e. a fibre bundle of formal disks. **Definition 4.27** (Formal disk bundle).: The _formal disk bundle_ \(T^{\infty}X\) of a formal derived smooth stack \(X\in\mathbf{dFSmoothStack}\) is defined by the homotopy pullback (4.2.24) in the \((\infty,1)\)-category \(\mathbf{dFSmoothStack}\) of formal derived smooth stacks. **Remark 4.28** (Formal disk as fibre of the formal disk bundle).: Notice that the fibre at any point \(x:*\to X\) of the bundle \(T^{\infty}X\to X\) is the formal disk \(\mathbb{D}_{X,x}\stackrel{{\iota_{x}}}{{\hookrightarrow}}X\) at such a point. If we are given a formal derived smooth sub-stack of our original formal derived smooth stack, we can consider a natural notion of infinitesimal normal bundle. This is given as follows. **Definition 4.29** (Etalification).: The _etalification_ \(\Im_{X}Y\) of a formal derived smooth stack \(Y\) with respect to a map \(f:Y\to X\) in \(\mathbf{dFSmoothStack}\) is defined by the homotopy pullback (4.2.25) in the \((\infty,1)\)-category \(\mathbf{dFSmoothStack}\) of formal derived smooth stacks. **Definition 4.30** (Normal formal disk bundle).: The _normal formal disk bundle_ \(N_{X}^{\infty}Y\) of a monomorphism \(Y\stackrel{{ e}}{{\hookrightarrow}}X\) of formal derived smooth stacks in \(\mathbf{dFSmoothStack}\) is defined by the homotopy pullback (4.2.26) in the \((\infty,1)\)-category \(\mathbf{dFSmoothStack}\) of formal derived smooth stacks. **Example 4.31** (Trivial embedding).: Notice that, if we consider the trivial formal embedding \(e:X\xrightarrow{\mathrm{id}}X\), then we immediately have the identification \(N_{X}^{\infty}X\simeq X\), i.e. the bundle with trivial fibre. Figure 9: The normal formal disk bundle of a formal derived smooth stack \(Y\hookrightarrow X\). **Example 4.32** (Case of formal disk).: Notice that, if we consider the embedding \(e:\ast\stackrel{{ x}}{{\hookrightarrow}}X\) of a point, then we immediately have the identification \(N_{X}^{\infty}\ast\simeq\mathbb{D}_{X,x}\), i.e. the formal disk at \(x\). **Example 4.33** (Thickened hypersurface).: Let \(M\simeq\Sigma\times\mathbb{R}\) be a smooth manifold and let \(\Sigma_{0}=\Sigma\times\{0\}\) be the submanifold corresponding to the fixed element \(0\in\mathbb{R}\). Thus, we have the normal formal disk bundle \(N_{M}^{\infty}\Sigma_{0}=\Sigma\times\mathbb{D}_{\mathbb{R},0}\), where \(\mathbb{D}_{\mathbb{R},0}\) is the formal disk of \(\mathbb{R}\) at \(0\). Let us look at the formal embedding map of the normal formal disk bundle \(N_{M}^{\infty}\Sigma_{0}\) into \(M\) in detail, i.e. at the map \[N_{M}^{\infty}\Sigma_{0}\,\simeq\,\Sigma\times\mathbb{D}_{\mathbb{R},0}\,\stackrel{{\iota_{\Sigma}}}{{\hookrightarrow}}\,\Sigma\times\mathbb{R}\,\simeq\,M. \tag{4.2.27}\] This can be understood dually by the map \[\begin{split}\mathcal{O}(\iota_{\Sigma})\,:\;\mathcal{C}^{\infty}(M)\,\simeq\,\mathcal{C}^{\infty}(\Sigma\times\mathbb{R})&\longrightarrow\;\mathcal{O}(\Sigma\times\mathbb{D}_{\mathbb{R},0})\,\simeq\,\mathcal{O}(N_{M}^{\infty}\Sigma_{0})\\ f(x,t)&\longmapsto\;f(x,0)+\sum_{n>0}\frac{1}{n!}\frac{\partial^{n}\!f(x,t)}{\partial t^{n}}\bigg{|}_{t=0}t^{n}\end{split} \tag{4.2.28}\] which sends a smooth function to its Taylor series at \(t=0\). Now we want to study what happens when we restrict sections of some fibre bundle to the etalification of a sub-stack of the base stack. Let us make this idea more precise. 
**Remark 4.34** (Formal restriction of sections).: Let \(E\to X\) be a fibre bundle, as defined in Definition 3.44, and let \(Y\stackrel{{ e}}{{\hookrightarrow}}X\) be a formal derived smooth sub-stack in \(\mathbf{dFSmoothStack}\). Recall the Definition 3.45 of the \(\infty\)-groupoid of sections of a fibre bundle. Then, we will call the \(\infty\)-groupoid \(\Gamma(\Im_{X}Y,\,\iota_{Y}^{*}E)\), where \(\iota_{Y}\) is the formal embedding map \(\Im_{X}Y\hookrightarrow X\), the \(\infty\)_-groupoid of formal restricted sections_ of \(E\) on \(Y\). The embedding \(\iota_{Y}:\Im_{X}Y\hookrightarrow X\) of formal derived smooth stacks induces a morphism \[\Gamma(X,E)\,\,\stackrel{{\pi_{Y}}}{{\longrightarrow}}\,\Gamma(\Im_{X}Y,\,\iota_{Y}^{*}E)\,, \tag{4.2.29}\] which we will call the _formal restriction_ of sections. Let us come back to the example of the thickened hypersurface and let us concretely see how sections on the total smooth manifold restrict to the aforementioned thickened hypersurface. **Example 4.35** (Scalar field on thickened hypersurface).: Consider the situation of example 4.33. Now, we introduce a trivial vector bundle \(E\coloneqq M\times V\to M\), where \(V\) is a vector space. The formal restriction of sections of such a bundle to the formal submanifold \(\Sigma_{0}\) will be given by \[\begin{split}\pi_{\Sigma}\,:\;\Gamma(M,E)\,\simeq\,\Gamma(\Sigma\times\mathbb{R},E)&\longrightarrow\;\Gamma(\Sigma\times\mathbb{D}_{\mathbb{R},0},\iota_{\Sigma}^{*}E)\,\simeq\,\Gamma(N_{M}^{\infty}\Sigma_{0},\iota_{\Sigma}^{*}E)\\ \phi^{i}(x,t)&\longmapsto\;\phi^{i}(x,0)+\sum_{n>0}\frac{1}{n!}\frac{\partial^{n}\!\phi^{i}(x,t)}{\partial t^{n}}\bigg{|}_{t=0}t^{n}.\end{split} \tag{4.2.30}\] In other words, the restriction sends a scalar field \(\phi^{i}(x,t)\) to the collection of boundary conditions \(\phi^{i}(x,0)\), \(\dot{\phi}^{i}(x,0)\), \(\ddot{\phi}^{i}(x,0)\), and so on, at the fixed point \(0\in\mathbb{R}\). **Lemma 4.36** (Restriction of formal disk bundle).: Consider a formal derived smooth sub-stack \(Y\stackrel{{ e}}{{\hookrightarrow}}X\) in \(\mathbf{dFSmoothStack}\) and let \(T^{\infty}X|_{Y}\coloneqq T^{\infty}X\times_{X}Y\) be the restriction of the formal disk bundle of \(X\) to \(Y\). Then we have the equivalence of formal derived smooth stacks \[T^{\infty}X|_{Y}\,\simeq\,T^{\infty}Y\times_{Y}N_{X}^{\infty}Y. \tag{4.2.31}\] Proof.: First, notice that the restriction of the formal disk bundle satisfies \(T^{\infty}X|_{Y}\simeq Y\times_{\Im(X)}X\), as shown by the following pullback squares: (4.2.32) On the other hand, we also have the equivalence \(T^{\infty}Y\times_{Y}N_{X}^{\infty}Y\simeq Y\times_{\Im(X)}X\), which follows from the other pullback squares (4.2.33) Therefore, we have the conclusion of the lemma. ### Formal moduli problems from derived infinitesimal cohesion In this subsection we will briefly investigate the relation between formal derived smooth stacks, which we have defined in this paper, and formal moduli problems, which are the pivotal ingredient of the formalisation of BV-theory developed by [13]. This relation is summed up in fig. 5: a pointed formal moduli problem can be seen as a formal neighbourhood of a more general formal derived smooth stack. To begin with, let us consider the definition of formal moduli problem as it appears in [13]. From now on, we will denote by \(\mathsf{dgArt}_{\Bbbk}^{\leq 0}\) the category of _local Artinian dg-algebras_. 
Recall that a local Artinian dg-algebra is a negatively graded dg-\(\Bbbk\)-algebra \(\mathcal{A}\) concentrated in finitely many degrees, whose graded components are finite-dimensional and which comes equipped with a unique maximal differential ideal \(\mathfrak{m}_{\mathcal{A}}\subset\mathcal{A}\) such that \(\mathcal{A}/\mathfrak{m}_{\mathcal{A}}\cong\Bbbk\) and \(\mathfrak{m}_{\mathcal{A}}^{N}=0\) for some \(N\gg 0\). Equivalently, a local Artinian dg-algebra is a negatively graded dg-\(\Bbbk\)-algebra \(\mathcal{A}\) concentrated in finitely many degrees, whose \(0\)th cohomology \(\mathrm{H}^{0}(\mathcal{A})\) is a local Artinian algebra in the ordinary sense. Then, the definition of formal moduli problem is the following. **Definition 4.37** (Pointed formal moduli problem).: A _pointed formal moduli problem_ is a functor \[F\,:\,\mathsf{dgArt}_{\Bbbk}^{\leq 0}\,\longrightarrow\,\mathsf{sSet}, \tag{4.3.1}\] such that it satisfies the following properties: * \(F(\Bbbk)\) is contractible, * \(F\) maps surjective morphisms of Artinian dg-algebras to fibrations of simplicial sets, * let \(\mathcal{A}\twoheadrightarrow\mathcal{C}\) and \(\mathcal{B}\twoheadrightarrow\mathcal{C}\) be two surjective morphisms of Artinian dg-algebras; then, the natural map \(F(\mathcal{A}\times_{\mathcal{C}}\mathcal{B})\to F(\mathcal{A})\times_{F(\mathcal{C})}F(\mathcal{B})\) is a weak homotopy equivalence. In other words, we can see a pointed formal moduli problem as a derived stack on the \((\infty,1)\)-site of Artinian dg-algebras, with the natural simplicial model structure induced by the usual \((\infty,1)\)-site structure of commutative dg-algebras. A pivotal class of these objects will be provided by local \(L_{\infty}\)-algebras, whose definition from [10] we now recall. **Definition 4.38** (Local \(L_{\infty}\)-algebra).: A _local \(L_{\infty}\)-algebra_ \(\mathfrak{L}(M)\) on a smooth manifold \(M\in\mathsf{Mfd}\) is a \(\mathbb{Z}\)-graded vector bundle \(L\twoheadrightarrow M\) whose space of sections \(\mathfrak{L}(M)\coloneqq\Gamma(M,L)\) is equipped with a collection of poly-differential operators \[\ell_{n}\,:\,\mathfrak{L}(M)^{\otimes n}\,\longrightarrow\,\mathfrak{L}(M) \tag{4.3.2}\] of cohomological degree \(2-n\) for any \(n\geq 1\) such that \((\mathfrak{L}(M),\{\ell_{n}\}_{n\geq 1})\) is an \(L_{\infty}\)-algebra. The definition above is, then, a natural generalisation of the more familiar notion of \(L_{\infty}\)-algebra on a degree-wise finite-dimensional \(\mathbb{Z}\)-graded vector space to the case of an infinite-dimensional \(\mathbb{Z}\)-graded vector space of sections of a \(\mathbb{Z}\)-graded vector bundle. As anticipated, an \(L_{\infty}\)-algebra, local or not, gives naturally rise to a formal moduli problem by the following construction. **Definition 4.39** (Maurer-Cartan formal moduli problem).: Given an \(L_{\infty}\)-algebra \(\mathfrak{g}\), the _Maurer-Cartan formal moduli problem_ \(\mathbf{MC}(\mathfrak{g})\) can be defined by the functor \[\begin{split}\mathbf{MC}(\mathfrak{g})\,:\,\mathsf{dgArt}_{\Bbbk}^{\leq 0}&\longrightarrow\,\mathsf{sSet}\\ \mathcal{A}&\longmapsto\,\mathrm{MC}(\mathfrak{g}\otimes_{\Bbbk}\mathfrak{m}_{\mathcal{A}}),\end{split} \tag{4.3.3}\] where \(\mathfrak{m}_{\mathcal{A}}\) is the maximal differential ideal of \(\mathcal{A}\) and \(\mathrm{MC}(-)\) is the simplicial set of solutions to the Maurer-Cartan equation. Notice that the Maurer-Cartan formal moduli problem is a pointed formal moduli problem. 
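For reference, let us spell out the Maurer-Cartan equation implicit in the definition above: with the degree conventions of definition 4.38, a Maurer-Cartan element is a degree-\(1\) element \(\alpha\in(\mathfrak{g}\otimes_{\Bbbk}\mathfrak{m}_{\mathcal{A}})^{1}\) satisfying
\[\sum_{n\geq 1}\frac{1}{n!}\,\ell_{n}(\alpha,\dots,\alpha)\;=\;0,\]
where the sum is finite because \(\mathfrak{m}_{\mathcal{A}}\) is nilpotent. The simplicial set \(\mathrm{MC}(-)\) is then obtained, in the standard fashion, by evaluating this condition on \(\mathfrak{g}\otimes_{\Bbbk}\mathfrak{m}_{\mathcal{A}}\otimes\Omega^{\bullet}_{\mathrm{poly}}(\Delta^{n})\), with \(\Omega^{\bullet}_{\mathrm{poly}}(\Delta^{n})\) the polynomial de Rham forms on simplices.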
**Remark 4.40** (Any pointed formal moduli problem is equivalent to a Maurer-Cartan one).: Thanks to the results of [11, 12], we know that any pointed formal moduli problem \(F\) is equivalent to a Maurer-Cartan formal moduli problem, i.e. there is an equivalence \[F\,\simeq\,\mathbf{MC}(\mathfrak{L}(M)), \tag{4.3.4}\] for some local \(L_{\infty}\)-algebra \(\mathfrak{L}(M)\) on the smooth manifold \(M\). Thus, without any loss of generality, we can focus on Maurer-Cartan moduli problems. **Construction 4.41** (Artinian dg-algebras are almost finitely presented homotopy \(\mathcal{C}^{\infty}\)-algebras).: * Artinian dg-algebras naturally embed into the model category of dg-commutative algebras; thus, by composing with the Dold-Kan correspondence functor \(|-|_{\mathrm{DK}}:\mathsf{dgCAlg}_{\mathbb{R}}^{\leq 0}\longrightarrow\mathsf{sCAlg}_{\mathbb{R}}\), see e.g. [10], we can embed Artinian dg-algebras into simplicial commutative algebras. * Given any Artinian dg-algebra \(\mathcal{A}\in\mathsf{dgArt}_{\mathbb{R}}^{\leq 0}\), its \(0\)-degree component \(\mathcal{A}_{0}\) is an ordinary Artinian algebra and thus it is canonically a \(\mathcal{C}^{\infty}\)-algebra by the discussion in section 2. Moreover, the \(\{\mathcal{A}_{-i}\}_{i>0}\) are modules over \(\mathcal{A}_{0}\). Therefore, we have a canonical dg-\(\mathcal{C}^{\infty}\)-algebra structure on \(\mathcal{A}\) in the sense of [10]. Then, by [11], its Dold-Kan simplicialisation is a homotopy \(\mathcal{C}^{\infty}\)-algebra, which we will denote by \(|\mathcal{A}|_{\mathrm{DK}}^{\mathcal{C}^{\infty}}\in\mathsf{sC}^{\infty}\mathsf{Alg}\). * By definition, the \(0\)th cohomology \(\mathrm{H}^{0}(\mathcal{A})\cong\pi_{0}|\mathcal{A}|_{\mathrm{DK}}\) of a local Artinian dg-algebra \(\mathcal{A}\) is an ordinary local Artinian algebra and thus it is canonically a finitely presented \(\mathcal{C}^{\infty}\)-algebra. Moreover, since \(\mathcal{A}\) is degree-wise finite-dimensional, the \(\{\pi_{i}|\mathcal{A}|_{\mathrm{DK}}\}_{i>0}\) are finitely presented modules over \(\pi_{0}|\mathcal{A}|_{\mathrm{DK}}\). Therefore, \(|\mathcal{A}|_{\mathrm{DK}}^{\mathcal{C}^{\infty}}\in\mathsf{sC}^{\infty}\mathsf{Alg}_{\mathrm{afp}}\) is canonically an almost finitely presented \(\mathcal{C}^{\infty}\)-algebra. Thus, by generalising the case of ordinary Artinian algebras, the Dold-Kan functor \(|-|_{\mathrm{DK}}\) can be uniquely lifted as follows (4.3.5) where \((-)^{\mathrm{alg}}:\mathsf{sC}^{\infty}\mathsf{Alg}\to\mathsf{sCAlg}_{\mathbb{R}}\) is, as usual, the forgetful functor which forgets the \(\mathcal{C}^{\infty}\)-algebra structure and leaves us with the underlying simplicial commutative algebra. Therefore we have an embedding \[|-|_{\mathrm{DK}}^{\mathcal{C}^{\infty}}\,:\,\mathsf{dgArt}_{\mathbb{R}}^{\leq 0}\,\hookrightarrow\,\mathsf{sC}^{\infty}\mathsf{Alg}_{\mathrm{afp}}. \tag{4.3.6}\] In other words, we can interpret an Artinian dg-algebra as the algebra of functions on a formal derived smooth manifold, which will be, in particular, a thickened point. This means that we could see formal moduli problems as formal derived smooth stacks whose source category has been restricted to derived thickened points. 
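As a minimal illustration of this embedding, consider the local Artinian dg-algebra \(\mathcal{A}=\mathbb{R}[\eta]\) on a single generator \(\eta\) in degree \(-1\) (so that \(\eta^{2}=0\) by graded-commutativity) with trivial differential. Then
\[\pi_{0}|\mathcal{A}|_{\mathrm{DK}}\;\cong\;\mathbb{R},\qquad\pi_{1}|\mathcal{A}|_{\mathrm{DK}}\;\cong\;\mathbb{R}\cdot\eta,\]
so \(\mathrm{RSpec}|\mathcal{A}|_{\mathrm{DK}}^{\mathcal{C}^{\infty}}\) is a derived thickened point which is trivial in the ordinary infinitesimal directions but carries one derived direction; its reduction is \(\Re(\mathrm{RSpec}|\mathcal{A}|_{\mathrm{DK}}^{\mathcal{C}^{\infty}})\simeq\ast\), consistently with the picture of formal moduli problems as stacks probed by derived thickened points.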
In this light, it is possible to see that we can always extract a formal moduli problem from a formal derived smooth stack \(X\) by restricting the \((\infty,1)\)-site of formal derived smooth manifolds to the \((\infty,1)\)-site of thickened points and by sending such thickened points to some fixed point \(x:*\to X\) of the original stack. Let us construct this operation step by step. **Construction 4.42** (Formal moduli problems as formal completion of formal derived smooth stacks).: Let \(X\in\mathsf{dFSmoothStack}\) be a formal derived smooth stack. As discussed above, we have the embedding \(|-|_{\mathrm{DK}}^{\mathcal{C}^{\infty}}:\mathsf{dgArt}_{\mathbb{R}}^{\leq 0}\hookrightarrow\mathsf{sC}^{\infty}\mathsf{Alg}_{\mathrm{afp}}\). This gives immediately rise to a formal moduli problem \(X^{\wedge}\) which is defined by the pullback \(X^{\wedge}\coloneqq(|-|_{\mathrm{DK}}^{\mathcal{C}^{\infty}})^{*}X\). This is a functor \[\begin{split} X^{\wedge}\,:\,\mathsf{dgArt}_{\mathbb{R}}^{\leq 0}&\,\longrightarrow\,\mathsf{sSet}\\ \mathcal{A}&\,\longmapsto\,X\big{(}|\mathcal{A}|_{\mathrm{DK}}^{\mathcal{C}^{\infty}}\big{)},\end{split} \tag{4.3.7}\] where \(|\mathcal{A}|_{\mathrm{DK}}^{\mathcal{C}^{\infty}}\in\mathsf{sC}^{\infty}\mathsf{Alg}_{\mathrm{afp}}\) is the almost finitely presented simplicial \(\mathcal{C}^{\infty}\)-algebra corresponding to the Artinian dg-algebra \(\mathcal{A}\in\mathsf{dgArt}_{\mathbb{R}}^{\leq 0}\). However, this functor does not encode a _pointed_ formal moduli problem, because the thickened points in the site are allowed to be sent to any point of the stack \(X\) and not only to some fixed point \(x\in X\). Let us then fix a point \(x:*\to X\) and define the following pointed formal moduli problem: \[\begin{split} X_{x}^{\wedge}\,:\,\mathsf{dgArt}_{\mathbb{R}}^{\leq 0}&\,\longrightarrow\,\mathsf{sSet}\\ \mathcal{A}&\,\longmapsto\,X\big{(}|\mathcal{A}|_{\mathrm{DK}}^{\mathcal{C}^{\infty}}\big{)}\times_{X(*)}*\,,\end{split} \tag{4.3.8}\] which is the smooth version of the construction appearing in [14, Section 4.2] and [10], called the _formal completion_ at \(x\) of a derived stack. **Definition 4.43** (\((\infty,1)\)-topos of formal moduli problems).: We define the \((\infty,1)\)-category of _formal moduli problems_ by the \((\infty,1)\)-category of pre-stacks \[\mathbf{FMP}\,\coloneqq\,\mathbf{N}_{hc}([\mathsf{dgArt}_{\mathbb{R}}^{\leq 0},\mathsf{sSet}]_{\mathrm{proj}}^{\circ}), \tag{4.3.9}\] with its natural structure of \((\infty,1)\)-topos of pre-stacks. **Proposition 4.44** (Infinitesimally cohesive \((\infty,1)\)-topos of formal moduli problems).: The \((\infty,1)\)-topos \(\mathbf{FMP}\) of formal moduli problems has a natural infinitesimally cohesive structure as defined by [DCCT, Definition 4.1.21]. Proof.: By [DCCT, Proposition 4.1.24] the \((\infty,1)\)-category of pre-stacks on an \((\infty,1)\)-site containing a zero object (i.e. an object which is both initial and terminal) is an infinitesimally cohesive \((\infty,1)\)-topos. The simplicial model category underlying \(\mathbf{FMP}\) is precisely \([\mathsf{dgArt}_{\mathbb{R}}^{\leq 0},\mathsf{sSet}]_{\mathrm{proj}}\), making \(\mathbf{FMP}\) an \((\infty,1)\)-category of pre-stacks. Now, we can see that the algebra \(\mathbb{R}\) is both a terminal and an initial object in \(\mathsf{dgArt}_{\mathbb{R}}^{\leq 0}\). In fact, for any Artinian dg-algebra \(\mathcal{A}\), there is not only a unique map \(\mathbb{R}\to\mathcal{A}\), but crucially also a unique \(\mathbb{R}\)-point \(\mathcal{A}\to\mathbb{R}\). 
Thus we have the conclusion. **Corollary 4.45** (Derived infinitesimal cohesion of formal moduli problems).: The immediate consequence of [DCCT, Proposition 4.1.24] is that, in particular, the \((\infty,1)\)-topos \(\mathbf{FMP}\) of formal moduli problems is naturally equipped with a cohesive structure of the form (4.3.10) Morally speaking, formal moduli problems in \(\mathbf{FMP}\) can be thought of as infinitesimally thickened \(\infty\)-groupoids, in a derived sense. **Lemma 4.46** (Derived relative cohesion).: Formal derived smooth stacks are equipped with a relative cohesive structure over the \((\infty,1)\)-topos \(\mathbf{FMP}\) of formal moduli problems, i.e. we have a quadruplet of adjoint \((\infty,1)\)-functors (4.3.11) such that: * \(\Pi^{\mathrm{rel}}\) preserves finite products, * \(\mathrm{Disc}^{\mathrm{rel}}\) and \(\mathrm{coDisc}^{\mathrm{rel}}\) are fully faithful, * \(\Gamma^{\mathrm{rel}}=(-)^{\wedge}\) is precisely the functor in equation (4.3.7). Proof.: Recall that there is the following embedding of simplicial sites (4.3.12) This gives rise by left and right Kan extension to the following triplet of adjoint functors between the corresponding simplicial categories of pre-stacks: (4.3.13) The functor \(\Gamma^{\mathrm{rel}}\coloneqq(|-|_{\mathrm{DK}}^{\mathcal{C}^{\infty}})^{*}\) immediately sends locally fibrant objects to fibrant objects, since the simplicial category \([\mathsf{dgArt}_{\mathbb{R}}^{\leq 0},\mathsf{sSet}]_{\mathrm{proj}}\) is equipped with a global and not a local model structure. By [1], this also implies that its left adjoint \(\mathrm{Disc}^{\mathrm{rel}}\coloneqq(|-|_{\mathrm{DK}}^{\mathcal{C}^{\infty}})_{!}\) restricts to stacks. We must now show that \(\mathrm{coDisc}^{\mathrm{rel}}\coloneqq(|-|_{\mathrm{DK}}^{\mathcal{C}^{\infty}})_{*}\) sends fibrant objects to locally fibrant objects in the local model structure \([\mathsf{sC}^{\infty}\mathsf{Alg}_{\mathrm{afp}},\mathsf{sSet}]_{\mathrm{proj,loc}}\). Notice that, by adjunction, for any formal moduli problem \(F\in[\mathsf{dgArt}_{\mathbb{R}}^{\leq 0},\mathsf{sSet}]_{\mathrm{proj}}\), we have sections of the form \((\mathrm{coDisc}^{\mathrm{rel}}F)(U)\simeq\mathrm{RHom}_{\mathbf{FMP}}(\Gamma^{\mathrm{rel}}(U),F)\) on any formal derived smooth manifold \(U\in\mathsf{dFMfd}\). For \(\mathrm{coDisc}^{\mathrm{rel}}F\) to be a formal derived smooth stack, it must satisfy the descent condition, i.e. one must have an equivalence \((\mathrm{coDisc}^{\mathrm{rel}}F)(U)\simeq\mathrm{Rlim}_{n}\,\mathrm{RHom}_{\mathbf{FMP}}(\Gamma^{\mathrm{rel}}H(U)_{n},F)\) for any etale hypercover \(H(U)_{\bullet}\to U\). This is verified by noticing that we have an equivalence \(\mathrm{Lcolim}_{n}\Gamma^{\mathrm{rel}}H(U)_{n}\simeq\Gamma^{\mathrm{rel}}U\), since, for any formal derived smooth stack \(X\), we have \(\Gamma^{\mathrm{rel}}X\simeq\coprod_{x:*\to X}X^{\wedge}_{x}\), where the \(X^{\wedge}_{x}\) are pointed formal moduli problems. Thus, we showed that \((\mathrm{Disc}^{\mathrm{rel}}\dashv\Gamma^{\mathrm{rel}})\) and \((\Gamma^{\mathrm{rel}}\dashv\mathrm{coDisc}^{\mathrm{rel}})\) are simplicial Quillen adjunctions with respect to the local model structure. 
We now define a further functor
\[\varPi^{\mathrm{rel}}X\;\coloneqq\;\mathrm{Disc}^{\mathrm{inf}}(\varPi X)\sqcup_{\varGamma^{\mathrm{rel}}(\flat X)}^{h}\varGamma^{\mathrm{rel}}(X). \tag{4.3.14}\]
It is possible to see, by using the limit-preserving property of the Hom-space, that the right adjoint of \(\varPi^{\mathrm{rel}}\) is the functor \(F\mapsto\mathrm{Disc}(\varGamma^{\mathrm{inf}}F)\times^{h}_{\sharp\,\mathrm{coDisc}^{\mathrm{rel}}F}\mathrm{coDisc}^{\mathrm{rel}}F\) on formal moduli problems \(F\in\mathbf{FMP}\). However, we also have the following equivalences:
\[\begin{split}\mathrm{Disc}(\varGamma^{\mathrm{inf}}F)\times^{h}_{\sharp\,\mathrm{coDisc}^{\mathrm{rel}}F}\mathrm{coDisc}^{\mathrm{rel}}F\;&\simeq\;\Big{(}\coprod_{f:\mathbb{R}\to F}\ast\Big{)}\times^{h}_{\sharp\,\mathrm{coDisc}^{\mathrm{rel}}F}\mathrm{coDisc}^{\mathrm{rel}}F\\ &\simeq\;\mathrm{Disc}^{\mathrm{rel}}F,\end{split} \tag{4.3.15}\]
so that the right adjoint of \(\varPi^{\mathrm{rel}}\) is precisely \(\mathrm{Disc}^{\mathrm{rel}}\). This provides the quadruplet of adjoint \((\infty,1)\)-functors of the statement.

Just like any quadruple of adjoint \((\infty,1)\)-functors, the relative cohesive structure presented by diagram (4.3.11) gives naturally rise to a triplet of adjoint \((\infty,1)\)-endofunctors.

**Definition 4.47** (Modalities of relative cohesion).: We define the following endofunctors:
\[(\textstyle\int^{\mathrm{rel}}\dashv\,\flat^{\mathrm{rel}}\,\dashv\,\sharp^{\mathrm{rel}})\,:\;\mathbf{dFSmoothStack}\,\longrightarrow\,\mathbf{dFSmoothStack}, \tag{4.3.16}\]
where we respectively call
1. _relative shape modality_ \(\int^{\mathrm{rel}}\coloneqq\mathrm{Disc}^{\mathrm{rel}}\circ\varPi^{\mathrm{rel}}\),
2. _relative flat modality_ \(\flat^{\mathrm{rel}}\coloneqq\mathrm{Disc}^{\mathrm{rel}}\circ\varGamma^{\mathrm{rel}}\),
3. _relative sharp modality_ \(\sharp^{\mathrm{rel}}\coloneqq\mathrm{coDisc}^{\mathrm{rel}}\circ\varGamma^{\mathrm{rel}}\).

**Lemma 4.48** (Relative modalities in terms of derived differential cohesion).: For any formal derived smooth stack \(X\in\mathbf{dFSmoothStack}\) there are natural equivalences
\[\flat^{\mathrm{rel}}X\;\simeq\;\flat X\times_{\Im(X)}^{h}X,\qquad\textstyle\int^{\mathrm{rel}}X\;\simeq\;\textstyle\int X\sqcup_{\Re(X)}^{h}X. \tag{4.3.17}\]
Proof.: Since for any formal derived smooth stack we have \(\varGamma^{\mathrm{rel}}X\simeq\coprod_{x:\ast\to X}X_{x}^{\wedge}\) and \(\mathrm{Disc}^{\mathrm{rel}}(X_{x}^{\wedge})\simeq\mathbb{D}_{X,x}\), we obtain the equivalence
\[\flat^{\mathrm{rel}}X\;\simeq\;\coprod_{x:\ast\to X}\mathbb{D}_{X,x}. \tag{4.3.18}\]
But this is precisely \(\flat^{\mathrm{rel}}X\;\simeq\;\flat X\times_{\Im(X)}^{h}X\), since one has that \(\flat X\simeq\coprod_{x:\ast\to X}\ast\) and, as we have seen many times, that the formal disk is given by \(\mathbb{D}_{X,x}\simeq\ast\times_{\Im(X)}^{h}X\). Thus, for any \(X,Y\in\mathbf{dFSmoothStack}\), we have the chain of equivalences
\[\begin{split}\mathrm{RHom}(X,\flat^{\mathrm{rel}}Y)&\;\simeq\;\mathrm{RHom}(X,\flat Y)\times_{\mathrm{RHom}(X,\Im Y)}^{h}\,\mathrm{RHom}(X,Y)\\ &\;\simeq\;\mathrm{RHom}(\textstyle\int\!X,\,Y)\times_{\mathrm{RHom}(\Re X,Y)}^{h}\,\mathrm{RHom}(X,Y)\\ &\;\simeq\;\mathrm{RHom}(\textstyle\int\!X\sqcup_{\Re X}^{h}X,\,Y).\end{split} \tag{4.3.19}\]
By unravelling the definition \(\int^{\mathrm{rel}}\!X=\mathrm{Disc}^{\mathrm{rel}}\varPi^{\mathrm{rel}}X\) of the relative shape modality, one obtains the equivalence \(\int^{\mathrm{rel}}\!X\simeq\int\!X\sqcup_{\flat^{\mathrm{rel}}\Re(X)}\flat^{\mathrm{rel}}X\). However, notice that we can form the diagram (4.3.20) Moreover, we have the following chain of equivalences:
\[\begin{split}\Re(X)\sqcup_{\flat^{\mathrm{rel}}\Re(X)}^{h}\,\flat^{\mathrm{rel}}X&\;\simeq\;\Re(X)\sqcup_{\flat\Re(X)\times_{\Im(X)}^{h}\Re(X)}^{h}(\flat X\times_{\Im(X)}^{h}X)\\ &\;\simeq\;\Re(X)\sqcup_{\flat X\times_{\Im(X)}^{h}\Re(X)}^{h}(\flat X\times_{\Im(X)}^{h}X)\\ &\;\simeq\;\Re(X)\sqcup_{\Re(X)}^{h}X\;\simeq\;X,\end{split} \tag{4.3.21}\]
where we used the fact that we have equivalences \(\flat\Re(X)\simeq\flat X\) and \(\Im\Re(X)\simeq\Im(X)\). Since the outer square is already a pushout, we then have an equivalence \(\int^{\mathrm{rel}}\!X\simeq\int\!X\sqcup_{\Re(X)}^{h}X\). 
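As a simple consistency check of the equivalences above, worked out here only for illustration, let \(M\) be an ordinary smooth manifold, so that \(\Re(M)\simeq M\). Then
\[\textstyle\int^{\mathrm{rel}}M\;\simeq\;\textstyle\int M\sqcup_{M}^{h}M\;\simeq\;\textstyle\int M,\qquad\flat^{\mathrm{rel}}M\;\simeq\;\flat M\times_{\Im(M)}^{h}M\;\simeq\;\coprod_{x\in M}\mathbb{D}_{M,x},\]
i.e. the relative shape of \(M\) is its ordinary shape, while the relative flat modality replaces \(M\) by the disjoint union of the formal disks at all of its points.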
**Remark 4.49** (All the structures in the context of derived differential cohesion).: By putting together all the structures we encountered in this section, we can write the following diagram of \((\infty,1)\)-categories: (4.3.22) where, in more detail, we have the following structures: * the left vertical quadruplet \((\hat{\imath}\dashv\varPi^{\mathrm{dif}}\dashv\mathrm{Disc}^{\mathrm{dif}}\dashv\varGamma^{\mathrm{dif}})\) of \((\infty,1)\)-functors presents a differential cohesive structure on the \((\infty,1)\)-category **dFSmoothStack** of formal derived smooth stacks over the \((\infty,1)\)-category **SmoothStack\({}^{\boldsymbol{+}}\)** from diagram (4.2.5); * the right vertical quadruplet \((\Pi^{\rm inf}\dashv{\rm Disc}^{\rm inf}\dashv\Gamma^{\rm inf}\dashv{\rm coDisc}^{\rm inf})\) of \((\infty,1)\)-functors presents an infinitesimal cohesive structure on the \((\infty,1)\)-category \(\mathbf{FMP}\) of formal moduli problems over the \((\infty,1)\)-category \(\infty\mathbf{Grpd}\) of \(\infty\)-groupoids from diagram (4.3.10); * the upper horizontal quadruplet \((\Pi^{\rm rel}\dashv{\rm Disc}^{\rm rel}\dashv\Gamma^{\rm rel}\dashv{\rm coDisc}^{\rm rel})\) of \((\infty,1)\)-functors presents a relative cohesive structure on the \((\infty,1)\)-category \(\mathbf{dFSmoothStack}\) of formal derived smooth stacks over the \((\infty,1)\)-category \(\mathbf{FMP}\) of formal moduli problems from diagram (4.3.11); * the lower horizontal quadruplet \((\Pi\dashv{\rm Disc}\dashv\Gamma\dashv{\rm coDisc})\) of \((\infty,1)\)-functors presents a cohesive structure on the \((\infty,1)\)-category \(\mathbf{SmoothStack}^{\boldsymbol{+}}\) over the \((\infty,1)\)-category \(\infty\mathbf{Grpd}\) of \(\infty\)-groupoids. Finally, the following proposition is the step-by-step generalisation of [4, Proposition 6.5.15] to derived smooth geometry. **Proposition 4.50** (Derived relative cohesion as a pushout).: The diagram (4.3.23) is an \((\infty,1)\)-pushout square of \((\infty,1)\)-topoi. Proof.: The result of [11, Proposition 6.3.2.3] tells us that an \((\infty,1)\)-pushout of \((\infty,1)\)-topoi can be concretely computed as an \((\infty,1)\)-pullback of the underlying \((\infty,1)\)-categories, where the morphisms are the left adjoint \((\infty,1)\)-functors in all pairs presenting the geometric morphisms. Then, we need to show that the square (4.3.24) is an \((\infty,1)\)-pullback of \((\infty,1)\)-categories. Since any stack can be written as the \((\infty,1)\)-colimit of representables and the left adjoints preserve colimits, it is enough to show that the following diagram of sites is a pullback square: (4.3.25) where the functor \((|-|_{\mathrm{DK}}^{\mathcal{C}^{\infty}})^{\mathrm{op}}\) embeds any Artinian dg-algebra \(\mathcal{A}\) as the formal derived smooth manifold \(\mathds{R}\mathrm{Spec}|\mathcal{A}|_{\mathrm{DK}}^{\mathcal{C}^{\infty}}\) and \(((-)^{\mathrm{red}})^{\mathrm{op}}\) reduces any formal derived smooth manifold to a reduced \(\mathcal{C}^{\infty}\)-variety. Commutativity of the square (4.3.25) is assured by the fact that there is an equivalence \(\big{(}\mathds{R}\mathrm{Spec}|\mathcal{A}|_{\mathrm{DK}}^{\mathcal{C}^{\infty}}\big{)}^{\mathrm{red}}\simeq*\), which is a consequence of \(\mathrm{H}^{0}(\mathcal{A})/\mathfrak{m}_{\mathrm{H}^{0}(\mathcal{A})}\cong\mathbb{R}\). 
However, the converse is also true: given a formal derived smooth manifold \(U\in\mathsf{dFMfd}\), the equivalence \((U)^{\mathrm{red}}\simeq*\) implies that \(U\simeq\mathrm{RSpec}|\mathcal{A}|_{\mathrm{DK}}^{\mathcal{C}^{\infty}}\) for some Artinian dg-algebra \(\mathcal{A}\in\mathsf{dgArt}^{\leq 0}_{\mathbb{R}}\). This shows that the square (4.3.25) is an \((\infty,1)\)-pullback and, therefore, proves the proposition. ### \(L_{\infty}\)-algebroids as formal derived smooth stacks In this subsection we will develop a general picture of \(L_{\infty}\)-algebroids - and thus of the geometric objects sometimes known as \(NQ\)-manifolds in the literature - in the context of derived differential cohesion. We will see an interesting interplay between the formal and the higher derived properties of formal derived smooth stacks, which is also related to the research by [10]. First, we write the appropriate definition of a groupoid object internal to an \((\infty,1)\)-category, as proposed in [11], in our case of interest of formal derived smooth stacks. **Definition 4.51** (Groupoid object).: A _groupoid object_ in the \((\infty,1)\)-category \(\mathbf{dFSmoothStack}\) of formal derived smooth stacks is a simplicial object \(\mathcal{G}_{\bullet}:\Delta^{\mathrm{op}}\to\mathbf{dFSmoothStack}\) such that all the natural maps (also known as Segal maps) \[\mathcal{G}_{n}\;\longrightarrow\;\mathcal{G}_{1}\times^{h}_{\mathcal{G}_{0}}\cdots\times^{h}_{\mathcal{G}_{0}}\mathcal{G}_{1} \tag{4.4.1}\] are equivalences of formal derived smooth stacks. As discussed by [11], a groupoid object \(\mathcal{G}_{\bullet}\) in an \((\infty,1)\)-topos gives rise to the colimiting cocone \(\mathcal{G}_{0}\longrightarrow\mathds{L}\mathrm{colim}\,\mathcal{G}_{\bullet}\), which is an effective epimorphism. Conversely, any effective epimorphism \(X\xrightarrow{p}\mathcal{G}\) is equivalently a groupoid object \(\mathcal{G}_{\bullet}\) with \(\mathcal{G}_{0}\simeq X\). Then, in particular, this must hold in the \((\infty,1)\)-topos \(\mathbf{dFSmoothStack}\) of formal derived smooth stacks. To sum up, the relevant data of a groupoid object of formal derived smooth stacks can be packed in a diagram of the following form: \[\mathcal{G}_{1}\;\overset{s}{\underset{t}{\rightrightarrows}}\;\mathcal{G}_{0}\;\xrightarrow{\;p\;}\;\operatorname{Lcolim}\mathcal{G}_{\bullet}, \tag{4.4.2}\] where \(s\) plays the role of the source map and \(t\) the role of the target map. **Example 4.52** (Derived smooth group).: We call _derived smooth group_ \(\mathbf{B}G\in\mathbf{dFSmoothStack}\) a groupoid object that is pointed, i.e. of the form \(*\xrightarrow{*}\mathbf{B}G\) with \(\mathcal{G}_{0}=*\) and effective epimorphism given by the inclusion of the canonical point. Here, diagram (4.4.2) reduces to (4.4.3) Now, in the formalism of derived differential cohesion we are able to generalise an idea from [41] to derived geometry and, thus, provide a very general definition of what we may call a derived smooth algebroid. Morally speaking, a derived smooth algebroid is going to be a groupoid object \(\mathcal{G}_{\bullet}\) in the \((\infty,1)\)-category of formal derived smooth stacks which is infinitesimally thickened over its base \(\mathcal{G}_{0}\). We will show that such a notion generalises familiar \(L_{\infty}\)-algebroids. 
**Definition 4.53** (Derived smooth algebroid).: We call _derived smooth algebroid_ a groupoid object \(X\xrightarrow{p}\mathcal{G}\) in \(\mathbf{dFSmoothStack}\) such that the morphism \(\Im(X)\xrightarrow{\Im(p)}\Im(\mathcal{G})\) is an equivalence. We also call the map \(p\) the _anchor map_ of the derived smooth algebroid. Now we will see that the usual notions of \(L_{\infty}\)-algebra and \(L_{\infty}\)-algebroid fit into this wider definition of derived smooth algebroid. First, let us see how \(L_{\infty}\)-algebras and \(L_{\infty}\)-algebroids are embedded into formal derived smooth stacks. **Definition 4.54** (Delooping of an \(L_{\infty}\)-algebra and of an \(L_{\infty}\)-algebroid).: * The _delooping_ \(\mathbf{B}\mathfrak{g}\) of an \(L_{\infty}\)-algebra \(\mathfrak{g}\) can be defined by the formal derived smooth stack \[\begin{split}\mathbf{B}\mathfrak{g}\,:\;\mathsf{dFMfd}^{\mathrm{op}}&\longrightarrow\mathsf{sSet}\\ U&\longmapsto\operatorname{MC}(\mathfrak{g}\otimes\mathfrak{m}_{\mathcal{O}(U)}),\end{split}\] (4.4.4) where \(\mathfrak{m}_{\mathcal{O}(U)}\) is the nilradical of the dg-commutative algebra \(\operatorname{N}\!\mathcal{O}(U)^{\operatorname{alg}}\) and \(\operatorname{MC}(-)\) is the simplicial set of solutions to the Maurer-Cartan equation. * More generally, the _delooping_ \(\mathbf{B}\mathfrak{L}(M)\) of a local \(L_{\infty}\)-algebra \(\mathfrak{L}(M)\) can be defined by the formal derived smooth stack \[\begin{split}\mathbf{B}\mathfrak{L}(M)\,:\;\mathsf{dFMfd}^{\mathrm{op}}&\longrightarrow\mathsf{sSet}\\ U&\longmapsto\operatorname{MC}(\mathfrak{L}(M)\operatorname{\widehat{\otimes}}\mathfrak{m}_{\mathcal{O}(U)}),\end{split}\] (4.4.5) where we defined the pullback \(\mathfrak{L}(M)\operatorname{\widehat{\otimes}}\mathfrak{m}_{\mathcal{O}(U)}\coloneqq\mathfrak{L}(M)\operatorname{\widehat{\otimes}}\operatorname{N}\!\mathcal{O}(U)\times_{\mathfrak{L}(M)\operatorname{\widehat{\otimes}}\operatorname{N}\!\mathcal{O}(U)^{\operatorname{red}}}\{0\}.\) * The _delooping_ \(\mathbf{B}\mathfrak{a}\) of an \(L_{\infty}\)-algebroid \(\mathfrak{a}\twoheadrightarrow M\) on an ordinary smooth manifold \(M\) can be defined by the formal derived smooth stack \[\begin{split}\mathbf{B}\mathfrak{a}\,:\;\mathsf{dFMfd}^{\mathrm{op}}&\longrightarrow\mathsf{sSet}\\ U&\longmapsto\coprod_{f:U^{\operatorname{red}}\to M}\operatorname{MC}(\Gamma(U^{\operatorname{red}},f^{*}\mathfrak{a})\operatorname{\widehat{\otimes}}_{\operatorname{N}\!\mathcal{O}(U)}\mathfrak{m}_{\mathcal{O}(U)}),\end{split}\] (4.4.6) where the \(\mathcal{C}^{\infty}\)-tensor product is given as above. We can now show that usual \(L_{\infty}\)-algebras and \(L_{\infty}\)-algebroids are examples of derived smooth algebroids as defined above. **Example 4.55** (Usual \(L_{\infty}\)-algebras).: Let \(\mathbf{B}\mathfrak{g}\) be the delooping of an \(L_{\infty}\)-algebra \(\mathfrak{g}\). The canonical map \(*\stackrel{{*}}{{\longrightarrow}}\mathbf{B}\mathfrak{g}\) gives rise to an equivalence \(*\simeq\Im(*)\longrightarrow\Im(\mathbf{B}\mathfrak{g})\simeq*\), which makes \(\mathbf{B}\mathfrak{g}\) into a derived smooth algebroid over the point. **Example 4.56** (Usual \(L_{\infty}\)-algebroids).: Let \(\mathfrak{a}\twoheadrightarrow M\), where \(M\) is an ordinary smooth manifold, be an \(L_{\infty}\)-algebroid in the usual sense. Then the map \(M\stackrel{{\rho}}{{\longrightarrow}}\mathbf{B}\mathfrak{a}\) presents the \(L_{\infty}\)-algebroid as a derived smooth algebroid in the sense above, since \(\Im(M)\simeq M\) and \(\Im(\mathbf{B}\mathfrak{a})\simeq M\). 
**Remark 4.57** (The base of a derived smooth algebroid).: Notice that, in the definition of a derived smooth algebroid \(X\xrightarrow{\;p\;}\mathcal{G}\), there is no requirement for \(X\) to be an ordinary smooth manifold or even a formal derived smooth manifold. In fact, \(X\) can be, in general, a formal derived smooth stack. In other words, derived smooth algebroids generalise \(L_{\infty}\)-algebroids by dropping the constraint that the base has to be an ordinary smooth manifold. Roughly speaking, a derived smooth algebroid is an infinitesimally thickened groupoid object whose base \(X\) is generally a formal derived smooth stack.

Let us now provide an archetypal example of such a generalised notion of derived smooth algebroid where the base is not just an ordinary smooth manifold, but a fully fledged formal derived smooth stack.

**Example 4.58** (Formal disk bundle as derived \(L_{\infty}\)-algebroid).: Let \(X\in\mathbf{dFSmoothStack}\) be any formal derived smooth stack. Recall the definition of the formal disk bundle \(T^{\infty}X=X\times_{\Im(X)}^{h}X\), induced by the canonical morphism \(\mathrm{i}_{X}:X\longrightarrow\Im(X)\) to the de Rham space \(\Im(X)\). Thus, the formal disk bundle gives rise to a groupoid object of the form (4.4.7) Moreover, notice that \(\Re(\Im(X))\simeq\Re(X)\), since one has the equivalence \(\iota^{\mathrm{red}*}\circ\iota^{\mathrm{red}}_{*}\simeq\mathrm{id}\). Thus, the diagram (4.4.7) presents, in particular, a derived \(L_{\infty}\)-algebroid in the generalised sense of definition 4.53.

In conclusion, the abstract definition of derived smooth algebroid above provides a generalisation of the usual definition of \(L_{\infty}\)-algebroid, which is based on the formalism of differential-graded manifolds. In section 5 we will explore some relevant examples motivated by physics.

**Remark 4.59** (Lie differentiation).: Finally, notice that we can use the infinitesimal flat modality to encompass Lie differentiation. In fact, one has the equivalences \(\varGamma^{\mathrm{rel}}(\mathbf{B}G)\simeq\mathbf{MC}(\mathfrak{g})\) and \(\mathrm{Disc}^{\mathrm{rel}}\big(\mathbf{MC}(\mathfrak{g})\big)\simeq\mathbf{B}\mathfrak{g}\). From these two equivalences, we obtain an equivalence of formal derived smooth stacks \[\flat^{\mathrm{rel}}\mathbf{B}G\;\simeq\;\mathbf{B}\mathfrak{g}. \tag{4.4.8}\]

### Derived jet bundles

In this subsection we will provide a definition of jet bundles as formal derived smooth stacks, rooted in the differential cohesive geometry delineated in this section. This will be an application of the framework developed by [10, 11].

**Construction 4.60**.: Let \(M\in\mathbf{dFMfd}\hookrightarrow\mathbf{dFSmoothStack}\) be any fixed formal derived smooth manifold. A bundle \(E\xrightarrow{\;p\;}M\) can be seen as an object of the slice \((\infty,1)\)-category \(\mathbf{dFSmoothStack}_{/M}\). Recall that there is a morphism \(\mathrm{i}_{M}:M\to\Im(M)\), which is the infinitesimal shape unit of definition 4.12, i.e. the canonical morphism from the formal derived smooth manifold \(M\) to its de Rham space. This induces a triplet of adjoint \((\infty,1)\)-functors \[(\mathrm{i}_{M})_{!}\,\dashv\,(\mathrm{i}_{M})^{*}\,\dashv\,(\mathrm{i}_{M})_{*}, \tag{4.5.1}\] which is the base change adjunction given by [11, Proposition 6.3.5.1], i.e. 
a triplet of \((\infty,1)\)-functors of the form (4.5.2) where \(\mathbf{dFSmoothStack}_{/M}\) and \(\mathbf{dFSmoothStack}_{/\Im(M)}\) are the slice \((\infty,1)\)-categories of formal derived smooth stacks respectively over \(M\) and over its de Rham space \(\Im(M)\).

**Definition 4.61** (Derived jet bundle).: For a given fibre bundle \(E\to M\), where \(E\) is a formal derived smooth stack and \(M\) is a formal derived smooth manifold, the _jet bundle_ \(\mathrm{Jet}_{M}E\to M\) is the fibre bundle of formal derived smooth stacks defined by the image of \(E\) under the functor \[\begin{split}\mathrm{Jet}_{M}\,:\;\mathbf{dFSmoothStack}_{/M}&\longrightarrow\mathbf{dFSmoothStack}_{/M}\\ E&\longmapsto\;\mathrm{Jet}_{M}E\,\coloneqq\,(\mathrm{i}_{M})^{*}(\mathrm{i}_{M})_{*}E.\end{split} \tag{4.5.3}\]

That this generalises the usual definition of jet bundles becomes clearer after corollary 4.68.

**Remark 4.62** (Jet co-monad).: From the definition, similarly to the previously examined co-monad structures, one obtains an equivalence of endofunctors \[\varDelta\,:\;\mathrm{Jet}_{M}\;\simeq\;\mathrm{Jet}_{M}\mathrm{Jet}_{M}. \tag{4.5.4}\] Thus we call the functor \(\mathrm{Jet}_{M}\) the _jet co-monad_ over \(M\). For any given bundle \((E\to M)\in\mathbf{dFSmoothStack}_{/M}\), the natural transformation (4.5.4) gives rise to a morphism \[\varDelta_{E}\,:\;\mathrm{Jet}_{M}E\;\longrightarrow\;\mathrm{Jet}_{M}(\mathrm{Jet}_{M}E). \tag{4.5.5}\] This is the coproduct of the co-monad structure associated to jet bundles, which was originally observed in the context of ordinary differential geometry by [10]. In the rest of this subsection we will show that some essential results by [11, 12] carry over to the formal derived smooth case.

**Lemma 4.63** (Adjunction with formal disk bundle).: There is a natural equivalence of functors \[(\mathrm{i}_{M})^{*}(\mathrm{i}_{M})_{!}\;\simeq\;T^{\infty}M\times_{M}(-). \tag{4.5.6}\]

Proof.: Consider any bundle \((E\xrightarrow{p}M)\in\mathbf{dFSmoothStack}_{/M}\). Then the formal derived smooth stack \(T^{\infty}M\times_{M}E\) sits at the top-left corner of the following pullback squares: (4.5.7) We can also see that there is an equivalence \(T^{\infty}M\times_{M}E\simeq M\times_{\Im(M)}E\). Recall that, for a base change morphism, \((\mathrm{i}_{M})_{!}\) is the post-composition by \(\mathrm{i}_{M}\) and \((\mathrm{i}_{M})^{*}\) is the pullback along \(\mathrm{i}_{M}\). Thus, the bundle \((\mathrm{i}_{M})_{!}E\) is nothing but the composition \(\mathrm{i}_{M}\circ p:E\to\Im(M)\) and the bundle \((\mathrm{i}_{M})^{*}(\mathrm{i}_{M})_{!}E\) is nothing but the pullback \(T^{\infty}M\times_{M}E\to M\). 

**Theorem 4.64** (Formal disk bundle and jet bundle adjunction).: There is an adjunction \[T^{\infty}M\times_{M}(-)\;\dashv\;\mathrm{Jet}_{M} \tag{4.5.8}\] of endofunctors of the slice \((\infty,1)\)-category \(\mathbf{dFSmoothStack}_{/M}\).

Proof.: It is enough to notice that we have the following chain of equivalences: \[\begin{array}{rcl}\mathrm{RHom}_{/M}(E^{\prime},\mathrm{Jet}_{M}E)&\simeq&\mathrm{RHom}_{/M}(E^{\prime},\,(\mathrm{i}_{M})^{*}(\mathrm{i}_{M})_{*}E)\\ &\simeq&\mathrm{RHom}_{/\Im(M)}((\mathrm{i}_{M})_{!}E^{\prime},\,(\mathrm{i}_{M})_{*}E)\\ &\simeq&\mathrm{RHom}_{/M}((\mathrm{i}_{M})^{*}(\mathrm{i}_{M})_{!}E^{\prime},\,E)\\ &\simeq&\mathrm{RHom}_{/M}(T^{\infty}M\times_{M}E^{\prime},\,E),\end{array} \tag{4.5.9}\] where \(\mathrm{RHom}_{/M}(-,-)\) is the hom-\(\infty\)-groupoid of the slice \((\infty,1)\)-category \(\mathbf{dFSmoothStack}_{/M}\). 
Therefore we have the desired conclusion. 

**Corollary 4.65** (Mapping stack to jet bundles).: We have the equivalence of formal derived smooth stacks \[[E^{\prime},\operatorname{Jet}_{M}E]_{/M}\;\simeq\;[T^{\infty}M\times_{M}E^{\prime},E]_{/M}. \tag{4.5.10}\]

**Corollary 4.66** (Sections of a jet bundle).: The \(\infty\)-groupoid of sections of a jet bundle \(\operatorname{Jet}_{M}E\) is equivalent to the \(\infty\)-groupoid of bundle morphisms from \(T^{\infty}M\) to \(E\), i.e. \[\Gamma(M,\operatorname{Jet}_{M}E)\;\simeq\;\mathrm{RHom}_{/M}(T^{\infty}M,E). \tag{4.5.11}\]

Proof.: By setting, in theorem 4.64, \(E^{\prime}=M\xrightarrow{\operatorname{id}_{M}}M\) to be the tautological bundle, we obtain \(\Gamma(M,\operatorname{Jet}_{M}E)\simeq\mathrm{RHom}_{/M}(M,\operatorname{Jet}_{M}E)\simeq\mathrm{RHom}_{/M}(T^{\infty}M\times_{M}M,E)\simeq\mathrm{RHom}_{/M}(T^{\infty}M,E)\), which is the result. 

By considering the special cases \(E^{\prime}=M\) and \(E^{\prime}=*\xhookrightarrow{x}M\) in corollary 4.65 above, we obtain respectively the following two corollaries.

**Corollary 4.67** (Space of sections of a jet bundle).: We have the equivalence of formal derived smooth stacks \[\mathbf{\Gamma}(M,\operatorname{Jet}_{M}E)\;\simeq\;[T^{\infty}M,E]_{/M}. \tag{4.5.12}\]

**Corollary 4.68** (Fibre of a jet bundle).: We have the equivalence of formal derived smooth stacks \[(\operatorname{Jet}_{M}E)_{x}\;\simeq\;\mathbf{\Gamma}(\mathbb{D}_{M,x},E), \tag{4.5.13}\] where \((\operatorname{Jet}_{M}E)_{x}\) is the fibre of \(\operatorname{Jet}_{M}E\) at any point \(x\in M\) of the base manifold.

In other words, the jet bundle \(\operatorname{Jet}_{M}E\) of a bundle \(E\) is such that its fibre at any point \(x\in M\) is the space of formal germs of sections of \(E\) at \(x\), as in the classical definition of jet bundle. Notice that, for any fixed \(M\in\mathbf{dFMfd}\), \(\operatorname{Jet}_{M}(-)\) is a functor on the slice category \(\mathbf{dFSmoothStack}_{/M}\). In this light, we can define the jet prolongation of sections as follows.

**Definition 4.69** (Jet prolongation of sections).: Given a section \(\varPhi:M\to E\) of a bundle \(E\xrightarrow{p}M\), its _jet prolongation_ can be defined by the composition \[j(\varPhi)\,:\;M\xrightarrow{\;\simeq\;}\operatorname{Jet}_{M}M\xrightarrow{\operatorname{Jet}_{M}(\varPhi)}\operatorname{Jet}_{M}E, \tag{4.5.14}\] where \(\operatorname{Jet}_{M}M\) is the jet bundle of the tautological bundle \(\operatorname{id}_{M}:M\to M\). In other words, the jet prolongation provides a canonical map \(j:\Gamma(M,E)\longrightarrow\Gamma(M,\operatorname{Jet}_{M}E)\) which sends any section \(\varPhi\in\Gamma(M,E)\) to its germs \(j(\varPhi)\in\Gamma(M,\operatorname{Jet}_{M}E)\) at every point of the base manifold. To sum up, we have a diagram of the following form: (4.5.15) 

A paper in preparation [1] will be devoted to exploiting the features of derived jet bundles in the context of derived differential cohesion. 

## Global aspects of classical BV-theory

In this section we finally get our hands dirty: we will use the new toolbox provided by derived differential cohesion to investigate some global-geometric features of classical field theory. The point of this section is not to provide a systematic non-perturbative reformulation of BV-theory, but to show that the tools developed in this paper at least open the way to progress. 
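Before putting this machinery to work, let us record the simplest coordinate instance of the jet formalism above (a sketch, with the trivial line bundle \(E=M\times\mathbb{R}\) assumed purely for illustration). By corollary 4.68, the fibre of \(\operatorname{Jet}_{M}E\) at a point \(x\in M\) is the space of sections of \(E\) on the formal disk \(\mathbb{D}_{M,x}\), i.e. of formal Taylor expansions:
\[(\operatorname{Jet}_{M}E)_{x}\;\simeq\;\mathbf{\Gamma}(\mathbb{D}_{M,x},\,M\times\mathbb{R})\;\cong\;\mathbb{R}[[x^{1},\dots,x^{n}]]\;\cong\;\big\{\big(\phi(x),\,\partial_{\mu}\phi(x),\,\partial_{\mu}\partial_{\nu}\phi(x),\,\dots\big)\big\},\]
and the jet prolongation \(j(\phi)\) of definition 4.69 sends a section \(\phi\) to the collection of its derivatives at each point, exactly as in the classical jet-bundle picture on which Lagrangian field theory is built.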
In subsection 5.1, we will provide a brief review of usual BV-theory via \(L_{\infty}\)-algebras - as it is probably more familiar to the physically oriented reader - and we will explain how this relates to the formal moduli problem picture. Moreover, we will provide the concrete examples of scalar field theory and Yang-Mills theory. Such examples will be important for later comparison with the global-geometric picture which we are going to construct respectively in the second and the third subsection. In fact, in subsection 5.2, we will study the global derived critical locus of an action functional on the smooth set \(\boldsymbol{\Gamma}(M,E)\) of sections of a bundle of smooth manifolds \(E\twoheadrightarrow M\), which should be seen as the global configuration space of a scalar field theory. In subsection 5.3, we will study the global derived critical locus of the Yang-Mills action functional on the smooth stack \(\mathbf{Bun}_{G}^{\nabla}(M)\) of principal \(G\)-bundles with connection on a spacetime manifold \(M\), which should be seen as the global configuration space of a gauge theory.

### Review of BV-theory via \(L_{\infty}\)-algebras

In this subsection we will briefly review usual classical BV-theory, formulated in terms of \(L_{\infty}\)-algebras; for more details, we point the reader to the standard literature on BV-theory via \(L_{\infty}\)-algebras and formal moduli problems. The starting point is a (BRST) \(L_{\infty}\)-algebra \(\mathfrak{L}\), encoding the fields and the ghosts of the theory, together with an action functional \(S\in\operatorname{CE}(\mathfrak{L})\). The classical BV-\(L_{\infty}\)-algebra \(\mathfrak{C}\mathsf{rit}(S)\) is constructed on the shifted cotangent space \(T^{\vee}[-1]\mathfrak{L}[1]\), whose graded algebra of functions is \(\operatorname{Sym}(\mathfrak{L}\oplus\mathfrak{L}^{\vee}[-3])^{\vee}[-1]\) and which carries a canonical shifted Poisson bracket \(\{-,-\}\), the antibracket. The classical BV-action \(S_{\mathrm{BV}}\) is obtained by extending \(S\) with antifield terms, by means of the natural map 
\[\iota_{(-)}\,:\;\operatorname{Sym}(\mathfrak{L}^{\vee}[-1])\otimes\mathfrak{L}[1]\;\longrightarrow\;\operatorname{Sym}(\mathfrak{L}\oplus\mathfrak{L}^{\vee}[-3])^{\vee}[-1],\] which includes vector fields on the graded space \(\mathfrak{L}[1]\) as fibrewise-linear functions on the shifted cotangent space. The classical BV-action satisfies the so-called _classical master equation_ \(\{S_{\mathrm{BV}},S_{\mathrm{BV}}\}\,=\,0\). Then we can define the BV-differential by \[Q_{\mathrm{BV}}\;\coloneqq\;\{S_{\mathrm{BV}},-\}, \tag{5.1.4}\] so that the classical master equation is indeed equivalent to \(Q_{\mathrm{BV}}^{2}=0\). Moreover, notice that we have an isomorphism of graded vector spaces \(\operatorname{Sym}(\mathfrak{L}\oplus\mathfrak{L}^{\vee}[-3])^{\vee}[-1]=\operatorname{Sym}(\mathfrak{L}^{\vee}[-1]\oplus\mathfrak{L}[2])\). Thus we have all we need to define the following Chevalley-Eilenberg dg-algebra structure: \[\operatorname{CE}\big(\mathfrak{C}\mathsf{rit}(S)\big)\;\coloneqq\;\big(\operatorname{Sym}(\mathfrak{L}^{\vee}[-1]\oplus\mathfrak{L}[2]),\;Q_{\mathrm{BV}}=\{S_{\mathrm{BV}},-\}\big). \tag{5.1.5}\] This can be dually interpreted as an \(L_{\infty}\)-algebra \(\mathfrak{C}\mathsf{rit}(S)\) whose underlying graded vector space is \(T^{\vee}[-1]\mathfrak{L}[1]\), as we wanted. The Chevalley-Eilenberg dg-algebra \(\operatorname{CE}\big(\mathfrak{C}\mathsf{rit}(S)\big)\) is what is known as the BV-complex in the physics literature. As noticed by [10], this discussion can be nicely refined by replacing, in the discussion above, \(L_{\infty}\)-algebras with local \(L_{\infty}\)-algebras on a fixed spacetime.

**Remark 5.2** (Usual BV-theory via polyvectors).: Observe that, provided that we interpret \(\operatorname{Sym}(\mathfrak{L}^{\vee}[-1]\oplus\mathfrak{L}[2])=\operatorname{Pol}(\mathfrak{L}[1])\) as the dg-algebra of polyvector fields on the graded space \(\mathfrak{L}[1]\), we can naturally see the BV-differential as \[Q_{\mathrm{BV}}\;=\;\iota_{(-)}\mathrm{d}_{\mathrm{dR}}S+\mathcal{L}_{\mathrm{d}_{\operatorname{CE}(\mathfrak{L})}}, \tag{5.1.6}\] where \(\iota_{(-)}\mathrm{d}_{\mathrm{dR}}S\) is the contraction of polyvectors with the de Rham differential of the starting action functional \(S\) and \(\mathcal{L}_{\mathrm{d}_{\operatorname{CE}(\mathfrak{L})}}\) is the Lie derivative of polyvectors along the Chevalley-Eilenberg differential of the BRST \(L_{\infty}\)-algebra \(\mathfrak{L}\).

**Construction 5.3** (Usual BV-theory via formal moduli problems).: In [10], a beautiful geometric insight into BV-theory is provided. 
The de Rham differential of the original action \(S\in\operatorname{CE}(\mathfrak{L})\) can be seen as an element \(\mathrm{d}_{\mathrm{dR}}S\in\operatorname{CE}(\mathfrak{L},\mathfrak{L}^{\vee}[-1])\) of the Chevalley-Eilenberg dg-algebra of \(\mathfrak{L}\) valued in the \(\mathfrak{L}\)-module \(\mathfrak{L}^{\vee}[-1]\). Remarkably, in [10] it is shown that the classical BV-\(L_{\infty}\)-algebra \(\mathfrak{C}\mathsf{rit}(S)\) can be geometrically seen as the \(L_{\infty}\)-algebra associated to the pointed formal moduli problem which is the derived critical locus of the action \(S\). In other words, one has a notion of cotangent pointed formal moduli problem \(T^{\vee}\mathbf{MC}(\mathfrak{L})\), whose complex of sections is exactly \(\operatorname{CE}(\mathfrak{L},\mathfrak{L}^{\vee}[-1])\). Then, the pointed formal moduli problem \(\mathbf{MC}\big(\mathfrak{C}\mathsf{rit}(S)\big)\) can be identified with the homotopy pullback \[\begin{CD}\mathbf{MC}\big(\mathfrak{C}\mathsf{rit}(S)\big)@>>>\mathbf{MC}(\mathfrak{L})\\ @VVV@VV{\mathrm{d}_{\mathrm{dR}}S}V\\ \mathbf{MC}(\mathfrak{L})@>{0}>>T^{\vee}\mathbf{MC}(\mathfrak{L})\end{CD} \tag{5.1.7}\] of formal moduli problems. Thus, in principle, we can obtain the \(L_{\infty}\)-algebra \(\mathfrak{C}\mathsf{rit}(S)\) which encodes classical BV-theory from a purely geometric construction - namely, a flavour of derived intersection - which is not very manifest when we approach BV-theory by following the usual recipe based on constructing the classical BV-action.

Let us now take some time to explore two fundamental classes of examples of BV-theories in terms of \(L_{\infty}\)-algebras and formal moduli problems: scalar fields and gauge theories.

**Example 5.4** (Klein-Gordon theory).: We start from the following graded vector space: \[\mathfrak{L}[1]\;=\;\mathcal{C}^{\infty}(M), \tag{5.1.8}\] equipped with the trivial \(L_{\infty}\)-algebra structure. The classical action of a Klein-Gordon field \(\phi\in\mathcal{C}^{\infty}(M)\) with arbitrary interaction terms is given by \[S(\phi)\;=\;\int_{M}\bigg(\frac{1}{2}\phi\Box\phi+\sum_{k>1}\frac{m_{k}}{k!}\phi^{k}\bigg)\mathrm{vol}_{M}. \tag{5.1.9}\] 
By following the aforementioned recipe, one can obtain an \(L_{\infty}\)-algebra structure on the complex \[\begin{array}{ccccc}\mathfrak{C}\mathsf{rit}(S)[1]&=&\Big(\;\mathcal{C}^{\infty}(M)&\xrightarrow{\;\Box+m_{2}\;}&\mathcal{C}^{\infty}(M)\;\Big)\\ \deg=&&0&&1\end{array}\] whose higher brackets are given by \[\ell_{k}(\phi_{1},\dots,\phi_{k})\;=\;m_{k+1}\,\phi_{1}\cdots\phi_{k}\qquad\text{for }k>1,\] for any \(\phi_{i}\in\mathcal{C}^{\infty}(M)\). This \(L_{\infty}\)-algebra encodes the classical BV-theory of the interacting Klein-Gordon field.

**Example 5.5** (Yang-Mills theory).: Let now \(M\) be a \(d\)-dimensional pseudo-Riemannian manifold and let \(G\) be a Lie group whose Lie algebra \(\mathfrak{g}\) is equipped with an invariant pairing \(\langle-,-\rangle_{\mathfrak{g}}\). The BRST \(L_{\infty}\)-algebra \(\mathfrak{L}\) of Yang-Mills theory has underlying complex \[\begin{array}{ccccc}\mathfrak{L}[1]&=&\Big(\;\Omega^{0}(M,\mathfrak{g})&\xrightarrow{\;\mathrm{d}\;}&\Omega^{1}(M,\mathfrak{g})\;\Big)\\ \deg=&&-1&&0\end{array}\] with non-trivial brackets \[\ell_{1}(c)\;=\;\mathrm{d}c,\qquad\ell_{2}(c_{1},c_{2})\;=\;[c_{1},c_{2}]_{\mathfrak{g}},\qquad\ell_{2}(c,A)\;=\;[c,A]_{\mathfrak{g}},\] for any elements \(c_{k}\in\Omega^{0}(M,\mathfrak{g})\) and \(A\in\Omega^{1}(M,\mathfrak{g})\). 
Informally speaking, as it is often presented in the context of BRST-theory, this \(L_{\infty}\)-algebra is dually given by the Chevalley-Eilenberg differential \[\begin{array}{rcl}\mathrm{d}_{\mathrm{CE}(\mathfrak{L})}:&\mathsf{c}&\longmapsto&-\frac{1}{2}[\mathsf{c},\mathsf{c}]_{\mathfrak{g}},\\ \mathrm{d}_{\mathrm{CE}(\mathfrak{L})}:&\mathsf{A}&\longmapsto&\mathrm{d}\mathsf{c}+[\mathsf{A},\mathsf{c}]_{\mathfrak{g}},\end{array} \tag{5.1.15}\] where \(\mathsf{c}:\Omega^{0}(M,\mathfrak{g})\to\mathbb{R}\) and \(\mathsf{A}:\Omega^{1}(M,\mathfrak{g})\to\mathbb{R}\) should be thought of as coordinate functions on the underlying graded vector space. Thus, the \(L_{\infty}\)-algebra \(\mathfrak{L}\) is precisely the algebraic incarnation of the BRST complex of physics. We want to consider the standard action functional of a Yang-Mills theory, which is given by \[S(A)\;=\;\frac{1}{2}\int_{M}\langle F_{A},\star F_{A}\rangle_{\mathfrak{g}}, \tag{5.1.16}\] where \(F_{A}\coloneqq\mathrm{d}A+\frac{1}{2}[A\,\dot{\wedge}\,,A]_{\mathfrak{g}}\) is the field strength. By exploiting the given pairing \[\langle-\,\dot{\wedge}\,,-\rangle_{\mathfrak{g}}\;:\;\Omega^{d-p}(M,\mathfrak{g})\times\Omega^{p}(M,\mathfrak{g})\;\longrightarrow\;\Omega^{d}(M), \tag{5.1.17}\] we are led to an \(L_{\infty}\)-algebra \(\mathfrak{C}\mathsf{rit}(S)\) whose underlying differential graded vector space is \[\begin{array}{ccccccccc}\mathfrak{C}\mathsf{rit}(S)[1]&=&\Big(\;\Omega^{0}(M,\mathfrak{g})&\xrightarrow{\;\mathrm{d}\;}&\Omega^{1}(M,\mathfrak{g})&\xrightarrow{\;\mathrm{d}\star\mathrm{d}\;}&\Omega^{d-1}(M,\mathfrak{g})&\xrightarrow{\;\mathrm{d}\;}&\Omega^{d}(M,\mathfrak{g})\;\Big)\\ \deg=&&-1&&0&&1&&2\end{array}\]

Now, let us take a look at the formal moduli problem description of the example of Yang-Mills theory. First, let us fix any ordinary Artinian algebra \(\mathcal{R}\in\mathsf{Art}_{\mathbb{R}}\). We will now explicitly construct the simplicial set \(\mathrm{MC}(\mathfrak{L}\otimes\mathfrak{m}_{\mathcal{R}})\), where \(\mathfrak{m}_{\mathcal{R}}\) is the maximal ideal of \(\mathcal{R}\). 
The set of \(0\)-simplices is just \[\mathrm{MC}(\mathfrak{L}\otimes\mathfrak{m}_{\mathcal{R}})_{0}\;=\;\Big\{\,A\in\Omega^{1}(M,\mathfrak{g})\otimes\mathfrak{m}_{\mathcal{R}}\,\Big\},\] and the set of \(1\)-simplices is given by \[\mathrm{MC}(\mathfrak{L}\otimes\mathfrak{m}_{\mathcal{R}})_{1}\;=\;\left\{\begin{array}{l}c_{1}\mathrm{d}t\in\Omega^{0}(M,\mathfrak{g})\otimes\mathfrak{m}_{\mathcal{R}}\otimes\Omega^{1}([0,1])\\ A_{0}\;\;\;\,\in\Omega^{1}(M,\mathfrak{g})\otimes\mathfrak{m}_{\mathcal{R}}\otimes\Omega^{0}([0,1])\end{array}\;\middle|\;\;\frac{\mathrm{d}}{\mathrm{d}t}A_{0}+\nabla_{\!A_{0}}c_{1}\,=\,0\;\right\},\] and so on for higher simplices. This provides the formal moduli problem version of the BRST \(L_{\infty}\)-algebra \(\mathfrak{L}\). Now, we move on to the Maurer-Cartan formal moduli problem of the classical BV-BRST \(L_{\infty}\)-algebra \(\mathfrak{C}\mathsf{rit}(S)\), i.e. the functor \[\mathbf{MC}\big(\mathfrak{C}\mathsf{rit}(S)\big)\,:\;\mathcal{R}\;\longmapsto\;\mathrm{MC}(\mathfrak{C}\mathsf{rit}(S)\otimes\mathfrak{m}_{\mathcal{R}}), \tag{5.1.22}\] where \(\mathcal{R}\) is now allowed to be a dg-Artinian algebra. For concreteness, let us write explicitly the sets of \(0\)- and \(1\)-simplices of this simplicial set for a fixed general dg-Artinian algebra \(\mathcal{R}\). The set of \(0\)-simplices is given by \[\mathrm{MC}(\mathfrak{C}\mathsf{rit}(S)\otimes\mathfrak{m}_{\mathcal{R}})_{0}\;=\;\left\{\begin{array}{l}A\;\;\;\in\Omega^{1}(M,\mathfrak{g})\otimes\mathfrak{m}_{\mathcal{R},0}\\ A^{+}\;\in\Omega^{d-1}(M,\mathfrak{g})\otimes\mathfrak{m}_{\mathcal{R},-1}\\ c^{+}\;\;\in\Omega^{d}(M,\mathfrak{g})\otimes\mathfrak{m}_{\mathcal{R},-2}\end{array}\;\middle|\;\begin{array}{l}\nabla_{\!A}\star F_{A}\,=\,\mathrm{d}_{\mathcal{R}}A^{+}\\ \nabla_{\!A}A^{+}\,=\,\mathrm{d}_{\mathcal{R}}c^{+}\end{array}\right\},\] and the set of \(1\)-simplices is \[\mathrm{MC}(\mathfrak{C}\mathsf{rit}(S)\otimes\mathfrak{m}_{\mathcal{R}})_{1}\;=\;\left\{\begin{array}{l}c_{1}\mathrm{d}t\;\;\in\Omega^{0}(M,\mathfrak{g})\otimes\mathfrak{m}_{\mathcal{R},0}\otimes\Omega^{1}([0,1])\\ A_{0}\;\;\;\;\;\in\Omega^{1}(M,\mathfrak{g})\otimes\mathfrak{m}_{\mathcal{R},0}\otimes\Omega^{0}([0,1])\\ A_{1}\mathrm{d}t\;\;\in\Omega^{1}(M,\mathfrak{g})\otimes\mathfrak{m}_{\mathcal{R},-1}\otimes\Omega^{1}([0,1])\\ A^{+}_{0}\;\;\;\;\in\Omega^{d-1}(M,\mathfrak{g})\otimes\mathfrak{m}_{\mathcal{R},-1}\otimes\Omega^{0}([0,1])\\ A^{+}_{1}\mathrm{d}t\;\in\Omega^{d-1}(M,\mathfrak{g})\otimes\mathfrak{m}_{\mathcal{R},-2}\otimes\Omega^{1}([0,1])\\ c^{+}_{0}\;\;\;\;\;\in\Omega^{d}(M,\mathfrak{g})\otimes\mathfrak{m}_{\mathcal{R},-2}\otimes\Omega^{0}([0,1])\\ c^{+}_{1}\mathrm{d}t\;\;\in\Omega^{d}(M,\mathfrak{g})\otimes\mathfrak{m}_{\mathcal{R},-3}\otimes\Omega^{1}([0,1])\end{array}\;\middle|\;\begin{array}{l}\nabla_{\!A_{0}}\star F_{A_{0}}\,=\,\mathrm{d}_{\mathcal{R}}A^{+}_{0}\\ \nabla_{\!A_{0}}A^{+}_{0}\,=\,\mathrm{d}_{\mathcal{R}}c^{+}_{0}\\ \frac{\mathrm{d}}{\mathrm{d}t}A_{0}+\nabla_{\!A_{0}}c_{1}\,=\,\mathrm{d}_{\mathcal{R}}A_{1}\\ \frac{\mathrm{d}}{\mathrm{d}t}A^{+}_{0}+\nabla_{\!A_{0}}\star F_{A_{1}}+\left[c_{1},A^{+}_{0}\right]\,=\,\mathrm{d}_{\mathcal{R}}A^{+}_{1}\\ \frac{\mathrm{d}}{\mathrm{d}t}c^{+}_{0}+\nabla_{\!A_{0}}A^{+}_{1}+\left[c_{1},c^{+}_{0}\right]\,=\,\mathrm{d}_{\mathcal{R}}c^{+}_{1}\end{array}\right\},\] 
where \(t\) is a coordinate on the unit interval \([0,1]\subset\mathbb{R}\). The elements of this set are \(1\)-simplices in the sense that each of them links a \(0\)-simplex \((A,A^{+},c^{+})=(A_{0}(0),A^{+}_{0}(0),c^{+}_{0}(0))\) at \(t=0\) to the \(0\)-simplex \((A^{\prime},A^{+\prime},c^{+\prime})=(A_{0}(1),A^{+}_{0}(1),c^{+}_{0}(1))\) at \(t=1\). And so on for higher simplices.

Figure 10: \(0\)- and \(1\)-simplices of \(\mathrm{MC}(\mathfrak{C}\mathsf{rit}(S)\otimes\mathfrak{m}_{\mathcal{R}})\).

The rest of this section will be devoted to the construction of a global version of this formalism in the context of derived differential cohesion.

### Global scalar field theory

In this subsection we will first illustrate the smooth set structure of the space of sections of a fibre bundle, which is the configuration space of a scalar field theory. Second, we will see how the derived critical locus of a smooth functional on such a space is defined and what its formal derived smooth structure is. It is worth stressing that the fibre bundle \(E\twoheadrightarrow M\) corresponding to a general classical scalar field theory does not have to be a vector bundle; in fact, it can be a general fibre bundle of smooth manifolds.

**Definition 5.6** (Smooth set of sections).: Let \(M\) be a smooth manifold and \(E\twoheadrightarrow M\) a fibre bundle of smooth manifolds. The _smooth set of sections_ \(\mathbf{\Gamma}(M,E)\in\mathsf{SmoothSet}\) of \(E\) is defined by the sheaf \[\mathbf{\Gamma}(M,E)\,:\;U\,\longmapsto\,\Gamma(M\times U,\,\pi_{M}^{*}E), \tag{5.2.1}\] where \(\pi_{M}:M\times U\to M\) is the natural projection and \(U\in\mathsf{Mfd}\) is any smooth manifold.

**Remark 5.7** (Diffeological space of sections).: Notice that the smooth set \(\mathbf{\Gamma}(M,E)\) is a concrete sheaf and, thus, it is in particular a diffeological space.

**Remark 5.8** (As sheaf on spacetime \(M\)).: Notice that the smooth set of sections \(\mathbf{\Gamma}(M,E)\) is also a sheaf on the smooth manifold \(M\). This means that, for any good open cover \(\{V_{i}\}_{i\in I}\) of the smooth manifold \(M\), we have the limit \[\mathbf{\Gamma}(M,E)\;\simeq\;\lim\left(\;\prod_{i}\mathbf{\Gamma}(V_{i},E)\;\rightrightarrows\;\prod_{i,j}\mathbf{\Gamma}(V_{i}\cap V_{j},E)\;\right) \tag{5.2.2}\] in the category of formal smooth sets. In this sense, \(\mathbf{\Gamma}(-,E)\) can be seen as a "sheaf of sheaves". More precisely, we can see \(\mathbf{\Gamma}(-,E)\) as a sheaf on the product site \(\mathsf{Mfd}\times\mathsf{Op}(M)\), where \(\mathsf{Mfd}\) is the site of ordinary smooth manifolds and \(\mathsf{Op}(M)\) is the one of open subsets of the manifold \(M\).

The crucial reason why we promoted the bare set of sections \(\Gamma(M,E)\in\mathsf{Set}\) to a smooth set \(\mathbf{\Gamma}(M,E)\in\mathsf{SmoothSet}\) is that the latter comes with a smooth structure - which is, in particular, the structure of a diffeological space. Therefore, as seen in section 1.1, there is a well-defined notion of differential geometry on such a space.

**Example 5.9** (\(\sigma\)-models).: An interesting class of examples of such a configuration space is the one of \(\sigma\)-models, where the bundle is trivial, with total space the product manifold \(E=M\times X\) for some smooth manifold \(X\). 
This way, the configuration space \(\mathbf{\Gamma}(M,E)\simeq[M,X]\) is given by the mapping space of the two manifolds, namely the smooth set of smooth maps from the manifold \(M\) to the manifold \(X\), which is usually called the _target space_ of the theory.

Next, let us extend our smooth set \(\mathbf{\Gamma}(M,E)\) to a formal smooth set, by embedding it along the natural embedding \(\mathsf{SmoothSet}\hookrightarrow\mathsf{FSmoothSet}\) from section 2.3. For simplicity, we will keep denoting by \(\mathbf{\Gamma}(M,E)\) the formal smooth set obtained by this embedding.

**Example 5.10** (Parameterised families of scalar fields).: Let us consider some basic examples of parametrised families of sections of a bundle of smooth manifolds \(E\twoheadrightarrow M\).

* Let \(U=*\) be the point. A \(*\)-parameterised family of sections \(\Phi:*\to\mathbf{\Gamma}(M,E)\) is nothing but an element of the bare set \(\Phi\in\Gamma(M,E)\).
* Now, let \(U=\mathbb{R}^{p}\) with \(p>0\). An \(\mathbb{R}^{p}\)-parameterised family of sections \(\Phi:\mathbb{R}^{p}\to\mathbf{\Gamma}(M,E)\) is nothing but a family of sections \(\Phi_{u}\in\Gamma(M,E)\) which varies smoothly with \(u\in\mathbb{R}^{p}\).
* Now, let \(U=\operatorname{Spec}(\mathbb{R}[\epsilon]/(\epsilon^{2}))\) be the formal smooth manifold whose \(\mathcal{C}^{\infty}\)-algebra of functions is given by the dual numbers (i.e. a thickened point). A \(\operatorname{Spec}(\mathbb{R}[\epsilon]/(\epsilon^{2}))\)-parameterised family of sections \(\Phi:\operatorname{Spec}(\mathbb{R}[\epsilon]/(\epsilon^{2}))\to\mathbf{\Gamma}(M,E)\) is equivalently a point \(\Phi:*\to T\mathbf{\Gamma}(M,E)\) in the tangent bundle of the original formal smooth set.

Now that we have our global-geometric configuration space \(\mathbf{\Gamma}(M,E)\) of a scalar field theory, we need to introduce its dynamics. This can be done with an action functional for the scalar field theory. So, first, we need to construct the smooth set of compactly supported sections.

**Construction 5.11** (Smooth set of compactly supported sections).: We can construct the _smooth set of compactly supported sections_ \(\mathbf{\Gamma}_{c}(M,E)\hookrightarrow\mathbf{\Gamma}(M,E)\) by the sheaf which sends any smooth manifold \(U\) to the set of those sections \(\varPhi_{u}\in\Gamma(M\times U,\,\pi_{M}^{*}E)\) such that the composition \(\operatorname{supp}(\varPhi_{u})\hookrightarrow M\times U\xrightarrow{\pi_{U}}U\) is a proper map.

**Construction 5.12** (Variational calculus on spaces of sections).: As previously discussed, a smooth functional on sections of a bundle \(E\twoheadrightarrow M\) is exactly a morphism of smooth sets \[S\,:\,\mathbf{\Gamma}_{c}(M,E)\,\longrightarrow\,\mathbb{R}, \tag{5.2.3}\] or, equivalently, a smooth function \(S\in\mathcal{O}(\mathbf{\Gamma}_{c}(M,E))\) on the smooth set of compactly supported sections. On every element of the site \(U\in\mathsf{Mfd}\), this is concretely given by a morphism of sets \[S(U)\,:\,\Gamma_{c}(M\times U,\pi_{M}^{*}E)\,\longrightarrow\,\mathcal{C}^{\infty}(U,\mathbb{R}), \tag{5.2.4}\] where \(S(U)\) sends \(U\)-parametrised sections of the bundle \(E\twoheadrightarrow M\) to smooth functions on \(U\). Moreover, for any morphism \(f:U\to U^{\prime}\) in the site, we have compatibility conditions between \(S(U)\) and \(S(U^{\prime})\). 
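As a concrete sketch of this construction (with the trivial bundle \(E=M\times\mathbb{R}\) and a free Lagrangian assumed purely for illustration): a \(U\)-parametrised family of fields is a section \(\Phi\in\Gamma_{c}(M\times U,\pi_{M}^{*}E)\), i.e. a smooth function on \(M\times U\) whose support is proper over \(U\), and the component \(S(U)\) evaluates the action fibrewise in the parameter,
\[S(U)(\Phi)\,:\;u\;\longmapsto\;\int_{M}\frac{1}{2}\,\Phi(-,u)\,\Box\,\Phi(-,u)\;\mathrm{vol}_{M}.\]
The result is indeed an element of \(\mathcal{C}^{\infty}(U,\mathbb{R})\): smoothness in \(u\) follows from the smoothness of \(\Phi\) on \(M\times U\), and convergence of the integral from the properness of the support over \(U\); the compatibility with a morphism \(f:U\to U^{\prime}\) is simply precomposition with \(\mathrm{id}_{M}\times f\).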
The so-called first variation of this functional is nothing but the morphism of smooth sets \[\mathrm{d}_{\mathrm{dR}}S\,:\,\mathbf{\Gamma}_{c}(M,E)\,\xrightarrow{\;S\;}\,\mathbb{R}\,\xrightarrow{\;\mathrm{d}_{\mathrm{dR}}\;}\,\mathbf{\Omega}^{1}, \tag{5.2.5}\] where \(\mathbf{\Omega}^{1}\) is the smooth set of differential \(1\)-forms and \(\mathrm{d}_{\mathrm{dR}}\in\operatorname{Hom}(\mathbb{R},\mathbf{\Omega}^{1})\) is the de Rham differential. Such a morphism of smooth sets is a well-defined \(1\)-form \(\mathrm{d}_{\mathrm{dR}}S\in\Omega^{1}(\mathbf{\Gamma}_{c}(M,E))\) on the smooth set of compactly supported sections \(\mathbf{\Gamma}_{c}(M,E)\). Since \(\mathrm{d}_{\mathrm{dR}}S\) is a differential form, notice that it maps vectors by \[(\mathrm{d}_{\mathrm{dR}}S)_{\varPhi}\,:\,T\mathbf{\Gamma}_{c}(M,E)_{\varPhi}\,\longrightarrow\,\mathbb{R} \tag{5.2.6}\] at any point \(\varPhi:*\to\mathbf{\Gamma}_{c}(M,E)\).

**Construction 5.13** (Restricted cotangent bundle).: Now, let us consider the vertical tangent bundle \(T_{\mathrm{ver}}E\coloneqq\ker(TE\twoheadrightarrow TM)\), which is a vector bundle over the manifold \(E\). Consider also its dual vector bundle \(T_{\mathrm{ver}}^{\vee}E\to E\). These two bundles come equipped with the canonical pairing \(\langle-,-\rangle_{E}:T_{\mathrm{ver}}E\times_{E}T_{\mathrm{ver}}^{\vee}E\longrightarrow E\times\mathbb{R}\). Since \(T_{\mathrm{ver}}E\) and \(T_{\mathrm{ver}}^{\vee}E\) are also bundles of smooth manifolds over the base manifold \(M\) by post-composition with \(E\twoheadrightarrow M\), we obtain a pairing \[\langle-,-\rangle_{E}\,:\,\mathbf{\Gamma}(M,T_{\mathrm{ver}}E)\times_{\mathbf{\Gamma}(M,E)}\mathbf{\Gamma}(M,T_{\mathrm{ver}}^{\vee}E)\,\longrightarrow\,\mathbf{\Gamma}(M,E)\times[M,\mathbb{R}]. \tag{5.2.7}\] Recall that there is a canonical equivalence \(T\mathbf{\Gamma}(M,E)\simeq\mathbf{\Gamma}(M,T_{\mathrm{ver}}E)\). Thus, it makes sense to define the _restricted cotangent bundle_ of the smooth set of sections \(\mathbf{\Gamma}(M,E)\) by \[T_{\mathrm{res}}^{\vee}\mathbf{\Gamma}(M,E)\,\coloneqq\,\mathbf{\Gamma}(M,T_{\mathrm{ver}}^{\vee}E). \tag{5.2.8}\] If, as is usually the case in physics, the action functional \(S\in\mathcal{O}(\mathbf{\Gamma}_{c}(M,E))\) of the considered field theory is a local functional3, then the de Rham differential of the action functional can be written in the form \(\mathrm{d}_{\mathrm{dR}}S=\int_{M}\mathrm{vol}_{M}\langle\delta S,-\rangle_{E}\) for some morphism

Footnote 3: The argument goes roughly as follows. A local action functional can be expressed by \(S(\phi)=\int_{M}j(\phi)^{*}L\,\mathrm{vol}_{M}\), where \(j(\phi)\) is the jet prolongation of the section \(\phi\) and \(L\) is the Lagrangian, which is a function on the jet bundle. It is possible to show [16, 17] that its first variation is given by \(\mathrm{d}_{\mathrm{dR}}S(\phi)=\int_{M}j(\phi)^{*}\delta_{\mathrm{EL}}L\wedge\mathrm{vol}_{M}\), where \(\delta_{\mathrm{EL}}L\) is the so-called Euler-Lagrange form, which is a section \(\delta_{\mathrm{EL}}L:\mathrm{Jet}_{M}E\to T_{\mathrm{ver}}^{\vee}E\). Then, by defining \(\delta S=j(-)^{*}\delta_{\mathrm{EL}}L\), one gets the functional derivative. In [1] we will deal more systematically with these field-theoretic details.

\[\delta S\,:\,\mathbf{\Gamma}(M,E)\,\longrightarrow\,\mathbf{\Gamma}(M,T_{\mathrm{ver}}^{\vee}E), \tag{5.2.9}\] which we call the _variational derivative_ of the action functional \(S\), and some fixed volume form \(\mathrm{vol}_{M}\). 
In fact, this represents the notion of variational derivative familiar to physicists, and the equation \(\delta S=0\) is precisely the Euler-Lagrange equation. We can now introduce the derived critical locus of an action functional \(S\) as the derived zero locus of its variational derivative \(\delta S\).

**Definition 5.14** (Derived critical locus of an action functional).: Let \(\mathbf{\Gamma}(M,E)\in\mathsf{SmoothSet}\) be the smooth set of sections of a bundle \(E\twoheadrightarrow M\) of smooth manifolds and let \(S:\mathbf{\Gamma}_{c}(M,E)\to\mathbb{R}\) be an action functional. We define the _derived critical locus_ \(\operatorname{RCrit}(S)(M)\in\mathbf{dFSmoothSet}\) of the action functional \(S\) by the homotopy pullback (5.2.10) in the \((\infty,1)\)-category \(\mathbf{dFSmoothSet}\), where \(0\) is the zero-section and \(\delta S\) is the variational derivative of the action functional \(S\).

**Remark 5.15** (Derived critical locus is a formal derived diffeological space).: The ordinary critical locus \(\operatorname{Crit}(S)(M)\in\mathsf{SmoothSet}\) is given by the underived truncation of the derived critical locus, i.e. by \(\varPi^{\operatorname{dif}}\operatorname{RCrit}(S)(M)\simeq\operatorname{Crit}(S)(M)\). Notice that \(\operatorname{Crit}(S)(M)\hookrightarrow\mathbf{\Gamma}(M,E)\) is a diffeological space. This implies that the derived critical locus \(\operatorname{RCrit}(S)(M)\in\mathbf{dFDiffSp}\) is, in particular, a formal derived diffeological space.

**Remark 5.16** (Explicit expression for the \(0\)-simplices of the derived critical locus).: Given a formal derived smooth manifold \(U\), let us denote by \(\operatorname{RHom}\big(U,\,\mathbf{\Gamma}(M,T^{\vee}_{\operatorname{ver}}E)\big)_{\Phi}\) the fibre of the bundle of simplicial sets \(\operatorname{RHom}\big(U,\,\mathbf{\Gamma}(M,T^{\vee}_{\operatorname{ver}}E)\big)\longrightarrow\operatorname{RHom}\big(U,\,\mathbf{\Gamma}(M,E)\big)\) at the point of the base \(\Phi:U\to\mathbf{\Gamma}(M,E)\). The set of \(0\)-simplices of the \(\infty\)-groupoid \(\operatorname{RHom}\big(U,\,\operatorname{RCrit}(S)(M)\big)\) of sections of the derived critical locus \(\operatorname{RCrit}(S)(M)\) on a formal derived smooth manifold \(U\) is \[\operatorname{RHom}\big(U,\,\operatorname{RCrit}(S)(M)\big)_{0}\;=\;\left\{\begin{array}{l}\Phi\;\;\,\in\operatorname{RHom}\big(U,\,\mathbf{\Gamma}(M,E)\big)_{0}\\ \Phi^{+}\in\operatorname{RHom}\big(U,\,\mathbf{\Gamma}(M,T^{\vee}_{\operatorname{ver}}E)\big)_{\Phi,1}\end{array}\;\middle|\;\begin{array}{l}\delta S(\Phi)\,=\,\partial_{0}\,\Phi^{+}\\ 0\,=\,\partial_{1}\,\Phi^{+}\end{array}\right\},\] where \(\operatorname{RHom}\big(U,\,\mathbf{\Gamma}(M,E)\big)_{0}\) is the set of \(0\)-simplices of the simplicial set \(\operatorname{RHom}\big(U,\,\mathbf{\Gamma}(M,E)\big)\) and \(\operatorname{RHom}\big(U,\,\mathbf{\Gamma}(M,T^{\vee}_{\operatorname{ver}}E)\big)_{\Phi,1}\) is the set of \(1\)-simplices of the simplicial set \(\operatorname{RHom}\big(U,\,\mathbf{\Gamma}(M,T^{\vee}_{\operatorname{ver}}E)\big)_{\Phi}\), which comes with face maps \(\partial_{0,1}\).

**Remark 5.17** (Global antifield).: Notice that a \(0\)-simplex of the simplicial set of sections \(\operatorname{RHom}(U,\,\operatorname{RCrit}(S)(M))\) is a pair of the form \[(\Phi,\,\Phi^{+})\,\in\,\operatorname{RHom}(U,\,\operatorname{RCrit}(S)(M)), \tag{5.2.11}\] where \(\Phi^{+}\) is a homotopy from the variational derivative \(\delta S(\Phi)\) of the action functional at the field configuration \(\Phi\) to zero, as written above. Notice that \(\Phi\) is a scalar field and \(\Phi^{+}\) is the global-geometric version of what is known as its antifield in usual BV-theory. However, it is clear that the two fields play very different roles in the global geometry of the scalar field theory: in fact, the antifield \(\Phi^{+}\) is not independent of the field \(\Phi\), but it lives in the fibre \(\Gamma(M,T^{\vee}_{\operatorname{ver}}E)_{\Phi}\). 
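For orientation, here is what remarks 5.16 and 5.17 amount to in the simplest case (a sketch: the trivial bundle \(E=M\times\mathbb{R}\) and a Klein-Gordon-type functional are assumed purely for illustration). For \(S(\phi)=\int_{M}\big(\frac{1}{2}\phi\Box\phi-V(\phi)\big)\mathrm{vol}_{M}\) one has \(\delta S(\phi)=\Box\phi-V^{\prime}(\phi)\), so a \(0\)-simplex of \(\operatorname{RHom}(U,\operatorname{RCrit}(S)(M))\) is a pair \((\Phi,\Phi^{+})\) with
\[\partial_{0}\,\Phi^{+}\;=\;\Box\Phi-V^{\prime}(\Phi),\qquad\partial_{1}\,\Phi^{+}\;=\;0.\]
On an ordinary (underived) probe \(U\), where the face maps coincide, this collapses to the strict Euler-Lagrange equation \(\Box\Phi=V^{\prime}(\Phi)\); on a genuinely derived probe, the antifield \(\Phi^{+}\) records the homotopy witnessing the equation of motion.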
**Example 5.18** (The case of \(E\) a vector bundle).: Let \(E\twoheadrightarrow M\) be a vector bundle, so that the smooth set \(\mathbf{\Gamma}(M,E)\) of its sections has a natural vector space structure. In this case, the restricted cotangent bundle reduces to \(T^{\vee}_{\mathrm{res}}\mathbf{\Gamma}(M,E)\simeq\mathbf{\Gamma}(M,E\times_{M}E^{\vee})\simeq\mathbf{\Gamma}(M,E)\oplus\mathbf{\Gamma}(M,E^{\vee})\). The set \(\Gamma(M,E)\) of sections of a vector bundle also comes equipped with the structure of a \(\mathcal{C}^{\infty}\)-module over \(\mathcal{C}^{\infty}(M)\), which allows the use of the \(\mathcal{C}^{\infty}\)-tensor product \(\widehat{\otimes}\). So the set of \(0\)-simplices from remark 5.16, in the case where \(E\twoheadrightarrow M\) is a vector bundle, reduces to the more familiar-looking \[\mathrm{RHom}\big(U,\,\mathrm{RCrit}(S)(M)\big)_{0}\;=\;\left\{\begin{array}{l}\Phi\;\;\,\in\Gamma(M,E)\,\widehat{\otimes}\,\mathcal{O}(U)_{0}\\ \Phi^{+}\in\Gamma(M,E^{\vee})\,\widehat{\otimes}\,\mathcal{O}(U)_{1}\end{array}\;\middle|\;\begin{array}{l}\delta S(\Phi)\,=\,\partial_{0}\,\Phi^{+}\\ 0\,=\,\partial_{1}\,\Phi^{+}\end{array}\right\},\] where \(\mathcal{O}(U)_{0}\) and \(\mathcal{O}(U)_{1}\) are respectively the \(\mathcal{C}^{\infty}\)-algebras of \(0\)- and \(1\)-simplices of the simplicial \(\mathcal{C}^{\infty}\)-algebra \(\mathcal{O}(U)\) and \(\partial_{0,1}\) are the corresponding face maps.

Now, the pointed formal moduli problems of the form considered in subsection 5.1 to study BV-theory can, in principle, be obtained by formal completion \(\operatorname{RCrit}(S)(M)^{\wedge}_{\Phi_{0}}\) at some fixed solution of the equations of motion \(\Phi_{0}\in\operatorname{RCrit}(S)(M)\), as explained in construction 4.42. Such an operation amounts to the construction of the pointed formal moduli problem \(\operatorname{RCrit}(S)(M)^{\wedge}_{\Phi_{0}}\) which infinitesimally approximates the formal derived smooth stack \(\operatorname{RCrit}(S)(M)\) at \(\Phi_{0}\). Let us now see this more in detail.

**Remark 5.19** (Infinitesimal disk as formal moduli problem of Klein-Gordon theory).: As an example, let us consider Klein-Gordon theory, so let \(S:[M,\mathbb{R}]_{c}\to\mathbb{R}\) be a Klein-Gordon action of the form \[S(\phi)\;=\;\int_{M}\Big(\phi\Box\phi-V(\phi)\Big)\mathrm{vol}_{M}, \tag{5.2.12}\] where \(V(\phi)\) is a function such that \(V(0)=0\) and \(V^{\prime}(0)=0\). According to the machinery above, we can construct the derived critical locus \(\operatorname{RCrit}(S)(M)\), which will be a formal derived smooth set. The fact that the \(0\)-section is the trivial solution of the equations of motion ensures that there is a point \(0:*\to\operatorname{RCrit}(S)(M)\), so we can consider the formal disk of the derived critical locus at such a point, according to definition 4.24. 
It is possible to see that one has an equivalence \[\mathbb{D}_{\mathrm{RCrit}(S)(M),0}\;\simeq\;\mathbf{B}\mathfrak{L}(M), \tag{5.2.13}\] where \(\mathfrak{L}(M)\) is the local \(L_{\infty}\)-algebra whose underlying graded vector space is given simply by \(\mathfrak{L}(M)=\mathcal{C}^{\infty}(M)[-1]\oplus\mathcal{C}^{\infty}(M)[-2]\) and whose bracket structure is given by \[\begin{split}\ell_{1}(\phi)&=\;\Box\phi-\frac{\partial^{2}V(\phi)}{\partial\phi^{2}}\bigg{|}_{0}\,\phi,\\ \ell_{k}(\phi_{1},\dots,\phi_{k})&=\;-\frac{\partial^{k+1}V(\phi)}{\partial\phi^{k+1}}\bigg{|}_{0}\,\phi_{1}\cdots\phi_{k}\quad\text{for }k>1,\end{split} \tag{5.2.14}\] for any \(\phi_{i}\in\mathcal{C}^{\infty}(M)\). This is precisely the \(L_{\infty}\)-algebra which encodes the usual perturbative BV-theory of a Klein-Gordon scalar field. We can formally complete our formal derived smooth stack \(\operatorname{RCrit}(S)(M)\) at the trivial solution to obtain the pointed formal moduli problem \[\operatorname{RCrit}(S)(M)^{\wedge}_{0}\;\simeq\;\Gamma^{\mathrm{rel}}\,\mathbb{D}_{\operatorname{RCrit}(S)(M),0}\;\simeq\;\mathbf{MC}\big(\mathfrak{L}(M)\big), \tag{5.2.15}\] where \(\Gamma^{\mathrm{rel}}\) is the functor we introduced in section 4.3. For a suitable choice of potential \(V(\phi)\), this is nothing but the pointed formal moduli problem of Klein-Gordon theory appearing in [11, 12]. Thus, this shows that the formal derived smooth stack \(\operatorname{RCrit}(S)(M)\) provides a global-geometric version of the BV-theory of a Klein-Gordon scalar field. The usual perturbative formulation is given by the formal disk \(\mathbb{D}_{\operatorname{RCrit}(S)(M),0}\simeq\operatorname{RCrit}(S)(M)\times_{\Im(\operatorname{RCrit}(S)(M))}\{0\}\) at the trivial solution, whose construction is made possible by derived differential cohesion.

Now, the usual perturbative BV-theory is most commonly stated dually, in terms of dg-algebras of observables, also known as BV-complexes in physics. To make contact with this perspective, we will now investigate what the global-geometric version of the BV-complex of a scalar field is.

**Remark 5.20** (Global BV-complex).: In what follows we will be deploying the compact notation \(\mathbb{O}(X)\coloneqq\mathds{R}\Gamma(X,\mathbb{O}_{X})\) for the complex of global sections of the structure sheaf \(\mathbb{O}_{X}\in\operatorname{QCoh}(X)\) of a formal derived smooth stack, as defined in subsection 3.6.1. As we have already noticed, the dual vector bundle of the vector bundle \(\boldsymbol{\Gamma}(M,T_{\mathrm{ver}}^{\vee}E)\twoheadrightarrow\boldsymbol{\Gamma}(M,E)\) is precisely the tangent bundle \(T\boldsymbol{\Gamma}(M,E)\simeq\boldsymbol{\Gamma}(M,T_{\mathrm{ver}}E)\) of the smooth set of sections. By applying the machinery of derived zero loci, it is possible to see that the complex of global sections of \(\operatorname{RCrit}(S)(M)\) is given by \[\mathbb{O}\big(\mathds{R}\mathrm{Crit}(S)(M)\big)\;\simeq\;\Big(\cdots\xrightarrow{\;Q\;}\wedge^{2}\mathfrak{X}\big(\boldsymbol{\Gamma}(M,E)\big)\xrightarrow{\;Q\;}\mathfrak{X}\big(\boldsymbol{\Gamma}(M,E)\big)\xrightarrow{\;Q\;}\mathcal{O}\big(\boldsymbol{\Gamma}(M,E)\big)\Big),\] where \(\mathfrak{X}\big(\boldsymbol{\Gamma}(M,E)\big)\) is the set of vector fields on the ordinary smooth set \(\boldsymbol{\Gamma}(M,E)\) and the differential \(Q\) is given by the contraction \(\iota_{(-)}\delta S\) of polyvectors with the variational derivative \(\delta S\) constructed above. 
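The shape of this complex is perhaps easiest to appreciate in a finite-dimensional toy version (a sketch, not part of the field-theoretic construction above): replace the smooth set of sections by \(X=\mathbb{R}^{n}\) and take any \(S\in\mathcal{C}^{\infty}(\mathbb{R}^{n})\). The same derived zero-locus machinery then yields the Koszul-type complex
\[\mathbb{O}\big(\mathds{R}\mathrm{Crit}(S)\big)\;\simeq\;\Big(\cdots\xrightarrow{\;\iota_{\mathrm{d}S}\;}\wedge^{2}\mathfrak{X}(\mathbb{R}^{n})\xrightarrow{\;\iota_{\mathrm{d}S}\;}\mathfrak{X}(\mathbb{R}^{n})\xrightarrow{\;\iota_{\mathrm{d}S}\;}\mathcal{C}^{\infty}(\mathbb{R}^{n})\Big),\qquad\iota_{\mathrm{d}S}\,:\;v\;\longmapsto\;v^{i}\,\partial_{i}S,\]
whose degree-\(0\) cohomology is the quotient \(\mathcal{C}^{\infty}(\mathbb{R}^{n})/(\partial_{1}S,\dots,\partial_{n}S)\) of functions on the ordinary critical locus, while the higher cohomology detects the failure of \(\mathrm{d}S\) to intersect the zero section transversally. The global complex above is the field-theoretic counterpart of this toy picture.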
This is the picture that most directly generalises the BV-complex appearing in perturbative BV-theory. Moreover, it generalises the functional approach to quantum mechanics of [10]. To see that the complex \(\mathbb{O}\big(\mathds{R}\mathrm{Crit}(S)(M)\big)\) of global sections of the structure sheaf reduces to the usual BV-complex, it is enough to notice that, in the case of the formal disk \(\mathbb{D}_{\operatorname{RCrit}(S)(M),0}\simeq\mathbf{B}\mathfrak{L}(M)\), we obtain the complex4

Footnote 4: It is a standard fact (see for example [11]) that the complex of global sections on a formal group stack of the form \(\mathbf{B}\mathfrak{g}\), with \(\mathfrak{g}\) an \(L_{\infty}\)-algebra, reduces to the Chevalley-Eilenberg algebra \(\operatorname{CE}(\mathfrak{g})\) of \(\mathfrak{g}\).

\[\mathbb{O}(\mathbb{D}_{\operatorname{RCrit}(S)(M),0})\;\simeq\;\operatorname{CE}\big(\mathfrak{L}(M)\big),\] where \(\operatorname{CE}\big(\mathfrak{L}(M)\big)\) is the Chevalley-Eilenberg algebra of the \(L_{\infty}\)-algebra \(\mathfrak{L}(M)\) found above. This tells us that the complex of sections \(\mathbb{O}\big(\mathds{R}\mathrm{Crit}(S)(M)\big)\) of the structure sheaf of the derived critical locus is a globally-defined generalisation of the usual BV-complex, which is recovered infinitesimally. Let us stress that the field bundle \(E\twoheadrightarrow M\) is a general fibre bundle of smooth manifolds and it does not have to be a vector bundle.

### Global BRST-BV formalism

In this subsection we will construct a global-geometric version of the BRST-BV formalism for Yang-Mills theory. First, we will illustrate the smooth stack structure of the space of principal \(G\)-bundles with connection on a given smooth manifold, which is the configuration space of Yang-Mills theory. Second, we will see how the derived critical locus of the Yang-Mills action functional on such a smooth stack can be concretely constructed as a formal derived smooth stack. Finally, we will show that such a construction provides a global generalisation of the usual BV-formalism for Yang-Mills theory.

#### 5.3.1 Global BRST formalism

Let us now temporarily take a step back and work in the \((\infty,1)\)-topos \(\mathbf{SmoothStack}\) of smooth stacks, i.e. stacks on the ordinary site of smooth manifolds. Our objective in this subsection is the construction of the smooth stack \(\mathbf{Bun}_{G}^{\nabla}(M)\) of principal \(G\)-bundles on \(M\) with connection. We will see such a stack as the global-geometric configuration space of a gauge theory on spacetime \(M\) with gauge group \(G\). This is because a field configuration of a gauge field is precisely the datum of a principal \(G\)-bundle on \(M\) with a connection.

**Construction 5.21** (\(\infty\)-groupoid of principal \(G\)-bundles).: For a given ordinary Lie group \(G\), the smooth stack \(\mathbf{B}G=[*/G]\) is the moduli stack of principal \(G\)-bundles. For a given manifold \(M\), the \(0\)-simplices of the \(\infty\)-groupoid \(\operatorname{Hom}(M,\mathbf{B}G)\) are all the non-abelian Cech \(G\)-cocycles \(\{g_{\alpha\beta}\in\mathcal{C}^{\infty}(V_{\alpha}\cap V_{\beta},G)\,|\,g_{\alpha\beta}\cdot g_{\beta\gamma}=g_{\alpha\gamma}\}\) on \(M\) and the \(1\)-simplices are all the coboundaries \(\{g_{\alpha\beta}\mapsto c_{\alpha}g_{\alpha\beta}c_{\beta}^{-1}\}\) between cocycles. 
Schematically, we have: \[\operatorname{Hom}(M,\mathbf{B}G)\;\simeq\;\Big\{\,\text{Cech cocycles }(g_{\alpha\beta})\ \text{as vertices},\;\;\text{coboundaries }g_{\alpha\beta}\mapsto c_{\alpha}g_{\alpha\beta}c_{\beta}^{-1}\ \text{as edges},\;\dots\Big\}. \tag{5.3.1}\]

A principal \(G\)-bundle \(P\) on an ordinary smooth manifold \(M\in\mathsf{Mfd}\) is defined by its transition functions \(g_{\alpha\beta}\), which are nothing but a Cech \(G\)-cocycle on \(M\). Thus, geometrically, the \(0\)-simplices are all the principal \(G\)-bundles over \(M\), the \(1\)-simplices are all the isomorphisms (i.e. gauge transformations) between them and the higher simplices are given just by the compositions of those. Thus, we can see a principal \(G\)-bundle as a point in the \(\infty\)-groupoid \(\operatorname{Hom}(M,\mathbf{B}G)\). Let us call \[\operatorname{Bun}_{G}(M)\;\coloneqq\;\operatorname{Hom}(M,\mathbf{B}G)\] the \(\infty\)-groupoid of principal \(G\)-bundles on a smooth manifold \(M\).

**Remark 5.22** (On a Cech cover).: More concretely, given a good open cover \(\coprod_{\alpha\in I}V_{\alpha}\twoheadrightarrow M\) of our manifold, the simplicial set \(\operatorname{Bun}_{G}(M)\) can be expressed as the homotopy limit \[\operatorname{Bun}_{G}(M)\,\simeq\,\operatorname{Rlim}\Big(\,\prod_{\alpha}[\,*\,/\mathcal{C}^{\infty}(V_{\alpha},G)]\;\rightrightarrows\;\prod_{\alpha,\beta}[\,*\,/\mathcal{C}^{\infty}(V_{\alpha}\cap V_{\beta},G)]\;\Rrightarrow\;\cdots\,\Big), \tag{5.3.2}\] which explicitly glues the Cech local data of the \(G\)-bundles.

**Remark 5.23** (Non-abelian cohomology).: To recover the more familiar topological picture one must look at the set of connected components of the \(\infty\)-groupoid of principal \(G\)-bundles, i.e. \[\operatorname{H}^{1}(M,G)\,=\,\pi_{0}\operatorname{Hom}(M,\mathbf{B}G). \tag{5.3.3}\] In other words, a morphism \(M\to\mathbf{B}G\) in the homotopy category \(\operatorname{Ho}(\mathbf{SmoothStack})\) of smooth stacks is equivalently a class in the cohomology \(\operatorname{H}^{1}(M,G)\). For example, for \(G=U(1)\), we have by the isomorphism \(\operatorname{H}^{1}(M,U(1))\cong\operatorname{H}^{2}(M,\mathbb{Z})\) the first Chern class of circle bundles.

According to the general construction of \(G\)-bundles by [11, 12] in the context of a general \((\infty,1)\)-topos, to any cocycle \(M\to\mathbf{B}G\) is canonically associated a principal \(G\)-bundle \(P\twoheadrightarrow M\) given by the pullback square (5.3.4) where the homotopy fibre \(\pi_{M}=\operatorname{hofib}(g)\) is the projection of the total space of the principal bundle to the base manifold. However, as we have said, \(\operatorname{Bun}_{G}(M)\) is just a bare \(\infty\)-groupoid (i.e. a Kan-fibrant simplicial set), lacking any smooth structure. What we want is to upgrade this object to a smooth stack.

**Definition 5.24** (Smooth stack of principal \(G\)-bundles).: The _smooth stack of principal \(G\)-bundles_ on a given smooth manifold \(M\) is the mapping smooth stack \[\mathbf{Bun}_{G}(M)\;\coloneqq\;[M,\,\mathbf{B}G]. \tag{5.3.5}\]

Notice that the underlying \(\infty\)-groupoid of this smooth stack, which we can extract by evaluating it on the point as \(\mathbf{Bun}_{G}(M):*\mapsto\operatorname{Bun}_{G}(M)\), is precisely the one of principal \(G\)-bundles on \(M\). Now, we want to introduce the moduli stack \(\mathbf{B}G_{\mathrm{conn}}\) of principal \(G\)-bundles with connection, which refines the moduli stack \(\mathbf{B}G\) of principal bundles. 
We will have the following diagram: (5.3.6) Just as a cocycle \(P:M\to\mathbf{B}G\) encodes the global geometric data of a principal bundle, a cocycle \((P,\nabla_{\!A}):M\to\mathbf{B}G_{\mathrm{conn}}\) will encode both the global geometric data of a principal bundle and the global differential data of a principal connection.

**Construction 5.25** (\(\infty\)-groupoid of \(G\)-bundles with connection).: We can avoid many technical subtleties and explicitly construct the stack \(\mathbf{B}G_{\mathrm{conn}}\in\mathbf{SmoothStack}\) so that a cocycle \((A_{\alpha},g_{\alpha\beta})\in\operatorname{Hom}(M,\mathbf{B}G_{\mathrm{conn}})\) encodes precisely the global differential data of a principal \(G\)-bundle with connection on \(M\) as follows (see, for instance, [11]): \(A_{\alpha}\in\Omega^{1}(V_{\alpha},\mathfrak{g})\) is a local \(1\)-form, which is glued on two-fold overlaps \(V_{\alpha}\times_{M}V_{\beta}\) by \[A_{\beta}\;=\;g_{\beta\alpha}^{-1}(A_{\alpha}+\mathrm{d})g_{\beta\alpha}, \tag{5.3.7}\] where \(g_{\alpha\beta}:M\to\mathbf{B}G\) is the Cech cocycle of a principal \(G\)-bundle, which is itself glued on three-fold overlaps \(V_{\alpha}\times_{M}V_{\beta}\times_{M}V_{\gamma}\) by \[g_{\alpha\beta}\cdot g_{\beta\gamma}\,=\,g_{\alpha\gamma}. \tag{5.3.8}\] Moreover, a coboundary \((A_{\alpha},g_{\alpha\beta})\mapsto(A^{\prime}_{\alpha},g^{\prime}_{\alpha\beta})\) is given by the datum of a local \(G\)-valued scalar \(c_{\alpha}\in\mathcal{C}^{\infty}(V_{\alpha},G)\) such that \[\begin{split}g^{\prime}_{\alpha\beta}&\;=\;c_{\beta}^{-1}g_{\alpha\beta}c_{\alpha},\\ A^{\prime}_{\alpha}&\;=\;c_{\alpha}^{-1}(A_{\alpha}+\mathrm{d})c_{\alpha}.\end{split} \tag{5.3.9}\] Given a smooth manifold \(M\) and a Lie group \(G\), let us call \[\operatorname{Bun}_{G}^{\nabla}(M)\;\coloneqq\;\operatorname{Hom}(M,\,\mathbf{B}G_{\mathrm{conn}}) \tag{5.3.10}\] the \(\infty\)-groupoid of \(G\)-bundles with connection on \(M\).

**Remark 5.26** (Underlying principal \(G\)-bundle).: In general, there is a forgetful morphism \[\mathbf{B}G_{\mathrm{conn}}\;\xrightarrow{\;\mathsf{F}\;}\;\mathbf{B}G, \tag{5.3.11}\] which forgets the connection of the \(G\)-bundles. Thus, it is important that a cocycle \(M\to\mathbf{B}G_{\mathrm{conn}}\) contains not only local connection data, but also the underlying bundle structure \(M\to\mathbf{B}G\). In our case, cocycles are mapped as \[\begin{split}\operatorname{Hom}(M,\mathsf{F})\,:\;\operatorname{Hom}(M,\mathbf{B}G_{\mathrm{conn}})&\;\longrightarrow\;\operatorname{Hom}(M,\mathbf{B}G),\\ (g_{\alpha\beta},A_{\alpha})&\;\longmapsto\;(g_{\alpha\beta}),\end{split} \tag{5.3.12}\] so that the functor forgets the connection data, but retains the global geometric data. Now that we have the moduli stack \(\mathbf{B}G_{\mathrm{conn}}\) of \(G\)-bundles with connection, we can move towards the definition of the smooth stack of gauge fields.

**Remark 5.27** (On a Cech cover).: More concretely, given a good open cover \(\coprod_{\alpha\in I}V_{\alpha}\twoheadrightarrow M\) of our manifold, the simplicial set \(\operatorname{Bun}_{G}^{\nabla}(M)\) can be expressed as the homotopy limit \[\operatorname{Bun}_{G}^{\nabla}(M)\simeq\operatorname{Rlim}\bigg(\prod_{\alpha}[\Omega^{1}(V_{\alpha},\mathfrak{g})/\mathcal{C}^{\infty}(V_{\alpha},G)]\;\rightrightarrows\;\prod_{\alpha,\beta}[\Omega^{1}(V_{\alpha}\cap V_{\beta},\mathfrak{g})/\mathcal{C}^{\infty}(V_{\alpha}\cap V_{\beta},G)]\;\Rrightarrow\;\cdots\bigg),\] which explicitly glues the Cech local data of the \(G\)-bundles with connection. 
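For the abelian case \(G=U(1)\) (recalled here only as a familiar illustration of construction 5.25), the local data reduce to the usual Cech-Deligne description of an electromagnetic gauge field: local potentials glued by gauge transformations,
\[A_{\beta}\;=\;A_{\alpha}+g_{\beta\alpha}^{-1}\mathrm{d}g_{\beta\alpha}\quad\text{on}\;\;V_{\alpha}\cap V_{\beta},\qquad\quad g_{\alpha\beta}\cdot g_{\beta\gamma}\;=\;g_{\alpha\gamma}\quad\text{on}\;\;V_{\alpha}\cap V_{\beta}\cap V_{\gamma},\]
since in this case the adjoint action on \(A_{\alpha}\) is trivial. Taking connected components then recovers, in this abelian case, ordinary differential cohomology of circle bundles with connection, as in the remark below.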
**Remark 5.28** (Non-abelian differential cohomology).: In the homotopy category of smooth stacks \(\mathrm{Ho}(\mathbf{SmoothStack})\), a morphism \(M\to\mathbf{B}G_{\mathrm{conn}}\) is an element of \[\widehat{\mathrm{H}}^{1}(M,G)\;\coloneqq\;\pi_{0}\mathrm{Bun}_{G}^{\nabla}(M), \tag{5.3.13}\] which can be interpreted as (non-abelian) differential cohomology. Let us now fix once and for all a Lie group \(G\), which we will think of as our gauge group, and an ordinary smooth manifold \(M\in\mathsf{Mfd}\), which is going to play the role of spacetime. What we want to do now is to update the bare \(\infty\)-groupoid \(\mathrm{Bun}_{G}^{\nabla}(M)\) of principal \(G\)-bundles on \(M\) with connection to some smooth stack \(\mathbf{Bun}_{G}^{\nabla}(M)\), which we can see as the configuration space of a gauge theory on spacetime \(M\) with gauge group \(G\). **Remark 5.29** (Technical subtleties).: For technical reasons [11], the proper choice of definition for the smooth stack \(\mathbf{Bun}_{G}^{\nabla}(M)\) of principal \(G\)-bundles on \(M\) with connection cannot be, as one may naively think by comparison with the connection-less case, just the mapping smooth stack \([M,\mathbf{B}G_{\mathrm{conn}}]\). Such a choice would fail to have the desired properties. As argued by [11], the smooth stack \(\mathbf{Bun}_{G}^{\nabla}(M)\) must be a certain concretification of the mapping stack \([M,\mathbf{B}G_{\mathrm{conn}}]\), which is constructed in the reference. **Construction 5.30** (Smooth stack of principal \(G\)-bundles with connection).: We construct the _smooth stack \(\mathbf{Bun}_{G}^{\nabla}(M)\) of principal \(G\)-bundles with connection_ as follows. First, let us fix a good open cover \(\bigsqcup_{\alpha}V_{\alpha}\twoheadrightarrow M\) for the base manifold \(M\). 
Then, for any smooth manifold \(U\in\mathsf{Mfd}\) diffeomorphic to a Cartesian space \(U\simeq\mathbb{R}^{n}\) we construct the simplicial set of sections as the \(2\)-coskeletal simplicial set
\[\mathrm{Hom}\big(U,\,\mathbf{Bun}_{G}^{\nabla}(M)\big)\,\simeq\,\mathrm{cos}_{2}\big(Z_{0}\leftleftarrows Z_{1}\leftleftarrows Z_{2}\big),\]
where the sets of \(0\)-, \(1\)- and \(2\)-simplices are given respectively by
\[Z_{0}\,=\,\left\{\begin{array}{l}g_{\alpha\beta}\in\mathcal{C}^{\infty}((V_{\alpha}\cap V_{\beta})\times U,G)\\ A_{\alpha}\in\Omega^{1}_{\mathrm{ver}}(V_{\alpha}\times U,\mathfrak{g})\end{array}\left|\begin{array}{l}g_{\alpha\beta}\cdot g_{\beta\gamma}\cdot g_{\gamma\alpha}=1\\ A_{\alpha}\,=\,g_{\beta\alpha}^{-1}(A_{\beta}+\mathrm{d})g_{\beta\alpha}\end{array}\right.\right\},\]
\[Z_{1}\,=\,\left\{\begin{array}{l}(g_{\alpha\beta},A_{\alpha}),\,(g^{\prime}_{\alpha\beta},A^{\prime}_{\alpha})\in Z_{0}\\ c_{\alpha}\in\mathcal{C}^{\infty}(V_{\alpha}\times U,G)\end{array}\left|\begin{array}{l}g^{\prime}_{\alpha\beta}\,=\,c_{\beta}^{-1}g_{\alpha\beta}c_{\alpha}\\ A^{\prime}_{\alpha}\,=\,c_{\alpha}^{-1}(A_{\alpha}+\mathrm{d})c_{\alpha}\end{array}\right.\right\},\]
\[Z_{2}\,=\,\left\{\begin{array}{l}(g_{\alpha\beta},A_{\alpha}),\,(g^{\prime}_{\alpha\beta},A^{\prime}_{\alpha}),\,(g^{\prime\prime}_{\alpha\beta},A^{\prime\prime}_{\alpha})\in Z_{0}\\ c_{\alpha},\,c^{\prime}_{\alpha}\in\mathcal{C}^{\infty}(V_{\alpha}\times U,G)\end{array}\left|\begin{array}{l}g^{\prime}_{\alpha\beta}\,=\,c_{\beta}^{-1}g_{\alpha\beta}c_{\alpha},\;\;A^{\prime}_{\alpha}\,=\,c_{\alpha}^{-1}(A_{\alpha}+\mathrm{d})c_{\alpha}\\ g^{\prime\prime}_{\alpha\beta}\,=\,c^{\prime\,-1}_{\beta}g^{\prime}_{\alpha\beta}c^{\prime}_{\alpha},\;\;A^{\prime\prime}_{\alpha}\,=\,c^{\prime\,-1}_{\alpha}(A^{\prime}_{\alpha}+\mathrm{d})c^{\prime}_{\alpha}\end{array}\right.\right\},\]
so that the \(2\)-simplices exhibit the composition of gauge transformations, and where \(\Omega^{p}_{\mathrm{ver}}(V_{\alpha}\times U,\mathfrak{g})\) is the set of vertical differential \(p\)-forms on the fibration \(V_{\alpha}\times U\twoheadrightarrow U\). Finally, for a general smooth manifold \(U\in\mathsf{Mfd}\) we consider a good open cover \(\bigsqcup_{i\in I}U_{i}\to U\) for it, so that all the overlaps \(U_{i_{1},\ldots,i_{n}}\) are diffeomorphic to Cartesian spaces. Thus, we define the simplicial set of sections at \(U\) to be the homotopy limit
\[\mathrm{Hom}\big{(}U,\,\mathbf{Bun}_{G}^{\nabla}(M)\big{)}\;\simeq\;\underset{[n]\in\Delta}{\mathrm{Rlim}}\prod_{i_{1},\ldots,i_{n}\in I}\mathrm{Hom}\big{(}U_{i_{1},\ldots,i_{n}},\,\mathbf{Bun}_{G}^{\nabla}(M)\big{)}. \tag{5.3.14}\]

**Remark 5.31** (Relation with bare groupoid of principal \(G\)-bundles with connection).: Notice that the underlying \(\infty\)-groupoid of the smooth stack defined above is precisely the \(\infty\)-groupoid of principal \(G\)-bundles with connection on the manifold \(M\), i.e.
\[\mathbf{Bun}_{G}^{\nabla}(M):\;*\;\longmapsto\,\mathrm{Bun}_{G}^{\nabla}(M). \tag{5.3.15}\]
In this precise sense, \(\mathbf{Bun}_{G}^{\nabla}(M)\) can be understood as the smooth stack version of the bare \(\infty\)-groupoid \(\mathrm{Bun}_{G}^{\nabla}(M)\).

Now, having introduced the smooth stack \(\mathbf{Bun}_{G}(M)\) and its refinement \(\mathbf{Bun}_{G}^{\nabla}(M)\), we will focus on their infinitesimal properties in the context of differential cohesion. To do that, we must embed both these smooth stacks into formal smooth stacks by exploiting the canonical embedding \(\mathbf{SmoothStack}\longrightarrow\mathbf{FSmoothStack}\) from section 2. For simplicity, we will keep using the same symbols \(\mathbf{Bun}_{G}(M)\) and \(\mathbf{Bun}_{G}^{\nabla}(M)\) to indicate the two formal smooth stacks obtained by such an embedding.

**Proposition 5.32** (Formal disk of \(\mathbf{Bun}_{G}(M)\)).: The formal disk \(\mathbb{D}_{\mathbf{Bun}_{G}(M),P}\) of the formal smooth stack \(\mathbf{Bun}_{G}(M)\) of \(G\)-bundles on a fixed smooth manifold \(M\), at a given \(G\)-bundle \(P\twoheadrightarrow M\), is the formal smooth stack
\[\mathbb{D}_{\mathbf{Bun}_{G}(M),P}\;\simeq\;\mathbf{B}\Omega^{0}(M,\mathfrak{g}_{P}), \tag{5.3.16}\]
where \(\mathfrak{g}_{P}\coloneqq P\times_{G}\mathfrak{g}\) is the adjoint bundle of \(P\in\mathbf{Bun}_{G}(M)\) and \(\Omega^{0}(M,\mathfrak{g}_{P})\) is the local Lie algebra of \(\mathfrak{g}_{P}\)-valued \(0\)-forms on \(M\).

Proof.: Let \(M\in\mathsf{Mfd}\) be a smooth manifold and \(X\) any formal smooth stack.
The formal disk of the mapping stack \([M,X]\) at the point \(f:M\to X\) is defined by the pullback \(\mathbb{D}_{[M,X],f}=*\times_{\Im[M,X]}[M,X]\). Consider now the pullback \(f^{*}T^{\infty}X\simeq M\times_{\Im(X)}X\) of the formal disk bundle of \(X\) along the map \(f\). Let \(\boldsymbol{\Gamma}(M,E)\) denote the formal smooth stack of sections of a bundle \(E\) on \(M\). One can notice that we have an equivalence of formal smooth stacks \(\mathbb{D}_{[M,X],f}\simeq\boldsymbol{\Gamma}(M,f^{*}T^{\infty}X)\). Let us now consider our case of interest, \(\mathbf{Bun}_{G}(M)=[M,\mathbf{B}G]\). Since the moduli stack of \(G\)-bundles is of the form \(\mathbf{B}G\simeq[\,*\,/G]\), we have the formal disk bundle \(T^{\infty}\mathbf{B}G\simeq[\,*\,/(G\ltimes_{\mathrm{ad}}\mathfrak{g})]\). Given \(P:M\to\mathbf{B}G\), we have the pullback \(P^{*}T^{\infty}\mathbf{B}G\simeq\mathbf{B}\mathfrak{g}_{P}\). Therefore, we have the equivalence of formal smooth stacks \(\mathbb{D}_{\mathbf{Bun}_{G}(M),P}\simeq\boldsymbol{\Gamma}(M,P^{*}T^{\infty}\mathbf{B}G)\simeq\mathbf{B}\Omega^{0}(M,\mathfrak{g}_{P})\).

Recall that the infinitesimal automorphisms -- i.e. gauge transformations -- of a principal \(G\)-bundle are indeed known to be given by sections of its adjoint bundle (see e.g. [11, 12] for a higher geometric point of view).

The next step will be to consider infinitesimal deformations of \(\mathbf{Bun}^{\nabla}_{G}(M)\), which is the configuration space of a gauge theory with gauge group \(G\) on spacetime \(M\). As we have seen in the derived case in definition 4.27, we can construct the formal disk bundle of the formal smooth stack of \(G\)-bundles with connection on \(M\) by the following pullback square

(5.3.17)

Recall that the fibre of the formal disk bundle of a formal smooth stack at a point is the formal disk at such a point. Then, the fibre of the formal disk bundle at a fixed principal \(G\)-bundle with connection \((P,\nabla_{\!A})\) is given by the following formal smooth stack
\[\mathbb{D}_{\mathbf{Bun}^{\nabla}_{G}(M),(P,\nabla_{\!A})}\;\simeq\;\mathbf{B}\Big(\Omega^{0}(M,\mathfrak{g}_{P})\xrightarrow{\;\nabla_{\!A}\;}\Omega^{1}(M,\mathfrak{g}_{P})\Big), \tag{5.3.18}\]
where the delooped object is a local \(L_{\infty}\)-algebra whose underlying graded vector space is given by the displayed two-term complex. Notice that it depends on the point \((P,\nabla_{\!A})\). Such an \(L_{\infty}\)-algebra controls the infinitesimal deformations of the fixed connection, together with infinitesimal gauge transformations for the deformed connection. So, its \(L_{\infty}\)-bracket structure is given as follows:
\[\ell_{1}(\vec{c})\;=\;\nabla_{\!A}\vec{c},\qquad\ell_{2}(\vec{c}_{1},\vec{c}_{2})\;=\;[\vec{c}_{1},\vec{c}_{2}]_{\mathfrak{g}},\qquad\ell_{2}(\vec{c},\vec{A})\;=\;[\vec{c},\vec{A}]_{\mathfrak{g}}, \tag{5.3.19}\]
for any \(\vec{c},\vec{c}_{1},\vec{c}_{2}\in\Omega^{0}(M,\mathfrak{g}_{P})\) and \(\vec{A}\in\Omega^{1}(M,\mathfrak{g}_{P})\) elements of the underlying graded vector space.

**Remark 5.33** (Formal disk bundle as \(L_{\infty}\)-algebroid).: Notice that, by construction, the formal disk is indeed an infinitesimal object: its reduction is naturally equivalent to the point. More generally, the reduction of the formal disk bundle of the smooth stack of \(G\)-bundles on \(M\) is naturally equivalent to the reduction of the stack itself. Let us stress the fact that the projection \(T^{\infty}\mathbf{Bun}^{\nabla}_{G}(M)\twoheadrightarrow\mathbf{Bun}^{\nabla}_{G}(M)\) is a bundle of formal smooth stacks whose base is not an ordinary manifold, but the smooth stack of principal \(G\)-bundles on \(M\) with connection. Moreover, as we have seen in subsection 4.4, the formal smooth stack \(T^{\infty}\mathbf{Bun}^{\nabla}_{G}(M)\) comes with a natural structure of smooth algebroid (i.e. of infinitesimal smooth groupoid) provided by the canonical effective epimorphism to its de Rham space.

**Remark 5.34** (Morphism forgetting the connection).: Recall from the beginning of this subsection that the formal disk in the formal smooth stack \(\mathbf{Bun}_{G}(M)\) of principal \(G\)-bundles at a \(P\in\mathbf{Bun}_{G}(M)\) is precisely given by the quotient stack
\[\mathbb{D}_{\mathbf{Bun}_{G}(M),P}\ \simeq\ \mathbf{B}\Omega^{0}(M,\mathfrak{g}_{P}), \tag{5.3.20}\]
where \(\Omega^{0}(M,\mathfrak{g}_{P})\) is the Lie algebra of \(\mathfrak{g}_{P}\)-valued \(0\)-forms.
Thus, we can notice that there exists a forgetful map of formal smooth stacks
\[\mathbb{D}_{\mathbf{Bun}_{G}^{\nabla}(M),(P,\nabla_{\!A})}\ \xrightarrow{\mathsf{F}}\ \mathbb{D}_{\mathbf{Bun}_{G}(M),P} \tag{5.3.21}\]
which forgets the deformation of the connection data.

**Remark 5.35** (Formal disk bundle in Cech data).: We can explicitly express the formal smooth stack \(T^{\infty}\mathbf{Bun}_{G}^{\nabla}(M)\) in Cech data as follows. First, let us fix a good open cover \(\bigsqcup_{\alpha}V_{\alpha}\twoheadrightarrow M\) for the base manifold \(M\). Then, for any formal smooth manifold \(U\in\mathsf{FMfd}\) equivalent to a formal Cartesian space \(U\simeq\mathbb{R}^{n}\times\mathrm{Spec}\,W\) we can write the \(2\)-coskeletal simplicial set of sections whose sets of \(0\)- and \(1\)-simplices are given respectively by
\[Z_{0}\,=\,\left\{\begin{array}{l}g_{\alpha\beta}\in\mathcal{C}^{\infty}((V_{\alpha}\cap V_{\beta})\times U,G)\\ A_{\alpha}\in\Omega^{1}_{\mathrm{ver}}(V_{\alpha}\times U,\mathfrak{g})\\ \vec{A}_{\alpha}\in\Omega^{1}_{\mathrm{ver}}(V_{\alpha}\times\mathbb{R}^{n},\mathfrak{g})\otimes\mathfrak{m}_{W}\end{array}\left|\begin{array}{l}g_{\alpha\beta}\cdot g_{\beta\gamma}\cdot g_{\gamma\alpha}=1\\ A_{\alpha}=g_{\beta\alpha}^{-1}(A_{\beta}+\mathrm{d})g_{\beta\alpha}\\ \vec{A}_{\alpha}=g_{\beta\alpha}^{-1}\vec{A}_{\beta}g_{\beta\alpha}\end{array}\right.\right\},\]
\[Z_{1}\,=\,\left\{\begin{array}{l}(g_{\alpha\beta},A_{\alpha},\vec{A}_{\alpha}),\,(g^{\prime}_{\alpha\beta},A^{\prime}_{\alpha},\vec{A}^{\prime}_{\alpha})\in Z_{0}\\ c_{\alpha}\in\mathcal{C}^{\infty}(V_{\alpha}\times U,G)\\ \vec{c}_{\alpha}\in\Omega^{0}_{\mathrm{ver}}(V_{\alpha}\times\mathbb{R}^{n},\mathfrak{g})\otimes\mathfrak{m}_{W}\end{array}\left|\begin{array}{l}g^{\prime}_{\alpha\beta}=c_{\beta}^{-1}g_{\alpha\beta}c_{\alpha}\\ A^{\prime}_{\alpha}=c_{\alpha}^{-1}(A_{\alpha}+\mathrm{d})c_{\alpha}\\ \vec{A}^{\prime}_{\alpha}=\vec{A}_{\alpha}+\nabla_{\!A_{\alpha}}\vec{c}_{\alpha}\\ \vec{c}_{\alpha}=g_{\beta\alpha}^{-1}\vec{c}_{\beta}g_{\beta\alpha}\end{array}\right.\right\},\]
and whose \(2\)-simplices \(Z_{2}\) are given by triples of \(0\)-simplices together with pairs of \(1\)-simplex data \((c_{\alpha},\vec{c}_{\alpha})\) and \((c^{\prime}_{\alpha},\vec{c}^{\,\prime}_{\alpha})\) relating them as above, exhibiting the composition of gauge transformations. Finally, for any general formal smooth manifold \(U\in\mathsf{FMfd}\) we consider a good open cover \(\bigsqcup_{i\in I}U_{i}\to U\) for it, so that all the overlaps \(U_{i_{1},\ldots,i_{n}}\) are isomorphic to thickened Cartesian spaces. Thus, we define the simplicial set of sections at \(U\) to be the homotopy limit
\[\mathrm{Hom}\big{(}U,\,T^{\infty}\mathbf{Bun}_{G}^{\nabla}(M)\big{)}\ \simeq\ \underset{[n]\in\Delta}{\mathrm{Rlim}}\prod_{i_{1},\ldots,i_{n}\in I}\mathrm{Hom}\big{(}U_{i_{1},\ldots,i_{n}},\,T^{\infty}\mathbf{Bun}_{G}^{\nabla}(M)\big{)}. \tag{5.3.22}\]

#### Global Yang-Mills theory

In the previous subsection, we constructed the smooth stack \(\mathbf{Bun}_{G}^{\nabla}(M)\), which provides a global-geometric formulation of the configuration space of a gauge field with gauge Lie group \(G\) on a spacetime \(M\). In this subsection, we will proceed with the construction of the derived critical locus \(\mathsf{RCrit}(S)(M)\to\mathbf{Bun}_{G}^{\nabla}(M)\) of the Yang-Mills action \(S\) as a formal derived smooth stack in the context of derived differential cohesion. Finally, we will show that such a geometric object provides a global-geometric version of usual BV-BRST theory.

**Construction 5.36** (Stack of densities).: We take spacetime to be an oriented \(d\)-dimensional smooth manifold \(M\) equipped with a (pseudo-)Riemannian metric. We construct the quotient stack \(\mathbf{Dens}_{M}\coloneqq[\boldsymbol{\Omega}^{d}(M)/\boldsymbol{\Omega}^{d-1}(M)]\) of top forms \(\mu\) on \(M\), with the action \(\mu\mapsto\mu+\mathrm{d}_{\mathrm{dR}}\lambda\) of \((d-1)\)-forms \(\lambda\). Notice that the connected components \(\pi_{0}\mathbf{Dens}_{M}\) are classes of top forms up to total derivative.

**Construction 5.37** (Yang-Mills action functional).: The datum of the Yang-Mills action functional is equivalently a morphism of formal smooth stacks given by
\[\begin{split}\breve{S}\,:&\,\mathbf{Bun}_{G}^{\nabla}(M)\,\longrightarrow\,\mathbf{Dens}_{M}\\ &(g_{\alpha\beta},\,A_{\alpha})\,\longmapsto\,\frac{1}{2}\langle F_{A}\,\overset{\wedge}{,}\,\star F_{A}\rangle_{\mathfrak{g}},\end{split} \tag{5.3.23}\]
where \(F_{A}\) is the curvature of the principal \(G\)-bundle with connection \((g_{\alpha\beta},\,A_{\alpha})\), given locally by \(F_{A_{\alpha}}=\mathrm{d}A_{\alpha}+\frac{1}{2}[A_{\alpha}\overset{\wedge}{,}A_{\alpha}]_{\mathfrak{g}}\).

Now, the question becomes how we can encode the variational derivative of the Yang-Mills action functional or, in other words, the Euler-Lagrange equations of motion.
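Before formalising the answer, let us recall the local computation that identifies the variational derivative (a standard calculation anticipating Construction 5.41 below, with boundary terms and signs suppressed). For a compactly supported deformation \(a\in\Omega^{1}(M,\mathfrak{g}_{P})\) one has \(F_{A+ta}=F_{A}+t\,\nabla_{\!A}a+\tfrac{t^{2}}{2}[a\overset{\wedge}{,}a]_{\mathfrak{g}}\), so that
\[\frac{\mathrm{d}}{\mathrm{d}t}\Big|_{t=0}\,S(A+ta)\;=\;\int_{M}\langle\nabla_{\!A}a\overset{\wedge}{,}\star F_{A}\rangle_{\mathfrak{g}}\;=\;\int_{M}\langle a\overset{\wedge}{,}\nabla_{\!A}{\star}F_{A}\rangle_{\mathfrak{g}},\]
after integrating by parts. The Yang-Mills equations are therefore encoded by the \((d-1)\)-form \(\nabla_{\!A}{\star}F_{A}\in\Omega^{d-1}(M,\mathfrak{g}_{P})\), which is exactly the datum that the restricted cotangent bundle below is designed to accommodate.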
In analogy with the case of scalar field theory, we will construct a restricted cotangent bundle \(T^{\vee}_{\mathrm{res}}\mathbf{Bun}^{\nabla}_{G}(M)\) such that the variational derivative can be formalised as its section \(\delta S:\mathbf{Bun}^{\nabla}_{G}(M)\longrightarrow T^{\vee}_{\mathrm{res}}\mathbf{Bun}^{\nabla}_{G}(M)\).

**Construction 5.38** (Fibre of the restricted cotangent bundle).: Notice that the Killing form \(\langle-,-\rangle_{\mathfrak{g}}\) on the Lie algebra \(\mathfrak{g}\) induces a natural pairing between \(\mathfrak{g}_{P}\)-valued differential forms \(\langle-\overset{\wedge}{,}-\rangle_{\mathfrak{g}}:\Omega^{d-p}(M,\mathfrak{g}_{P})\times\Omega^{p}(M,\mathfrak{g}_{P})\longrightarrow\Omega^{d}(M)\), where \(d\coloneqq\dim M\) is the dimension of the base manifold and \(\mathfrak{g}_{P}\) is the adjoint bundle of a principal \(G\)-bundle \(P\twoheadrightarrow M\). We want to use this fact to induce a well-defined morphism of formal derived smooth stacks of the form
\[\langle-\overset{\wedge}{,}-\rangle_{\mathfrak{g}}\;:\;\mathcal{F}_{(P,\nabla_{\!A})}\times\mathbb{D}_{\mathbf{Bun}^{\nabla}_{G}(M),(P,\nabla_{\!A})}\;\longrightarrow\;\mathbf{Dens}_{M}, \tag{5.3.24}\]
where \(\mathcal{F}_{(P,\nabla_{\!A})}\) is a suitable formal derived smooth stack which we must construct. Let us define a formal derived smooth set by the derived kernel
\[\mathcal{F}_{(P,\nabla_{\!A})}\;\coloneqq\;\mathds{R}\mathrm{ker}\Big{(}\nabla_{\!A}:\mathbf{\Omega}^{d-1}(M,\mathfrak{g}_{P})\rightarrow\mathbf{\Omega}^{d}(M,\mathfrak{g}_{P})\Big{)}, \tag{5.3.25}\]
for any fixed principal \(G\)-bundle with connection \((P,\nabla_{\!A})\in\mathbf{Bun}^{\nabla}_{G}(M)\). A section is given by a \((d-1)\)-form \(\tilde{A}\) together with a homotopy \(\tilde{c}\) from \(\nabla_{\!A}\tilde{A}\) to \(0\). The natural morphism (5.3.24) is then constructed by sending \(0\)-simplices \((\tilde{A},\vec{A})\) to the density \(\langle\tilde{A}\overset{\wedge}{,}\vec{A}\rangle_{\mathfrak{g}}\). This assignment is invariant up to total derivative: indeed, an infinitesimal gauge transformation \(\vec{A}\mapsto\vec{A}+\nabla_{\!A}\vec{c}\) is sent to the \(1\)-simplex \(\langle\tilde{A}\overset{\wedge}{,}\vec{A}\rangle_{\mathfrak{g}}\mapsto\langle\tilde{A}\overset{\wedge}{,}\vec{A}\rangle_{\mathfrak{g}}+\mathrm{d}_{\mathrm{dR}}\langle\tilde{A}\overset{\wedge}{,}\vec{c}\rangle_{\mathfrak{g}}\) in \(\mathbf{Dens}_{M}\) for any \(\vec{A}\in\mathbb{D}_{\mathbf{Bun}^{\nabla}_{G}(M),(P,\nabla_{\!A})}(U)\) and \(\tilde{A}\in\mathcal{F}_{(P,\nabla_{\!A})}(U)\). This is because the term \(\langle\nabla_{\!A}\tilde{A}\overset{\wedge}{,}\vec{c}\rangle_{\mathfrak{g}}\) is homotopic to \(0\). Thus, the natural morphism (5.3.24) is well-defined. A reasonable definition of restricted cotangent bundle must be such that its fibre at the point \((P,\nabla_{\!A})\in\mathbf{Bun}^{\nabla}_{G}(M)\) is the formal derived smooth set \(\mathcal{F}_{(P,\nabla_{\!A})}\). We have a fibre-wise construction of a formal derived smooth stack \(T^{\vee}_{\mathrm{res}}\mathbf{Bun}^{\nabla}_{G}(M)\), which we will call _restricted cotangent bundle_, in analogy with scalar field theory above.
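The invariance up to total derivative used above rests only on the Leibniz rule for \(\nabla_{\!A}\) together with the \(\nabla_{\!A}\)-invariance of the Killing pairing; schematically (with signs depending on form degrees):
\[\mathrm{d}_{\mathrm{dR}}\langle\tilde{A}\overset{\wedge}{,}\vec{c}\rangle_{\mathfrak{g}}\;=\;\langle\nabla_{\!A}\tilde{A}\overset{\wedge}{,}\vec{c}\rangle_{\mathfrak{g}}\,+\,(-1)^{d-1}\langle\tilde{A}\overset{\wedge}{,}\nabla_{\!A}\vec{c}\rangle_{\mathfrak{g}},\]
so that \(\langle\tilde{A}\overset{\wedge}{,}\nabla_{\!A}\vec{c}\rangle_{\mathfrak{g}}\) differs from a total derivative precisely by the term \(\langle\nabla_{\!A}\tilde{A}\overset{\wedge}{,}\vec{c}\rangle_{\mathfrak{g}}\), which the homotopy \(\tilde{c}\) trivialises.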
Therefore, by construction, there will be the natural pairing
\[\langle-\overset{\wedge}{,}-\rangle_{\mathfrak{g}}\;:\;T^{\vee}_{\mathrm{res}}\mathbf{Bun}^{\nabla}_{G}(M)\times_{\mathbf{Bun}^{\nabla}_{G}(M)}T^{\infty}\mathbf{Bun}^{\nabla}_{G}(M)\,\longrightarrow\,\mathbf{Bun}^{\nabla}_{G}(M)\times\mathbf{Dens}_{M}.\]
In the rest of this section, we will deploy the compact notation \(f^{\prime}\xleftarrow{f_{1}}f\) to denote a \(1\)-simplex \(f_{1}\) whose boundaries are \(\partial_{0}f_{1}=f\) and \(\partial_{1}f_{1}=f^{\prime}\), and similarly for higher simplices. (This is the notation we used in Example 3.28, which the reader may find helpful to recall at this point.)

**Construction 5.39** (Restricted cotangent bundle).: Let us provide a concrete construction of the restricted cotangent bundle \(T^{\vee}_{\mathrm{res}}\mathbf{Bun}^{\nabla}_{G}(M)\) in terms of Cech data. Such a construction is not the easiest, so our strategy will be the following: first, we will define a pre-stack \(T^{\vee}_{\mathrm{res}}\mathbf{Bun}^{\nabla}_{G}(M)^{\mathrm{pre}}\) which -- roughly speaking -- approximates the wanted formal derived smooth stack by encoding its local sections; then, we will stackify it. This means gluing local sections of the pre-stack in a way that is compatible with the descent condition on formal derived smooth manifolds. To keep our notation consistent with the ordinary case, given an ordinary smooth manifold \(V\), let us define the following simplicial sets:
\[\mathcal{C}^{\infty}(V\times U,\,G)\,\coloneqq\,\mathds{R}\mathrm{Hom}(U,[V,G]),\qquad\Omega^{p}_{\mathrm{ver}}(V\times U,\mathfrak{g})\,\coloneqq\,\mathds{R}\mathrm{Hom}(U,\,\mathbf{\Omega}^{p}(V,\mathfrak{g})),\]
for any formal derived smooth manifold \(U\). Now, the simplicial set of sections of \(T^{\vee}_{\mathrm{res}}\mathbf{Bun}^{\nabla}_{G}(M)^{\mathrm{pre}}\) on any formal derived smooth manifold \(U\in\mathbf{dFMfd}\) in our \((\infty,1)\)-site is of the following form [explicit simplicial diagram omitted]: a \(0\)-simplex is a tuple \((g_{\alpha\beta},A_{\alpha},\tilde{A}_{\alpha},\tilde{c}_{\alpha})\), where \((g_{\alpha\beta},A_{\alpha})\) is a Cech cocycle of a principal \(G\)-bundle with connection as in Construction 5.30, \(\tilde{A}_{\alpha}\in\Omega^{d-1}_{\mathrm{ver}}(V_{\alpha}\times U,\mathfrak{g})_{0}\) is glued on overlaps by the adjoint action, \(\tilde{A}_{\alpha}=g_{\beta\alpha}^{-1}\tilde{A}_{\beta}g_{\beta\alpha}\), and \(\tilde{c}_{\alpha}\in\Omega^{d}_{\mathrm{ver}}(V_{\alpha}\times U,\mathfrak{g})_{1}\) is a homotopy from \(\nabla_{\!A_{\alpha}}\tilde{A}_{\alpha}\) to \(0\), also glued by the adjoint action; a \(1\)-simplex is a gauge transformation \(c_{\alpha}\) acting on \((g_{\alpha\beta},A_{\alpha})\) as in (5.3.9) and on \((\tilde{A}_{\alpha},\tilde{c}_{\alpha})\) by conjugation; the higher simplices are given by compositions of those.
**Remark 5.40** (Derived-extension of the group action).: In the construction above we exploited the following facts. As we said, given a smooth manifold \(V\), there is a morphism of smooth sets \(\rho:[V,G]\times\mathbf{\Omega}^{1}(V,\mathfrak{g})\to\mathbf{\Omega}^{1}(V,\mathfrak{g})\) defined by \((c,A)\mapsto c^{-1}(A+\mathrm{d})c\). Then, we can embed such a morphism of smooth sets into a morphism \(i\rho\) of formal derived smooth stacks by derived-extension from definition 3.25. On a given formal derived smooth manifold \(U\), we then have the morphism of simplicial sets \(i\rho(U):\mathcal{C}^{\infty}(V\times U)\times\Omega^{1}_{\mathrm{ver}}(V \times U,\mathfrak{g})\to\Omega^{1}_{\mathrm{ver}}(V\times U,\mathfrak{g})\), with the same notation as above. In complete analogy, we can derived-extend the group multiplication morphism \(:[V,G]\times[V,G]\to[V,G]\), given by \((c,c^{\prime})\mapsto c\cdot c^{\prime}\), and the morphism encoding the adjoint action on differential forms \(\mathrm{Ad}:[V,G]\times\mathbf{\Omega}^{p}(V,\mathfrak{g})\to\mathbf{\Omega}^ {p}(V,\mathfrak{g})\), given by \((c,\tilde{A})\mapsto c^{-1}\tilde{A}c\). Now that we have constructed restricted cotangent bundle, we can show that the Yang-Mills action functional \(S\) induces a section \(\delta S:\mathbf{Bun}^{\nabla}_{G}(M)\longrightarrow T^{\vee}_{\mathrm{res}} \mathbf{Bun}^{\nabla}_{G}(M)\) of it, which is going to encode its equations of motion. In fact, as shown for example in [11], the first variation of the Yang-Mills action functional can be expressed in the form \(\mathrm{d}_{\mathrm{dR}}S=\int_{M}\langle\delta S\,\dot{\gamma}\,\rangle_{ \mathfrak{g}}\) where the variational derivative, which encodes the Yang-Mills equations, must be of the form \(\delta S(P,\nabla_{A})=\nabla_{A}\star F_{A}\in\Omega^{d-1}(M,\mathfrak{g}_{P})\) at any bundle \((P,\nabla_{A})\). Let us now see that this can be indeed interpreted as a section of the restricted cotangent bundle. **Construction 5.41** (Variational derivative of the action functional).: The de Rham differential \(\mathrm{d}_{\mathrm{dR}}S\) of the action functional gives rise to a morphism of formal derived smooth stacks, which we call variational derivative, given by \[\begin{split}\delta S\,:\,\mathbf{Bun}^{\nabla}_{G}(M)& \longrightarrow\,T^{\vee}_{\mathrm{res}}\mathbf{Bun}^{\nabla}_{G}(M)\\ (g_{\alpha\beta},\,A_{\alpha})&\longmapsto\,(g_{ \alpha\beta},\,A_{\alpha},\,\nabla_{A_{\alpha}}\star F_{A_{\alpha}},\,0),\end{split} \tag{5.3.26}\] and the higher simplices are naturally embedded. Now, since we have a good definition of the variational derivative, we have all the ingredients we need to define the derived critical locus \(\mathrm{RCrit}(S)(M)\) of the Yang-Mills action functional. **Definition 5.42** (Derived critical locus of Yang-Mills action functional).: We construct the _derived critical locus of Yang-Mills action functional_ by the formal derived smooth stack given by the following homotopy pullback square: (5.3.27) where \(\delta S\) is the morphism (5.3.23) constructed above and \(0\) is the zero-section. **Remark 5.43** (Derived critical locus in Cech data).: Let us unravel the definition of the derived critical locus \(\operatorname{RCrit}(S)(M)\) of the Yang-Mills action functional in terms of Cech data. 
**Remark 5.43** (Derived critical locus in Cech data).: Let us unravel the definition of the derived critical locus \(\operatorname{RCrit}(S)(M)\) of the Yang-Mills action functional in terms of Cech data. As in the previous example, our strategy to present the derived critical locus will be the following: first, we will explicitly write a pre-stack \(\operatorname{RCrit}(S)(M)^{\operatorname{pre}}\) which -- roughly speaking -- approximates the derived critical locus by encoding its local sections; then, we will stackify it. The simplicial set of sections of \(\operatorname{RCrit}(S)(M)^{\operatorname{pre}}\) on any formal derived smooth manifold \(U\in\mathbf{dFMfd}\) in our \((\infty,1)\)-site is of the following form [explicit simplicial diagram omitted]: a \(0\)-simplex consists of a Cech cocycle \((g_{\alpha\beta},A_{\alpha})\) as in Construction 5.30 together with local antifields \(A^{+}_{\alpha}\in\Omega^{d-1}_{\mathrm{ver}}(V_{\alpha}\times U,\mathfrak{g})_{1}\) and antighosts \(c^{+}_{\alpha}\in\Omega^{d}_{\mathrm{ver}}(V_{\alpha}\times U,\mathfrak{g})_{2}\), glued on overlaps by the adjoint action,
\[A^{+}_{\alpha}\;=\;g_{\beta\alpha}^{-1}A^{+}_{\beta}g_{\beta\alpha},\qquad c^{+}_{\alpha}\;=\;g_{\beta\alpha}^{-1}c^{+}_{\beta}g_{\beta\alpha},\]
where \(A^{+}_{\alpha}\) provides a homotopy from \(\nabla_{\!A_{\alpha}}{\star}F_{A_{\alpha}}\) to \(0\) and \(c^{+}_{\alpha}\) a higher homotopy witnessing the Noether identity; the \(1\)-simplices comprise gauge transformations \(c_{\alpha}\) together with homotopies of all these data; the higher simplices are their compositions. In summary, we have:

* \(g_{\alpha\beta}\) transition functions,
* \(A_{\alpha}\) connection,
* \(A_{\alpha}^{+}\) equations of motion,
* \(c_{\alpha}^{+}\) Noether identities,
* \(c_{\alpha}\) gauge transformations,
* \(g_{1,\alpha\beta}\) homotopies of transition functions,
* \(A_{1,\alpha}\) homotopies of connections,
* \(A_{1,\alpha}^{+}\) homotopies of equations of motion,
* \(c_{1,\alpha}^{+}\) homotopies of Noether identities,
* \((n\geq 2)\)-simplices: compositions of gauge transformations and homotopies of homotopies.

From a physical standpoint, we can interpret \(A^{+}\) and \(c^{+}\) as antifield and antighost, respectively.

**Remark 5.45** (Global antifields and antighosts).: Notice that a section of our formal derived smooth stack \(\operatorname{RCrit}(S)(M)\) on a formal derived smooth manifold \(U\in\mathbf{dFMfd}\) will be of the form \((P,\nabla_{\!A},A^{+},c^{+})\), where we have the following:

* \((P,\nabla_{\!A})\) is a \(U\)-parametrised family of \(G\)-bundles on \(M\) with connection,
* \(A^{+}\in\Omega_{\operatorname{ver}}^{d-1}(M\times U,\mathfrak{g}_{P})_{1}\) is a \(U\)-parametrised family of so-called _antifields_,
* \(c^{+}\in\Omega_{\operatorname{ver}}^{d}(M\times U,\mathfrak{g}_{P})_{2}\) is a \(U\)-parametrised family of so-called _antighosts_.

Moreover, notice that the antifields and the antighosts appearing here have a global-geometric structure and, in fact, they are differential forms valued in the adjoint bundle \(\mathfrak{g}_{P}=P\times_{G}\mathfrak{g}\) of the underlying principal \(G\)-bundle \(P\).

**Remark 5.46** (Infinitesimal disk of derived critical locus).: In the special case where \(U\simeq*\), a section is a point \((P,\nabla_{\!A})\in\operatorname{RCrit}(S)(M)\) in the derived critical locus, i.e. a principal \(G\)-bundle on \(M\) with connection which satisfies the Yang-Mills equations of motion. Recall from section 4 that, in the context of derived differential cohesion, we can consider a formal disk \(\mathbb{D}_{\operatorname{RCrit}(S)(M),(P,\nabla_{\!A})}\) of the formal derived smooth stack \(\operatorname{RCrit}(S)(M)\) at the point \((P,\nabla_{\!A})\in\operatorname{RCrit}(S)(M)\), as in definition 4.24.
Such a formal disk describes the behaviour of the formal derived smooth stack in an infinitesimal neighbourhood of the chosen point, where the latter is a global solution of the Yang-Mills equations. This is defined by the pullback square

(5.3.28)

Since our infinitesimal disk is in fact an infinitesimal object, as we saw above in section 4.4, it is of the form
\[\mathbb{D}_{\operatorname{RCrit}(S)(M),(P,\nabla_{\!A})}\ \simeq\ \mathbf{B}\big(\overrightarrow{\mathfrak{Crit}}(S)_{(P,\nabla_{\!A})}\big) \tag{5.3.29}\]
for some \(L_{\infty}\)-algebra \(\overrightarrow{\mathfrak{Crit}}(S)_{(P,\nabla_{\!A})}\) which encodes the infinitesimal deformations of the derived critical locus around the fixed point \((P,\nabla_{\!A})\in\operatorname{RCrit}(S)(M)\). By unravelling this \(L_{\infty}\)-algebra, we see that its underlying differential graded vector space is given by the cochain complex
\[\Omega^{0}(M,\mathfrak{g}_{P})\ \xrightarrow{\ \nabla_{\!A}\ }\ \Omega^{1}(M,\mathfrak{g}_{P})\ \xrightarrow{\ \nabla_{\!A}\star\nabla_{\!A}\ }\ \Omega^{d-1}(M,\mathfrak{g}_{P})\ \xrightarrow{\ \nabla_{\!A}\ }\ \Omega^{d}(M,\mathfrak{g}_{P}),\]
which depends on the point \((P,\nabla_{\!A})\in\operatorname{RCrit}(S)(M)\). Such an \(L_{\infty}\)-algebra controls the infinitesimal deformations \(\nabla_{\!A}+\vec{A}\) of the fixed connection, together with infinitesimal gauge transformations and equations of motion for the deformed connection. Thus, not too surprisingly, the \(L_{\infty}\)-bracket structure is given as follows:
\[\begin{split}\ell_{1}(\vec{c})\;&=\;\nabla_{\!A}\vec{c},\qquad\ell_{1}(\vec{A})\;=\;\nabla_{\!A}\star\nabla_{\!A}\vec{A},\qquad\ell_{1}(\vec{A}^{+})\;=\;\nabla_{\!A}\vec{A}^{+},\\ \ell_{2}(\vec{c}_{1},\vec{c}_{2})\;&=\;[\vec{c}_{1},\vec{c}_{2}]_{\mathfrak{g}},\qquad\ell_{2}(\vec{c},\vec{c}^{\,+})\;=\;[\vec{c},\vec{c}^{\,+}]_{\mathfrak{g}},\\ \ell_{2}(\vec{c},\vec{A})\;&=\;[\vec{c},\vec{A}]_{\mathfrak{g}},\qquad\ell_{2}(\vec{c},\vec{A}^{+})\;=\;[\vec{c},\vec{A}^{+}]_{\mathfrak{g}},\\ \ell_{2}(\vec{A},\vec{A}^{+})\;&=\;[\vec{A}\overset{\wedge}{,}\vec{A}^{+}]_{\mathfrak{g}},\\ \ell_{2}(\vec{A}_{1},\vec{A}_{2})\;&=\;\nabla_{\!A}\star[\vec{A}_{1}\overset{\wedge}{,}\vec{A}_{2}]_{\mathfrak{g}}+[\vec{A}_{1}\overset{\wedge}{,}\star\nabla_{\!A}\vec{A}_{2}]_{\mathfrak{g}}+[\vec{A}_{2}\overset{\wedge}{,}\star\nabla_{\!A}\vec{A}_{1}]_{\mathfrak{g}},\\ \ell_{3}(\vec{A}_{1},\vec{A}_{2},\vec{A}_{3})\;&=\;[\vec{A}_{1}\overset{\wedge}{,}\star[\vec{A}_{2}\overset{\wedge}{,}\vec{A}_{3}]_{\mathfrak{g}}]_{\mathfrak{g}}+[\vec{A}_{2}\overset{\wedge}{,}\star[\vec{A}_{3}\overset{\wedge}{,}\vec{A}_{1}]_{\mathfrak{g}}]_{\mathfrak{g}}+[\vec{A}_{3}\overset{\wedge}{,}\star[\vec{A}_{1}\overset{\wedge}{,}\vec{A}_{2}]_{\mathfrak{g}}]_{\mathfrak{g}},\end{split} \tag{5.3.30}\]
for any \(\vec{c}_{k}\in\Omega^{0}(M,\mathfrak{g}_{P})\), \(\vec{A}_{k}\in\Omega^{1}(M,\mathfrak{g}_{P})\), \(\vec{A}^{+}_{k}\in\Omega^{d-1}(M,\mathfrak{g}_{P})\) and \(\vec{c}^{\,+}_{k}\in\Omega^{d}(M,\mathfrak{g}_{P})\) elements of the underlying graded vector space. Notice that, if we pick a \(G\)-bundle \((P,\nabla_{\!A})\in\operatorname{RCrit}(S)(M)\) which is topologically trivial, \(P\simeq M\times G\), and has flat connection \(\nabla_{\!A}=\mathrm{d}\), we recover the \(L_{\infty}\)-algebra structure from equation (5.1.19). Thus, usual BV-BRST theory can be understood as the infinitesimal disk \(\mathbb{D}_{\operatorname{RCrit}(S)(M),(M\times G,\mathrm{d})}\) at the trivial \(G\)-bundle with flat connection, which is in fact a solution of the Yang-Mills equations.
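For orientation (an illustration, not part of the original text), specialise to the abelian case \(G=U(1)\): the adjoint action is trivial, so \(\mathfrak{g}_{P}\) is canonically trivial, all the brackets \(\ell_{2},\ell_{3}\) in (5.3.30) vanish, and the \(L_{\infty}\)-algebra reduces to its underlying complex
\[\Omega^{0}(M)\ \xrightarrow{\ \mathrm{d}\ }\ \Omega^{1}(M)\ \xrightarrow{\ \mathrm{d}\star\mathrm{d}\ }\ \Omega^{d-1}(M)\ \xrightarrow{\ \mathrm{d}\ }\ \Omega^{d}(M),\]
the linear BV-BRST complex of Maxwell theory: ghosts, gauge fields, antifields and antighosts, with \(\mathrm{d}{\star}\mathrm{d}\vec{A}=0\) the linearised equation of motion and the last differential encoding the Noether identity \(\mathrm{d}(\mathrm{d}{\star}\mathrm{d}\vec{A})=0\).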
To conclude this section, we will examine the smooth stack of solutions of Yang-Mills theory, i.e. the underived critical locus \(\operatorname{Crit}(S)(M)\in\mathbf{SmoothStack}\), seen as a smooth stack that can be obtained by underived truncation of the derived critical locus \(\operatorname{RCrit}(S)(M)\).

**Remark 5.47** (Underived critical locus).: Let the underived critical locus be the smooth stack given by the underived truncation \(\operatorname{Crit}(S)(M)\coloneqq t_{0}\operatorname{RCrit}(S)(M)\). Such a smooth stack comes equipped with a canonical morphism \(\operatorname{Crit}(S)(M)\hookrightarrow\mathbf{Bun}_{G}^{\nabla}(M)\) of smooth stacks and, roughly speaking, \(\operatorname{Crit}(S)(M)\) includes only those principal \(G\)-bundles on \(M\) with connection which satisfy the Yang-Mills equations of motion. Thus, any principal \(G\)-bundle with connection \((P,\nabla_{\!A})\in\operatorname{Crit}(S)(M)\) will satisfy by construction both the Bianchi identity and the Yang-Mills equations of motion:
\[\begin{split}\nabla_{\!A}F_{A}\;&=\;0\quad\text{(Bianchi identity)},\\ \nabla_{\!A}\star F_{A}\;&=\;0\quad\text{(equations of motion)},\end{split}\]
where \(F_{A}\in\Omega^{2}(M,\mathfrak{g}_{P})\) is the curvature of the bundle \((P,\nabla_{\!A})\). A subtlety is that, in \(\operatorname{Crit}(S)(M)\), Noether identities are no longer simplicially unravelled, but are imposed on the nose. More concretely, if we pick an ordinary smooth manifold \(U\in\mathsf{Mfd}\) diffeomorphic to a Cartesian space, we can concretely write the smooth stack \(\operatorname{Crit}(S)(M)\) by the \(2\)-coskeletal simplicial set
\[\operatorname{Hom}\big(U,\,\operatorname{Crit}(S)(M)\big)\,\simeq\,\operatorname{cos}_{2}\big(Z_{0}\leftleftarrows Z_{1}\leftleftarrows Z_{2}\big),\]
where the sets of \(0\)- and \(1\)-simplices are, respectively, given by
\[Z_{0}\,=\,\left\{\begin{array}{l}g_{\alpha\beta}\in\mathcal{C}^{\infty}((V_{\alpha}\cap V_{\beta})\times U,G)\\ A_{\alpha}\in\Omega^{1}_{\mathrm{ver}}(V_{\alpha}\times U,\mathfrak{g})\end{array}\left|\begin{array}{l}g_{\alpha\beta}\cdot g_{\beta\gamma}\cdot g_{\gamma\alpha}=1\\ A_{\alpha}=g_{\beta\alpha}^{-1}(A_{\beta}+\mathrm{d})g_{\beta\alpha}\\ \nabla_{\!A_{\alpha}}\star F_{A_{\alpha}}=0\end{array}\right.\right\},\]
\[Z_{1}\,=\,\left\{\begin{array}{l}(g_{\alpha\beta},A_{\alpha}),\,(g^{\prime}_{\alpha\beta},A^{\prime}_{\alpha})\in Z_{0}\\ c_{\alpha}\in\mathcal{C}^{\infty}(V_{\alpha}\times U,G)\end{array}\left|\begin{array}{l}g^{\prime}_{\alpha\beta}=c_{\beta}^{-1}g_{\alpha\beta}c_{\alpha}\\ A^{\prime}_{\alpha}=c_{\alpha}^{-1}(A_{\alpha}+\mathrm{d})c_{\alpha}\end{array}\right.\right\},\]
and where the set of \(2\)-simplices \(Z_{2}\) is simply given by composition of gauge transformations, in analogy with the smooth stack \(\mathbf{Bun}_{G}^{\nabla}(M)\). As before, to obtain the \(\infty\)-groupoid of sections on a generic smooth manifold \(U\), we only have to take the homotopy limit over the Cech nerve \(\check{C}(U)_{\bullet}\to U\) provided by a good open cover \(\coprod_{i\in I}U_{i}\twoheadrightarrow U\).

## 6 Outlook

The authors hope that the derived differential cohesive geometry exhibited in the present paper may prove a useful language for addressing various open problems in QFT. In this final section we will point to some of them.

Non-perturbative BV-quantisation as higher geometric quantisation.In the \(L_{\infty}\)-algebra formulation of BV-theory, one quantises a field theory by lifting its classical BV-action \(S_{\mathrm{BV}}\in\mathcal{O}\big(T^{\vee}[-1]X\big)\) to a quantum BV-action \(S_{\mathrm{BV}}^{\hbar}\in\mathcal{O}\big(T^{\vee}[-1]X\big)[[\hbar]]\) satisfying the quantum master equation
\[i\hbar\triangle S_{\mathrm{BV}}^{\hbar}+\frac{1}{2}\{S_{\mathrm{BV}}^{\hbar},S_{\mathrm{BV}}^{\hbar}\}\ =\ 0, \tag{6.0.1}\]
where \(\triangle\) is the BV-Laplacian. In fact, as explained e.g. in
[11], the introduction of the quantum BV-differential \[Q_{\mathrm{BV}}^{\hbar}\ \coloneqq\ i\hbar\triangle+\{S_{\mathrm{BV}}^{\hbar},-\} \tag{6.0.2}\] makes the \(\mathbb{P}_{0}\)-algebra of observables into a \(\mathbb{BD}_{0}\)-algebra (i.e. a Beilinson-Drinfeld algebra), whose structure provides a quantisation of the algebra of observables. In [11] it was also observed that the dg-algebra of quantum observables has an interesting geometric origin. In fact, one can define the Heisenberg algebra \[0\xrightarrow{}i\hbar\mathbb{R}[-1]\xrightarrow{}\mathfrak{H}\mathfrak{eis}( X)\xrightarrow{}T^{\vee}[-1]X\xrightarrow{}0, \tag{6.0.3}\] where the extended bracket is given by the canonical pairing on the \((-1)\)-shifted cotangent bundle \(T^{\vee}[-1]X\), i.e. we have \([\alpha,\beta]\coloneqq i\hbar\{\alpha,\beta\}\) for any \(\alpha,\beta\in T^{\vee}[-1]X\). This is nothing but a degree-shifted version of the ordinary Heisenberg algebra. Thus, one has that the dg-algebra of functions is \(\mathcal{O}\big{(}\mathfrak{H}\mathfrak{eis}(X)\big{)}\ \simeq\ \mathcal{O}\big{(}T^{\vee}[-1]X\big{)}[[\hbar]]\), which means that the observables on the Heisenberg algebra are the quantum observables. In a certain sense, an ordinary Heisenberg algebra can be thought of as a Lie algebra version of a prequantum \(U(1)\)-bundle. This suggests an intriguing relation between geometric quantisation and BV-quantisation. Possibly, it suggests that non-perturbative BV-theory may be thought of as a kind of higher geometric quantisation. In an algebraic-geometric context, aspects of such a relation have been investigated by [10]. The formalism proposed in the present paper combines global smooth geometry with derived geometry and thus provides a toolbox to study BV-theory as a derived geometric quantisation in a truly non-perturbative sense. Schematically, one would aim to define a derived prequantum bundle as a lift of the form (6.0.4) where \(\operatorname{RCrit}(S)(M)\) is the derived critical locus of our chosen classical field theory on spacetime, \(\boldsymbol{\mathcal{A}}_{\mathrm{cl}}^{2}(-1)\) is the moduli stack of \((-1)\)-shifted closed 2-forms and the stack \(\mathbf{B}U(1)_{\mathrm{conn}}(-1)\) is a well-defined \((-1)\)-shifted version of the moduli stack \(\mathbf{B}U(1)_{\mathrm{conn}}\) of \(U(1)\)-bundles with connection. Derived \(n\)-plectic geometry.Interestingly, as explored by [12, 13, 14, 15, 16, 17, 18, 19], the language of \(n\)-plectic manifolds is a natural setting for higher geometric (pre)quantisation, just as that of ordinary symplectic manifolds is natural for ordinary geometric quantisation. In higher geometric quantisation of \(n\)-plectic manifolds, the prequantum bundle of ordinary geometric prequantisation is typically generalised to a bundle \((n-1)\)-gerbe [13]. This procedure can be naturally applied to an \(n\)-plectic manifold, by finding the bundle \((n-1)\)-gerbe whose curvature coincides with the \(n\)-plectic form. Recent work in this area includes [1, 1, 18, 19, 20]. It is interesting to consider whether higher geometric quantisation of \(n\)-plectic manifolds can be generalised to derived smooth geometry. In a paper in preparation, [1], we will give a notion of derived \(n\)-plectic geometry and propose its application to BV-BFV theory. Figure 11: Derived \(n\)-plectic geometry would complete this diagram of formalisms. 
Just like by transgressing ordinary \(n\)-plectic geometry one obtains Lagrangian classical field theory, by transgressing derived \(n\)-plectic geometry one recovers classical BV-theory. By underived truncation, one gets ordinary \(n\)-plectic geometry.

Non-perturbative aspects of string dualities.In recent years, in String Theory, there has been an increasing understanding of string dualities in terms of higher principal bundles.
2304.03234
On the threshold for Szemerédi's theorem with random differences
Using recent developments in the theory of locally decodable codes, we prove that the critical size for Szemerédi's theorem with random differences is bounded from above by $N^{1-\frac{2}{k} + o(1)}$ for length-$k$ progressions. This gives polynomial improvements over the previous best bounds for all odd $k$.
Jop Briët, Davi Castro-Silva
2023-04-06T17:14:10Z
http://arxiv.org/abs/2304.03234v2
# On the threshold for Szemerédi's theorem with random differences ###### Abstract. Using recent developments in the theory of locally decodable codes, we prove that the critical size for Szemerédi's theorem with random differences is bounded from above by \(N^{1-\frac{2}{k}+o(1)}\) for length-\(k\) progressions. This improves the previous best bounds of \(N^{1-\frac{1}{\lceil k/2\rceil}+o(1)}\) for all odd \(k\). This work was supported by the Dutch Research Council (NWO) as part of the NETWORKS programme (grant no. 024.002.003). By a classical result of Bollobás and Thomason, every monotone property has a threshold function; this is to say that the probability \[p(m)=\Pr_{A\in\binom{[N]}{m}}[A\in\mathcal{F}]\] spikes from \(o(1)\) to \(1-o(1)\) when \(m\) increases from \(o(m^{*})\) to \(\omega(m^{*})\).1 In general, it is notoriously hard to determine the critical size of a monotone property. Footnote 1: Our (standard) asymptotic notation is defined as follows. Given a parameter \(n\) which grows without bounds and a function \(f:\mathbb{R}_{+}\to\mathbb{R}_{+}\), we write: \(g(n)=o(f(n))\) to mean \(g(n)/f(n)\to 0\); \(g(n)=\omega(f(n))\) to mean \(g(n)/f(n)\to\infty\); \(g(n)\ll f(n)\) to mean that \(g(n)\leq Cf(n)\) holds for some constant \(C>0\) and all \(n\); and \(g(n)\asymp f(n)\) to mean both \(g(n)\ll f(n)\) and \(f(n)\ll g(n)\). This problem is also wide open for the property of being \(\ell\)-intersective, which is clearly monotone, and for which we denote the critical size by \(m_{\ell}^{*}(N)\). Bourgain [5] showed that the critical size for \(1\)-intersective sets is given by \(m_{1}^{*}(N)\asymp\log N\); at present, this is the only case where precise bounds are known. It has been conjectured [10] that \(\log N\) is the correct bound for all fixed \(\ell\), and indeed no better lower bounds are known for \(\ell\geq 2\). It was shown by Frantzikinakis, Lesigne and Wierdl [11] and independently by Christ [9] that \[m_{2}^{*}(N)\ll N^{\frac{1}{2}+o(1)}. \tag{1}\] The same upper bound was later shown to hold for \(m_{3}^{*}(N)\) by the first author, Dvir and Gopi [7]. More generally, they showed that \[m_{\ell}^{*}(N)\ll N^{1-\frac{1}{\lceil(\ell+1)/2\rceil}+o(1)}, \tag{2}\] which improved on prior known bounds for all \(\ell\geq 3\). The appearance of the ceiling function in these bounds is due to a reduction for even \(\ell\) to the case \(\ell+1\). The reason for this reduction originates from work on locally decodable error correcting codes [14]. It was shown in [7] that lower bounds on the block length of \((\ell+1)\)-query locally decodable codes (LDCs) imply upper bounds on \(m_{\ell}^{*}\). The bounds (2) then followed directly from the best known LDC bounds; see [8] for a direct proof of (2), however. For the same reason, a recent breakthrough of Alrabiah et al. [1] on \(3\)-query LDCs immediately implies an improvement of (1) to \[m_{2}^{*}(N)\ll N^{\frac{1}{3}+o(1)}.\] For technical reasons, their techniques do not directly generalize to improve the bounds for \(q\)-query LDCs with \(q\geq 4\). Here, we use the ideas of [1] to directly prove upper bounds on \(m_{\ell}^{*}\). Due to the additional arithmetic structure in our problem, it is possible to simplify the exposition and, more importantly, apply the techniques to improve the previous best known bounds for all even \(\ell\geq 2\). 
**Theorem 1.1**.: _For every integer \(\ell\geq 2\), we have that_ \[m_{\ell}^{*}(N)\ll N^{1-\frac{2}{\ell+1}+o(1)}.\] The arguments presented here in fact work in greater generality, and hold for any finite additive group \(G\) whose size is coprime to \(\ell!\) (so as not to run into divisibility issues when considering \((\ell+1)\)-term arithmetic progressions). Let \(G\) be a finite additive group, \(\ell\geq 1\) be an integer and \(\varepsilon\in(0,1)\). We say that a set \(S\subseteq G\) is _\((\ell,\varepsilon)\)-intersective_ if every subset \(A\subseteq G\) of size \(|A|\geq\varepsilon|G|\) contains an \((\ell+1)\)-term arithmetic progression with common difference in \(S\). We denote the critical size for the property of being \((\ell,\varepsilon)\)-intersective in \(G\) by \(m^{*}_{\ell,\varepsilon}(G)\). Our main result is the following: **Theorem 1.2**.: _For every \(\ell\geq 2\) and \(\varepsilon\in(0,1)\), there exists \(C(\ell,\varepsilon)>0\) such that_ \[m^{*}_{\ell,\varepsilon}(G)\leq C(\ell,\varepsilon)(\log|G|)^{2\ell+3}|G|^{1- \frac{2}{\ell+1}}\] _for every additive group \(G\) whose size is coprime to \(\ell!\)._ Note that Theorem 1.1 follows easily from this last result by embedding \([N]\) into a group of the form \(\mathbb{Z}/p\mathbb{Z}\), where \(p\) is a prime between \((\ell+1)N\) and \(2(\ell+1)N\). We omit the standard details. ## 2. Preliminaries Our arguments will rely heavily on the analysis of high-dimensional matrices. Here we recall the matrix inequalities which will be needed. If \(M\in\mathbb{R}^{d\times d}\) is a matrix, we define its operator norms \[\|M\|_{2} =\max\big{\{}u^{T}Mv:\;\|u\|_{2}=\|v\|_{2}=1\big{\}}\] \[\|M\|_{\infty\to 1} =\max\big{\{}u^{T}Mv:\;\|u\|_{\infty}=\|v\|_{\infty}=1\big{\}}\] \[\|M\|_{1\to 1} =\max\big{\{}u^{T}Mv:\;\|u\|_{\infty}=\|v\|_{1}=1\big{\}}.\] We will make use of the following simple inequalities: \[\|M\|_{\infty\to 1}\leq d\|M\|_{2},\quad\|M\|_{\infty\to 1}\leq\sum_{i=1}^{d}\|M(i,\cdot)\|_{1}\] and, when \(M\) is symmetric, \[\|M\|_{2}\leq\|M\|_{1\to 1}.\] We will also use the following noncommutative version of Khintchine's inequality, which can be extracted from a result of Tomczak-Jaegermann [19]: **Theorem 2.1**.: _Let \(n,d\geq 1\) be integers, and let \(A_{1},\ldots,A_{n}\) be any sequence of \(d\times d\) real matrices. Then_ \[\mathbb{E}_{\sigma\in\{-1,1\}^{n}}\bigg{\|}\sum_{i=1}^{n}\sigma_{i}A_{i} \bigg{\|}_{2}\leq 10\sqrt{\log d}\bigg{(}\sum_{i=1}^{n}\|A_{i}\|_{2}^{2} \bigg{)}^{1/2}.\] Furthermore, we will need a well-known concentration inequality for polynomials due to Kim and Vu [15], which requires the introduction of some extra notation. Let \(H=(V,E)\) be a hypergraph, where we allow for repeated edges (so \(E\) may be a multiset), and let \(f:\{0,1\}^{V}\to\mathbb{R}\) be the polynomial given by \[f(x)=\sum_{e\in E}\prod_{v\in e}x_{v}. \tag{3}\] For a set \(A\subseteq V\), define \[f_{A}(x)=\sum_{e\in E:\,A\subseteq e}\prod_{v\in e\setminus A}x_{v},\] where the monomial corresponding to the empty set is defined to be \(1\). For \(p\in(0,1)\), we say that \(X\) is a \(p\)_-Bernoulli random variable on \(\{0,1\}^{V}\)_, denoted \(X\sim\operatorname{Bern}(p)^{V}\), if its coordinates are all independent and each equals \(1\) with probability \(p\) (and equals \(0\) with probability \(1-p\)). For each \(i\in\{0,1,\ldots,|V|\}\), define \[\mu_{i}=\max_{A\in\binom{V}{i}}\mathbb{E}_{X\sim\operatorname{Bern}(p)^{V}}f_ {A}(X).\] Note that \(\mu_{0}\) is just the expectation of \(f(X)\). 
Define also the quantities \[\mu=\max_{i\in\{0,1,\ldots,|V|\}}\mu_{i}\qquad\text{and}\qquad\mu^{\prime}= \max_{i\in\{1,2,\ldots,|V|\}}\mu_{i}.\] The polynomial concentration inequality of Kim and Vu is given as follows: **Theorem 2.2**.: _For every \(k\in\mathbb{N}\), there exist constants \(C,C^{\prime}>0\) such that the following holds. Let \(H=(V,E)\) be an \(n\)-vertex hypergraph whose edges have size at most \(k\), and let \(f\) be given by (3). Then, for any \(\lambda>1\), we have_ \[\Pr\bigl{[}|f(X)-\mu_{0}|>C\lambda^{k-\frac{1}{2}}\sqrt{\mu\mu^{\prime}}\bigr{]}\leq C ^{\prime}\exp\big{(}-\lambda+(k-1)\log n\big{)}.\] To suit our needs, we will use a slight variant of this result, which follows easily from it and the following basic proposition. **Proposition 2.3**.: _Let \(f:\{0,1\}^{n}\to\mathbb{R}_{+}\) be a monotone increasing function and \(p\in(\frac{16}{n},1)\). Then, for any integer \(0\leq t\leq pn/2\),_ \[\mathbb{E}_{S\in\binom{[n]}{t}}f(1_{S})\leq\frac{1}{2}\,\mathbb{E}_{X\sim \operatorname{Bern}(p)^{n}}f(X).\] _Proof:_ By direct calculation, \[\mathbb{E}_{X\sim\operatorname{Bern}(p)^{n}}f(X) =\sum_{i=0}^{n}p^{i}(1-p)^{n-i}\sum_{S\in\binom{[n]}{i}}f(1_{S})\] \[=\sum_{i=0}^{n}\binom{n}{i}p^{i}(1-p)^{n-i}\,\mathbb{E}_{S\in \binom{[n]}{i}}f(1_{S})\] \[\geq\sum_{i\geq t}\binom{n}{i}p^{i}(1-p)^{n-i}\,\mathbb{E}_{S\in \binom{[n]}{t}}f(1_{S})\] \[\geq\frac{1}{2}\,\mathbb{E}_{S\in\binom{[n]}{t}}f(1_{S}),\] where in the third line we used monotonicity of \(f\) and the fourth line follows from the Chernoff bound. \(\square\) **Corollary 2.4**.: _For every \(k\in\mathbb{N}\), there exist constants \(C,C^{\prime}>0\) such that the following holds. Let \(H=(V,E)\) be a \(k\)-uniform hypergraph on \(n\) vertices, let \(f\) be given as in (3) and let \(p\in(\frac{16}{n},1)\). Then, for any integer \(0\leq t\leq pn/2\), we have_ \[\Pr_{S\in\binom{V}{t}}\bigl{[}f(1_{S})\geq C(\log n)^{k-\frac{1}{2}}\mu\bigr{]}\leq \frac{C^{\prime}}{n^{4}}.\] _Proof:_ For a sufficiently large constant \(C=C(k)>0\), let \(g:\{0,1\}^{n}\to\{0,1\}\) be the indicator function \[g(1_{S})=\mathbf{1}\bigl{[}f(1_{S})\geq C(\log n)^{k-\frac{1}{2}}\mu\bigr{]}.\] Since \(f\) is monotone, so is \(g\). Setting \(\lambda=(3+k)\log n\), it follows from Theorem 2.2 that \[\mathbb{E}_{X\sim\operatorname{Bern}(p)^{n}}\,g(X)\leq\frac{C^{\prime}}{n^{4}}.\] The result now follows from Proposition 2.3. \(\square\) ## 3. The main argument Fix an integer \(k\geq 3\) and a positive parameter \(\varepsilon>0\). Let \(G\) be an additive group with \(N\) elements, where \(N\) is coprime to \((k-1)!\) and is assumed to be sufficiently large relative to \(k\) and \(\varepsilon\) for our arguments to hold. For convenience, instead of considering random intersective sets, we will consider random _intersective sequences_, where a sequence in \(G^{m}\) is \(\ell\)-intersective if the set of its distinct elements is. Clearly, the probability that a uniformly random \(m\)-element sequence is \(\ell\)-intersective is at most the probability that a uniform \(m\)-element set is. Since we are interested in proving upper bounds on the critical size, it suffices to bound the minimal \(m\) such that a random sequence in \(G^{m}\) is \(\ell\)-intersective with probability at least \(1/2\). 
Given a sequence of differences \(D=(d_{1},\ldots,d_{m})\in G^{m}\) and some set \(A\subseteq G\), let \(\Lambda_{D}(A)\) be the normalized count of \(k\)-APs with common difference in \(D\) which are contained in \(A\): \[\Lambda_{D}(A)=\mathbb{E}_{i\in[m]}\mathbb{E}_{x\in G}\prod_{\ell=0}^{k-1}A(x+ \ell d_{i}).\] Similarly, we denote by \(\Lambda_{G}(A)\) the proportion of all \(k\)-APs which are contained in \(A\): \[\Lambda_{G}(A)=\mathbb{E}_{d\in G}\mathbb{E}_{x\in G}\prod_{\ell=0}^{k-1}A(x+ \ell d).\] By a suitable generalization of Szemerédi's theorem, we know that \[\Lambda_{G}(A)\gg_{k,\varepsilon}1\quad\text{for all $A\subseteq G$ with $|A|\geq \varepsilon|G|$}. \tag{4}\] This can be proven, for instance, by using the _hypergraph removal lemma_ of Gowers [13] and Nagle, Rödl, Schacht and Skokan [17, 16]. It can also be obtained via a standard averaging argument (originally due to Varnavides [20]) applied to a version of Szemerédi's theorem valid for the specific group \(G\) in consideration (though the bound obtained might then depend on the structure of \(G\)). Now suppose \(m\in[N]\) is an integer for which \[\Pr_{D\in G^{m}}\bigl{(}\exists A\subseteq G:\;|A|\geq\varepsilon|G|,\,\Lambda _{D}(A)=0\bigr{)}\geq 1/2. \tag{5}\] Noting that \(\mathbb{E}_{D^{\prime}\in G^{m}}\Lambda_{D^{\prime}}(A)=\Lambda_{G}(A)\), by combining inequalities (5) and (4) we conclude that \[\mathbb{E}_{D\in G^{m}}\max_{A\subseteq G:\;|A|\geq\varepsilon N}\big{|} \Lambda_{D}(A)-\mathbb{E}_{D^{\prime}\in G^{m}}\Lambda_{D^{\prime}}(A)\big{|} \gg_{k,\varepsilon}1.\] We next apply a simple symmetrization argument given in [8, page 8690] to write this in a more convenient form: **Lemma 1** (Symmetrization).: _Let \(c>0\), and suppose that_ \[\mathbb{E}_{D\in G^{m}}\max_{A\subseteq G:\;|A|\geq\varepsilon|G|}\big{|} \Lambda_{D}(A)-\mathbb{E}_{D^{\prime}\in G^{m}}\Lambda_{D^{\prime}}(A)\big{|} \geq c.\] _Then_ \[\mathbb{E}_{D\in G^{m}}\mathbb{E}_{\sigma\in\{-1,1\}^{m}}\max_{A\subseteq G:\; |A|\geq\varepsilon|G|}\left|\mathbb{E}_{i\in[m]}\mathbb{E}_{x\in G}\,\sigma_{ i}\prod_{\ell=0}^{k-1}A(x+\ell d_{i})\right|\geq\frac{c}{2}.\] The appearance of the expectation over signs \(\sigma\in\{-1,1\}^{m}\) is crucial to our arguments. By an easy multilinearity argument, we can replace the set \(A\subseteq G\) (which can be seen as a vector in \(\{0,1\}^{G}\)) by a vector \(Z\in\{-1,1\}^{G}\). In combination with (5) and Lemma 1, this gives \[\mathbb{E}_{D\in G^{m}}\mathbb{E}_{\sigma\in\{-1,1\}^{m}}\max_{Z\in\{-1,1\}^{G }}\left|\mathbb{E}_{i\in[m]}\mathbb{E}_{x\in G}\,\sigma_{i}\prod_{\ell=0}^{k- 1}Z(x+\ell d_{i})\right|\gg_{k,\varepsilon}1. \tag{6}\] The change from \(\{0,1\}^{G}\) to \(\{-1,1\}^{G}\) is a convenient technicality so we can ignore terms which get squared in a product. This last inequality (6) is what we need to prove the result for even values of \(k\) using the arguments we will outline below. For odd values of \(k\), however, this inequality is unsuited due to the odd number of factors inside the product. 
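As a concrete anchor for this notation, the functionals \(\Lambda_{D}\) and \(\Lambda_{G}\) can be evaluated by brute force on small cyclic groups. The Python sketch below (our own illustrative helper names, not from the paper) does exactly this; for dense \(A\subseteq\mathbb{Z}/N\mathbb{Z}\) one observes \(\Lambda_{G}(A)>0\), in line with (4).

```python
def lambda_D(A, D, N, k):
    """Normalized count of k-APs contained in A whose common difference lies in D,
    over Z/NZ: E_{d in D} E_{x in Z/NZ} prod_{l=0}^{k-1} A(x + l*d)."""
    A = set(A)
    hits = sum(all((x + l * d) % N in A for l in range(k))
               for d in D for x in range(N))
    return hits / (len(D) * N)

def lambda_G(A, N, k):
    """Proportion of all k-APs of Z/NZ contained in A (difference ranging over the whole group)."""
    return lambda_D(A, range(N), N, k)

# Example: a dense subset of Z/17Z and a small difference set.
N, k = 17, 3
A = [x for x in range(N) if x % 3 != 0]   # |A| is about two thirds of N
print(lambda_G(A, N, k) > 0)              # True: dense sets support many 3-APs
print(lambda_D(A, [1, 5, 11], N, k))
```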
The main idea from [1] to deal with this case is to apply a "Cauchy-Schwarz trick" to obtain a better suited inequality: **Lemma 2** (Cauchy-Schwarz trick).: _Let \(c>0\), and suppose \(m\geq 2/c^{2}\) is an integer for which_ \[\mathbb{E}_{D\in G^{m}}\mathbb{E}_{\sigma\in\{-1,1\}^{m}}\max_{Z\in\{-1,1\}^{G }}\left|\mathbb{E}_{i\in[m]}\mathbb{E}_{x\in G}\,\sigma_{i}\prod_{\ell=0}^{k- 1}Z(x+\ell d_{i})\right|\geq c.\] _Then there exists a partition \([m]=L\,\dot{\cup}\,R\) such that_ \[\mathbb{E}_{D\in G^{m}}\mathbb{E}_{\begin{subarray}{c}\sigma\in\{-1,1\}^{L} \\ \tau\in\{-1,1\}^{R}\end{subarray}}\max_{Z\in\{-1,1\}^{G}}\sum_{\begin{subarray} {c}i\in L\\ j\in R\end{subarray}}\sum_{x\in G}\sigma_{i}\tau_{j}\prod_{\ell=1}^{k-1}Z(x+ \ell d_{i})Z(x+\ell d_{j})\geq\frac{c^{2}m^{2}N}{8}.\] _Proof:_ By Cauchy-Schwarz, for any \(Z\in\{-1,1\}^{G}\) we have \[\left|\mathbb{E}_{i\in[m]}\mathbb{E}_{x\in G}\,\sigma_{i}\prod_{ \ell=0}^{k-1}Z(x+\ell d_{i})\right|^{2} =\left|\mathbb{E}_{x\in G}\,Z(x)\cdot\left(\mathbb{E}_{i\in[m]} \sigma_{i}\prod_{\ell=1}^{k-1}Z(x+\ell d_{i})\right)\right|^{2}\] \[\leq\big{(}\mathbb{E}_{x\in G}\,Z(x)^{2}\big{)}\mathbb{E}_{x\in G }\bigg{(}\mathbb{E}_{i\in[m]}\sigma_{i}\prod_{\ell=1}^{k-1}Z(x+\ell d_{i}) \bigg{)}^{2}\] \[=\mathbb{E}_{x\in G}\mathbb{E}_{i,j\in[m]}\,\sigma_{i}\sigma_{j} \prod_{\ell=1}^{k-1}Z(x+\ell d_{i})Z(x+\ell d_{j}).\] Applying Cauchy-Schwarz again, we conclude from our assumption that \[c^{2} \leq\mathbb{E}_{D\in G^{m}}\mathbb{E}_{\sigma\in\{-1,1\}^{m}} \max_{Z\in\{-1,1\}^{G}}\left|\mathbb{E}_{i\in[m]}\mathbb{E}_{x\in G}\,\sigma_ {i}\prod_{\ell=0}^{k-1}Z(x+\ell d_{i})\right|^{2}\] \[\leq\mathbb{E}_{D\in G^{m}}\mathbb{E}_{\sigma\in\{-1,1\}^{m}} \max_{Z\in\{-1,1\}^{G}}\mathbb{E}_{x\in G}\mathbb{E}_{i,j\in[m]}\,\sigma_{i} \sigma_{j}\prod_{\ell=1}^{k-1}Z(x+\ell d_{i})Z(x+\ell d_{j}).\] Now consider a uniformly random partition \([m]=L\,\dot{\cup}\,R\), so that for any \(i,j\in[m]\) with \(i\neq j\) we have \(\Pr_{L,R}(i\in L,\,j\in R)=1/4\); then \[\mathbb{E}_{i,j\in[m]} \,\sigma_{i}\sigma_{j}\prod_{\ell=1}^{k-1}Z(x+\ell d_{i})Z(x+\ell d _{j})\] \[=\frac{1}{m^{2}}\sum_{\begin{subarray}{c}i,j=1\\ i\neq j\end{subarray}}^{m}\sigma_{i}\sigma_{j}\prod_{\ell=1}^{k-1}Z(x+\ell d_{i })Z(x+\ell d_{j})+\frac{1}{m^{2}}\sum_{i=1}^{m}\sigma_{i}^{2}\prod_{\ell=1}^{k- 1}Z(x+\ell d_{i})^{2}\] \[=\frac{4}{m^{2}}\mathbb{E}_{L,R}\sum_{i\in L,j\in R}\sigma_{i} \sigma_{j}\prod_{\ell=1}^{k-1}Z(x+\ell d_{i})Z(x+\ell d_{j})+\frac{1}{m}.\] It follows that \[c^{2}\leq\frac{1}{m}+\frac{4}{m^{2}}\mathbb{E}_{L,R}\mathbb{E}_{D\in G^{m}} \mathbb{E}_{\sigma\in\{-1,1\}^{m}}\max_{Z\in\{-1,1\}^{G}}\mathbb{E}_{x\in G} \sum_{i\in L,j\in R}\sigma_{i}\sigma_{j}\prod_{\ell=1}^{k-1}Z(x+\ell d_{i})Z( x+\ell d_{j}).\] Using that \(m\geq 2/c^{2}\), we conclude there exists a choice of partition \([m]=L\,\dot{\cup}\,R\) satisfying the conclusion of the lemma. From now on we assume that \(k\) is odd, and write \(k=2r+1\).2 For \(i,j\in[m]\), denote \(P_{i}(x)=\{x+d_{i},x+2d_{i},\ldots,x+2rd_{i}\}\) and \(P_{ij}(x)=P_{i}(x)\cup P_{j}(x)\), where we hide the dependence on the difference set \(D\) for ease of notation. From inequality (6) and Lemma 2 we conclude that Footnote 2: The even case is similar but simpler. We focus on the odd case here because this is where we get new bounds. 
\[\mathbb{E}_{D\in G^{m}}\mathbb{E}_{\begin{subarray}{c}\sigma\in\{-1,1\}^{L} \\ \tau\in\{-1,1\}^{R}\end{subarray}}\max_{Z\in\{-1,1\}^{G}}\sum_{\begin{subarray} {c}i\in L\\ j\in R\end{subarray}}\sum_{x\in G}\sigma_{i}\tau_{j}\prod_{y\in P_{ij}(x)}Z(y) \gg_{k,\varepsilon}m^{2}N, \tag{7}\] where \((L,R)\) is a suitable partition of the index set \([m]\) and we assume (without loss of generality) that \(m\) is sufficiently large depending on \(\varepsilon\) and \(k\). From inequality (7) it follows that we can fix a "good" set \(D\in G^{m}\) satisfying \[\mathbb{E}_{\begin{subarray}{c}\sigma\in\{-1,1\}^{L}\\ \tau\in\{-1,1\}^{R}\end{subarray}}\max_{Z\in\{-1,1\}^{G}}\sum_{ \begin{subarray}{c}i\in L\\ j\in R\end{subarray}}\sigma_{i}\tau_{j}\sum_{x\in G}\prod_{y\in P_{ij}(x)}Z(y) \gg_{k,\varepsilon}m^{2}N \tag{8}\] and for which we have the technical conditions \[\big{|}\big{\{}i\in L,j\in R:\;|P_{ij}(0)|\neq 4r\big{\}}\big{|} \ll_{k}m^{2}/N\quad\text{and} \tag{9}\] \[\max_{x\neq 0}\sum_{i=1}^{m}\sum_{\ell=-2r}^{2r}\mathbf{1}\{\ell d _{i}=x\} \ll_{k}\log N, \tag{10}\] which are needed to bound the probability of certain bad events later on. Indeed, for \(\ell,\ell^{\prime}\in[k-1]\) and independent uniform \(d_{i},d_{j}\in G\), we have that \(\Pr[\ell d_{i}=\ell^{\prime}d_{j}]=1/N\). Hence, the expectation of the left-hand side of (9) (taken with respect to independent \(d_{i},d_{j}\) for \(i\in L\) and \(j\in R\)) is at most \(O_{k}(m^{2}/N)\). It then follows from Markov's inequality that (9) holds with probability at least \(3/4\). It follows from the Chernoff bound and a union bound that (10) also holds with probability at least \(3/4\). Finally, since the maxima in the expectation of (8) are bounded by \(m^{2}N\), it follows that also this condition holds with probability at least \(3/4\). Hence, with positive probability, all the conditions hold. The next key idea is to construct matrices \(M_{ij}\) for which the quantity \[\mathbb{E}_{\begin{subarray}{c}\sigma\in\{-1,1\}^{L}\\ \tau\in\{-1,1\}^{R}\end{subarray}}\bigg{\|}\sum_{i\in L,j\in R}\sigma_{i}\tau_ {j}M_{ij}\bigg{\|}_{\infty\to 1} \tag{11}\] is related to the expression on the left-hand side of inequality (8). The reason for doing so is that this allows us to use strong _matrix concentration inequalities_, which can be used to obtain a good upper bound on the expectation (11); this in turn translates to an upper bound on \(m\) as a function of \(N\), which is our goal. Such uses of matrix inequalities go back to work of Ben-Aroya, Regev and de Wolf [2], in turn inspired by work of Kerenidis and de Wolf [14] (see also [6]). The matrices we will construct are indexed by sets of a given size \(s\), where (with hindsight) we choose \(s=\lfloor N^{1-2/k}\rfloor\). For \(i\in L\), \(j\in R\), define the matrix \(M_{ij}\in\mathbb{R}^{\binom{G}{s}\times\binom{G}{s}}\) by \[M_{ij}(S,T)=\sum_{x\in G}\mathbf{1}\big{\{}|S\cap P_{i}(x)|=|S\cap P_{j}(x)|= r,\,S\triangle T=P_{ij}(x)\big{\}}\] if \(|P_{ij}(0)|=4r\), and \(M_{ij}(S,T)=0\) if \(|P_{ij}(0)|\neq 4r\); note that, despite the asymmetry in their definition, these matrices are in fact symmetric. We will next deduce from inequality (8) a lower bound on the expectation (11). 
For a vector \(Z\in\{-1,1\}^{G}\), denote by \(Z^{\odot s}\in\{-1,1\}^{\binom{G}{s}}\) the "lifted" vector given by \[Z^{\odot s}(S)=\prod_{y\in S}Z(y)\quad\text{for all }S\in\binom{G}{s}.\] If \(|P_{ij}(0)|=4r\), then for all \(Z\in\{-1,1\}^{G}\) we have \[\sum_{S,T\in\binom{G}{s}}M_{ij}(S,T)Z^{\odot s}(S)Z^{\odot s}(T) =\sum_{S,T\in\binom{G}{s}}M_{ij}(S,T)\prod_{y\in S\triangle T}Z(y)\] \[=\sum_{x\in G}\sum_{S\in\binom{G}{s}}\mathbf{1}\big{\{}|S\cap P_{ i}(x)|=|S\cap P_{j}(x)|=r\big{\}}\prod_{y\in P_{ij}(x)}Z(y) \tag{12}\] \[=\binom{2r}{r}^{2}\binom{N-4r}{s-2r}\sum_{x\in G}\prod_{y\in P_{ij }(x)}Z(y),\] since there are \(\binom{2r}{r}^{2}\binom{N-4r}{s-2r}\) ways of choosing a set \(S\in\binom{G}{s}\) satisfying \(|S\cap P_{i}(x)|=|S\cap P_{j}(x)|=r\) and, once such a set \(S\) is chosen, there is only one set \(T\in\binom{G}{s}\) for which \(S\triangle T=P_{ij}(x)\). It follows that \[\mathbb{E}_{\begin{subarray}{c}\sigma\in\{-1,1\}^{L}\\ \tau\in\{-1,1\}^{R}\end{subarray}} \bigg{\|}\sum_{i\in L,j\in R}\sigma_{i}\tau_{j}M_{ij}\bigg{\|}_{ \infty\to 1}\] \[\geq\mathbb{E}_{\begin{subarray}{c}\sigma\in\{-1,1\}^{L}\\ \tau\in\{-1,1\}^{R}\end{subarray}}\max_{Z\in\{-1,1\}^{G}}\sum_{S,T\in\binom{G }{s}}\sum_{i\in L,j\in R}\sigma_{i}\tau_{j}M_{ij}(S,T)Z^{\odot s}(S)Z^{\odot s }(T)\] \[=\mathbb{E}_{\begin{subarray}{c}\sigma\in\{-1,1\}^{L}\\ \tau\in\{-1,1\}^{R}\end{subarray}}\max_{Z\in\{-1,1\}^{G}}\binom{2r}{r}^{2} \binom{N-4r}{s-2r}\sum_{\begin{subarray}{c}i\in L,j\in R\\ |P_{ij}(0)|=4r\end{subarray}}\sigma_{i}\tau_{j}\sum_{x\in G}\prod_{y\in P_{ij} (x)}Z(y);\] combining this with inequalities (8) and (9), we conclude the lower bound \[\mathbb{E}_{\begin{subarray}{c}\sigma\in\{-1,1\}^{L}\\ \tau\in\{-1,1\}^{R}\end{subarray}}\bigg{\|}\sum_{i\in L,j\in R}\sigma_{i}\tau_ {j}M_{ij}\bigg{\|}_{\infty\to 1}\gg_{k,\varepsilon}\binom{N-4r}{s-2r}m^{2}N. \tag{13}\] Now we need to compute an upper bound for the expectation above. The main idea here is to use the non-commutative version of Khintchine's inequality given in Theorem 2.1. Intuitively, this inequality shows that the sum in the last expression incurs many cancellations due to the presence of the random signs \(\sigma_{i}\), and thus the expectation on the left-hand side of (13) is much smaller than one might expect. To apply Theorem 2.1, it is better to collect the matrices \(M_{ij}\) into groups and use only one half of the random signs \(\sigma_{i}\) (another idea from [1]). For \(i\in L\), \(\tau\in\{-1,1\}^{R}\), we define the matrix \[M_{i}^{\tau}=\sum_{j\in R}\tau_{j}M_{ij}.\] We will then provide an upper bound for the expression \[\max_{\tau\in\{-1,1\}^{R}}\mathbb{E}_{\sigma\in\{-1,1\}^{L}}\bigg{\|}\sum_{i \in L}\sigma_{i}M_{i}^{\tau}\bigg{\|}_{\infty\to 1}\] which is itself an upper bound for the expectation in (13). Towards this goal, we will prune the matrices \(M_{i}^{\tau}\) by removing all rows and columns whose \(\ell_{1}\)-weight significantly exceeds the average. 
By symmetry and non-negativity of these matrices, the \(\ell_{1}\)-weight of a row or column indexed by a set \(S\in\binom{G}{s}\) is bounded by \[\sum_{T\in\binom{G}{s}}\bigg{|}\sum_{j\in R}\tau_{j}M_{ij}(S,T) \bigg{|} \leq\sum_{T\in\binom{G}{s}}\sum_{j\in R}M_{ij}(S,T)\] \[=\sum_{\begin{subarray}{c}j\in R\\ |P_{ij}(0)|=4r\end{subarray}}\sum_{x\in G}\mathbf{1}\big{\{}|S\cap P_{i}(x)|=|S \cap P_{j}(x)|=r\big{\}}.\] To show that pruning makes little difference to the final bounds, we show that only a small proportion of the rows and columns have large \(\ell_{1}\)-weight. To this end, let \(U\) be a uniformly distributed \(\binom{G}{s}\)-valued random variable and, for each \(i\in L\), define the random variable corresponding to the last expression above, \[X_{i}:=\sum_{\begin{subarray}{c}j\in R\\ |P_{ij}(0)|=4r\end{subarray}}\sum_{x\in G}\mathbf{1}\big{\{}|U\cap P_{i}(x)|=| U\cap P_{j}(x)|=r\big{\}}.\] The calculation done in (12), with \(Z\) the all-ones vector, shows that \[\mathbb{E}[X_{i}]=\frac{1}{\binom{N}{s}}\sum_{\begin{subarray}{c}j\in R\\ |P_{ij}(0)|=4r\end{subarray}}\binom{2r}{r}^{2}\binom{N-4r}{s-2r}N\ll_{k}\frac{m }{N^{1-2/k}} \tag{14}\] where we used our chosen value for \(s\) in the inequality. The following lemma gives an upper-tail estimate on \(X_{i}\), provided \(m\) is sufficiently large. **Lemma 3**.: _Suppose that \(m\geq N^{1-2/k}\). Then, for every \(i\in L\), we have that_ \[\Pr\Bigl{[}X_{i}\geq(\log N)^{k}\frac{m}{N^{1-2/k}}\Bigr{]}\leq\frac{1}{N^{4}}.\] _Proof:_ Fix an \(i\in L\). Consider the hypergraph \(H_{i}\) on vertex set \(G\) and with edge set \[E(H_{i})=\biguplus_{\begin{subarray}{c}j\in R\\ |P_{ij}(0)|=4r\end{subarray}}\biguplus_{x\in G}\binom{P_{i}(x)}{r}\times \binom{P_{j}(x)}{r},\] and let \(f:\mathbb{R}^{G}\to\mathbb{R}\) be the polynomial associated with \(H_{i}\) as in (3), \[f(t)=\sum_{e\in E(H_{i})}\prod_{v\in e}t_{v}.\] Note that \(X_{i}=f(1_{U})\), where \(U\) is uniformly distributed over \(\binom{G}{s}\) and \(1_{U}\in\mathbb{R}^{G}\) denotes its (random) indicator vector. For each \(0\leq\ell\leq 2r\), we wish to bound the quantity \[\mu_{\ell}:=\max_{A\in\binom{G}{\ell}}\mathbb{E}_{t\sim\operatorname{Bern}(s/N)^{G }}f_{A}(t).\] (Recall the notation introduced in Section 2.) By (14), we have that \(\mu_{0}\ll_{k}mN^{-(1-2/k)}\). For a set \(A\in\binom{G}{\ell}\), define its degree in \(H_{i}\) by \[\deg(A)=|\{e\in E(H_{i}):\,e\supseteq A\}|,\] where we count multiplicities of repeated edges. Note that for any \(B\subseteq A\), we have that \(\deg(A)\leq\deg(B)\). Then, \[\mu_{\ell}=\max_{A\in\binom{G}{\ell}}\left(\frac{s}{N}\right)^{2r-\ell}\deg( A).\] For any \(v\in G\), we have that \(\deg(v)\ll_{k}m\), since \(v\) is contained in \(O_{k}(1)\) arithmetic progressions of length \(k\) with a fixed common difference. It follows that for \(\ell\in[r]\), we have that \[\mu_{\ell}\:\leq\:\left(\frac{s}{N}\right)^{2r-\ell}\max_{v\in G}\deg(v)\:\ll _{k}\ mN^{-2r/(2r+1)}\:=\:\frac{m}{N^{1-1/k}}.\] Let \(A\subseteq G\) be a set of size \(\ell\in\{r+1,\ldots,2r\}\) and \[e\in\binom{P_{i}(x)}{r}\times\binom{P_{j}(x)}{r}\] be an edge of \(E(H_{i})\) that contains \(A\). By the Pigeonhole principle, \(A\) contains an element \(a\in P_{i}(x)\) and an element \(b\in P_{j}(x)\). Knowing \(a\) limits \(x\) to a set of size at most \(2r\). Moreover, it follows from (10) that for each \(x\), there are at most \(O_{k}(\log N)\) possible values of \(j\in R\) such that \(b\in P_{j}(x)\). 
Therefore, \[\mu_{\ell}\ll_{k}\left(\frac{s}{N}\right)^{2r-\ell}\log N\leq\log N.\] Using our assumption on \(m\), it follows that for each \(\ell\in\{0,\ldots,2r\}\), we have that \(\mu_{\ell}\ll_{k}mN^{-(1-2/k)}\log N\). The result now follows directly from Corollary 2.4. Lemma 3 shows that for each matrix \(M_{i}^{\tau}\), at most an \(N^{-4}\) fraction of all rows and columns have \(\ell_{1}\)-weight exceeding \((\log N)^{k}mN^{-(1-2/k)}\). Now define \(\widetilde{M}_{i}^{\tau}\) as the 'pruned' matrix obtained from \(M_{i}^{\tau}\) by zeroing out all such heavy rows and columns. Note that \(\widetilde{M}_{i}^{\tau}\) is symmetric, and so \[\|\widetilde{M}_{i}^{\tau}\|_{2}\leq\|\widetilde{M}_{i}^{\tau}\|_{1\to 1}= \max_{S\in\binom{G}{s}}\|\widetilde{M}_{i}^{\tau}(S,\cdot)\|_{1}\leq(\log N)^{ k}\frac{m}{N^{1-2/k}};\] this bound on the operator norm is what makes the pruned matrices more convenient for us to work with. We first show that replacing the original matrices by their pruned versions has negligible effect on our bounds. Indeed, from the definition of \(X_{i}\) we see that its maximum value is bounded by \(mN\), and so \[\left\|M_{i}^{\tau}-\widetilde{M_{i}^{\tau}}\right\|_{\infty\to 1} \leq\sum_{S\in\binom{G}{s}}\left\|M_{i}^{\tau}(S,\cdot)-\widetilde {M_{i}^{\tau}}(S,\cdot)\right\|_{1}\] \[\leq 2\binom{N}{s}\cdot\mathbb{E}\big{[}X_{i}\,\mathbf{1}\big{\{} X_{i}\geq(\log N)^{k}mN^{-(1-2/k)}\big{\}}\big{]}\] \[\leq 2\binom{N}{s}\cdot mN\mathrm{Pr}\big{[}X_{i}\geq(\log N)^{k} mN^{-(1-2/k)}\big{]}.\] (The multiplication by \(2\) in the second inequality happens because we must take into account both heavy rows and heavy columns.) By Lemma 3 we conclude that \[\left\|M_{i}^{\tau}-\widetilde{M_{i}^{\tau}}\right\|_{\infty\to 1}\leq\frac{2m}{N^{ 3}}\binom{N}{s}\quad\text{for all $i\in L$, $\tau\in\{-1,1\}^{R}$.} \tag{15}\] Next we apply the concentration inequality from Theorem 2.1 to the pruned matrices \(\widetilde{M_{i}^{\tau}}\); we obtain \[\mathbb{E}_{\sigma\in\{-1,1\}^{L}}\bigg{\|}\sum_{i\in L}\sigma_{ i}\widetilde{M_{i}^{\tau}}\bigg{\|}_{\infty\to 1} \leq\binom{N}{s}\mathbb{E}_{\sigma\in\{-1,1\}^{L}}\bigg{\|}\sum_{ i\in L}\sigma_{i}\widetilde{M_{i}^{\tau}}\bigg{\|}_{2}\] \[\leq 10\binom{N}{s}\sqrt{\log\binom{N}{s}}\bigg{(}\sum_{i\in L} \|\widetilde{M_{i}^{\tau}}\|_{2}^{2}\bigg{)}^{1/2}\] \[\leq 10\binom{N}{s}\sqrt{\log\binom{N}{s}}\bigg{(}\sum_{i\in L}\| \widetilde{M_{i}^{\tau}}\|_{1\to 1}^{2}\bigg{)}^{1/2}\] \[\leq 10\binom{N}{s}\sqrt{s\log N}\cdot m^{1/2}(\log N)^{k}\frac{m}{N ^{1-2/k}}.\] By the triangle inequality and our previous bounds, we conclude that \[\mathbb{E}_{\sigma\in\{-1,1\}^{L}}\bigg{\|}\sum_{i\in L}\sigma_{ i}M_{i}^{\tau}\bigg{\|}_{\infty\to 1} \leq\mathbb{E}_{\sigma\in\{-1,1\}^{L}}\bigg{\|}\sum_{i\in L} \sigma_{i}\widetilde{M_{i}^{\tau}}\bigg{\|}_{\infty\to 1}+\sum_{i\in L}\left\|M_{i}^{ \tau}-\widetilde{M_{i}^{\tau}}\right\|_{\infty\to 1}\] \[\leq 10\binom{N}{s}\sqrt{s\log N}\cdot m^{1/2}(\log N)^{k}\frac{m}{N ^{1-2/k}}+\frac{2m^{2}}{N^{3}}\binom{N}{s}.\] Combining this with inequality (13) gives \[\binom{N-4r}{s-2r}m^{2}N\ll_{k,\varepsilon}\binom{N}{s}\sqrt{ms\log N}(\log N )^{k}\frac{m}{N^{1-2/k}}.\] Rearranging and using that \(\binom{N}{s}/\binom{N-4r}{s-2r}\ll_{k}(N/s)^{2r}=N^{2-2/k}\), we conclude that \[m\ll_{k,\varepsilon}s(\log N)^{2k+1}=N^{1-2/k}(\log N)^{2k+1}.\] As we started with the assumption (5), this shows that \(m_{k-1,\varepsilon}^{*}(G)\ll_{k,\varepsilon}N^{1-2/k}(\log N)^{2k+1}\) as desired.
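For the reader's convenience, the exponent bookkeeping behind this last rearrangement (a routine check under the earlier choice \(s=\lfloor N^{1-2/k}\rfloor\), spelled out here; the paper leaves it implicit) goes as follows:

\[
m^{2}N\;\ll_{k,\varepsilon}\;\frac{\binom{N}{s}}{\binom{N-4r}{s-2r}}\,\sqrt{ms\log N}\,(\log N)^{k}\,\frac{m}{N^{1-2/k}}\;\ll_{k}\;N^{2-\frac{2}{k}}\,\sqrt{ms\log N}\,(\log N)^{k}\,\frac{m}{N^{1-2/k}}\;=\;mN\,\sqrt{ms\log N}\,(\log N)^{k},
\]

where \((N/s)^{2r}=(N^{2/k})^{k-1}=N^{2-2/k}\); cancelling \(mN\) and squaring both sides gives \(m^{2}\ll_{k,\varepsilon}ms(\log N)^{2k+1}\), i.e. \(m\ll_{k,\varepsilon}s(\log N)^{2k+1}\).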
2305.14461
Engineering Rank/Select Data Structures for Large-Alphabet Strings
Large-alphabet strings are common in scenarios such as information retrieval and natural-language processing. The efficient storage and processing of such strings usually introduces several challenges that are not witnessed in small-alphabet strings. This paper studies the efficient implementation of one of the most effective approaches for dealing with large-alphabet strings, namely the \emph{alphabet-partitioning} approach. The main contribution is a compressed data structure that supports the fundamental operations $rank$ and $select$ efficiently. We show experimental results that indicate that our implementation outperforms the current realizations of the alphabet-partitioning approach. In particular, the time for operation $select$ can be improved by about 80%, using only 11% more space than current alphabet-partitioning schemes. We also show the impact of our data structure on several applications, like the intersection of inverted lists (where improvements of up to 60% are achieved, using only 2% of extra space), the representation of run-length compressed strings, and the distributed-computation processing of $rank$ and $select$ operations. In the particular case of run-length compressed strings, our experiments on the Burrows-Wheeler transform of highly-repetitive texts indicate that by using only about 0.98--1.09 times the space of state-of-the-art RLFM-indexes (depending on the text), the process of counting the number of occurrences of a pattern in a text can be carried out 1.23--2.33 times faster.
Diego Arroyuelo, Gabriel Carmona, Héctor Larrañaga, Francisco Riveros, Carlos Eugenio Rojas-Morales, Erick Sepúlveda
2023-05-23T18:35:39Z
http://arxiv.org/abs/2305.14461v2
# Engineering Rank/Select Data Structures for Big-Alphabet Strings ###### Abstract Big-alphabet strings are common in several scenarios such as information retrieval and natural-language processing. The efficient storage and processing of such strings usually introduces several challenges that are not witnessed in smaller-alphabet strings. This paper studies the efficient implementation of one of the most effective approaches for dealing with big-alphabet strings, namely the _alphabet-partitioning_ approach. The main contribution is a compressed data structure that supports the fundamental operations rank and select efficiently. We show experimental results that indicate that our implementation outperforms the current realizations of the alphabet-partitioning approach. In particular, the time for operation select can be improved by about 80%, using only 11% more space than current alphabet-partitioning schemes. We also show the impact of our data structure on several applications, like the intersection of inverted lists (where improvements of up to 60% are achieved, using only 2% of extra space), the representation of run-length compressed strings, and the distributed-computation processing of rank and select operations. keywords: Compressed data structures, rank/select compressed data structures, rank/select on strings ## 1 Introduction Strings are a fundamental data type in computer science. Today they are intensively used in applications such as text, biological, and source code databases, hence their efficient manipulation is key. By efficiency we mean being able to (1) use as little space as possible to store them, and (2) manipulate them efficiently, supporting operations of interest. Let \(s[1..n]\) be a string of length \(n\), with symbols drawn over an alphabet \(\Sigma=\{0,\ldots,\sigma-1\}\). We define the following operations on strings that will be studied in this paper (a naive reference implementation is sketched at the end of this introduction): * \(s.\mathsf{rank}_{c}(i)\): for \(1\leq i\leq n\) and symbol \(c\in\Sigma\), yields the number of occurrences of \(c\) in \(s[1..i]\). * \(s.\mathtt{select}_{c}(j)\): for \(c\in\Sigma\) and \(1\leq j\leq n_{c}\), yields the position of the \(j\)-th occurrence of symbol \(c\) in \(s\). Here, \(n_{c}=s.\mathtt{rank}_{c}(n)\) is the total number of occurrences of \(c\) in \(s\). * \(s.\mathtt{access}(i)\): yields symbol \(s[i]\), for \(1\leq i\leq n\). These operations are fundamental for many applications [46], such as snippet extraction in text databases [6], query processing in information retrieval [4; 3], and the space-efficient representation of cardinal trees, text-search data structures, and graphs [13], among others. As the amount of data managed by these applications is usually large, _space-efficient_ data structures that support these operations are fundamental [46]. _Succinct data structures_ use space close to the information-theory minimum, while supporting operations efficiently. _Compressed data structures_, on the other hand, take advantage of certain regularities in the data to further reduce the space usage. The main motivation behind the use of space-efficient data structures is the in-memory processing of big amounts of data, avoiding expensive secondary-memory accesses and saving important time. Starting with the seminal work by Jacobson [37], space-efficient data structures have been one of the main lines in data-structure research [5]. After around 35 years, this is a mature research area with most fundamental theoretical problems already solved. 
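To pin down the semantics of the three operations before turning to sophisticated data structures, here is a naive reference implementation (a sketch for exposition only; every real data structure discussed below avoids these linear scans):

```python
def rank(s: str, c: str, i: int) -> int:
    """s.rank_c(i): occurrences of c in s[1..i] (1-based positions)."""
    return s[:i].count(c)

def select(s: str, c: str, j: int) -> int:
    """s.select_c(j): 1-based position of the j-th occurrence of c in s."""
    found = 0
    for pos, sym in enumerate(s, 1):
        if sym == c:
            found += 1
            if found == j:
                return pos
    raise ValueError("fewer than j occurrences of c")

def access(s: str, i: int) -> str:
    """s.access(i): the symbol s[i] (1-based)."""
    return s[i - 1]

s = "alabar_a_la_alabarda"
assert rank(s, "a", 10) == 4
assert select(s, "a", 5) == 11
assert access(s, 11) == "a"
```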
Thus, research has gradually turned toward the applications of space-efficient data structures and technology transfer. Being able to properly implement the theoretical proposals is not trivial [46], usually requiring in-depth studies and strong algorithm engineering. The main focus of this paper is the engineering of space-efficient data structures supporting \(\mathtt{rank}\) and \(\mathtt{select}\) operations on strings. In particular, we are interested in big-alphabet strings, which are common in scenarios like information retrieval and natural language processing and introduce additional challenges when implementing them. To illustrate this, the most relevant data structures supporting \(\mathtt{rank}\) and \(\mathtt{select}\) operations on strings are _wavelet trees_ [35], using \(n\lg\sigma(1+o(1))+\Theta(\sigma w)\) bits of space, where \(w=\Omega(\lg n)\) is the word size (in bits) of the word-RAM model we assume in this paper. For big \(\sigma\), the \(\Theta(\sigma w)\) term dominates the space usage. _Wavelet matrices_ [24] tackle this linear alphabet dependency using just \(n\lg\sigma(1+o(1))\) bits of space, while supporting \(\mathtt{rank}\), \(\mathtt{select}\), and \(\mathtt{access}\) in \(O(\lg\sigma)\) time. However, they have two main drawbacks: 1. The \(O(\lg\sigma)\) computation time of the operations can be high in practice, as wavelet matrices are implemented as a bit string on which \(\lceil\lg\sigma\rceil\) \(\mathtt{rank}\) or \(\mathtt{select}\) operations are needed to support the corresponding \(\mathtt{rank}\) and \(\mathtt{select}\) on \(s\). Operation \(\mathtt{rank}\) on bit strings is typically supported in \(\sim\)25-50 nanoseconds using the most efficient data structures from the sdsl library [31], whereas \(\mathtt{select}\) on a bit string typically takes \(\sim\)100-300 nanoseconds. On a big-alphabet string, \(\lceil\lg\sigma\rceil\) can be high, hence the total operation time can become impractical. For instance, for the well-known GOV2 collection [23], \(\lceil\lg\sigma\rceil=25\), whereas for ClueWeb [1] we have \(\lceil\lg\sigma\rceil=26\). 2. Although they use only \(o(n\lg\sigma)\) additional bits on top of the plain representation of string \(s\), no compression is achieved. In practice, strings have regularities that could be used to improve the space usage, using space proportional to some compressibility measure (such as, e.g., the empirical entropy of the string [41]). The approach by Golynski et al. [32] improves the operation times to \(O(\lg\lg\sigma)\) (and \(\mathtt{select}\) to \(O(1)\) time), improving practical performance noticeably compared to wavelet matrices, yet still using non-compressed space. The _alphabet-partitioning_ approach by Barbay et al. [13], on the other hand, keeps the same computation times as Golynski et al.'s approach, yet using compressed space --so, avoiding both drawbacks mentioned above. The main disadvantage is, however, that the theoretical times obtained by Barbay et al. [13] rely on the multi-ary wavelet trees by Ferragina et al. [26], which have been shown impractical by Bowe [17; 46]. Indeed, the experiments in this paper indicate that alphabet partitioning and wavelet matrices have similar operation times, although the former uses less space. Contributions. In this paper we study practical ways to implement the alphabet-partitioning approach [13]. Our main contributions are as follows: 1. We carry out algorithm engineering on the _alphabet-partitioning_ approach by Barbay et al. 
[13], obtaining an implementation that uses compressed space while supporting operations rank and select efficiently. 2. We show that our approach yields competitive trade-offs when used for (i) snippet extraction from text databases and (ii) intersection of inverted lists. Both operations are of major interest for modern information retrieval systems. 3. Similarly to how Barbay et al. [13] showed that alphabet partitioning eases zero-order compression, we show it can also improve run-length compression of big-alphabet strings formed by \(r\) sufficiently-long runs. We introduce a competitive alternative both in theory and practice (see Table 2). 4. We carry out intensive algorithm engineering to implement our above run-length compression approach. In particular, we implement and try several data structures to represent bit strings with runs of 0s and 1s, in order to use the most effective alternatives as building blocks of our data structure. 5. We show that our approach can be efficiently implemented on a distributed-memory system. An overall conclusion from our study is that our implementation of alphabet partitioning is not only effective (and efficient) to support the fundamental rank and select operations, but also to support several operations that are key for implementing modern information retrieval systems [19; 20]. ## 2 Related Work The main problem we deal with in this paper is that of supporting operations rank and select efficiently on strings. We review in this section the main concepts and solutions for the problem. ### 2.1 Succinct Data Structures for Bit Vectors Given a bit vector \(B[1..n]\) with \(m\) \(\mathtt{1}\) bits, operations \(\mathsf{rank}\) and \(\mathsf{select}\) can be generalized as \(\mathsf{rank}_{b}\) and \(\mathsf{select}_{b}\), for \(b\in\{\mathtt{0},\mathtt{1}\}\). The following space-efficient data structures offer the most interesting trade-offs: * The SDarray data structure by Okanohara and Sadakane [47] uses \(m\lg\frac{n}{m}+2m+o(m)\) bits of space, and supports \(\mathsf{select}\) in \(O(1)\) time (provided we replace the \(\mathsf{rank}\)/\(\mathsf{select}\) bit vector data structure on which SDarray is based by a constant-time \(\mathsf{select}\) data structure [43]). Operations \(\mathsf{rank}\) and \(\mathsf{access}\) are supported in \(O\left(\lg\frac{n}{m}\right)\) time. * The data structure by Raman et al. [50] uses \(\lg\binom{n}{m}+O\big{(}n/\lg^{2}n\big{)}\) bits of space, supporting \(\mathsf{rank}\), \(\mathsf{select}\), and \(\mathsf{access}\) in \(O(1)\) time. * The data structure by Patrascu [49] improves the second term in the space usage of the above data structure, requiring \(\lg\binom{n}{m}+O(n/\lg^{c}(n/c))+O\big{(}n^{3/4}\mathrm{polylog}(n)\big{)}\) bits of space, for any \(c>0\), while supporting all operations in \(O(1)\) time. ### 2.2 Compressed Data Structures A _compressed data structure_ uses space proportional to some compression measure of the data, e.g., the \(0\)-th order empirical entropy of a string \(s[1..n]\) over an alphabet of size \(\sigma\), which is denoted by \(H_{0}(s)\) and defined as: \[H_{0}(s)=\sum_{c\in\Sigma}\frac{n_{c}}{n}\lg\frac{n}{n_{c}}, \tag{1}\] where \(n_{c}\) is the number of occurrences of symbol \(c\) in \(s\). The sum includes only those symbols \(c\) that do occur in \(s\), so that \(n_{c}>0\). The value \(H_{0}(s)\leq\lg\sigma\) is the average number of bits needed to encode each string symbol, provided we encode them using \(\lg\frac{n}{n_{c}}\) bits. 
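As a small worked illustration of Equation (1), the following Python sketch (ours, not from the paper) computes \(H_{0}(s)\) directly from symbol frequencies:

```python
from collections import Counter
from math import log2

def H0(s: str) -> float:
    """Zero-order empirical entropy of s, in bits per symbol (Equation (1))."""
    n = len(s)
    return sum((nc / n) * log2(n / nc) for nc in Counter(s).values())

s = "alabar_a_la_alabarda"
print(round(H0(s), 3))  # about 2.22, well below lg(sigma) = lg 6 ~ 2.585 for this skewed string
```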
We can generalize this definition to that of \(H_{k}(s)\), the \(k\)-th _order empirical entropy_ of string \(s\), for \(k>0\), defined as \[H_{k}(s)=\sum_{a\in\Sigma^{k}}\frac{|s_{a}|}{n}H_{0}(s_{a}), \tag{2}\] where \(s_{a}\) denotes the string of symbols obtained from concatenating all symbols of \(s\) that are preceded by a given context \(a\) of length \(k\). It holds that \(0\leq H_{k}(s)\leq\ldots\leq H_{1}(s)\leq H_{0}(s)\leq\lg\sigma\), for any \(k\). Following Belazzougui and Navarro [15], in general we call _succinct_ a string representation that uses \(n\lg\sigma+o(n\lg\sigma)\) bits, _zeroth-order compressed_ a data structure that uses \(nH_{0}(s)+o(n\lg\sigma)\) bits, and _high-order compressed_ one using \(nH_{k}(s)+o(n\lg\sigma)\) bits. We call _fully-compressed_ a compressed data structure such that the redundancy is also compressed, e.g., to use \(nH_{0}(s)(1+o(1))\) bits. ### 2.3 Rank/Select Data Structures on Strings Let \(s[1..n]\) be a string of length \(n\) with symbols drawn from an alphabet \(\Sigma=\{0,\ldots,\sigma-1\}\). We review next the state-of-the-art approaches for supporting \(\mathsf{rank}\) and \(\mathsf{select}\) on \(s\). Table 1 summarizes the space/time trade-offs of the approaches we will discuss next. #### 2.3.1 Wavelet Trees A _wavelet tree_ [35] (WT for short) is a succinct data structure that supports operations \(\mathsf{rank}\) and \(\mathsf{select}\) on the input string \(s\), among many other operations (see, e.g., the work by Navarro [45]). The space requirement is \(n\lg\sigma+o(n\lg\sigma)+\Theta(\sigma w)\) bits [35], and operations \(\mathsf{rank}\), \(\mathsf{select}\), and \(\mathsf{access}\) take \(O(\lg\sigma)\) time. Term \(\Theta(\sigma w)\) in the space usage corresponds to the number of pointers needed to represent the \(\Theta(\sigma)\)-node binary tree on which a WT is built. This space is negligible only for small alphabets, becoming dominant for big alphabets, e.g., \(\sigma=\sqrt{n}\) or extreme cases like \(\sigma=O(n)\). There are practical scenarios where big alphabets are common, such as the representation of labeled graphs to support join operations on them [7]. To achieve compressed space, the WT can be given the shape of the Huffman tree, obtaining \(nH_{0}(s)(1+o(1))+O(n)+\Theta(\sigma w)\) bits of space. Operations take \(O(\lg n)\) worst-case time, and \(O(1+H_{0}(s))\) time on average [46]. Alternatively, one can use compressed bit vectors [50] to represent each WT node. The space usage in that case is \(nH_{0}(s)+o(n\lg\sigma)+\Theta(\sigma w)\) bits, whereas operations take \(O(\lg\sigma)\) time. Notice that even though one is able to compress the main components of the WT, the term \(\Theta(\sigma w)\) remains uncompressed and hence is problematic for big alphabets. The approach by Ferragina et al. [26], which is based on _multiary WTs_, supports operations in \(O(1+\frac{\lg\sigma}{\lg\lg n})\) worst-case time, and the space usage is \(nH_{0}(s)+o(n\lg\sigma)+\Theta(\sigma w)\) bits. Later, Golynski et al. [33] improved the (lower-order term of the) space usage to \(nH_{0}(s)+o(n)+\Theta(\sigma w)\) bits. 
Notice that if \(\sigma=O(\text{polylog}(n))\), these approaches allow one to compute operations in \(O(1)\) time. \begin{table} \begin{tabular}{l c c c c} \hline \hline Authors & Space (bits) & \(\mathsf{access}\) & \(\mathsf{rank}\) & \(\mathsf{select}\) \\ \hline [35] & \(n\lg\sigma+o(n\lg\sigma)+\Theta(\sigma w)\) & \(O(\lg\sigma)\) & \(O(\lg\sigma)\) & \(O(\lg\sigma)\) \\ \hline [26], [33] & \(n(H_{0}(s)+o(1))+\Theta(\sigma w)\) & \(O\Big{(}1+\frac{\lg\sigma}{\lg\lg n}\Big{)}\) & \(O\Big{(}1+\frac{\lg\sigma}{\lg\lg n}\Big{)}\) & \(O\Big{(}1+\frac{\lg\sigma}{\lg\lg n}\Big{)}\) \\ \hline [24] & \(n\lg\sigma+o(n\lg\sigma)\) & \(O(\lg\sigma)\) & \(O(\lg\sigma)\) & \(O(\lg\sigma)\) \\ \hline [32] & \(n\lg\sigma+n\cdot o(\lg\sigma)\) & \(O(\lg\lg\sigma)\) & \(O(\lg\lg\sigma)\) & \(O(1)\) \\ \hline [36] & \(nH_{k}(s)+O\Big{(}\frac{n\lg\sigma}{\lg\lg\sigma}\Big{)}\) (\(\dagger\)) & \(O(1)\) & \(O(\lg\lg\sigma)\) & \(O(\lg\lg\sigma)\) \\ \hline [13] & \(nH_{0}(s)(1+o(1))+o(n)\) & \(O(\lg\lg\sigma)\) & \(O(\lg\lg\sigma)\) & \(O(1)\) \\ \hline [15] & \(n(H_{0}(s)+o(1))\) & \(O\Big{(}1+\frac{\lg\sigma}{\lg w}\Big{)}\) & \(O\Big{(}1+\frac{\lg\sigma}{\lg w}\Big{)}\) & \(O\Big{(}1+\frac{\lg\sigma}{\lg w}\Big{)}\) \\ \cline{2-5} & \(nH_{0}(s)(1+o(1))+o(n)\) & \(O(1)\) & \(O\Big{(}\lg\frac{\lg\sigma}{\lg w}\Big{)}\) & any \(\omega(1)\) \\ \cline{2-5} & \(nH_{k}(s)+O(n\lg\lg\sigma)\) (\(\ddagger\)) & \(O(1)\) & \(O\Big{(}\lg\frac{\lg\sigma}{\lg w}\Big{)}\) & any \(\omega(1)\) \\ \hline This paper & \(nH_{0}(s)(1+o(1))+o(n)\) & \(O\big{(}\max\{\lg^{2}(n),L\}\big{)}\) (\(\sharp\)) & \(O(\lg\lg\sigma)\) & \(O(1)\) \\ \hline \hline \end{tabular} (\(\dagger\)): for any \(k=o(\frac{\lg\sigma}{\lg\lg\sigma})\). (\(\ddagger\)): for any \(k=o(\lg\sigma)\), and \(\lg\sigma=\omega(\lg w)\). (\(\sharp\)): time for accessing \(L\geq 1\) consecutive symbols. \end{table} Table 1: Space/time trade-offs for supporting operations \(\mathsf{rank}\), \(\mathsf{select}\), and \(\mathsf{access}\) on a string \(s[1..n]\) of symbols drawn from an alphabet of size \(\sigma\). To avoid the \(\Theta(\sigma w)\) term in the space usage of WTs, _wavelet matrices_ [24] (WM, for short) concatenate the \(\Theta(\sigma)\) WT nodes into a single bit vector. These nodes are arranged in such a way that the pointers between nodes can be simulated using operations \(\mathsf{rank}\) (to go down to a child node) and \(\mathsf{select}\) (to go up to a parent node) on the bit vector that implements the WM. Thus, operations \(\mathsf{rank}\) and \(\mathsf{select}\) on the input string \(s\) are still supported in \(O(\lg\sigma)\) time, provided \(\mathsf{rank}\) and \(\mathsf{select}\) are supported in \(O(1)\) time on the underlying bit vector. However, even though both accessing a pointer and operations \(\mathsf{rank}\)/\(\mathsf{select}\) take constant time, the latter have a bigger constant and hence are slower in practice. The space usage is \(n\lg\sigma+o(n\lg\sigma)\) bits, avoiding the alphabet-unfriendly term. Since WMs essentially implement a WT while avoiding the space for pointers, they also support all the operations a WT can support [45]. #### 2.3.2 Reducing to Permutations and Further Improvements Golynski et al. [32] introduce an approach based on data structures for representing permutations (and their inverses) [44]. The resulting data structure is more effective for larger alphabets than the above WT schemes (including WM). 
Their solution uses \(n\lg\sigma+n\cdot o(\lg\sigma)\) bits of space, supporting operation \(\mathsf{rank}\) in \(O(\lg\lg\sigma)\) time, operation \(\mathsf{select}\) in \(O(1)\) time, and \(\mathsf{access}\) in \(O(\lg\lg\sigma)\) time (among other trade-offs, see the original paper for details). Later, Grossi et al. [36] improved the space usage achieving high-order compression, that is, \(nH_{k}(s)+O\Big{(}\frac{n\lg\sigma}{\lg\lg\sigma}\Big{)}\) bits. Operations \(\mathsf{rank}\) and \(\mathsf{select}\) are supported in \(O(\lg\lg\sigma)\) time, whereas \(\mathsf{access}\) is supported in \(O(1)\) time. #### 2.3.3 Alphabet Partitioning We are particularly interested in this paper in the alphabet-partitioning approach [13]. Given an alphabet \(\Sigma=\{0,\ldots,\sigma-1\}\), the aim of _alphabet partitioning_ is to divide \(\Sigma\) into \(p\) sub-alphabets \(\Sigma_{0},\Sigma_{1},\ldots,\Sigma_{p-1}\), such that \(\bigcup_{i=0}^{p-1}\Sigma_{i}=\Sigma\), and \(\Sigma_{i}\cap\Sigma_{j}=\emptyset\) for all \(i\neq j\). _The Mapping from Alphabet to Sub-alphabet._ The data structure [13] consists of an alphabet mapping \(m[1..\sigma]\) such that \(m[i]=j\) iff symbol \(i\in\Sigma\) has been mapped to sub-alphabet \(\Sigma_{j}\). Within \(\Sigma_{j}\), symbols are re-enumerated as follows: if there are \(k\) symbols smaller than or equal to \(i\) that have been mapped to \(\Sigma_{j}\), then \(i\) is encoded as \(k\) in \(\Sigma_{j}\). Formally, \(k=m.\mathsf{rank}_{j}(i)\). Let \(n_{j}=|\{i:\;m[s[i]]=j\}|\) be the number of symbols of string \(s\) that have been mapped to sub-alphabet \(\Sigma_{j}\). A way of defining the partitioning (called _sparse_ [13]) is: \[m[\alpha]=\left\lceil\lg\left(\frac{n}{n_{\alpha}}\right)\lg n\right\rceil, \tag{3}\] where symbol \(\alpha\in\Sigma\) occurs \(n_{\alpha}\) times in \(s\). Notice that \(m[\alpha]\leq\lceil\lg^{2}n\rceil\). _The Sub-alphabet Strings._ For each sub-alphabet \(\Sigma_{\ell}\), we store the subsequence \(s_{\ell}[1..n_{\ell}]\), \(\ell=1,\ldots,p\), with the symbols of the original string \(s\) that have been mapped to sub-alphabet \(\Sigma_{\ell}\). _The Mapping from String Symbols to Sub-alphabets._ In order to retain the original string, we store a sequence \(t[1..n]\), which maps every symbol \(s[i]\) into the corresponding sub-alphabet. That is, \(t[i]=m[s[i]]\). If \(\ell=t[i]\), then the corresponding symbol \(s[i]\) has been mapped to sub-alphabet \(\Sigma_{\ell}\), and has been stored at position \(t.\mathsf{rank}_{\ell}(i)\) in \(s_{\ell}\). Also, symbol \(s[i]\) in \(\Sigma\) corresponds to symbol \(m.\mathsf{rank}_{\ell}(s[i])\) in \(\Sigma_{\ell}\). Thus, we have \(s_{\ell}[t.\mathsf{rank}_{\ell}(i)]=m.\mathsf{rank}_{\ell}(s[i])\). For example: \[s=\mathtt{alabar\_a\_la\_alabarda}\] \[\Sigma_{1}=\{\mathtt{a}\};\quad\Sigma_{2}=\{\mathtt{l},\mathtt{\_}\};\quad\Sigma_{3}=\{ \mathtt{b},\mathtt{r}\};\quad\Sigma_{4}=\{\mathtt{d}\}\] Notice that \(t\) has alphabet of size \(p\). Also, there are \(n_{0}\) occurrences of symbol \(0\) in \(t\), \(n_{1}\) occurrences of symbol \(1\), and so on. Hence, we define: \[H_{0}(t)=\sum_{i=0}^{p-1}\frac{n_{i}}{n}\lg\frac{n}{n_{i}}. \tag{4}\] _Computing the Operations._ One can compute the desired operations as follows, assuming that \(m\), \(t\), and the sequences \(s_{\ell}\) have been represented using appropriate \(\mathsf{rank}\)/\(\mathsf{select}\) data structures (details about this later). For \(\alpha\in\Sigma\), let \(\ell=m.\mathsf{access}(\alpha)\) and \(c=m.\mathsf{rank}_{\ell}(\alpha)\). 
Hence, \[s.\mathsf{rank}_{\alpha}(i)\equiv s_{\ell}.\mathsf{rank}_{c}(t.\mathsf{rank}_ {\ell}(i)),\] and \[s.\mathsf{select}_{\alpha}(j)\equiv t.\mathsf{select}_{\ell}(s_{\ell}.\mathsf{ select}_{c}(j)).\] If we now define \(\ell=t[i]\), then we have \[s.\mathsf{access}(i)\equiv m.\mathsf{select}_{\ell}(s_{\ell}.\mathsf{access}(t. \mathsf{rank}_{\ell}(i))).\] _Space Usage and Operation Times._ Barbay et al. [13] have shown that \(nH_{0}(t)+\sum_{\ell=0}^{p-1}n_{\ell}\lg\sigma_{\ell}\leq nH_{0}(s)+o(n)\). This means that if we use a zero-order compressed \(\mathsf{rank}\)/\(\mathsf{select}\) data structure for \(t\), and then represent every \(s_{\ell}\) even in uncompressed form, we obtain zero-order compression for the input string \(s\). Recall that \(p\leq\lceil\lg^{2}n\rceil\), hence the alphabets of \(t\) and \(m\) are poly-logarithmic. Thus, a multi-ary wavelet tree [33] is used for \(t\) and \(m\), obtaining \(O(1)\) time for \(\mathsf{rank}\), \(\mathsf{select}\), and \(\mathsf{access}\). The space usage is \(nH_{0}(t)+o(n)\) bits for \(t\), and \(O\left(\frac{n\lg\lg n}{\lg n}\right)H_{0}(s)=o(n)H_{0}(s)\) bits for \(m\). For \(s_{\ell}\), if we use Golynski et al.'s data structure [32] we obtain a space usage of \(n_{\ell}\lg\sigma_{\ell}+O\left(\frac{n_{\ell}\lg\sigma_{\ell}}{\lg\lg\lg n}\right)\) bits per partition, and support operation \(\mathsf{select}\) in \(O(1)\) time for \(s_{\ell}\), whereas \(\mathsf{rank}\) and \(\mathsf{access}\) are supported in \(O(\lg\lg\sigma)\) time for \(s_{\ell}\). Overall, the space is \(nH_{0}(s)+o(n)(H_{0}(s)+1)\) bits, operation \(\mathsf{select}\) is supported in \(O(1)\) time, whereas operations \(\mathsf{rank}\) and \(\mathsf{access}\) on the input string \(s\) are supported in \(O(\lg\lg\sigma)\) time (see [13] for details and further trade-offs). Figure 1: Alphabet-partitioning data structure for the string \(s=\mathtt{alabar\_a\_la\_alabarda}\), assuming 4 sub-alphabets \(\Sigma_{1}\), \(\Sigma_{2}\), \(\Sigma_{3}\), and \(\Sigma_{4}\), the corresponding mapping \(t\) and the sub-alphabet strings \(s_{1}\), \(s_{2}\), \(s_{3}\), and \(s_{4}\). _Practical Considerations._ In practice, the sparse partitioning defined in Equation (3) is replaced by a scheme such that for any \(\alpha\in\Sigma\), \(m[\alpha]=\lfloor\lg r(\alpha)\rfloor\). Here \(r(\alpha)\) denotes the ranking of symbol \(\alpha\) according to its frequency (that is, the most-frequent symbol has ranking \(1\), and the least-frequent one has ranking \(\sigma\)). Thus, the first partition contains only one symbol (the most-frequent one), the second partition contains two symbols, the third contains four symbols, and so on. Hence, there are \(p=\lfloor\lg\sigma\rfloor\) partitions. This approach is called _dense_ [13]. Another practical consideration is to have a parameter \(\ell_{min}\) for dense, such that the top-\(2^{\ell_{min}}\) symbols in the ranking are represented directly in \(t\). That is, they are not represented in any partition. Notice that the original dense partitioning can be achieved by setting \(\ell_{min}=1\). 
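To make the dense scheme and the reductions above concrete, the following Python sketch (illustrative only, with naive linear-scan stand-ins for the \(\mathsf{rank}\)/\(\mathsf{select}\) structures; all helper names are ours, not the paper's) builds \(m\), \(t\), and the \(s_{\ell}\) for a string and answers \(\mathsf{rank}\) and \(\mathsf{select}\) through those reductions:

```python
from math import floor, log2
from collections import Counter

def rank(seq, x, i):          # occurrences of x in seq[1..i] (1-based i)
    return seq[:i].count(x)

def select(seq, x, j):        # 1-based position of the j-th occurrence of x
    c = 0
    for p, y in enumerate(seq, 1):
        if y == x:
            c += 1
            if c == j:
                return p
    raise ValueError

class AlphabetPartitioned:
    def __init__(self, s):
        sigma = sorted(set(s))                        # the ordered alphabet
        freq = Counter(s)
        by_rank = sorted(sigma, key=lambda a: -freq[a])
        r = {a: by_rank.index(a) + 1 for a in sigma}  # frequency ranking, 1-based
        self.idx = {a: i for i, a in enumerate(sigma)}
        self.m = [floor(log2(r[a])) for a in sigma]   # dense: m[alpha] = floor(lg r(alpha))
        self.t = [self.m[self.idx[c]] for c in s]     # t[i] = m[s[i]]
        self.sub = [[] for _ in range(max(self.t) + 1)]
        for c in s:                                   # s_ell[t.rank_ell(i)] = m.rank_ell(s[i])
            ell = self.m[self.idx[c]]
            self.sub[ell].append(rank(self.m, ell, self.idx[c] + 1))

    def rank_sym(self, a, i):                         # s.rank_a(i) = s_ell.rank_c(t.rank_ell(i))
        ell = self.m[self.idx[a]]
        c = rank(self.m, ell, self.idx[a] + 1)
        return rank(self.sub[ell], c, rank(self.t, ell, i))

    def select_sym(self, a, j):                       # s.select_a(j) = t.select_ell(s_ell.select_c(j))
        ell = self.m[self.idx[a]]
        c = rank(self.m, ell, self.idx[a] + 1)
        return select(self.t, ell, select(self.sub[ell], c, j))

ap = AlphabetPartitioned("alabar_a_la_alabarda")
print(ap.rank_sym("a", 10), ap.select_sym("a", 5))    # 4 11
```

On the running example this prints \(4\) and \(11\): there are four \(\mathtt{a}\)'s in \(s[1..10]\), and the fifth \(\mathtt{a}\) occurs at position \(11\).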
### 2.4 Strings with Runs There are applications that need to deal with strings \(s\) formed by \(r\) runs of equal symbols. Formally, let \(s=c_{1}^{l_{1}}c_{2}^{l_{2}}\cdots c_{r}^{l_{r}}\), where \(c_{i}\in\Sigma\), for \(i=1,\ldots,r\), \(l_{1},l_{2},\ldots,l_{r}>0\), \(c_{i}\neq c_{i+1}\) for all \(1\leq i<r\), and \(2\leq\sigma\leq r\). Interesting cases are strings with just a few long runs, as they are highly compressible. Typical examples are the Burrows-Wheeler transform of repetitive strings, such as those from applications like versioned document collections and source code repositories. Run-length encoding is the usual way to compress strings formed by runs, where \(s\) is represented as the sequence \((c_{1},l_{1}),(c_{2},l_{2}),\ldots,(c_{r},l_{r})\). The space usage is \(r(\lg\sigma+\lg n)\) bits. Remarkable compression can be achieved in this way if \(r\) is small (compared to the text length \(n\)). There are two main approaches able to handle this kind of strings efficiently. First, the approach by Mäkinen and Navarro [39] uses \(2r(2+\lg\left(n/r\right))+\sigma\lg n+r\lg\sigma\) bits of space, supporting \(\mathsf{rank}\) and \(\mathsf{access}\) in \(O(\lg\left(n/r\right)+\lg\lg\sigma)\) time, and \(\mathsf{select}\) in \(O(\lg\left(n/r\right))\) time. On the other hand, Fuentes-Sepúlveda et al. [29] introduce a data structure using \((1+\epsilon)r\lg\frac{n\sigma}{r}+O(r)\) bits of space, for any \(\epsilon>0\). Operation \(\mathsf{rank}\) is supported in \(O\Big{(}\lg\frac{\lg\left(n\sigma/r\right)}{\lg\lg n}\Big{)}\) time, whereas \(\mathsf{select}\) and \(\mathsf{access}\) take \(O\Big{(}\lg\frac{\lg\left(n/r\right)}{\lg\lg n}\Big{)}\) time. Table 2 summarizes these results. \begin{table} \begin{tabular}{l c c c c} \hline \hline Authors & Space (bits) & \(\mathsf{access}\) & \(\mathsf{rank}\) & \(\mathsf{select}\) \\ \hline [39] & \(2r(2+\lg\frac{n}{r})+\sigma\lg n+u\) & \(O\!\left(\lg\frac{n}{r}+t_{a}\right)\) & \(O\!\left(\lg\frac{n}{r}+t_{a}+t_{r}\right)\) & \(O\!\left(\lg\frac{n}{r}+t_{s}\right)\) \\ [29] & \((1+\epsilon)r\lg\frac{n\sigma}{r}+O(r)\) & \(O\!\left(\lg\frac{\lg(n/r)}{\lg\lg n}\right)\) & \(O\!\left(\lg\frac{\lg(n\sigma/r)}{\lg\lg n}\right)\) & \(O\!\left(\lg\frac{\lg(n/r)}{\lg\lg n}\right)\) \\ \hline This paper (\(\dagger\)) & \(2r^{(t)}\lg\frac{n}{r^{(t)}}+r^{(t)}\lg\lg n\) & \(O\!\left(\lg n\right)\) & \(O\!\left(\lg\frac{\lg\left(n\sigma/r^{(s)}\lg n\right)}{\lg\lg n}\right)\) & \(O\!\left(\lg\frac{\lg\left(n/r^{(s)}\right)}{\lg\lg n}\right)\) \\ & \(+(1+\epsilon)r^{(s)}\lg\frac{n\sigma}{r^{(s)}\lg n}\) & & & \\ \hline \hline \end{tabular} \end{table} Table 2: Comparison of rank/select data structures for strings with \(r\) runs. ## 3 A Faster Practical Alphabet-Partitioning Rank/Select Data Structure The alphabet-partitioning approach was originally devised to speed up decompression [51]. Barbay et al. [13] showed that alphabet partitioning is also effective for supporting operations \(\mathsf{rank}\) and \(\mathsf{select}\) on strings, being also one of the most competitive approaches in practice. In the original proposal [13], mapping \(t\) (introduced in Section 2.3.3) is represented with a multiary WT [26; 33], supporting \(\mathsf{rank}\), \(\mathsf{select}\), and \(\mathsf{access}\) in \(O(1)\) time, since \(t\) has alphabet of size \(O(\mathrm{polylog}(n))\). In practice, Bowe [17] showed that multiary WTs can improve rank/select computation time by about a half when compared to a binary wavelet tree 1, yet increasing the space usage noticeably --making them impractical, in particular for representing sequence \(t\). Consequently, in the experiments of Barbay et al. [13] no multiary WT was tested. Besides, the sdsl library [31] uses a Huffman-shaped WT by default for \(t\) in the template of class wt_ap<> 2. Thus, operations rank and select on \(t\) are not as efficient as one would expect: in practice, this means hundreds of nanoseconds per operation. We propose next an implementation of the alphabet-partitioning approach that avoids mapping \(t\), storing the same information in a data structure that can be queried faster in practice (just a few tens of nanoseconds) while remaining space-efficient, still using space close to \(nH_{0}(t)\) bits. Our simple implementation decision has several important consequences in practice, not only improving the computation time of rank/select, but also that of applications such as inverted list intersection, full-text search on highly-repetitive text collections, and the distributed computation of rank/select operations. Footnote 1: Operation times of up to 1 microsecond were shown by Bowe [17]. Footnote 2: Actually, no multiary wavelet tree is implemented in the sdsl. 
### Data Structure Definition

Our scheme consists of the mapping \(m\) and the sub-alphabet subsequences \(s_{\ell}\) for each partition \(\ell\), just as originally defined in Section 2.3.3. However, now we disregard mapping \(t\), replacing it by bit vectors \(B_{0}[1..n],B_{1}[1..n],\ldots,B_{p-1}[1..n]\), one per partition. See Figure 2 for an illustration. For any partition \(\ell=0,\ldots,p-1\), we set \(B_{\ell}[i]=1\), for \(1\leq i\leq n\), iff \(s[i]\in\Sigma_{\ell}\) (or, equivalently, it holds that \(m.\mathsf{access}(s[i])=\ell\)). Notice that \(B_{\ell}\) has \(n_{\ell}\) \(1\)s. We represent these bit vectors using a data structure supporting operations \(\mathsf{rank}_{1}\) and \(\mathsf{select}_{1}\). So, operations \(s.\mathsf{rank}_{\alpha}\) and \(s.\mathsf{select}_{\alpha}\) are computed as follows. Given a symbol \(\alpha\in\Sigma\) mapped to sub-alphabet \(\ell=m.\mathsf{access}(\alpha)\), let \(c=m.\mathsf{rank}_{\ell}(\alpha)\) be its representation in \(\Sigma_{\ell}\). Hence, we define:
\[s.\mathsf{rank}_{\alpha}(i)\equiv s_{\ell}.\mathsf{rank}_{c}(B_{\ell}.\mathsf{rank}_{1}(i)).\]
Similarly,
\[s.\mathsf{select}_{\alpha}(j)\equiv B_{\ell}.\mathsf{select}_{1}(s_{\ell}.\mathsf{select}_{c}(j)).\]

\begin{table}
\begin{tabular}{l c c c c}
\hline\hline
Authors & Space (bits) & access & rank & select \\
\hline
[39] & \(2r(2+\lg\frac{n}{r})+\sigma\lg n+r\lg\sigma\) & \(O\!\left(\lg\frac{n}{r}+\lg\lg\sigma\right)\) & \(O\!\left(\lg\frac{n}{r}+\lg\lg\sigma\right)\) & \(O\!\left(\lg\frac{n}{r}\right)\) \\
[29] & \((1+\epsilon)r\lg\frac{n\sigma}{r}+O(r)\) & \(O\!\left(\lg\frac{\lg(n/r)}{\lg\lg n}\right)\) & \(O\!\left(\lg\frac{\lg(n\sigma/r)}{\lg\lg n}\right)\) & \(O\!\left(\lg\frac{\lg(n/r)}{\lg\lg n}\right)\) \\
\hline
This paper (\(\dagger\)) & \(2r^{(t)}\lg\frac{n}{r^{(t)}}+r^{(t)}\lg\lg n+(1+\epsilon)r^{(s)}\lg\frac{n\sigma}{r^{(s)}\lg n}\) & \(O(\lg n)\) & \(O\!\left(\lg\frac{\lg\left(n\sigma/(r^{(s)}\lg n)\right)}{\lg\lg n}\right)\) & \(O\!\left(\lg\frac{\lg\left(n/r^{(s)}\right)}{\lg\lg n}\right)\) \\
\hline\hline
\end{tabular}
\end{table}
Table 2: Comparison of rank/select data structures for strings with \(r\) runs.

Unfortunately, operation \(s.\mathsf{access}(i)\) cannot be supported efficiently with this scheme: since we do not know symbol \(s[i]\), we do not know the partition \(j\) such that \(B_{j}[i]=1\). Hence, for \(j=0,\ldots,p-1\), we must check \(B_{j}[i]\), until for a given \(\ell\) it holds that \(B_{\ell}[i]=1\). This takes \(O(p)=O(\lg^{2}n)\) time. Then, we compute:
\[s.\mathsf{access}(i)\equiv m.\mathsf{select}_{\ell}(s_{\ell}.\mathsf{access}(B_{\ell}.\mathsf{rank}_{1}(i))).\]
Although \(\mathsf{access}\) cannot be supported efficiently, there are still relevant applications where this operation is not needed, such as computing the intersection of inverted lists [4, 13, 46], or computing the term positions for phrase searching and positional ranking functions [6]. Besides, many applications need operation \(\mathsf{access}\) to obtain not just a single symbol, but a string snippet \(s[i..i+L-1]\) of length \(L\) --e.g., snippet-generation tasks [6]. This operation can be implemented efficiently on our scheme as in Algorithm 1. The idea is to obtain the snippet \(s[i..i+L-1]\) by extracting all the relevant symbols from each partition.
To do so, for every partition \(\ell=0,\ldots,p-1\), the number of \(1\)s within \(B_{\ell}[i..i+L-1]\) (which can be computed as \(B_{\ell}.\mathsf{rank}_{1}(i+L-1)-B_{\ell}.\mathsf{rank}_{1}(i-1)\)) indicates the number of symbols of \(s_{\ell}\) that belong to the snippet \(s[i..i+L-1]\). Also, the position of each such \(1\) within \(B_{\ell}[i..i+L-1]\) corresponds to the position of the corresponding symbol of \(s_{\ell}\) within the snippet. The main idea is to obtain these \(1\)s using \(\mathsf{select}_{1}\) on \(B_{\ell}\).

```
 1: Let S[1..L] be an array of symbols in Sigma.
 2: for l = 0 to p-1 do
 3:   cur   <- B_l.rank_1(i-1)
 4:   count <- B_l.rank_1(i+L-1) - cur
 5:   for k = 1 to count do
 6:     cur <- cur + 1
 7:     S[B_l.select_1(cur) - i + 1] <- m.select_l(s_l.access(cur))
 8:   end for
 9: end for
10: return S
```
**Algorithm 1** \(\mathsf{snippet}(i,L)\)

Figure 2: Our implementation of the alphabet-partitioning data structure for the string \(s=\mathtt{alabar\_a\_la\_alabarda}\), assuming \(4\) sub-alphabets \(\Sigma_{1}\), \(\Sigma_{2}\), \(\Sigma_{3}\), and \(\Sigma_{4}\). The original mapping \(t\) is replaced by bit vectors \(B_{1}\), \(B_{2}\), \(B_{3}\), and \(B_{4}\).

### Space Usage and Operation Time

To represent bit vectors \(B_{0},\ldots,B_{p-1}\), recall that we need to support \(\mathsf{rank_{1}}\) and \(\mathsf{select_{1}}\). The first alternative would be a plain bit vector representation with classical constant-time rank/select support, such as the structures of Clark and Munro. Although operations \(\mathsf{rank}\) and \(\mathsf{select}\) are supported in \(O(1)\), the total space would be \(p\cdot n+o(p\cdot n)=O\big(n\lg^{2}n\big)\) bits, which is excessive. A more space-efficient alternative would be the data structure by Patrascu, able to represent a given \(B_{\ell}[1..n]\) (that has \(n_{\ell}\) 1s) using \(\lg\binom{n}{n_{\ell}}+O(n/\lg^{c}(n/c))+O\big(n^{3/4}\mathrm{poly}\lg n\big)\) bits of space, for any \(c>0\). The total space for the \(p\) bit vectors would hence be
\[\sum_{\ell=0}^{p-1}\left(\lg\binom{n}{n_{\ell}}+O\bigg(\frac{n}{\lg^{c}(\frac{n}{c})}\bigg)+O\Big(n^{3/4}\mathrm{poly}\lg n\Big)\right)=\left(\sum_{\ell=0}^{p-1}\lg\binom{n}{n_{\ell}}\right)+p\cdot\left(O\bigg(\frac{n}{\lg^{c}(\frac{n}{c})}\bigg)+O\Big(n^{3/4}\mathrm{poly}\lg n\Big)\right).\]
For the first term, we have that
\[\sum_{\ell=0}^{p-1}\lg\binom{n}{n_{\ell}}\leq nH_{0}(t),\]
whereas the second term can be made \(o(n)\) by properly choosing a constant \(c>2\) (since \(p=O\big(\lg^{2}n\big)\)). The total space for \(B_{0},\ldots,B_{p-1}\) is, thus, \(nH_{0}(t)+o(n)\) bits, supporting \(\mathsf{rank_{1}}\) and \(\mathsf{select_{1}}\) in \(O(1)\) time. We are able, in this way, to achieve the same space and time complexities as the original scheme [13], replacing the multi-ary wavelet tree that represents \(t\) by bit vectors \(B_{0},\ldots,B_{p-1}\). A drawback of this approach is, however, its impracticality. A more practical alternative would be the approach of Raman et al. [50] (whose implementation is included in several libraries, such as sdsl [31]); however, the additional \(o(\cdot)\) term of this alternative would not be \(o(n)\) as before, and would need a non-negligible amount of additional space in practice.
A third approach, more practical than the previous ones, is the SDarray representation of Okanohara and Sadakane [47] for bit vectors \(B_{\ell}\), using
\[\sum_{i=0}^{p-1}\left(n_{i}\lg\frac{n}{n_{i}}+2n_{i}+o(n_{i})\right)\]
bits of space overall. Notice that for the first term we have \(\sum_{i=0}^{p-1}n_{i}\lg\frac{n}{n_{i}}=nH_{0}(t)\), according to Equations (1) and (4). Also, we have that \(2\sum_{i=0}^{p-1}n_{i}=2n\). Finally, for \(\sum_{i=0}^{p-1}o(n_{i})\) we have that each term in the sum is actually \(O(n_{i}/\lg n_{i})\). In the worst case, every partition has \(n_{i}=n/p\) symbols. Hence, \(n_{i}/\lg n_{i}=n/(p\lg\frac{n}{p})\), which for \(p\) partitions yields a total space of \(O(n/\lg\frac{n}{p})\) bits. This is \(o(n)\) since \(\lg\frac{n}{p}\in\omega(1)\) for \(n\in\omega(p)\) (which is a reasonable assumption). In our case, \(p\leq\lg^{2}n\), hence \(\sum_{i=0}^{p-1}o(n_{i})\in o(n)\). Summarizing, bit vectors \(B_{\ell}\) require \(n(H_{0}(t)+2+o(1))\) bits of space. This is 2 extra bits per symbol when compared to mapping \(t\) from Barbay et al.'s approach [13]. The whole data structure uses \(nH_{0}(s)+2n+o(n)(H_{0}(s)+1)\) bits.

Regarding the time needed for the operations, \(s.\mathsf{select}\) can be supported in \(O(1)\) time. Operation \(s.\mathsf{rank}\) can be supported in \(O(\lg n)\) worst-case time: if \(n_{i}=O(\sqrt{n})\), operation \(B_{i}.\mathsf{rank}\) takes \(O(\lg\frac{n}{n_{i}})=O(\lg n)\) time. Using SDArray, Algorithm 1 (snippet) takes \(O\left(\sum_{i=0}^{p-1}\lg\frac{n}{n_{i}}+L\lg\lg\sigma\right)\) time. The sum \(\sum_{i=0}^{p-1}\lg\frac{n}{n_{i}}\) is maximized when \(n_{i}=n/p=n/\lg^{2}n\). Hence, \(\sum_{i=0}^{p-1}\lg\frac{n}{n_{i}}=O(\lg^{2}n\cdot\lg\lg n)\), thus the total time for snippet is \(O\left(\lg^{2}n\cdot\lg\lg n+L\lg\lg\sigma\right)\). Thus, for sufficiently long snippets, this algorithm is faster than using access.

Regarding construction time, bit vectors \(B_{\ell}\) can be constructed in linear time: we traverse string \(s\) from left to right; for each symbol \(s[j]\), we determine its partition \(\ell\) and push back the corresponding symbol into \(s_{\ell}\), as well as position \(j\) into an auxiliary array \(A_{\ell}\), \(1\leq\ell\leq p\). Afterwards, the positions stored in \(A_{\ell}\) correspond to the positions of the 1s within \(B_{\ell}\), so they can be used to construct the corresponding SDarray.

### Experimental Results on Basic Operations

In this section, we experimentally evaluate the performance of our approach to support the basic operations rank, select, and access. We implemented our data structure following the sdsl library [31]. Our source code was compiled using g++ with flags -std=c++11 and optimization flag -O3. Our source code can be downloaded from [https://github.com/ericksepulveda/asap](https://github.com/ericksepulveda/asap). We run our experiments on an HP Proliant server running an Intel(R) Xeon(R) CPU E5-2630 at 2.30GHz, with 6 cores, 15 MB of cache, and 384 GB of RAM.

As input string in our tests we use a 3.0 GB prefix of Wikipedia (dump from August 2016). We removed the XML tags, leaving just the text. The resulting text length is 505,268,435 words and the vocabulary has 8,468,328 distinct words. We represent every word in the text using a 32-bit unsigned integer, resulting in 1.9 GB of space. The zero-order empirical entropy of this string is 12.45 bits.
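For reference, the following is a minimal sketch (ours, not the exact code of our implementation) of how the operations of Section 3.1 compose over \(B_{\ell}\) and \(s_{\ell}\); it assumes sdsl-lite's usual conventions of rank over the prefix \([0..i)\) and 1-based select returning 0-based positions. The variants tested below differ only in which sdsl classes play each role.

```cpp
#include <sdsl/sd_vector.hpp>
#include <sdsl/wavelet_trees.hpp>

// One ASAP partition: B marks which positions of s fall in this
// sub-alphabet, and sl stores those symbols re-mapped to local codes.
struct asap_partition {
    sdsl::sd_vector<>                B;     // SDArray bit vector B_l
    sdsl::sd_vector<>::rank_1_type   Brank; // rank_1 on B_l
    sdsl::sd_vector<>::select_1_type Bsel;  // select_1 on B_l
    sdsl::wm_int<>                   sl;    // wavelet matrix for s_l
};

// s.rank_alpha(i) = s_l.rank_c(B_l.rank_1(i)); sdsl rank counts over
// the prefix [0..i), so the composition carries over verbatim.
uint64_t rank_alpha(const asap_partition& P, uint64_t c, uint64_t i) {
    return P.sl.rank(P.Brank(i), c);
}

// s.select_alpha(j) = B_l.select_1(s_l.select_c(j)); sdsl select takes a
// 1-based occurrence index but returns a 0-based position, hence the +1.
uint64_t select_alpha(const asap_partition& P, uint64_t c, uint64_t j) {
    return P.Bsel(P.sl.select(j, c) + 1);
}
```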
We tested sparse and dense partitionings, the latter with parameter \(\ell_{min}=1\) (i.e., the original dense partitioning), and \(\ell_{min}=\lg\lg\sigma=\lg 23\) (which corresponds to the partitioning scheme currently implemented in sdsl). The number of partitions generated is 476 for sparse, 24 for dense \(\ell_{min}=1\), and 46 for dense \(\ell_{min}=\lg 23\). For operations rank and select, we tested two alternatives for choosing the symbols on which to base the queries:

* **Random symbols**: 30,000 vocabulary words generated by choosing positions of the input string uniformly at random, then using the corresponding word for the queries. This is the same approach used in the experimental study by Barbay et al. [13].
* **Query-log symbols**: we use words from the query log TREC 2007 Million Query Track (see Footnote 3). We removed stopwords, and used only words that exist in the vocabulary. Overall we obtained 29,711 query words (not necessarily unique).

Footnote 3: [http://trec.nist.gov/data/million.query/07/07-million-query-topics.1-10000.gz](http://trec.nist.gov/data/million.query/07/07-million-query-topics.1-10000.gz)

For the rank operation, we generate uniformly at random the positions where the query is issued. For select, we search for the \(j\)-th occurrence of a given symbol, with \(j\) generated at random (we ensure that there are at least \(j\) occurrences of the queried symbol). For operation access, we generate text positions at random.

We call our approach ASAP [11]. We show several combinations for mapping \(m\) and sequences \(s_{\ell}\), as well as several ways to carry out the alphabet partitioning:

* ASAP GMR-WM (D 23): the scheme using the Golynski et al. data structure [32] (wt_gmr<> in sdsl) for \(s_{\ell}\), using samplings 4, 8, 16, 32, and 64 for the inverse permutation data structure. For mapping \(m\), this scheme uses a wavelet matrix (wm_int<> in sdsl), and dense \(\ell_{min}=\lg 23\) partitioning.
* ASAP GMR-WM (D): the same approach as before, this time using the original dense partitioning.
* ASAP WM-AP (S): we use wavelet matrices for \(s_{\ell}\) and alphabet partitioning for \(m\), with sparse partitioning.
* ASAP WM-HUFF_INT (D 23): we use wavelet matrices for \(s_{\ell}\) and a Huffman-shaped wavelet tree for \(m\) (wt_huff<> in sdsl). The partitioning is dense \(\ell_{min}=\lg 23\).
* ASAP WM-WM (S): we use wavelet matrices for both \(s_{\ell}\) and \(m\), with sparse partitioning.

In all cases, bit vectors \(B_{\ell}\) are implemented using template sd_vector<> from sdsl, which corresponds to Okanohara and Sadakane's SDArray data structure [47]. We only show the most competitive combinations we tried. We compare with the following competitive approaches in the sdsl:

* AP: the original alphabet partitioning data structure [13]. We used the default scheme from sdsl, which implements mappings \(t\) and \(m\) using Huffman-shaped WTs, and the sequences \(s_{\ell}\) using wavelet matrices. The alphabet partitioning used is dense \(\ell_{min}=\lg 23\). This was the most competitive combination for AP in our tests.
* BLCD-WT: the original wavelet trees [35] (wt_int<> in sdsl), using plain bit vectors (bit_vector<> in sdsl), and constant-time rank and select using rank_support_v and select_support_mcl, respectively, from the sdsl.
* GMR: the data structure by Golynski et al. [32] (wt_gmr<> in sdsl), using samplings 4, 8, 16, 32, and 64 for the inverse permutation data structure this scheme is built on.
* HUFF-WT: the Huffman-shaped wavelet trees, using plain bit vectors (bit_vector<> in sdsl), and constant-time rank and select using rank_support_v and select_support_mcl from the sdsl. This approach achieves \(H_{0}(s)\) compression thanks to the Huffman shape of the WT.
* RRR-WT: the original wavelet trees [35] (wt_int<> in sdsl), this time using the compressed bit vectors by Raman et al. [50] for the WT nodes (rrr_vector<> in the sdsl, using block sizes 15, 31, and 63). This approach also achieves \(H_{0}(s)\) compression, this time by compressing each WT node.
* HUFF-RRR-WT: the Huffman-shaped wavelet trees, using the compressed bit vectors by Raman et al. [50] for the WT nodes. As in the previous approach, we used rrr_vector<> in the sdsl, with block sizes 15, 31, and 63.
* WM: the wavelet matrix [24], using a plain bit vector (bit_vector<> in sdsl), and constant-time rank and select using rank_support_v and select_support_mcl from the sdsl.

Figure 3 shows the experimental results for operations rank and select, comparing with the most efficient approaches implemented in sdsl. As can be seen, ASAP yields interesting trade-offs. In particular, for operation select and random symbols, alternative ASAP GMR-WM (D 23) uses 1.11 times the space of AP, and improves the average time per select by 79.50% (from 9.37 to 1.92 microseconds per select). For query-log symbols, we obtain similar results. However, this time there is another interesting alternative: ASAP WM-AP (S) uses only 1.01 times the space of AP (i.e., an increase of about 1%), and improves the average select time by 38.80%. For rank queries we improve query time by between 4.78% and 17.34%. In this case the improvements are smaller compared to select. This is because operation rank on bit vectors sd_vector<> is not as efficient as select [47], and AP uses rank on the bit vectors of the Huffman-shaped WT that implements mapping \(t\).

Figure 3: Experimental results for operations rank (above) and select (below) on the Wikipedia text. The \(x\) axis starts at \(H_{0}(s)=12.45\) bits.

Figure 4 shows experimental results for operation access. As expected, we cannot compete with the original AP scheme. However, we are still faster than RRR-WT, and competitive with GMR [32] (yet using much less space).

## 4 Experimental Results on Information-Retrieval Applications

We test in this section our alphabet-partitioning implementation on some relevant applications.

### Application 1: Snippet Extraction

We evaluate next the snippet extraction task, common in text search engines [6; 52]. As we have already said, in this case one needs operation access to obtain not just a single symbol, but a snippet \(s[i..i+L-1]\) of \(L\) consecutive symbols in \(s\). In our experiments we tested with \(L=100\) and \(200\) (see Figure 5). As can be seen, we are able to reduce the time per symbol considerably (approximately by \(75\%\)) when compared with operation access, making our approach more competitive for snippet extraction. It is important to note that operation \(B_{\ell}.\mathsf{select}_{1}\) in line 7 of Algorithm 1 is implemented using the select operation provided by the sd_vector<> implementation.

### Application 2: Intersection of Inverted Lists

Another relevant application of rank/select data structures is that of intersecting inverted lists. Previous work [4] has shown that one can simulate the intersection of inverted lists by representing the document collection (concatenated into a single string) with a rank/select data structure.
Figure 4: Experimental results for operation access. The \(x\) axis starts at \(H_{0}(s)=12.45\) bits.

So, within the compressed space used by the text collection, one is able to simulate:

* the inverted index of the text collection, supporting intersection of (the simulated) inverted lists in time close to the adaptive intersection time of Barbay and Kenyon [14; 4];
* a positional inverted index, using the approach by Arroyuelo et al. [6]; and
* snippet extraction.

These cover most functionalities of a search engine on document collections, making it an attractive approach because of its efficient space usage. Figure 6 shows experimental results for intersecting inverted lists. We implemented the variant of the intersection algorithm tested by Barbay et al. [13]. As can be seen, ASAP yields important improvements in this application: using only 2% extra space, ASAP WM-WM (S) is able to reduce the intersection time of AP by 60.67%.

## 5 Alphabet Partitioning for Representing Strings with Runs

Let us consider next the case where the input string \(s\) is formed by \(r\) runs of equal symbols. Formally, let \(s=c_{1}^{l_{1}}c_{2}^{l_{2}}\cdots c_{r}^{l_{r}}\), where \(c_{i}\in\Sigma\), for \(i=1,\ldots,r\), \(l_{1},l_{2},\ldots,l_{r}>0\), \(c_{i}\neq c_{i+1}\) for all \(1\leq i<r\), and \(2\leq\sigma\leq r\). We study first how to implement run-length compression with the original alphabet partitioning approach [13], to then analyze the scheme resulting from our approach.

Notice that the alphabet partition does not need to be carried out according to Equation (3), whose main objective was to distribute the alphabet symbols within the sub-alphabets in such a way that \(H_{0}(s)\) compression is achieved. As we aim at run-length compression now, we partition in a different --much simpler-- way. We divide the alphabet into \(p=\lceil\lg n\rceil\) sub-alphabets consisting of \(\lceil\sigma/\lceil\lg n\rceil\rceil\) alphabet symbols each (the last partition can have fewer symbols). In this way,
\[m[\alpha]=\left\lfloor\frac{\alpha}{\lceil\sigma/\lceil\lg n\rceil\rceil}\right\rfloor,\]
which can be computed on the fly when needed, without storing mapping \(m\), saving non-negligible space [13]. Every symbol \(\alpha\in\Sigma_{\ell}\) is re-enumerated as \(\alpha\bmod(\lceil\sigma/\lceil\lg n\rceil\rceil)\).

Figure 5: Experimental results for extracting snippets of length \(L=100\) (right) and \(L=200\) (left). The \(x\) axis starts at \(H_{0}(s)=12.45\) bits.

Notice that the \(r\) runs in the input string are also transferred to mapping \(t\) and the sub-alphabet strings \(s_{\ell}\). Moreover, the number of runs in mapping \(t\) and strings \(s_{\ell}\) can be smaller than \(r\). For \(t\), if symbols \(c_{j},c_{j+1},\ldots,c_{j+k}\) (for \(1\leq j\leq r\), \(k\geq 0\) and \(j+k\leq r\)) correspond to the same sub-alphabet, then \(t\) will have a single run of length \(l_{j}+l_{j+1}+\cdots+l_{j+k}\), whereas \(s\) has \(k+1\) runs corresponding to it. Let us call \(r^{(t)}\leq r\) the number of runs in mapping \(t\). Similarly, there can be symbols \(c_{j}=c_{j+k}=\cdots=c_{j+m}\) (i.e., equal symbols whose runs are not consecutive in \(s\)) that could form a single run of length \(l_{j}+l_{j+k}+\cdots+l_{j+m}\) in the corresponding string \(s_{\ell}\). Let us call \(r_{\ell}\), \(\ell=0,\ldots,p-1\), the number of runs in \(s_{\ell}\). Also, let us denote \(r^{(s)}=\sum_{\ell=0}^{p-1}r_{\ell}\leq r\).
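To make the accounting concrete, the following sketch (ours) computes \(r^{(t)}\) and \(r^{(s)}\) for a given partition assignment: runs of \(s\) that map to the same partition merge in \(t\), and equal symbols that become adjacent inside one \(s_{\ell}\) merge there as well.

```cpp
#include <cstdint>
#include <vector>

struct run_counts { uint64_t r_t = 0; uint64_t r_s = 0; };

// s[i] is the i-th symbol; part[i] = m[s[i]] is its sub-alphabet; p is the
// number of partitions. Returns r^(t) (runs of t) and r^(s) (sum of runs
// over all s_l); both can be strictly smaller than the run count r of s.
run_counts count_runs(const std::vector<uint32_t>& s,
                      const std::vector<uint32_t>& part, uint32_t p) {
    run_counts rc;
    std::vector<int64_t> last(p, -1);  // last symbol appended to each s_l
    uint32_t prev_part = UINT32_MAX;
    for (size_t i = 0; i < s.size(); ++i) {
        if (part[i] != prev_part) {    // a new run of t starts here
            ++rc.r_t;
            prev_part = part[i];
        }
        if (last[part[i]] != (int64_t)s[i]) {  // new run inside s_{part[i]}
            ++rc.r_s;
            last[part[i]] = s[i];
        }
    }
    return rc;
}
```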
Reducing the overall number of runs is one of the advantages of using alphabet partitioning for run-length encoded strings, as we show next. For mapping \(t\), which has alphabet of size \(p=\lceil\lg n\rceil\), we use the Fuentes-Sepulveda et al. [29] data structure, requiring \((1+\epsilon)r^{(t)}\lg\frac{n\lg n}{r^{(t)}}+O\big(r^{(t)}\big)\) bits. For the sub-alphabet strings \(s_{\ell}\), since all of them have alphabets of size \(\sigma_{\ell}=\lceil\sigma/\lceil\lg n\rceil\rceil\), we concatenate them into a single string \(s^{\prime}\) of length \(n\) and \(r^{(s)}\) runs (see Footnote 4). We then also use the Fuentes-Sepulveda et al. data structure for \(s^{\prime}\), using \((1+\epsilon)r^{(s)}\lg\frac{n\sigma}{r^{(s)}\lg n}+O\big(r^{(s)}\big)\) additional bits of space.

Footnote 4: Notice that since the original symbols are re-enumerated within each sub-alphabet, there can be less than \(r^{(s)}\) runs after concatenating all \(s_{\ell}\), however we use \(r^{(s)}\) in our analysis.

Next, we consider replacing \(t\) with bit vectors \(B_{0},\ldots,B_{p-1}\), as before. In particular, we concatenate them into a single bit vector \(B[1..n\lg n]\), with \(n\) 1s and \(r^{(t)}\) runs of 1s. According to Arroyuelo and Raman [10], bit vector \(B\) can be represented using \(\lg\binom{n\lg n-n+1}{r^{(t)}-1}+\lg\binom{n-1}{r^{(t)}-1}\) bits, which by Stirling approximation is about \(r^{(t)}\lg\frac{n\lg n}{r^{(t)}}+r^{(t)}\lg\frac{n}{r^{(t)}}=2r^{(t)}\lg\frac{n}{r^{(t)}}+r^{(t)}\lg\lg n\) bits. Operation rank is supported in time \(O\Big(\lg\frac{\lg\left(n\sigma/(r^{(s)}\lg n)\right)}{\lg\lg n}\Big)\), whereas select takes time \(O\Big(\lg\frac{\lg\left(n/r^{(s)}\right)}{\lg\lg n}\Big)\). This scheme uses slightly more space (depending on the value of \(\epsilon\)) than mapping \(t\) of the original alphabet-partitioning approach. However, using bit vector \(B\) instead of \(t\) allows for a faster implementation in practice (as we will see in our experiments of Section 6). So, alphabet partitioning on run-length compressed strings has the advantage of potentially reducing the number of runs needed to represent the original string, so that the space usage of the data structure depends on \(r^{(t)}\) and \(r^{(s)}\), rather than on \(r\).

## 6 Application 3: Full-Text Search on Big-Alphabet Highly-Repetitive Collections

Several applications deal with highly-repetitive texts, that is, texts consisting largely of near-identical repeated content. Typical examples are:

1. Biological databases, where the DNA of different individuals of the same species share a high percentage of their sequences [38];
2. Source code repositories, where the different (highly-similar) versions of a source code are stored; and
3. The Wikipedia Project, where the different (highly-similar, in general) versions of Wikipedia pages need to be stored.

These are just a few examples of the kinds of text databases that are common nowadays. We are particularly interested in big-alphabet highly-repetitive text collections, as big alphabets are the most suitable for the alphabet-partitioning approach. These text databases usually need to be searched to find patterns of interest, while using compressed space [46]. We are interested in the classical full-text search problem in this section, defined as follows.
Let \(T[1..n]\) be a text string of length \(n\) over an alphabet \(\Sigma\), and let \(P[1..m]\) be another string (the search pattern) of length \(m\) over the same alphabet \(\Sigma\). The problem consists of finding (or counting) all occurrences of \(P\) in \(T\). We assume the offline version of the problem, where the text is given in advance and several queries will be carried out on it. Hence, one can afford building a data structure on the text to later speed up queries. Several solutions to this problem exist, such as suffix trees [54; 42; 2], suffix arrays [40], and compressed full-text indexes [28; 35; 8]. We show next that our alphabet-partitioning implementation for strings with runs from Section 5 is a competitive building block for implementing compressed suffix-array indexes [28; 30].

An effective compression approach that improves space usage while supporting search functionalities is the Burrows-Wheeler transform [18; 28] (BWT, for short). In particular, for highly-repetitive texts the BWT tends to generate long runs of equal symbols [30]. We test next this application, representing the BWT of a text using our ASAP data structure. On it, we implement the _backward search_ algorithm by Ferragina and Manzini [28], which carries out \(\Theta(m)\) rank operations on the BWT to count the number of occurrences of a pattern \(P[1..m]\) in the original text (a generic sketch of this loop is shown after the list of text collections below).

### Text Collections

We test with the BWT of the following highly-repetitive texts from the _Pizza&Chili Corpus_ [27] (see Footnote 5):

Footnote 5: [http://pizzachili.dcc.uchile.cl/repcorpus.html](http://pizzachili.dcc.uchile.cl/repcorpus.html).

* **Einstein.de**: A single sequence containing the concatenation of the different versions of the Wikipedia articles corresponding to Albert Einstein, in German. The original text can be obtained from [http://pizzachili.dcc.uchile.cl/repcorpus/real/einstein.de.txt.gz](http://pizzachili.dcc.uchile.cl/repcorpus/real/einstein.de.txt.gz). To obtain a big alphabet, we use the following technique from natural-language text processing: we enumerate all words and punctuation marks in the text, and use a 32-bit integer array to represent it. The result is a sequence of 29,970,916 symbols, with an alphabet of size 7,105. The BWT of this text has 40,349 runs.
* **Einstein.en**: Similar to the previous one, this time with the Wikipedia articles corresponding to Albert Einstein in English. This text can be obtained from [http://pizzachili.dcc.uchile.cl/repcorpus/real/einstein.en.txt.gz](http://pizzachili.dcc.uchile.cl/repcorpus/real/einstein.en.txt.gz). We used the same representation as before, obtaining a sequence of 159,119,879 words and punctuation marks, with an alphabet of size 15,862. The BWT of this text has 96,953 runs.
* **English**: The sequence obtained from the concatenation of English text files selected from the etext02 to etext05 collections of the Gutenberg Project. This text can be obtained from [http://pizzachili.dcc.uchile.cl/repcorpus/pseudo-real/english.001.2.gz](http://pizzachili.dcc.uchile.cl/repcorpus/pseudo-real/english.001.2.gz). We use the same representation as before, obtaining a text of 39,894,587 symbols and alphabet of size 89,753. The number of runs in the BWT of this text is 644,502.
* **Coreutils**: all versions 5.x of the Coreutils package, with a total of 9 versions. We also collected all 1.0.x and 1.1.x versions of the Linux Kernel, with a total of 36 versions [http://pizzachili.dcc.uchile.cl/repcorpus/real/coreutils.gz](http://pizzachili.dcc.uchile.cl/repcorpus/real/coreutils.gz).
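As promised above, here is the backward-search counting loop we run on top of each BWT representation. This is a generic sketch (ours): `bwt.rank` stands for the rank operation of whichever structure is being tested, and `C` is the usual array of cumulative symbol counts (`C[c]` = number of text symbols strictly smaller than `c`).

```cpp
#include <cstdint>
#include <vector>

// Count occurrences of P in T via backward search (Ferragina-Manzini):
// maintain the interval [sp, ep) of rows of the sorted rotations that are
// prefixed by the current pattern suffix; only rank on the BWT is needed.
template <class BWT>  // any representation supporting bwt.rank(i, c)
uint64_t count(const BWT& bwt, const std::vector<uint64_t>& C,
               const std::vector<uint32_t>& P, uint64_t n) {
    uint64_t sp = 0, ep = n;                 // start with all n rows
    for (size_t k = P.size(); k-- > 0 && sp < ep; ) {
        uint32_t c = P[k];                   // process pattern right to left
        sp = C[c] + bwt.rank(sp, c);         // LF-map the interval endpoints
        ep = C[c] + bwt.rank(ep, c);
    }
    return ep - sp;                          // number of occurrences
}
```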
Table 3 shows a summary of the main statistics of these texts.

\begin{table}
\begin{tabular}{l c c c c}
\hline\hline
Text & Length & Alphabet size & BWT runs & Avg. run length \\
\hline
Einstein.de & 29,970,946 & 7,105 & 40,349 & 742 \\
Einstein.en & 159,119,879 & 15,862 & 96,953 & 1,640 \\
English & 39,894,587 & 89,753 & 644,502 & 61 \\
Coreutils & 93,319,681 & 148,654 & 2,540,091 & 36 \\
\hline\hline
\end{tabular}
\end{table}
Table 3: Statistics of the highly-repetitive texts used in our tests.

We implement the data structure from Section 5; next, we engineer its main components, trying the following approaches.

### Practical Run-Length Compressed Bit Vectors

We survey in this section approaches for compressing bit vectors with runs of 1s, in order to decide the best alternative for \(B_{0},\ldots,B_{p-1}\). In particular, we test the following approaches.

PEF. The _Partitioned Elias-Fano_ (PEF) approach [48]. We re-implemented this scheme based on the original one, adding the capabilities for rank and select (as the original one only supported the next-greater-or-equal operation). We only test with the Optimized Partition approach (opt), as this is the one that uses the least space. The main idea is that the original bit vector is divided into blocks such that: (1) each block is represented either as a plain bit vector, an SDArray, or implicitly if it consists only of 1s; and (2) the overall space usage obtained from this partitioning is optimized (see Footnote 6).

Footnote 6: Or almost [48], as an approximation algorithm is used to compute the partitioning.

We parameterized the minimum size a block can have as 64, 128, 256, 512, 1024, and 2048. That is, the approach determines the optimal block sizes taking into account that no block can be smaller than these values. For the blocks represented with plain bit vectors, we tested with bit_vector<> from the sdsl and Vigna's broad-word approach [53] ([https://github.com/vigna/sux](https://github.com/vigna/sux)). For the former, we tested with rank_support_v and rank_support_v5 in order to provide different space/time trade-offs. However, since we use these approaches to represent relatively small bit vectors, we modified the original implementation of both rank support data structures so that the precomputed rank information is now stored using an array of \(\lg u\)-bit integers, where \(u\) is the length of the bit vector. The original implementation used an array of 64-bit integers. We divided the bit vector into blocks of 512 bits. Hence, we store a second-level array of 9-bit integers, as the precomputed values in this level are local to a block. For blocks represented with SDArray we use sd_vector<> from the sdsl. The source code of our implementation can be accessed at [https://github.com/Yhatoh/PEF](https://github.com/Yhatoh/PEF).

S18. The rank/select data structure based on S18 compression [9; 12]. We used block sizes 8, 16, 32, 64, 128, and 256. The source code of this approach is at [https://github.com/HectorxH/Modificacion-S18](https://github.com/HectorxH/Modificacion-S18).

hyb_vector. The hyb_vector<> data structure from the sdsl, using block size 16.

Zombit. The data structure by Gomez-Brandon [34]. This data structure only implements operation rank, and the source code can be accessed at [https://github.com/adriangbrandon/zombit/blob/main/include/zombit_vector.hpp](https://github.com/adriangbrandon/zombit/blob/main/include/zombit_vector.hpp).

OZ. The approach by Delpratt et al.
[25], which stores the lengths of the 0 and 1 runs using two separate arrays, Z and O [46, see Section 4.4.3]. We use an implementation provided by Gomez-Brandon [34], where these arrays are represented using sd_vector<> from the sdsl.

RLE_VECTOR. The approach tested by Boffa et al. [16], which implements scheme OZ, yet using VNibble compression [6].

_Experimental Survey._ In order to choose the most efficient trade-offs, we test experimentally using 1,000,000 random rank and select queries on the \(p\) bit vectors of the ASAP data structure corresponding to each text we tested. Figure 7 shows the space/time trade-off for operations rank and select. Regarding space usage, we report the average over the \(p\) bit vectors. As can be seen, RLE_VECTOR and S18 offer the best trade-offs for rank and select, so we will use them in the following experiments. OZ is also competitive, yet just for select. As backward search only needs rank, we disregard it in what follows.

Figure 7: Experimental space/time trade-offs for different compressed bit vector approaches on bit sequences with runs. Results for operation rank are shown in the left column, whereas select is shown in the right column.

### Practical Run-Length Compressed Strings

For the sub-alphabet strings \(s_{0},\ldots,s_{p-1}\) we test the following run-length-compression approaches:

* RLMN: the run-length wavelet tree by Makinen and Navarro [39], implemented with class wt_rlmn<> in the sdsl.
* FKKP: the run-length data structure by Fuentes-Sepulveda et al. [29]. We use the original implementations by the authors, obtained from [https://github.com/jfuentess/sdsl-lite](https://github.com/jfuentess/sdsl-lite).

We tried the possible combinations among these approaches and RLE_VECTOR or S18, obtaining the 10 schemes denoted by the following regular expression:

ASAP (AP(RLMN) \(|\) RLMN(AP) \(|\) RLMN(INT) \(|\) FKKP(AP) \(|\) FKKP(GMR)) (RLE \(|\) S18)

The first parenthesized group in the regular expression corresponds to the representation used for \(s_{0},\ldots,s_{p-1}\), whereas the second group indicates whether RLE_VECTOR or S18 is used for the bit vectors. For the first group, we use:

* AP(RLMN): the alphabet partitioning approach [13] implemented by class wt_ap<> in the sdsl. We used the RLMN approach [39] for \(t\) and the sub-alphabet sequences \(s_{i}\) (we used wt_rlmn<> from the sdsl).
* RLMN(AP) and RLMN(INT): the approach by Makinen and Navarro [39], using the wt_rlmn<> implementation from the sdsl. We use AP as the base data structure (first approach) and a wavelet tree wt_int<> from the sdsl (second approach).
* FKKP(AP) and FKKP(GMR): the approach by Fuentes-Sepulveda et al. [29], using AP [13] and GMR [32] (wt_gmr<> in the sdsl) as building blocks.

We compared against the baseline approaches AP(RLMN), RLMN(AP), FKKP(AP), and FKKP(GMR), this time used to represent the original BWT directly (with the same component meanings as explained above for the sub-alphabet sequences \(s_{i}\)). In our experiments, we searched for 50,000 unique random patterns of length \(m=4\), 8, and 16 words. To generate the patterns, we choose a random position within the corresponding text and then extract \(m\) words from that position. Figure 8 shows the space/time trade-offs for counting the number of occurrences of a pattern using the backward search algorithm [28]. We show the average search time per pattern searched, and the space usage of each scheme, in bits per text word.

## 7 Distributed Computation of rank and select

The partitions generated by the alphabet partitioning approach are amenable to the distributed computation of batches of rank and select operations.
In this section we explore ways to implement them on a distributed scheme. In a distributed query processing system, there exists a specialized node that is in charge of receiving the query requests (this is called the _broker_) [20], and then distributes the computation among a set of computation nodes (or simply _processors_). The latter are in charge of carrying out the actual computation. Next, we study how the original alphabet partitioning scheme (AP) and our proposal (ASAP) can be implemented on a distributed system, allowing the fast computation of batches of rank and select queries.

### A Distributed Query-Processing System Based on AP

The sub-alphabet sequences \(s_{\ell}\) are distributed among the computation nodes, hence we have \(p\) processors in the system. We also have a specialized broker, storing mappings \(m\) and \(t\). This is a drawback of this approach, as these mappings become a bottleneck for the distributed computation.

### A Distributed Query-Processing System Based on ASAP

In this case, the sub-alphabet sequences \(s_{\ell}\) and the bit vectors \(B_{\ell}\) are distributed among the computation nodes. Unlike AP, now each computation node acts as a broker: we only need to replicate the mapping \(m\) in each of them. The overall space usage for this is \(O(p\sigma\lg\lg p)\) if we use an uncompressed WT for \(m\). This is only \(O(\sigma\lg\lg p)=o(n)H_{0}(s)\) bits per processor [13]. In this simple way, we avoid having a specialized broker, and instead distribute the broker task among the computation nodes. This avoids bottlenecks at the broker, and can make the system more fault-tolerant. Queries arrive at a computation node, which uses mapping \(m\) to route them to the corresponding computation node. For operation \(s.\mathsf{access}(i)\), we carry out a broadcast operation, in order to determine for which processor \(\ell\) it holds that \(B_{\ell}[i]=1\); this is the one that must answer the query. For extracting snippets, on the other hand, we also broadcast the operation to all processors, which collaborate to construct the desired snippet using the symbols stored at each partition.

Figure 8: Space/time trade-off for count queries on the Burrows-Wheeler Transform of different big-alphabet texts, using patterns of length 4 and 16.

### Comparison

The main drawback of the scheme based on \(\mathsf{AP}\) is that it needs a specialized broker for \(m\) and \(t\). Thus, the computation on these mappings is not distributed, lowering the performance of the system. The scheme based on \(\mathsf{ASAP}\), on the other hand, allows a better distribution: we only need to replicate mapping \(m\) in each processor, with a small space penalty in practice. To achieve a similar distribution with \(\mathsf{AP}\), we would need to replicate \(m\) and \(t\) in each processor, increasing the space usage considerably. Thus, given a fixed amount of main memory for the system, the scheme based on \(\mathsf{ASAP}\) would likely be able to represent a bigger string than \(\mathsf{AP}\).

Table 4 shows experimental results on a simulation of these schemes. We only consider computation time, disregarding communication time. As can be seen, \(\mathsf{ASAP}\) uses the distributed system in a better way. The average time per operation for \(\mathsf{rank}\) and \(\mathsf{select}\) is improved by about 71% and 76%, respectively, when compared with \(\mathsf{AP}\). For extracting snippets, the time per symbol extracted is reduced by about 50%.
Although the speedup for 46 nodes might seem not too impressive (around 7-8), it is important to note that our experiments are just a proof of concept. For instance, the symbols could be distributed in such a way that the load balance is improved.

\begin{table}
\begin{tabular}{l c c c c c}
\hline\hline
Operation & \multicolumn{2}{c}{\(\mathsf{ASAP}\)} & \multicolumn{2}{c}{\(\mathsf{AP}\)} & \(\mathsf{AP}/\mathsf{ASAP}\) \\
\cline{2-3}\cline{4-5}
 & Time & Speedup & Time & Speedup & \\
\hline
\(\mathsf{rank}\) & 0.373 & 8.03 & 1.310 & 1.91 & 3.51 \\
\(\mathsf{select}\) & 0.706 & 8.41 & 2.970 & 2.55 & 4.21 \\
\(\mathsf{access}\) & 1.390 & 8.11 & 2.130 & 1.45 & 1.53 \\
snippet & 0.466 & 6.96 & 0.939 & 1.25 & 2.02 \\
\hline\hline
\end{tabular}
\end{table}
Table 4: Experimental results for the distributed computation of operations on a string. Times are in microseconds per operation, on average (for extracting snippets, it is microseconds per symbol). For \(\mathsf{rank}\) and \(\mathsf{select}\), the symbols used are from our query log. Scheme \(\mathsf{ASAP}\) implements the sequences \(s_{\ell}\) using wavelet matrices, whereas mapping \(m\) is implemented using a Huffman-shaped WT. The partitioning is \(\mathsf{dense}\) \(\ell_{min}=\lg 23\). The number of partitions generated (i.e., computation nodes in the distributed system) is 46.

## 8 Conclusions

Our alphabet-partitioning \(\mathsf{rank}\)/\(\mathsf{select}\) data structure offers interesting trade-offs. Using slightly more space than the original alphabet-partitioning data structure from [13], we are able to reduce the time for operation \(\mathsf{select}\) by about 80%. The performance of \(\mathsf{rank}\) can be improved by between 4% and 17%. For the inverted-list intersection problem, we showed improvements of about 60% in query processing time, using only 2% extra space when compared to the original alphabet-partitioning data structure. This makes this kind of data structure more attractive for this relevant application in information retrieval tasks.

We also studied how alphabet-partitioning data structures can be used for the distributed computation of rank, select, access, and snippet operations. As far as we know, this is the first study about the support of these operations in a distributed-computing environment. In our experiments, we obtained speedups from 6.96 to 8.41 for 46 processors. This compares to 1.25-2.55 for the original alphabet-partitioning data structure. Our results were obtained simulating the distributed computation, hence considering only computation time (and disregarding communication time). The good performance observed in the experiments allows us to think about a real distributed implementation. This is left for future work, as well as a more in-depth study that includes aspects like load balance, among others.

Overall, from our study we can conclude that our approach can be used to implement the ideas of Arroyuelo et al. [6], such that by representing a document collection as a single string using our data structure (hence using compressed space), one can: (i) simulate an inverted index for the document collection; (ii) simulate tf-idf information (using operation rank); (iii) simulate a positional inverted index (using operation select); and (iv) carry out snippet extraction to obtain snippets of interest from the documents. Implementing an information retrieval system based on these ideas is left as future work.
2301.06592
Daily and annual modulation rate of low mass dark matter in silicon detectors
Low threshold detectors with single-electron excitation sensitivity to nuclear recoil events in solid-state detectors are also sensitive to the crystalline structure of the target and, therefore, to the recoil direction via the anisotropic energy threshold for defect creation in the detector material. We investigate this effect and the resulting daily and annual modulation of the observable event rate for dark matter mass range from 0.2 to 5 GeV/c$^{2}$ in a silicon detector. We show that the directional dependence of the threshold energy and the motion of the laboratory result in modulation of the event rate which can be utilized to enhance the sensitivity of the experiment. We demonstrate that the spin-independent interaction rate in silicon is significant for both high and low dark matter masses. For low-mass dark matter, we show that the average interaction rate in silicon is larger than germanium, making silicon an important target for identifying dark matter from backgrounds. We find 8 and 12 hours periodicity in the time series of event rates for silicon detector due to the 45-degree symmetry in the silicon crystal structure.
Abolfazl Dinmohammadi, Matti Heikinheimo, Nader Mirabolfathi, Kai Nordlund, Hossein Safari, Sebastian Sassi, Kimmo Tuominen
2023-01-16T20:18:18Z
http://arxiv.org/abs/2301.06592v1
# Daily and annual modulation rate of low mass dark matter in silicon detectors

###### Abstract

Low threshold detectors with single-electron excitation sensitivity to nuclear recoil events in solid-state detectors are also sensitive to the crystalline structure of the target and, therefore, to the recoil direction via the anisotropic energy threshold for defect creation in the detector material. We investigate this effect and the resulting daily and annual modulation of the observable event rate for the dark matter mass range from 0.2 to 5 GeV/c\({}^{2}\) in a silicon detector. We show that the directional dependence of the threshold energy and the motion of the laboratory result in a modulation of the event rate which can be utilized to enhance the sensitivity of the experiment. We demonstrate that the spin-independent interaction rate in silicon is significant for both high and low dark matter masses. For low-mass dark matter, we show that the average interaction rate in silicon is larger than in germanium, making silicon an important target for identifying dark matter from backgrounds. We find 8- and 12-hour periodicities in the time series of event rates for a silicon detector, due to the 45-degree symmetry in the silicon crystal structure.

+ Footnote †: preprint: HIP-2023-2/TH

## I Introduction

Observations of large-scale cosmic phenomena, galaxy clusters, the matter power spectrum, and the cosmic microwave background radiation are evidence for the existence of non-baryonic matter that constitutes about 85% of the total matter content of the Universe [1; 2]. Non-baryonic matter, known as dark matter, is one of the most fundamental topics in cosmology, astronomy, and high-energy physics [3; 4; 5]. Most dark matter experiments are designed and implemented based on direct or indirect detection approaches. The indirect detection approach relies on the annihilation or decay products of dark matter particles [6]. In contrast, direct detection proceeds via elastic collisions of dark matter particles with target nuclei [7], generating typical recoil energies of \(\mathcal{O}\)(keV) [8; 9]. This energy can create electron-hole pairs inside the detector. In recent decades, many direct dark matter searches have been carried out, such as XENON10 [10], XENON100 [11], SIMPLE [12], CoGeNT [13] and DAMA/LIBRA [14; 15]. Most direct dark matter experiments have considered dark matter particles with masses of \(\mathcal{O}(10-100)\) GeV/c\({}^{2}\) [3]. Recently, with the development of convincing theoretical models, dark matter candidates with masses below 10 GeV/c\({}^{2}\) have also been proposed [1; 3; 16; 17]. In many experiments, the target material is either a low-pressure gas or a scintillating liquid. These experiments typically feature a rather high threshold energy for detecting dark matter based on nuclear recoil [1; 18]. Also, in a gaseous detector, the large volume of gas required is a deterrent. Due to the low energy of nuclear recoils for dark matter with low mass, high-mass detectors with low detection thresholds are more desirable [19; 20; 21]. Semiconductor targets, such as silicon or germanium, can be utilized as cryogenic solid-state ionization detectors, where a sensitivity to single electron events has been demonstrated [1; 22; 23; 20]. When a particle colliding with a nucleus of the detector material transfers enough energy to the nucleus, it causes a defect in the configuration of the crystal lattice [1].
The effective threshold energy for generating the defect depends on the recoil angle in silicon and germanium crystals. Holmstrom et al. [24; 25] obtained the threshold energy required to create a defect in germanium and silicon crystals for different directions using molecular dynamics simulations. It has been argued [1; 26] that this threshold displacement energy is equivalent to the ionization threshold, which implies that the ionization signal will also be sensitive to the recoil direction. As for most spiral galaxies, current models assume that the Milky Way is immersed in a halo of dark matter. Inside the halo, the solar system moves toward the constellation Cygnus [9], and the direction of motion of dark matter particles in the lab frame is opposite to this. The Earth's rotation and motion around the Sun cause the laboratory-frame velocity of the dark matter particles to change over time. Due to the anisotropy of the ionization threshold, this results in daily and annual modulation effects in the dark matter interaction rate [26; 27; 28; 29; 30].

In this paper, we study the daily and annual rate of dark matter interactions with silicon nuclei, taking into account the energy threshold and the direction of the recoil induced by the collision with a dark matter particle. We use threshold energy data for silicon obtained via molecular dynamics simulations. We investigate the modulation of the ionization rate for dark matter interactions due to the directional dependence of the threshold energy and the motion of the laboratory frame with respect to the galactic rest frame. The layout of this paper is as follows: in Section II we calculate the directional event rate of dark matter in the presence of a direction-dependent energy threshold, and in Section III we present the results.

## II Dark matter rate

To calculate the interaction rate of dark matter with the nuclei of the detector material, we need the distribution function of the dark matter velocity in the galactic halo. We consider the Maxwell-Boltzmann distribution [27; 31] for the velocity of dark matter in the halo:
\[f(\mathbf{v})=\frac{1}{N_{\mathrm{esc}}(2\pi\sigma_{v}^{2})^{\frac{3}{2}}}\exp\left(-\frac{\mathbf{v}^{2}}{2\sigma_{v}^{2}}\right)\Theta(v_{\mathrm{esc}}-|\mathbf{v}|), \tag{1}\]
where \(v_{\mathrm{esc}}=544\) km s\({}^{-1}\) [32] is the escape velocity. The dark matter velocity dispersion is \(\sigma_{v}=\frac{v_{0}}{\sqrt{2}}\), where \(v_{0}=220\) km s\({}^{-1}\) is the local circular speed [33]. The normalization constant \(N_{\mathrm{esc}}\) is given by
\[N_{\mathrm{esc}}=\mathrm{erf}\left(\frac{v_{\mathrm{esc}}}{\sqrt{2}\sigma_{v}}\right)-\sqrt{\frac{2}{\pi}}\frac{v_{\mathrm{esc}}}{\sigma_{v}}\exp\left(-\frac{v_{\mathrm{esc}}^{2}}{2\sigma_{v}^{2}}\right). \tag{2}\]
Applying a Galilean transformation to the velocity distribution in the Galactic frame, we obtain the distribution in the laboratory frame as
\[f_{\mathrm{lab}}(\mathbf{v})=f_{\mathrm{gal}}(\mathbf{v}+\mathbf{v}_{\mathrm{lab}}).
\tag{3}\]
The velocity of the laboratory frame \(\mathbf{v}_{\mathrm{lab}}\) with respect to the galactic rest frame is given by
\[\mathbf{v}_{\mathrm{lab}}=\mathbf{v}_{\mathrm{GalRot}}+\mathbf{v}_{\mathrm{Solar}}+\mathbf{v}_{\mathrm{Earth}}+\mathbf{v}_{\mathrm{EarthRot}}, \tag{4}\]
where \(\mathbf{v}_{\mathrm{GalRot}}\), \(\mathbf{v}_{\mathrm{Solar}}\), \(\mathbf{v}_{\mathrm{Earth}}\), and \(\mathbf{v}_{\mathrm{EarthRot}}\) are the galactic rotation, the peculiar velocity of the Solar system with respect to the galactic halo, the Earth's revolution, and the Earth's rotation velocities, respectively. Even though the rotation velocity of the Earth is small compared to the other components, it plays an essential role in the daily modulation of the interaction rate, due to its effect on the direction of the dark matter velocity in the laboratory frame. The relations and algebraic calculations for converting the velocity and coordinates from the galactic frame to the laboratory frame are given in Ref. [34]. Since the laboratory velocity points towards the constellation Cygnus [9], the direction of the dark matter velocity can be taken as the inverse of the direction of Cygnus.

Fig. 1 shows the position of the Sun (yellow line) and the inverse of Cygnus's position (blue line) in the sky for a laboratory located at latitude \(45.2^{\circ}\) and longitude \(6.7^{\circ}\) on June 24, 2020. The red dots indicate the positions of the Sun and Cygnus at 6, 12, and 18 o'clock. Since the dark matter wind comes from the direction of Cygnus and the dominant low-energy neutrino background comes from the Sun, it is useful to notice that these directions never overlap in the figure. Fig. 2 shows the hourly changes of Cygnus's direction relative to the laboratory's location, for latitude \(45.2^{\circ}\) and longitude \(6.7^{\circ}\), over one year. Each red dot corresponds to one hour, from January 1, 2020, 0:00, until December 31, 2020, 23:00. Due to the Earth's revolution around the Sun, the direction of Cygnus changes during the year, affecting the daily modulation of the event rate.

Figure 1: The position of the Sun (yellow) and the direction of the dark matter flux (blue) during a day at latitude \(45.2^{\circ}\) and longitude \(6.7^{\circ}\) on June 24, 2020. The red dots indicate the positions at 6, 12, and 18 o'clock. The dashed line in the middle is the horizon.

Figure 2: The hourly changes in the position of the constellation Cygnus during a year for latitude \(45.2^{\circ}\) and longitude \(6.7^{\circ}\). Each red dot corresponds to one hour.

The dark matter scattering event rate as a function of recoil energy, direction in the laboratory frame, and time is given by [1; 26; 27]
\[\frac{d^{3}R}{dE_{r}d\Omega_{r}dt}=\frac{\rho\sigma_{m_{D}-n}}{4\pi m_{D}\mu_{m_{D}-n}^{2}\Delta t}A^{2}F^{2}(E_{r})\hat{f}_{\rm SHM}(v_{\rm min},\hat{\bf q},t), \tag{5}\]
where \(\rho=0.3\) GeV/cm\({}^{3}\) is the local dark matter density, \(\hat{\bf q}\) the recoil direction in the detector, \(\sigma_{m_{D}-n}\) is the spin-independent dark matter-nucleon cross-section, \(A\) the mass number of the target in the detector, \(m_{D}\) the dark matter particle mass, \(\mu_{m_{D}-n}=m_{D}m_{n}/(m_{D}+m_{n})\) the dark matter-nucleon reduced mass, and \(F^{2}(E_{r})\) the nuclear Helm form factor [35]. Finally, \(\hat{f}_{\rm SHM}(v_{\rm min},\hat{\bf q},t)\) is the Radon transform of the dark matter velocity distribution [26].
The analytical formula of the Radon transform is given by [26]
\[\hat{f}_{\rm SHM}(v_{\rm min},\hat{\bf q},t)=\frac{1}{N_{\rm esc}(2\pi\sigma_{v}^{2})^{\frac{1}{2}}}\Big[\exp\Big(-\frac{|v_{\rm min}+\hat{\bf q}\cdot{\bf v}_{\rm lab}|^{2}}{2\sigma_{v}^{2}}\Big)-\exp\Big(-\frac{v_{\rm esc}^{2}}{2\sigma_{v}^{2}}\Big)\Big], \tag{6}\]
where \(v_{\rm min}=\sqrt{2m_{N}E_{r}}/2\mu_{m_{D}-N}\) is the minimum velocity required to produce a nuclear recoil with energy equal to \(E_{r}\), \(m_{N}\) is the mass of the target nucleus, \(\mu_{m_{D}-N}\) is the dark matter-nucleus reduced mass, and \(\sigma_{v}\) is the velocity dispersion. Equation (6) clearly shows that when \(v_{\rm min}<v_{\rm lab}\), the rate is maximal for \(\hat{\bf q}\cdot{\bf v}_{\rm lab}=-v_{\rm min}\), i.e., when \(\hat{\bf q}\) points opposite to \({\bf v}_{\rm lab}\). In other words, the relationship between the recoil direction and the laboratory motion in the galactic rest frame is a signature of a dark matter signal. To determine the recoil direction in a detector, we consider the axes \(\hat{\bf x}\), \(\hat{\bf y}\), and \(\hat{\bf z}\) as the north, east, and vertical directions, respectively, so that
\[\hat{q}=\sin(\theta)\cos(\phi)\hat{\bf x}+\sin(\theta)\sin(\phi)\hat{\bf y}+\cos(\theta)\hat{\bf z}. \tag{7}\]
In an ionization detector, the total accepted signal rate is obtained by integrating the differential rate over the recoil direction and recoil energy:
\[R(t)=\int_{4\pi}\int_{E_{\rm th}(\theta,\phi)}^{E_{r}^{\rm max}}\frac{d^{2}R}{dE_{r}d\Omega_{r}}dE_{r}d\Omega_{r}, \tag{8}\]
where \(E_{\rm th}\) is the minimum observable recoil energy, which is here assumed to be correlated with the threshold displacement energy, and therefore depends on the recoil direction with respect to the crystal lattice [1; 26]. The threshold energies are obtained from molecular dynamics simulations [1]. The silicon threshold ranges from 17.5 to 77.5 eV. Fig. 3 represents the angular distribution of the threshold displacement energy. The brighter points correspond to directions of higher threshold energy, indicating that more energy is needed to create a defect and, according to our assumption, to create a detectable ionization signal. The energy threshold surface shown in Fig. 3 therefore determines the lower limit of the energy integral in Eq. (8).

## III Results

Here, we study the time evolution of the event rate of dark matter interacting with silicon and germanium nuclei. The dark matter event rate is obtained for each time bin by averaging over 24,155 and 84,936 directions for silicon and germanium, respectively. We calculate the rate by integrating over the recoil energy for each sampled direction \((\theta_{i},\phi_{i})\) with the corresponding threshold energy value \(E_{\rm min}(\theta_{i},\phi_{i})\). Throughout the analysis, we take the detector mass to be one ton.

Fig. 4 shows the time evolution of the event rate of a dark matter particle of mass 300 MeV/c\({}^{2}\) interacting with silicon (top) and germanium (bottom), with a spin-independent cross-section of 10\({}^{-39}\) cm\({}^{2}\), on January 1, 2020. As expected, the event rates of both detectors oscillate due to the Earth's rotation. As seen in Fig. 4, for the germanium detector the overall rate is much lower than for silicon. The difference in the total rate is mostly due to the larger nuclear mass in germanium, implying a larger \(v_{\rm min}\) for a given recoil energy, and therefore cutting out a larger fraction of the DM velocity distribution.

Figure 3: Angular distribution of the defect creation energy threshold of silicon as a function of the recoil direction.
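For reference, the velocity-distribution factor of Eq. (6) and the energy integral of Eq. (8) can be evaluated directly with a few lines of code. The following is a minimal sketch (ours): the constant prefactors of Eq. (5) and the Helm form factor are omitted, so rates come out in arbitrary units, and the mapping `vmin_of_E` from recoil energy to \(v_{\rm min}\) for the chosen dark matter and nucleus masses is passed in by the caller.

```cpp
#include <cmath>

// Truncated-Maxwellian Radon transform, Eq. (6); velocities in km/s.
// q_dot_vlab is the projection of v_lab on the recoil direction q^.
double fhat_shm(double vmin, double q_dot_vlab) {
    const double kPi = 3.14159265358979323846;
    const double v0 = 220.0, vesc = 544.0;   // values quoted in Sec. II
    const double s2 = 0.5 * v0 * v0;         // sigma_v^2 = v0^2 / 2
    const double Nesc = std::erf(vesc / v0)  // erf(vesc / (sqrt(2) sigma_v))
        - std::sqrt(2.0 / kPi) * (vesc / std::sqrt(s2))
          * std::exp(-vesc * vesc / (2.0 * s2));
    const double w = vmin + q_dot_vlab;
    if (w > vesc) return 0.0;                // beyond the escape speed
    return (std::exp(-w * w / (2.0 * s2)) - std::exp(-vesc * vesc / (2.0 * s2)))
           / (Nesc * std::sqrt(2.0 * kPi * s2));
}

// Relative rate of Eq. (8) for a single recoil direction: midpoint-rule
// integral over recoil energy, from the direction-dependent threshold
// E_th up to E_max (any consistent energy unit).
template <class VminOfE>
double rate_one_direction(double E_th, double E_max, double q_dot_vlab,
                          VminOfE vmin_of_E, int steps = 200) {
    const double dE = (E_max - E_th) / steps;
    double R = 0.0;
    for (int k = 0; k < steps; ++k)
        R += fhat_shm(vmin_of_E(E_th + (k + 0.5) * dE), q_dot_vlab) * dE;
    return R;                                // arbitrary units
}
```

The full rate \(R(t)\) then follows by averaging such one-direction rates over the sampled directions \((\theta_{i},\phi_{i})\), each with its own threshold \(E_{\rm min}(\theta_{i},\phi_{i})\), as done in the results below.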
The normalized diurnal modulation of the event rate for the dark matter mass range from 225 to 400 MeV/c\({}^{2}\) interacting with silicon nuclei is depicted in Fig. 5. It was already noted in [1; 26] that the modulation amplitude increases with decreasing dark matter mass for a germanium target. Here we observe the same effect for silicon. For low-mass dark matter, the diurnal modulation effect can be utilized to discriminate dark matter from the background.

Silicon offers a better sensitivity to low-mass dark matter due to its smaller atomic number. This is evident in Fig. 6, showing the daily average of the event rate for the dark matter mass range from 200 MeV/c\({}^{2}\) to 5 GeV/c\({}^{2}\) for silicon (blue line) and germanium (red line) on January 1, 2020. The event rate for silicon is about 200 times larger than for germanium at low mass (300 MeV/c\({}^{2}\)). The high event rate for silicon detectors provides more sensitivity for direct dark matter searches at low masses. For high-mass dark matter (1 GeV/c\({}^{2}\)), the event rate for the silicon detector is about 1.5 times greater than the germanium rate. Our numerical studies show that the event rate for the silicon detector approaches that of germanium at dark matter masses larger than 1 GeV/c\({}^{2}\).

Fig. 7 represents the annual modulation for a dark matter particle with a mass of 300 MeV/c\({}^{2}\) and a cross-section of 10\({}^{-39}\) cm\({}^{2}\) scattering on a silicon target over the year 2020. Due to the revolution of the Earth around the Sun, the orientation of the detector with respect to the inverse direction of Cygnus varies throughout the year. Although the fluctuation reaches its maximum in June, Fig. 7 shows that the diurnal modulation remains significant for the low mass of 300 MeV/c\({}^{2}\) throughout the year. Since the direction of the neutrinos is from the Sun, the annual interaction rate of dark matter can be considered a helpful tool for distinguishing between the dark matter signal and the neutrino background, as discussed in [27].

To further analyze the periodicity of the event rate, we applied the fast Fourier transform (FFT) to the event rate time series (Fig. 7). Table 1 presents the periods and amplitudes for the silicon event rate. As shown in the table, beyond the leading annual modulation (\(\sim\)8760 hours), we observed main periods of about 8 hours (three maxima per day) and 12 hours (two maxima per day). The patterns with periods of 8 and 12 hours are due to the orientation of the silicon crystal in laboratory coordinates (the approximately 45-degree patterns of the angular distribution of the defect creation energy threshold for the silicon detector, Fig. 3). As the Earth rotates around its axis, the inverse direction of Cygnus scans various patterns of the minima (dark regions in Fig. 3) and maxima (bright regions in Fig. 3) of the threshold energy throughout the day.

Figure 4: Time evolution of the event rate for a 300 MeV/c\({}^{2}\) dark matter particle with a 10\({}^{-39}\) cm\({}^{2}\) cross-section interacting with silicon (top) and germanium (bottom).

Figure 5: Normalized diurnal modulation of the event rate for dark matter masses ranging from 225 to 400 MeV/c\({}^{2}\) interacting with silicon nuclei.
## IV Conclusions

We have studied the dark matter interaction rate in silicon, assuming that the directional dependence of the ionization threshold follows the same functional form as the threshold displacement energy simulated using molecular dynamics methods [24]. We obtained the daily and annual modulation of the dark matter event rate for a silicon target. We found that the directional dependence of the threshold energy, combined with the motion of the laboratory with respect to the galactic rest frame, results in a modulation of the dark matter event rate in silicon, thus providing a cosmological signature with which to distinguish dark matter from background interactions. Silicon therefore appears to be a very appropriate target for identifying dark matter against backgrounds, with a high event rate expected for low-mass dark matter. Using the FFT analysis, we obtained 8- and 12-hour periodicities in the simulated time series of event rates for the silicon detector, which are due to the 45-degree pattern in the displacement threshold energy surface of silicon. We also observed periods of 24 hours, six months, and one year as harmonics of the primary modes (8 and 12 hours).
2306.12540
2D Zak Phase Landscape in Photonic Discrete-Time Quantum Walks
We present a study of the 2D Zak phase landscape in photonic discrete-time quantum walk (DTQW) protocols. In particular, we report numerical results for three different DTQW scenarios which preserve spatial inversion symmetry (SIS) and time-reversal symmetry (TRS), while presenting a non-trivial Zak phase structure, as a consequence of a non-vanishing Berry connection. Additionally, we propose a novel approach to break TRS in photonic systems, while preserving a vanishing Berry curvature. Our results bear a close analogy to the Aharonov-Bohm effect, stating that in a field-free multiply connected region of space the evolution of the system depends on vector potentials, due to the fact that the underlying canonical formalism cannot be expressed in terms of fields alone.
Graciana Puentes
2023-06-21T20:03:20Z
http://arxiv.org/abs/2306.12540v1
# 2D Zak Phase Landscape in Photonic Discrete-Time Quantum Walks

###### Abstract

We present a study of the 2D Zak phase landscape in photonic discrete-time quantum walk (DTQW) protocols. In particular, we report numerical results for three different DTQW scenarios which preserve spatial inversion symmetry (SIS) and time-reversal symmetry (TRS), while presenting a non-trivial Zak phase structure, as a consequence of a non-vanishing Berry connection. Additionally, we propose a novel approach to break TRS in photonic systems, while preserving a vanishing Berry curvature. Our results bear a close analogy to the Aharonov-Bohm effect, stating that in a field-free multiply connected region of space, the evolution of the system depends on vector potentials, due to the fact that the underlying canonical formalism cannot be expressed in terms of fields alone.

## I Introduction

It is well known that if a quantum particle, in a given eigenstate with energy \(E\), is slowly transported around a circuit \(\gamma\) by varying parameters \(\mathbf{R}\) in its Hamiltonian \(H(\mathbf{R})\), the system will acquire a geometric phase factor \(\gamma(\mathbf{R})\), in addition to the standard dynamical phase factor \(e^{i\mathcal{E}t}\) [1]. For quantum particles returning adiabatically to their initial state, while storing information about the circuit on the geometric phase, such a geometric phase factor can be defined as [1]: \[e^{i\gamma}=\langle\psi_{\text{ini}}|\psi_{\text{final}}\rangle. \tag{1}\] Geometric phases can be held responsible for a number of phenomena: they affect material properties in solids, such as conductivity in graphene [2]; they trigger the emergence of surface edge-states in topological insulators, whose surface electrons experience a geometric phase [3]; they can modify the outcome of molecular chemical reactions [4]; and they could even have implications for quantum information technology, via the Majorana particle [5], in addition to bearing close analogies to gauge-field theories [6]. Here we present a simple system based on Discrete-Time Quantum Walk (DTQW) architectures in 2D, which exhibits a non-trivial 2D Berry phase on the torus, i.e., the Zak phase [7]. Non-trivial topology is usually regarded as a consequence of a non-vanishing Berry curvature (\(F\)), leading to invariant Chern numbers. Nevertheless, a non-trivial topological scenario can also arise as a direct consequence of a non-vanishing Berry connection (\(A\)), even in the absence of Berry curvature [28]. This is the type of scenario we consider in this article. Discrete-Time Quantum Walks (DTQWs) [8] offer a versatile platform for the exploration of a wide range of non-trivial geometric and topological phenomena, both in experiment [9; 10; 11] and in theory [12; 13; 14; 15; 16]. Further, QWs are robust platforms for modelling a variety of dynamical processes, from excitation transfer in spin chains [17; 18] to energy transport in biological complexes [19]. They enable the study of multi-path quantum interference phenomena [20; 21; 22; 23], and can provide a route to the validation of quantum complexity [24; 25] and to universal quantum computing [26]. Moreover, multi-particle QWs offer a powerful tool for encoding information in an exponentially larger space, and for quantum simulations in biological, chemical and physical systems, in 1D and 2D geometries [27; 35].
In this paper, we report a simple theoretical scheme for the generation of a non-trivial geometric Zak phase landscape in 2D DTQW architectures, based on a non-zero Berry connection. The system preserves both spatial inversion symmetry (SIS) and time-reversal symmetry (TRS), and therefore has a zero Berry curvature. Moreover, we propose a method to break TRS based on photonic quantum walks. Our results bear a close analogy to the Aharonov-Bohm effect, which essentially confirms that in a field-free (\(F=0\)) multiply-connected region of space, the physical properties of the system depend on vector potentials (\(A\)), due to the fact that the Schrödinger equation is obtained from a canonical formalism, which cannot be expressed in terms of fields alone [30]. The 2D topological landscape can be characterized by the extended Zak phase in 2D, which is also the wave polarization vector (\(\mathbf{Z}\)), given by [28]: \[\mathbf{Z}=\frac{1}{2\pi}\int_{BZ}dk_{x}dk_{y}\text{Tr}[\mathbf{A}(k_{x},k_{y})], \tag{2}\] where \(\mathbf{A}=\langle u|i\partial_{\mathbf{k}}|u\rangle\) is the Berry connection, and \(|u\rangle\) are the Bloch eigenvectors in the first Brillouin zone (BZ), obtained by diagonalization of the Hamiltonian. Integration is performed over the first BZ. Spatial inversion symmetry (SIS) and time reversal symmetry (TRS) place strong limitations on the actual values that the 2D Zak phase \(\mathbf{Z}=(Z_{x},Z_{y})\) can take. In fact, systems with SIS and TRS are forced to have zero Berry curvature (\(F=0\)), where the Berry curvature is defined as the curl of the Berry connection (\(F=\nabla\times\mathbf{A}\)), due to the fact that TRS requires \(F(-\mathbf{k})=-F(\mathbf{k})\), while SIS requires \(F(-\mathbf{k})=F(\mathbf{k})\), which yields \(F=-F=0\). This constraint forces \(Z_{x}=Z_{y}\). Nevertheless, in the particular photonic implementation we consider here, it is possible to break TRS, obtaining \(Z_{x}=-Z_{y}\), while the Berry curvature remains equal to zero because the Hamiltonian is separable and the Bloch eigenvectors do not depend on orthogonal wavevectors, meaning \(\partial_{k_{x}}|u_{y}\rangle=\partial_{k_{y}}|u_{x}\rangle=0\) (Eq. 3.11).

## II Discrete-time quantum walks (DTQW)

The basic step in the standard 1D DTQW is given by a unitary evolution operator \(U(\theta)=TR_{\vec{n}}(\theta)\), where \(R_{\vec{n}}(\theta)\) is a rotation along an arbitrary direction \(\vec{n}=(n_{1},n_{2},n_{3})\), given by \[R_{\vec{n}}(\theta)=\left(\begin{array}{cc}\cos(\theta)-in_{3}\sin(\theta)&(in_{1}-n_{2})\sin(\theta)\\ (in_{1}+n_{2})\sin(\theta)&\cos(\theta)+in_{3}\sin(\theta)\end{array}\right),\] in the Pauli basis [41]. In this basis, the y-rotation is defined by a coin operator of the form [41]: \[R_{y}(\theta)=\left(\begin{array}{cc}\cos(\theta)&-\sin(\theta)\\ \sin(\theta)&\cos(\theta)\end{array}\right).\] This is followed by a spin- or polarization-dependent translation \(T_{x(y)}\) given by \[T_{x(y)}=\sum_{x(y)}|x(y)+1\rangle\langle x(y)|\otimes|H\rangle\langle H|+|x(y)-1\rangle\langle x(y)|\otimes|V\rangle\langle V|,\] where \(H=(1,0)^{T}\) and \(V=(0,1)^{T}\). The evolution operator for a discrete-time step is equivalent to that generated by a Hamiltonian \(H(\theta)\), such that \(U(\theta)=e^{-iH(\theta)}\) (\(\hbar=1\)), with \[H(\theta)=\int_{-\pi}^{\pi}dk[E_{\theta}(k)\vec{n}(k)\cdot\vec{\sigma}]\otimes|k\rangle\langle k|,\] and \(\vec{\sigma}\) the Pauli matrices, which readily reveals the spin-orbit coupling mechanism in the system.
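As a concrete illustration of the step just defined, the short sketch below simulates the 1D DTQW \(U(\theta)=TR_{y}(\theta)\) on a line; the lattice size, step number and initial coin state are illustrative choices, not taken from the paper.

```python
import numpy as np

N_SITES = 201        # lattice sites, walker starts at the centre (illustrative)
STEPS = 50
theta = np.pi / 4    # Hadamard-like coin angle

# Coin rotation R_y(theta), acting on the (H, V) polarization basis
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# psi[x, c]: amplitude at site x with coin state c (0 = H, 1 = V)
psi = np.zeros((N_SITES, 2), dtype=complex)
psi[N_SITES // 2] = [1 / np.sqrt(2), 1j / np.sqrt(2)]  # symmetric initial coin

for _ in range(STEPS):
    psi = psi @ R.T                    # coin rotation on the coin index
    shifted = np.zeros_like(psi)
    shifted[1:, 0] = psi[:-1, 0]       # |H> component moves to x + 1
    shifted[:-1, 1] = psi[1:, 1]       # |V> component moves to x - 1
    psi = shifted

prob = np.sum(np.abs(psi)**2, axis=1)  # position distribution after STEPS steps
print("total probability:", prob.sum())  # remains 1 (unitarity check)
```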
The quantum walk described by \(U(\theta)\) has been realized experimentally in a number of systems [36; 37; 38; 39; 40], and has been shown to possess chiral symmetry and to display a Dirac-like dispersion relation given by \(\cos(E_{\theta}(k))=\cos(k)\cos(\theta)\). Extension to the 2D case can easily be performed by considering independent evolution operators (\(U_{x(y)}(\theta_{x(y)})\)) for each dimension \((x,y)\). Due to the spatial periodicity of the Hamiltonian, the eigenstates of the system obey the Bloch theorem and can be written as: \[|\psi_{k}(r)\rangle=e^{ik\cdot r}|u_{k}(r)\rangle, \tag{3}\] where \(|u_{k}(r)\rangle\) are the Bloch eigenvectors, which obey the same periodicity as the Hamiltonian and satisfy the eigenvalue equation: \[H_{k}|u_{k}(r)\rangle=E(k)|u_{k}(r)\rangle, \tag{4}\] where \(E(k)\) is the energy dispersion relation. The physics of the system is captured by the dispersion relation and by the geometrical properties of the Bloch eigenvectors [29], as described in the following Sections. In the following, we present and characterize three different DTQW scenarios for the calculation of the non-trivial geometrical Zak phase landscape in 2D. We consider a generic wavevector \(k\), which eventually will indicate each of the two dimensions under consideration \((x,y)\).

### Hadamard Quantum Walk (HQW)

The first example is the traditional Hadamard Quantum Walk [8], consisting of a rotation along the y-direction (\(R_{y}(\theta)\)) by an angle (\(\theta\)) followed by a spin-dependent translation (\(T\)). The 3D axes assigned to norms and rotations will be labelled by the indices (\(i=1,2,3\)), in order to distinguish them from the 2D (spatial or temporal) dimensions (\(x,y\)) used for the implementation of the DTQW.

Figure 1: 3D Energy dispersion relations as a function of quasi-momentum (\(k\)) and angular parameter (\(\theta\)) for (a) Hadamard Quantum Walk (HQW), (b) Non-Commuting Rotations Quantum Walk (NCRQW) for parameter \(\phi=0\), and (c) Split Step Quantum Walk (SSQW) for parameter \(\theta_{2}=0\). Insets depict density plots of the 3D dispersion relations.

The 3D-norm for decomposing the quantum walk Hamiltonian of the system in terms of Pauli matrices, \(H_{\text{QW}}=E(k)\vec{n}\cdot\vec{\sigma}\), becomes [9]: \[\begin{array}{lll}n_{\theta}^{1}(k)&=&\frac{\sin(k)\sin(\theta)}{\sin(E_{\theta}(k))}\\ n_{\theta}^{2}(k)&=&\frac{\cos(k)\sin(\theta)}{\sin(E_{\theta}(k))}\\ n_{\theta}^{3}(k)&=&\frac{-\sin(k)\cos(\theta)}{\sin(E_{\theta}(k))},\end{array} \tag{5}\] where \(k\) represents the wavevector in either dimension (\(x,y\)). The dispersion relation for the Hadamard quantum walk results in [12]: \[\cos(E_{\theta}(k))=\cos(k)\cos(\theta),\] which corresponds to a Dirac-like, i.e., linear, dispersion relation when the gap closes at the Dirac points [12]. 3D and 2D plots of the dispersion relations characterizing the DTQWs are displayed in Figure 1 and Figure 2.

### Non-Commuting Rotations Quantum Walk (NCRQW)

The second example consists of a DTQW based on two consecutive non-commuting rotations followed by a spin-dependent translation [32; 34].
The first rotation (\(R_{2}(\theta)\)) is performed along the y-direction (\(i=2\)) by an angle (\(\theta\)), and the second rotation (\(R_{1}(\phi)\)) is performed along the x-direction (\(i=1\)) by an angle \(\phi\), such that the unitary step becomes \(U(\theta,\phi)=TR_{1}(\phi)R_{2}(\theta)\), where \(R_{1}(\phi)\) is given in the same basis [41] by: \[R_{1}(\phi)=\left(\begin{array}{cc}\cos(\phi)&i\sin(\phi)\\ i\sin(\phi)&\cos(\phi)\end{array}\right).\] The modified dispersion relation becomes: \[\cos(E_{\theta,\phi}(k))=\cos(k)\cos(\theta)\cos(\phi)+\sin(k)\sin(\theta)\sin(\phi),\] where we recover the Hadamard dispersion relation for \(\phi=0\), as expected. The 3D-norm for decomposing the Hamiltonian of the system in terms of Pauli matrices becomes: \[\begin{array}{lll}n_{\theta,\phi}^{1}(k)&=&\frac{-\cos(k)\sin(\phi)\cos(\theta)+\sin(k)\sin(\theta)\cos(\phi)}{\sin(E_{\theta,\phi}(k))}\\ n_{\theta,\phi}^{2}(k)&=&\frac{\cos(k)\sin(\theta)\cos(\phi)+\sin(k)\sin(\phi)\cos(\theta)}{\sin(E_{\theta,\phi}(k))}\\ n_{\theta,\phi}^{3}(k)&=&\frac{-\sin(k)\cos(\theta)\cos(\phi)+\cos(k)\sin(\theta)\sin(\phi)}{\sin(E_{\theta,\phi}(k))}.\end{array} \tag{6}\] Dispersion relations for the DTQW with two consecutive non-commuting rotations within the first Brillouin zone are of the Dirac type (linear) at the set of gapless Dirac points, where the quasi-energy gap closes at \(E(k)=0\), as displayed in Figure 1 and Figure 2.

Figure 2: 2D energy dispersion relations as a function of quasi-momentum (\(k\)) for different values of the system parameters; the HQW, NCRQW and SSQW are represented by red, blue and magenta curves, respectively, and the parameter values for panels (a)-(f) are listed in the text. Linear dispersions at gapless Dirac points are apparent.

The second rotation enables closing the gap at zero energy at complementary points, and allows the creation of a non-trivial geometric Zak phase structure in the system. In particular, this system has a non-trivial phase diagram with a larger number of gapless points at different momenta, as compared to the system consisting of a single rotation. We calculated analytically the gapless Dirac points for the system. Using basic trigonometric considerations, it can be shown that the energy gap closes at 13 discrete points, for different values of the quasi-momentum \(k\) [32; 33; 34].
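These gapless points can also be located numerically by scanning the dispersion relation for \(|\cos(E)|=1\); the grid resolution and tolerance in the sketch below are illustrative choices.

```python
import numpy as np

def cos_E(k, theta, phi):
    """RHS of the NCRQW dispersion: cos(E) = cos k cos(theta) cos(phi) + sin k sin(theta) sin(phi)."""
    return np.cos(k) * np.cos(theta) * np.cos(phi) + np.sin(k) * np.sin(theta) * np.sin(phi)

phi = np.pi / 4  # fixed second-rotation angle (illustrative)
k, theta = np.meshgrid(np.linspace(-np.pi, np.pi, 1201),
                       np.linspace(-np.pi, np.pi, 1201))

# The quasi-energy gap closes where cos(E) = +/- 1, i.e. E = 0 or E = pi
gap = np.abs(np.abs(cos_E(k, theta, phi)) - 1.0)
print("near-gapless grid points:", np.count_nonzero(gap < 1e-6))
```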
### Split-Step Quantum Walk (SSQW)

The third DTQW protocol consists of two consecutive spin-dependent translations \(T\) and rotations \(R\), such that the unitary step becomes \(U(\theta_{1},\theta_{2})=TR(\theta_{1})TR(\theta_{2})\), as described in detail in [12]. The so-called "Split-Step" Quantum Walk (SSQW) has been shown to possess a non-trivial topological landscape, characterized by topological sectors with different topological numbers, such as the winding number \(W=0,1\). The dispersion relation for the split-step quantum walk results in [12]: \[\cos(E_{\theta_{1},\theta_{2}}(k))=\cos(k)\cos(\theta_{1})\cos(\theta_{2})-\sin(\theta_{1})\sin(\theta_{2}).\] The 3D-norm for decomposing the quantum walk Hamiltonian of the system in terms of Pauli matrices, \(H_{\rm QW}=E(k)\vec{n}\cdot\vec{\sigma}\), becomes [9]: \[\begin{array}{rcl}n^{1}_{\theta_{1},\theta_{2}}(k)&=&\frac{\sin(k)\sin(\theta_{1})\cos(\theta_{2})}{\sin(E_{\theta_{1},\theta_{2}}(k))}\\ n^{2}_{\theta_{1},\theta_{2}}(k)&=&\frac{\cos(k)\sin(\theta_{1})\cos(\theta_{2})+\sin(\theta_{2})\cos(\theta_{1})}{\sin(E_{\theta_{1},\theta_{2}}(k))}\\ n^{3}_{\theta_{1},\theta_{2}}(k)&=&\frac{-\sin(k)\cos(\theta_{2})\cos(\theta_{1})}{\sin(E_{\theta_{1},\theta_{2}}(k))}.\end{array} \tag{7}\] The dispersion relation and topological landscape of the split-step quantum walk were analysed in detail in [12]. Figure 1 displays 3D energy dispersion relations as a function of the quasi-momentum (\(k\)) and the angular parameter (\(\theta\)) within the first Brillouin zone, for (a) the Hadamard Quantum Walk (HQW), (b) the Non-Commuting Rotations Quantum Walk (NCRQW) considering (\(\phi=0\)), and (c) the Split Step Quantum Walk (SSQW) considering (\(\theta_{2}=0\)). Figure 2 displays 2D energy dispersion relations as a function of the quasi-momentum (\(k\)), for different values of the system parameters. The HQW, NCRQW and SSQW are represented by red, blue and magenta curves, respectively. Dispersion relations are shown for (a) HQW with \(\theta=\pi/4\), NCRQW with (\(\phi=\pi/4,\theta=0\)), SSQW with (\(\theta_{1}=\pi/4,\theta_{2}=\pi/4\)); (b) HQW with \(\theta=\pi/4\), NCRQW with (\(\phi=\pi/4,\theta=\pi/2\)), SSQW with (\(\theta_{1}=\theta_{2}=\pi/4\)); (c) HQW with \(\theta=\pi/4\), NCRQW with (\(\phi=\theta=\pi/4\)), SSQW with (\(\theta_{1}=\theta_{2}=\pi/4\)); (d) HQW with \(\theta=\pi/4\), NCRQW with (\(\phi=\pi/4,\theta=0\)), SSQW with (\(\theta_{1}=\pi/4,\theta_{2}=2\pi/5\)); (e) HQW with \(\theta=\pi/4\), NCRQW with (\(\phi=\pi/4,\theta=0\)), SSQW with (\(\theta_{1}=\pi/4,\theta_{2}=\pi/5\)); and (f) HQW with \(\theta=\pi/4\), NCRQW with (\(\phi=\pi/4,\theta=0\)), SSQW with (\(\theta_{1}=\pi/4,\theta_{2}=\pi/8\)). Linear dispersions at gapless Dirac points are apparent.

## III Zak phase calculation

We will now give expressions for the 2D Zak phase in the different scenarios presented in the previous Section. These scenarios are cast in terms of the following general Hamiltonian, in either the x-dimension or the y-dimension, which, in turn, is specified by the wave-number (\(k_{x},k_{y}\)): \[H\sim n_{1}\sigma_{1}+n_{2}\sigma_{2}+n_{3}\sigma_{3}. \tag{8}\] The Hamiltonians to be described differ by a multiplying scalar factor corresponding to the quasi-energy (\(E(k)\)), and by the actual expression of the 3D-norm \(n_{i}\) (\(i=1,2,3\)). But since the Bloch eigenvectors are the only quantities of interest for the present problem, the overall constants of this Hamiltonian can be safely ignored.
Now, our generic Hamiltonian is given by the matrix \[H=\left(\begin{array}{cc}n_{3}&n_{1}-in_{2}\\ n_{1}+in_{2}&-n_{3}\end{array}\right), \tag{9}\] and has the following eigenvalues \[\lambda=\pm\sqrt{n_{1}^{2}+n_{2}^{2}+n_{3}^{2}}. \tag{10}\] The normalized eigenvectors, also called the Bloch eigenvectors, in either the x-direction or the y-direction, then result in \[|u_{\pm}\rangle=\left(\begin{array}{c}\frac{n_{1}+in_{2}}{\sqrt{2n_{1}^{2}+2n_{2}^{2}+2n_{3}^{2}\mp 2n_{3}\sqrt{n_{1}^{2}+n_{2}^{2}+n_{3}^{2}}}}\\ \frac{n_{3}\mp\sqrt{n_{1}^{2}+n_{2}^{2}+n_{3}^{2}}}{\sqrt{2n_{1}^{2}+2n_{2}^{2}+2n_{3}^{2}\mp 2n_{3}\sqrt{n_{1}^{2}+n_{2}^{2}+n_{3}^{2}}}}\end{array}\right), \tag{11}\] where the given direction is specified by the wave-number \(k_{x,y}\). This expression readily confirms that the Berry curvature (\(F\)) is always equal to zero, because the Bloch eigenvectors do not depend on orthogonal wavevectors, meaning \(\partial_{k_{x}}|u_{y}\rangle=\partial_{k_{y}}|u_{x}\rangle=0\), as anticipated. Note that the rescaling \(n_{i}\rightarrow\lambda n_{i}\) does not modify the result, as expected: two Hamiltonians which differ by a multiplicative constant have the same eigenvectors. Thus, the 1D Zak phase for the problem to be considered is [7]: \[Z_{x(y)}=i\int_{-\pi/2}^{\pi/2}(\langle u_{+}|\partial_{k_{x(y)}}u_{+}\rangle+\langle u_{-}|\partial_{k_{x(y)}}u_{-}\rangle)dk.\] As anticipated in the introduction, the 2D Zak phase can be extended in the form \({\bf Z}=(Z_{x},Z_{y})\) (Eq. 1.2) for systems which preserve TRS and SIS. Moreover, for the particular photonic DTQW implementation we consider here, it is possible to break TRS. Namely, by switching the Bloch eigenvector arguments \(\phi(k_{x})=-\phi(k_{y})\) characterizing the quantum walk in the x-direction or in the y-direction, it is possible to break time-reversal symmetry (\(TRS=-1\)), obtaining (\(Z_{x}=-Z_{y}\)), as explained in detail in Section IV. We will now apply these concepts to some specific examples.

Figure 3: 2D Zak phase landscape (\(Z_{x},Z_{y}\)) for the Hadamard Quantum Walk. (a) The blue curve corresponds to the Zak phase in the x-direction (\(Z_{x}\)) and the purple curve corresponds to the Zak phase in the y-direction (\(Z_{y}\)). By switching the Bloch vector arguments \(\phi(k_{x})=-\phi(k_{y})\) characterizing the quantum walk in the x-direction and the quantum walk in the y-direction it is possible to break time-reversal symmetry (\(TRS=-1\)), obtaining (\(Z_{x}=-Z_{y}\)). (b) Contour plot of \(Z_{x}\) and \(Z_{y}\).

### Zak phase for SSQW

We first consider the Split-Step Quantum Walk. This corresponds to a quantum walk with unitary step given by \(U(\theta_{1},\theta_{2})=TR(\theta_{1})TR(\theta_{2})\), as proposed in [12]. Note that the rotations are performed around the y-direction; therefore, in this case the angular labels \((1,2)\) do not correspond to the cartesian axes [12]. We consider the particular case \(n_{3}=0\). By taking one of the angle parameters such that \(n_{3}=0\), it follows that the Bloch eigenvectors of the Hamiltonian are versors, of the form [33]: \[|u_{\pm}\rangle=\frac{1}{\sqrt{2}}\left(\begin{array}{c}e^{i\phi(k)}\\ \mp 1\end{array}\right),\qquad\tan\phi(k)=\frac{n_{2}}{n_{1}}. \tag{3.12}\] There are two choices for \(n_{3}=0\), which are \(\theta_{1}=0\) or \(\theta_{2}=0\).
In both cases the Zak phase results in [32; 33]: \[Z=Z_{+}+Z_{-}=i\int_{-\pi/2}^{\pi/2}dk\,\langle u_{+}|\partial_{k}u_{+}\rangle \tag{3.13}\] \[+i\int_{-\pi/2}^{\pi/2}dk\,\langle u_{-}|\partial_{k}u_{-}\rangle=\phi(-\pi/2)-\phi(\pi/2), \tag{3.14}\] from where, evaluating \(\phi(k)=\arctan(n_{2}/n_{1})\) at \(k=\mp\pi/2\) with the norms of Eq. (7), it follows that [32; 33]: \[Z=-2\arctan\left[\frac{\tan(\theta_{2})}{\tan(\theta_{1})}\right]. \tag{3.15}\]

Figure 5: 2D Zak phase landscape (\(Z_{x},Z_{y}\)) for the Split-Step Quantum Walk (SSQW). (a) Zak phase in the x-direction (\(Z_{x}\)) for system parameters \(\theta_{1}\in[-\pi,\pi]\) and \(\theta_{2}\in[-\pi,\pi]\); (b) Zak phase in the y-direction (\(Z_{y}\)) for system parameters \(\theta_{1}\in[-\pi,\pi]\) and \(\theta_{2}\in[-\pi,\pi]\). By switching the Bloch vector arguments \(\phi(k_{x})=-\phi(k_{y})\) characterizing the quantum walk in the x-direction and the quantum walk in the y-direction it is possible to break time-reversal symmetry (\(TRS=-1\)), obtaining (\(Z_{x}=-Z_{y}\)). Insets display density plots of the 3D Zak phase.

### Zak Phase for NCRQW

Now we proceed to calculate the Zak phase for the DTQW with non-commuting rotations. The unitary step, as described in the introduction, results in: \[U(\theta,\phi)=TR_{1}(\phi)R_{2}(\theta).\] The norms \(n_{i}\) are of the following form \[n_{1}=-\cos(k)a+\sin(k)b,\qquad n_{2}=\cos(k)b+\sin(k)a,\] \[n_{3}=\cos(k)c-\sin(k)d, \tag{3.16}\] with \[a=\sin(\phi)\cos(\theta),\qquad b=\cos(\phi)\sin(\theta), \tag{3.17}\] \[c=\sin(\phi)\sin(\theta),\qquad d=\cos(\phi)\cos(\theta),\] the angular functions defined above. The numerator \(C_{1}\) is given by \[C_{1}=n_{1}+in_{2}=-\exp(-ik)(a-ib), \tag{3.18}\] in addition, \(C_{2}\) is \[C_{2}=n_{3}\mp\sqrt{n_{1}^{2}+n_{2}^{2}+n_{3}^{2}}=\cos(k)c-\sin(k)d\mp\sqrt{a^{2}+b^{2}+c^{2}\cos^{2}(k)+d^{2}\sin^{2}(k)-\sin(2k)cd}. \tag{3.19}\] On the other hand, the denominator \(D\) reduces to \[D_{\pm}=\sqrt{2n_{1}^{2}+2n_{2}^{2}+2n_{3}^{2}\mp 2n_{3}\sqrt{n_{1}^{2}+n_{2}^{2}+n_{3}^{2}}}\] \[=\sqrt{2}\bigg{(}a^{2}+b^{2}+c^{2}\cos^{2}(k)+d^{2}\sin^{2}(k)-\sin(2k)cd\mp(\cos(k)c-\sin(k)d)\sqrt{a^{2}+b^{2}+c^{2}\cos^{2}(k)+d^{2}\sin^{2}(k)-\sin(2k)cd}\bigg{)}^{\frac{1}{2}}. \tag{3.20}\] Considering these expressions, the eigenvectors can be written as \[|u_{\pm}\rangle=\left(\begin{array}{c}\frac{C_{1}}{D_{\pm}}\\ \frac{C_{2}}{D_{\pm}}\end{array}\right),\qquad\langle u_{\pm}|=\bigg{(}\frac{C_{1}^{*}}{D_{\pm}},\ \frac{C_{2}}{D_{\pm}}\bigg{)}. \tag{3.21}\] Therefore, the calculation of the Zak phase, \[Z=Z_{+}+Z_{-}=i\int_{-\pi/2}^{\pi/2}dk\,\langle u_{+}|\partial_{k}u_{+}\rangle+i\int_{-\pi/2}^{\pi/2}dk\,\langle u_{-}|\partial_{k}u_{-}\rangle, \tag{3.22}\] requires the following quantities \[Z_{\pm}=i\int\bigg{(}\frac{C_{1}^{*}}{D_{\pm}^{2}}\partial_{k}C_{1}+\frac{C_{2}}{D_{\pm}^{2}}\partial_{k}C_{2}-\frac{(|C_{1}|^{2}+|C_{2}|^{2})}{D_{\pm}^{3}}\partial_{k}D_{\pm}\bigg{)}dk. \tag{3.23}\] This expression can be simplified further [32], resulting in \[Z_{\pm}=i\int\frac{C_{1}^{*}\partial_{k}C_{1}}{D_{\pm}^{2}}dk. \tag{3.24}\] Taking into account (3.18) and (3.20), the phases are expressed as \[Z_{\pm}=\int\frac{|C_{1}|^{2}}{D_{\pm}^{2}}dk=\int_{0}^{\pi}\frac{(a^{2}+b^{2})\,dk}{D_{\pm}^{2}}. \tag{3.25}\] We note that in this example the case \(n_{3}=0\) is completely different from the previous case, as it returns a trivial Zak phase \(Z=\pi\), since the \(k\)-dependence vanishes. In addition, as opposed to the SSQW, where analytic expressions for the Zak phase could be derived, for this system the Zak phase landscape can only be obtained via numerical integration. In particular, at the gapless Dirac points the Zak phase is not defined. Therefore, such singular points can be regarded as topological defects of dimension zero [32].
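Since the NCRQW Zak phase landscape must be obtained numerically, a minimal quadrature sketch of Eq. (3.25) is given below; the integration routine and the test point are illustrative choices, not the code used for the figures.

```python
import numpy as np
from scipy.integrate import quad

def zak_ncrqw(theta, phi):
    """Total Zak phase Z = Z_+ + Z_- of Eq. (3.25), integrated numerically."""
    a = np.sin(phi) * np.cos(theta)
    b = np.cos(phi) * np.sin(theta)
    c = np.sin(phi) * np.sin(theta)
    d = np.cos(phi) * np.cos(theta)

    def integrand(k, s):                     # s = +1 for Z_+, s = -1 for Z_-
        n3 = np.cos(k) * c - np.sin(k) * d
        lam = np.sqrt(a**2 + b**2 + n3**2)   # eigenvalue magnitude, Eq. (10)
        D2 = 2.0 * (a**2 + b**2 + n3**2) - 2.0 * s * n3 * lam  # D_pm^2, Eq. (3.20)
        return (a**2 + b**2) / D2

    return sum(quad(integrand, 0.0, np.pi, args=(s,))[0] for s in (+1, -1))

# Sanity check: for n_3 = 0 the k-dependence vanishes and Z = pi (trivial case)
print(zak_ncrqw(np.pi / 2, 0.0))  # ~3.14159
```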
### Zak phase for HQW

Expressions for the Zak phase in the Hadamard Quantum Walk can easily be obtained by noting that the HQW is equivalent to the DTQW with non-commuting rotations for the particular case \(\phi=0\). Therefore, the final expression for the Zak phase in the HQW results in: \[Z_{\pm}=\int\frac{|C_{1}|^{2}}{D_{\pm}^{2}}dk=\int_{0}^{\pi}\frac{(a^{2}+b^{2})\,dk}{D_{\pm}^{2}}, \tag{3.26}\] with \(C_{1}=n_{1}+in_{2}\), \(C_{2}=n_{3}\mp\sqrt{n_{1}^{2}+n_{2}^{2}+n_{3}^{2}}\), \(D_{\pm}=\sqrt{|C_{1}|^{2}+|C_{2}|^{2}}\), \(a=0\), \(b=\sin(\theta)\), \(c=0\), and \(d=\cos(\theta)\). Figure 3 depicts the 2D Zak phase landscape \((Z_{x},Z_{y})\) for the Hadamard Quantum Walk. In Figure 3 (a), the blue curve corresponds to the Zak phase in the x-direction \((Z_{x})\) and the purple curve corresponds to the Zak phase in the y-direction \((Z_{y})\). By switching the Bloch vector arguments \(\phi(k_{x})=-\phi(k_{y})\) characterizing the quantum walk in the x-direction and the quantum walk in the y-direction, it is possible to break time-reversal symmetry \((TRS=-1)\), obtaining \((Z_{x}=-Z_{y})\). Figure 3 (b) shows a contour plot of \(Z_{x}\) and \(Z_{y}\). Figure 4 depicts the 2D Zak phase landscape \((Z_{x},Z_{y})\) for the Non-Commuting Rotations Quantum Walk (NCRQW): (a) the Zak phase in the x-direction \((Z_{x})\) for system parameters \(\theta\in[-\pi,\pi]\) and \(\phi\in[-\pi,\pi]\); (b) the Zak phase in the y-direction \((Z_{y})\) for system parameters \(\theta\in[-\pi,\pi]\) and \(\phi\in[-\pi,\pi]\). By switching the Bloch eigenvector arguments \(\phi(k_{x})=-\phi(k_{y})\) characterizing the quantum walk in the x-direction and the quantum walk in the y-direction, it is possible to break time-reversal symmetry \((TRS=-1)\), obtaining \((Z_{x}=-Z_{y})\). The insets display density plots of the 3D Zak phase. Figure 5 depicts the 2D Zak phase landscape \((Z_{x},Z_{y})\) for the Split-Step Quantum Walk (SSQW): (a) the Zak phase in the x-direction \((Z_{x})\) for system parameters \(\theta_{1}\in[-\pi,\pi]\) and \(\theta_{2}\in[-\pi,\pi]\); (b) the Zak phase in the y-direction \((Z_{y})\) for system parameters \(\theta_{1}\in[-\pi,\pi]\) and \(\theta_{2}\in[-\pi,\pi]\). By switching the Bloch eigenvector arguments \(\phi(k_{x})=-\phi(k_{y})\) characterizing the quantum walk in the x-direction and the quantum walk in the y-direction, it is possible to break time-reversal symmetry (\(TRS=-1\)), obtaining (\(Z_{x}=-Z_{y}\)). The insets display density plots of the 3D Zak phase.

## IV Time-reversal symmetry (TRS)

Now we propose a simple method to break time-reversal symmetry (TRS), enabling a 2D Zak phase of the form \(Z_{y}=-Z_{x}\). At first, one could be tempted to change the sign of the Zak phase by imprinting a dynamical relative phase factor between the x- and y-evolutions. Nevertheless, it should be noted that a dynamical phase would not modify the acquired geometrical phase [31]. In order to modify the relative sign of the Zak phase, it is required to modify the relative sign of the argument (\(\phi(k_{x})=-\phi(k_{y})\)) in the Bloch eigenvectors (Eq. 3.12) for each dimension \((x,y)\). Let us recall that the argument of the Bloch vector in each dimension (\(j=x,y\)) takes the form: \[\phi(k_{j})=\arctan\left[\frac{n_{2}(k_{j})}{n_{1}(k_{j})}\right]. \tag{4.27}\] Noting that \(\arctan[x]\) is an odd function of \(x\), it is sufficient to modify the sign of the numerator, or the sign of the denominator, in order to change the sign of \(\phi(k_{j})\).
### TRS Breaking in the Split-Step Quantum Walk (SSQW)

For the case of the SSQW, the expressions for the norm components are given by (Eq. 2.7): \[n_{1}=\frac{\sin(k)\sin(\theta_{1})\cos(\theta_{2})}{\sin(E(k))},\] and \[n_{2}=\frac{\cos(k)\sin(\theta_{1})\cos(\theta_{2})+\sin(\theta_{2})\cos(\theta_{1})}{\sin(E(k))}.\] Note that the energy dispersion \(E(k)\) cancels out of the expression for the argument \(\phi(k)\). It is straightforward to modify the sign of \(n_{1}\) simply by inverting the sign of \(\theta_{1}\), in the manner \(\theta_{1}^{x}=-\theta_{1}^{y}\), due to the fact that \(\sin(\theta_{1})\) is an odd function. Note that since the sign of \(\theta_{1}^{y}\) is inverted for _all_ values of \(\theta_{1}^{x}\), this is not equivalent to a conditional operation, where the state of the coin in the y-direction is modified conditioned on the state of the coin in the x-direction. Therefore, the protocol remains separable in the (x,y) dimensions. Moreover, in order to ensure a relative change of sign in \(\phi(k)\), \(n_{2}\) should not change sign at the same time as \(n_{1}\). This restricts the values of \(\theta_{2}\) and \(k\) that can be used to switch the sign of the argument \(\phi(k)\), resulting in the following condition: \[\frac{\tan(\theta_{2})}{\tan(\theta_{1})}>\cos(k). \tag{4.28}\] Similar conditions can be derived for all three different DTQW protocols. As an illustrative example, plots of the allowed region of values in parameter space (\(\theta_{2}\) and \(k\)) which enable breaking TRS in the SSQW, for \(\theta_{1}=\pi/8\) and \(\theta_{1}=\pi/4\), are shown in Figure 6.

Figure 6: Numerical simulation displaying the allowed region (blue) of values to break time-reversal symmetry (TRS) in parameter space, for (a) \(\theta_{1}=\pi/8\), (b) \(\theta_{1}=\pi/4\). Insets depict 3D plots of \(n_{1}\) confirming its sign inversion under \(\theta_{1}\rightarrow-\theta_{1}\).

As explained in detail in the following Section, independently switching the sign of \(\phi(k)\) in either the x- or y-dimension could be accomplished by using fast-switching Electro-Optic Modulators (EOMs) (Section V).
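The condition of Eq. (4.28) is easy to scan numerically; the short sketch below reproduces the type of allowed-region map shown in Figure 6, with an illustrative grid size.

```python
import numpy as np

def trs_allowed(theta1, n=400):
    """Boolean map of the TRS-breaking condition tan(theta2)/tan(theta1) > cos(k)."""
    theta2, k = np.meshgrid(np.linspace(-np.pi, np.pi, n),
                            np.linspace(-np.pi, np.pi, n))
    return np.tan(theta2) / np.tan(theta1) > np.cos(k)

for theta1 in (np.pi / 8, np.pi / 4):  # the two panels of Fig. 6
    frac = trs_allowed(theta1).mean()
    print(f"theta1 = {theta1:.3f}: allowed fraction of parameter space = {frac:.2f}")
```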
## V Proposed experimental scheme

The proposed experimental scheme is based on the novel experiment reported in Ref. [35] for the implementation of photonic time-multiplexed DTQWs, as depicted in Fig. 7. In Fig. 7 (a), the 2D quantum walk lattice is temporally encoded in time-multiplexed pulses with different time delays (\(\Delta t_{x}\) and \(\Delta t_{y}\)), implemented by Single Mode Fibre (SMF) loops of adjustable lengths. The single-photon source is an attenuated pulsed laser, typically at a wavelength of 800 nm, with a pulse width of \(\approx\) 90 ps and a repetition rate of 110 kHz [35]. Photons are coupled into the setup through a low-reflectivity Beam Sampler (BS), and are prepared in the initial polarization state \(|\psi_{\text{ini}}\rangle\) using a fast Electro-Optic Modulator (EOM), corresponding to the 2D DTQW lattice initial state \(|\psi_{\text{ini}}\rangle=|x,y\rangle=|0,0\rangle\). Upon propagation, the photonic wavepackets are subject to a first step of the 1D DTQW along the x-direction. For simplicity, in Fig. 7 we consider the implementation of the Hadamard Quantum Walk (HQW) in both the x- and y-directions, consisting of a rotation by a HWP (coin operation) and a split by a polarizing beam splitter (PBS), although other DTQW protocols could also be implemented, as illustrated in Fig. 7 (c). In order to adjust the rotation operators independently at each point, as required for the implementation of separable coin operations, a fast-switching Electro-Optic Modulator (EOM) can be used [35]. After the first step in the x-direction, photons are routed through single-mode fibers (SMFs) of lengths (\(L_{1}\) and \(L_{2}\)), implementing a temporal step in the y-direction. The SMF length difference \(|L_{1}-L_{2}|\) determines the delay \(\Delta t_{x}\), while the overall delay introduced by the SMFs (\(L_{1,2}\)) determines \(\Delta t_{y}\). Additional HWPs and a second PBS perform a 1D DTQW step in the y-direction based on the same principle. The orientation of the HWPs at each step determines the probability for the wave-packet to be translated to the \(x(y)-1\) or \(x(y)+1\) lattice positions. Upon propagation, the photons are detected by polarization-resolving detection of their arrival time via four avalanche photodiodes (APDs), which enables mapping out the probability distribution for each lattice position (i.e., the photon statistics). Including losses and detection efficiency, the probability of a photon continuing after one step is typically around 50% (without the EOM) [35]. Fig. 7 (b) illustrates the projection of the 2D spatial lattice onto a 1D temporally encoded pulse chain, for step one. Each step consists of a shift in both the x-direction, corresponding to a time difference of \(\Delta t_{x}\), and the y-direction, corresponding to a time difference of \(\Delta t_{y}\). The time delays are, in turn, adjusted by tuning the lengths of the SMFs. The time delay \(\Delta t_{x}\) is adjusted by tuning the SMF length difference \(|L_{2}-L_{1}|\), while \(\Delta t_{y}\) is determined by the overall length \(L_{1,2}\). Choosing \(\Delta t_{y}\gg\Delta t_{x}\), it is possible to generate time bins characterized by distinctive arrival times. Fig. 7 (c) depicts three different DTQW protocols that can be implemented using a similar experimental setup, namely: (1) the Hadamard Quantum Walk (HQW), (2) the Non-Commuting Rotations Quantum Walk (NCRQW), and (3) the Split-Step Quantum Walk (SSQW) (further details are in the text). Note that the proposed experimental scheme only detects probability distributions encoded in arrival times for time bins in 2D DTQWs; it does not retrieve phase information, which would be required to measure the Zak phase. Nevertheless, a simple experiment to measure Zak phase differences for a closed path can be envisioned, based on the experimental scheme used to reconstruct holonomic phases reported in [44]. Moreover, it is well known that the Zak phase is not a topological invariant itself, as the Berry connection is not gauge invariant under gauge transformations, in close analogy to the EM vector potential. Nevertheless, we note that the definition of the Bloch states via Eq. (4) does not specify the overall phase of \(|u_{k}\rangle\to e^{i\chi(k)}|u_{k}\rangle\), so one can freely choose such a phase. The single-valuedness of \(\chi(k)\) at the beginning and the end of the path imposes that the Zak phase for a given closed path is gauge invariant modulo \(2\pi\) [29].
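The time-bin encoding of Fig. 7 (b) can be summarized by a simple mapping from lattice position to arrival time; the delay values below are assumptions chosen only to illustrate the requirement \(\Delta t_{y}\gg\Delta t_{x}\).

```python
# Illustrative delays: Delta_t_y >> Delta_t_x so that time bins do not overlap.
DT_X = 5e-9   # s, x-step delay set by the SMF length difference |L2 - L1| (assumed)
DT_Y = 5e-7   # s, y-step delay set by the overall SMF length L_{1,2} (assumed)
N_STEPS = 20  # number of walk steps considered (assumed)

def arrival_time(x, y, t0=0.0):
    """Arrival time encoding the 2D lattice position (x, y) in a 1D pulse chain."""
    return t0 + x * DT_X + y * DT_Y

# Distinct bins require the full x-excursion (from -N to +N) to fit in one y-bin:
assert 2 * N_STEPS * DT_X < DT_Y, "time bins would overlap"
print(f"site (3, 2) arrives at {arrival_time(3, 2) * 1e9:.1f} ns")
```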
## VI Discussion

We have reported numerical results for the calculation of the 2D Zak phase landscape in Discrete-Time Quantum Walk (DTQW) protocols, which can be readily implemented in time-multiplexed photonic quantum walks [35]. In particular, we investigated quantum walks which are driven by separable time-evolution operators of the form \(U_{x}\otimes U_{y}\), which result in independent Bloch eigenvectors for each dimension \((x,y)\). This guarantees that the Berry curvature of the system (\(F=\nabla\times A\)) is always zero, where \(A\) is the Berry connection whose integral over the first Brillouin zone yields the Zak phase. Nevertheless, more complex topological scenarios, with non-vanishing Berry curvature, can be explored by considering non-separable coin operations, via the implementation of controlled gates. Controlled gates condition the transformation of one coin state on the state of the other coin, thus introducing quantum correlations between the two dimensions. Because of the induced quantum correlations, it is possible to obtain a non-trivial 2D evolution resulting in an inseparable final state, and a different acquired Zak phase for the system altogether, enabling the analysis of the impact of quantum correlations and entanglement on topological structures, among other complex phenomena. Such exciting scenarios would enable the investigation of the connection between entanglement, Zak phase, Berry curvature, and Berry connection, and will be explored in upcoming works.

Figure 7: Proposed experimental scheme for the implementation of 2D time-multiplexed photonic Discrete-Time Quantum Walks (DTQWs) based on [35]. (a) The 2D quantum walk lattice is temporally encoded in time-multiplexed pulses with different time delays (\(\Delta t_{x}\) and \(\Delta t_{y}\)), implemented by Single Mode Fiber (SMF) loops of adjustable lengths. Both the initial state (\(|\psi_{\rm ini}\rangle\)) and the quantum walk parameters (\(\theta_{1},\theta_{2}\)) or (\(\theta,\phi\)) can be independently manipulated using fast EOMs [35]. (b) Projection of the 2D spatial lattice onto a 1D temporally encoded pulse chain, for step one. The time delay \(\Delta t_{x}\) is adjusted by tuning the SMF length difference \(|L_{2}-L_{1}|\), while \(\Delta t_{y}\) is determined by the length \(L_{1,2}\). Choosing \(\Delta t_{y}\gg\Delta t_{x}\) it is possible to generate time bins characterized by distinctive arrival times. (c) Three different DTQW protocols that can be implemented using a similar experimental setup: (I) Hadamard Quantum Walk (HQW), (II) Non-Commuting Rotations Quantum Walk (NCRQW), and (III) Split-Step Quantum Walk (SSQW). Further details are in the text.

Our results bear a close analogy to the Aharonov-Bohm effect, which essentially confirms that in a field-free (\(F=0\)) multiply-connected region of space, the physical properties of the system depend on vector potentials (\(A\)), due to the fact that the Schrödinger equation is obtained from a canonical formalism, which cannot be expressed in terms of fields alone [30]. In addition, our proposed protocols for the exploration of the 2D Zak phase landscape can be generalized to higher dimensions, by the introduction of orbital angular momentum, instead of polarization, as the coin degree of freedom, and by adding extra temporal loops to encode the extra dimensions. To a large extent, this represents a fully unexplored avenue of research, enabling quantum simulation applications with multiple walkers and nonlinear interactions, in high dimension. It may be possible to study the effects of higher-dimensional graph percolations, localization effects, or the use of quantum network topologies in conjunction with single-photon or multi-photon states.

###### Acknowledgements.

The Author is grateful to Alberto Grunbaum, Janos Asboth, and Osvaldo Santillan for many insightful discussions. G.P. acknowledges financial support from the PICT2015-0710 StartUp grant and the Raices Programme.
2310.11977
Constraints on dark energy from TDCOSMO & SLACS lenses
Problems with the cosmological constant model of dark energy motivate the investigation of alternative scenarios. I make the first measurement of the dark energy equation of state using the hierarchical strong lensing time delay likelihood provided by TDCOSMO. I find that the combination of seven TDCOSMO lenses and 33 SLACS lenses is only able to provide a weak constraint on the dark energy equation of state, $w < -1.75$ at 68% confidence, which nevertheless implies the presence of a phantom dark energy component. When the strong lensing time delay data is combined with a collection of cosmic microwave background, baryon acoustic oscillation and Type Ia supernova data, I find that the equation of state is $w = -1.025\pm 0.029$.
Natalie B. Hogg
2023-10-18T14:02:58Z
http://arxiv.org/abs/2310.11977v2
# A measurement of the dark energy equation of state with 40 strong lenses

###### Abstract

Problems with the cosmological constant model of dark energy motivate the investigation of alternative scenarios. I make the first measurement of the dark energy equation of state using the hierarchical strong lensing time delay likelihood provided by TDCOSMO. I find that the combination of seven TDCOSMO lenses and 33 SLACS lenses is only able to provide an upper bound on the dark energy equation of state, \(w<-1.75\), which nevertheless implies the presence of a phantom dark energy component. When the strong lensing time delay data is combined with a collection of cosmic microwave background, baryon acoustic oscillation and Type Ia supernova data, I find that the equation of state remains consistent with a cosmological constant.

keywords: dark energy - gravitational lensing: strong

## 1 Introduction

The better part of three decades has passed since one of the most profound discoveries in modern cosmology was made: that the expansion rate of the Universe is currently accelerating (Riess et al., 1998; Perlmutter et al., 1999). In the standard model of cosmology, this acceleration is attributed to the cosmological constant \(\Lambda\) acting as a dark energy, with an equation of state \(w=P/\rho=-1\), where \(P\) is the pressure and \(\rho\) the energy density of the dark energy. However, the true nature of dark energy, and in particular whether its energy density is actually constant in time, remains a subject of debate (Escamilla et al., 2023). Various dark energy models have been proposed as alternatives to the cosmological constant, most of which incorporate a dark energy whose density changes over time (Copeland et al., 2006). A model-agnostic way of investigating this dynamical kind of dark energy is to use a parameterisation of the equation of state which allows for deviations from \(w=-1\). A popular choice is the Chevallier-Polarski-Linder (CPL) parameterisation (Chevallier and Polarski, 2001; Linder, 2003), \[w(a)=w_{0}+(1-a)\,w_{a}, \tag{1}\] a first-order Taylor expansion in the scale factor \(a\). The parameters \(w_{0}\) and \(w_{a}\) can thus be constrained with data; deviations from \(w_{0}=-1\) and \(w_{a}=0\) would indicate that the energy density of dark energy is evolving in time. Myriad observational data have been used to place constraints on the equation of state of dark energy. For example, the Planck satellite's measurements of the cosmic microwave background (CMB) temperature anisotropies, E-mode polarisation and CMB lensing, along with baryon acoustic oscillations (BAO) measured by the 6dF Galaxy Survey and SDSS (Beutler et al., 2011; Ross et al., 2015; Alam et al., 2017) and the Pantheon catalogue of Type Ia supernovae (SNIa) (Scolnic et al., 2018), produced a measurement of \(w=-1.028\pm 0.031\) (Aghanim et al., 2020b). A stated goal of Stage IV surveys such as Euclid is to measure the dark energy equation of state with percent-level precision (Amendola et al., 2018).
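As a concrete illustration of Eq. (1), the sketch below evaluates \(w(a)\) and the corresponding dimensionless Hubble rate in a flat \(w_{0}w_{a}\)CDM cosmology, using the standard analytic expression for the CPL dark energy density; the parameter values are illustrative.

```python
import numpy as np

def w_cpl(a, w0=-1.0, wa=0.0):
    """CPL equation of state, w(a) = w0 + (1 - a) wa, Eq. (1)."""
    return w0 + (1.0 - a) * wa

def E_of_z(z, omega_m=0.3, w0=-1.0, wa=0.0):
    """E(z) = H(z)/H0 in flat w0waCDM, with the analytic CPL density evolution
    rho_DE(a)/rho_DE,0 = a^(-3(1 + w0 + wa)) exp(-3 wa (1 - a))."""
    a = 1.0 / (1.0 + z)
    rho_de = a**(-3.0 * (1.0 + w0 + wa)) * np.exp(-3.0 * wa * (1.0 - a))
    return np.sqrt(omega_m * (1.0 + z)**3 + (1.0 - omega_m) * rho_de)

print(E_of_z(1.0))  # reduces to the LCDM value for w0 = -1, wa = 0
```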
From a theoretical perspective, strong lensing time delays are a powerful probe of cosmology due to their ability to provide cosmological constraints at low redshift without reliance on the distance ladder. Strong lensing time delays are the measurement of the different arrival times of the multiple images of a gravitationally lensed source object (Refsdal, 1964). This time delay depends on the angular diameter distances between the objects involved and thus directly probes the expansion rate of the Universe, \(H_{0}\). Whilst there is no reliance on the distance ladder in strong lensing time delay measurements of \(H_{0}\), there can be a fairly significant dependence on how the mass profile of the lens galaxy is modelled (Sonnenfeld, 2018). This problem has been recognised since the studies of the very first strongly lensed quasar for which time delays were measured, Q0957+561 (Walsh et al., 1979; Falco et al., 1985). As an example of the effect of different lens modelling on the measurement of \(H_{0}\), we can compare the H0LiCOW measurement of \(H_{0}=73.3^{+1.7}_{-1.8}\,\mathrm{km\,s^{-1}\,Mpc^{-1}}\) (Wong et al., 2020), which comes from six strong lensing time delays and is a 2.4% precision measurement, with the TDCOSMO measurement, \(H_{0}=74.5^{+5.6}_{-6.1}\,\mathrm{km\,s^{-1}\,Mpc^{-1}}\) (Birrer et al., 2020), which comes from the original six H0LiCOW lenses plus one new lens and is a 9% precision measurement. The reduction in precision is mainly the result of relaxing certain assumptions in the H0LiCOW lens modelling. To combat this increase in uncertainty, the TDCOSMO team added 33 additional lenses from the SLACS catalogue in order to provide more information on the mass profiles of the TDCOSMO lenses. These data and their nuisance parameters were combined with the original seven-lens dataset in a hierarchical Bayesian manner, leading to a final measurement of \(H_{0}=67.4^{+4.1}_{-3.2}\,\mathrm{km\,s^{-1}\,Mpc^{-1}}\), which is in statistical agreement with both the H0LiCOW measurement and the measurement from the seven TDCOSMO lenses alone. The TDCOSMO \(H_{0}\) value has been used to constrain variations of the fine structure constant (Colaco et al., 2021), axion-photon couplings (Buen-Abad et al., 2022) and interacting dark energy models (Wang et al., 2022), but the full likelihood has not yet been used to obtain cosmological constraints beyond those on \(H_{0}\) and the matter density parameter \(\Omega_{\rm m}\). In this work, I present the first measurement of the dark energy equation of state using the full hierarchical TDCOSMO likelihood. Strong lensing time delays are particularly useful for this task as the constraints in the \(w-\Omega_{\rm m}\) plane are typically orthogonal to those coming from standard rulers, i.e. the CMB and BAO (Motta et al., 2021). I begin by reviewing strong lensing time delays and the construction of the hierarchical TDCOSMO likelihood. I explain my analysis method and then present my results and conclusions.

## 2 Strong lensing time delays

### Theory

Gravitational lensing is the phenomenon which arises due to the deflection of light by massive objects. The strong lensing regime may be defined as that in which multiple images of a single source are produced. This typically occurs on super-galactic scales, with lenses being galaxies and sources being distant and bright objects such as quasars. Light from the distant source is deflected by the lens galaxy. Different light paths have different lengths, leading to measurable delays between the arrival times of images.
This strong lensing time delay is given by \[t(\mathbf{\theta},\mathbf{\beta})=\frac{(1+z_{\rm od})}{c}\,\frac{D_{\rm od}D_{\rm os}}{D_{\rm ds}}\left[\frac{(\mathbf{\theta}-\mathbf{\beta})^{2}}{2}-\psi(\mathbf{\theta})\right], \tag{2}\] where \(z_{\rm od}\) is the redshift of the deflector; \(D_{\rm od}\), \(D_{\rm os}\) and \(D_{\rm ds}\) are the angular diameter distances between the observer and deflector, the observer and source, and the deflector and source, respectively; \(\mathbf{\theta}\) is the observed image position; \(\mathbf{\beta}\) is the unknown source position; and \(\psi(\mathbf{\theta})\) is the lensing potential, which carries the information about the mass density profile of the deflector. In a spatially flat Universe (\(\Omega_{k}=0\)), the time delay is inversely proportional to \(H_{0}\) via the angular diameter distances involved, \[D(z)=\frac{c}{H_{0}(1+z)}\int_{0}^{z}\frac{{\rm d}z^{\prime}}{E(z^{\prime})}, \tag{3}\] where \(E(z)\equiv H(z)/H_{0}\) is the dimensionless Hubble rate. The difficulty faced by all strong lensing time delay inference is that under any arbitrary linear re-scaling of the source position \(\mathbf{\beta}\to\lambda\mathbf{\beta}\), the image positions \(\mathbf{\theta}\) are preserved. The lens model is also accordingly transformed, \(\mathbf{\alpha}\to\lambda\mathbf{\alpha}+(1-\lambda)\mathbf{\theta}\), where \(\mathbf{\alpha}\) is the deflection angle. This is known as the internal mass-sheet transform or degeneracy (Falco et al., 1985). The only way that such a degeneracy can be broken and a measurement of \(H_{0}\) made is by obtaining direct knowledge either of the absolute source size or of the lensing potential itself. The former is quite obviously near-impossible, so in order to constrain \(H_{0}\) a choice must be made in how to model the deflector mass density profile.
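A minimal numerical sketch of the distances entering Eqs. (2) and (3) in a flat \(w\)CDM cosmology is given below; the redshifts and cosmological parameters are illustrative, and the combination computed is the standard time-delay distance.

```python
import numpy as np
from scipy.integrate import quad

C_KMS = 299792.458  # speed of light, km/s

def E_inv(z, omega_m, w):
    """1/E(z) for a flat wCDM cosmology."""
    return 1.0 / np.sqrt(omega_m * (1 + z)**3 + (1 - omega_m) * (1 + z)**(3 * (1 + w)))

def ang_diam_distance(z1, z2, H0=70.0, omega_m=0.3, w=-1.0):
    """Angular diameter distance between z1 < z2 in a flat universe (Mpc), Eq. (3)."""
    chi = quad(E_inv, z1, z2, args=(omega_m, w))[0]
    return C_KMS / H0 * chi / (1.0 + z2)

def time_delay_distance(z_d, z_s, **cosmo):
    """D_dt = (1 + z_d) D_od D_os / D_ds, the combination probed by time delays."""
    D_od = ang_diam_distance(0.0, z_d, **cosmo)
    D_os = ang_diam_distance(0.0, z_s, **cosmo)
    D_ds = ang_diam_distance(z_d, z_s, **cosmo)
    return (1.0 + z_d) * D_od * D_os / D_ds

print(time_delay_distance(0.5, 2.0))  # Mpc; note it scales as 1/H0
```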
### The TDCOSMO likelihood

Whilst the H0LiCOW team used analytic mass profiles for the deflectors to break the mass-sheet degeneracy, the TDCOSMO work used stellar kinematics data to provide information about the lensing potential. This led to the initial reduction in precision of the \(H_{0}\) measurement compared to the H0LiCOW result. The precision was increased again by the addition of further stellar kinematics data from a set of 33 SLACS lenses, specifically selected for their similarity to the original seven TDCOSMO lenses. Note that these additional lenses do not have time delay information. Furthermore, the combination of data was made under the assumption that the TDCOSMO and SLACS lenses were drawn from the same parent population. The complete likelihood describing this dataset was constructed hierarchically, meaning that a set of hyperparameters was defined which allowed all constraints related to the mass-sheet degeneracy to be inferred on a population level, whilst the lens and light model parameters, \(\xi_{\rm mass}\) and \(\xi_{\rm light}\), could be inferred on a lens-by-lens basis. Thus, all remaining uncertainty about the mass-sheet degeneracy is propagated to the level of the \(H_{0}\) inference. Following Birrer et al. (2020), the posterior distribution of the cosmological parameters of interest, \(\mathbf{\pi}\), given \(N\) sets of individual lens data \(\mathcal{D}_{i}\) and the model parameters \(\mathbf{\xi}\), is given by \[P(\mathbf{\pi}|\{\mathcal{D}_{i}\}_{N})\propto\mathcal{L}(\{\mathcal{D}_{i}\}_{N}|\mathbf{\pi})P(\mathbf{\pi})=\int\mathcal{L}(\{\mathcal{D}_{i}\}_{N}|\mathbf{\pi},\mathbf{\xi})P(\mathbf{\pi},\mathbf{\xi})\,\mathrm{d}\mathbf{\xi}=\int\prod_{i}^{N}\mathcal{L}(\mathcal{D}_{i}|\mathbf{\pi},\mathbf{\xi})P(\mathbf{\pi},\mathbf{\xi})\,\mathrm{d}\mathbf{\xi}. \tag{4}\] The nuisance parameter \(\mathbf{\xi}\) is divided into the mass and light model parameters, which are constrained at the level of each individual lens, and the set of mass-sheet degeneracy hyperparameters, which are constrained at the population level, \(\xi_{\rm pop}\). The hierarchical TDCOSMO likelihood is thus given by \[\mathcal{L}(\mathcal{D}_{i}|D,\xi_{\rm pop})=\int\mathcal{L}(\mathcal{D}_{i}|D,\xi_{\rm pop},\xi_{\rm mass},\xi_{\rm light})\times P(\xi_{\rm mass},\xi_{\rm light})\ \mathrm{d}\xi_{\rm mass}\ \mathrm{d}\xi_{\rm light}, \tag{5}\] where \(D\) is the set of angular diameter distances \(\{D_{\rm od},D_{\rm os},D_{\rm ds}\}\) from which all cosmological results are obtained. For a complete discussion of the construction of the hierarchical likelihood, including the details of the population hyperparameters, I refer the reader to Section 3 of Birrer et al. (2020). The final TDCOSMO measurement of \(H_{0}\) is thus the most precise measurement possible from strong lensing time delays which does not make any assumptions about the deflector mass profiles in order to artificially break the mass-sheet degeneracy. It is also the first strong lensing time delay measurement of \(H_{0}\) which used information from other datasets to improve the precision of the measurement. I will now discuss how I used this hierarchical likelihood to measure the dark energy equation of state.

## 3 Method

I wrote an external likelihood package for the cosmological modelling and sampling software Cobaya (Torrado and Lewis, 2021), so that the hierarchical TDCOSMO likelihood can be used to obtain constraints on cosmological model parameters in combination with any other cosmological likelihood and with any choice of Boltzmann code. This package is publicly available.1 Footnote 1: [https://github.com/nataliehogs/tdcosmo_ext](https://github.com/nataliehogs/tdcosmo_ext). Using the Markov chain Monte Carlo sampler provided by Cobaya, which is adapted from CosmoMC (Lewis and Bridle, 2002; Lewis, 2013) and uses a fast-dragging procedure to increase sampling speed (Neal, 2005), I obtained constraints on three cosmological models: \(\Lambda\)CDM, \(w\)CDM and \(w_{0}w_{a}\)CDM. The \(w\)CDM model allows for a dark energy with a constant equation of state that may differ from \(w=-1\); the \(w_{0}w_{a}\)CDM model allows for a dynamical dark energy. In a \(w\)CDM cosmology (extendable to \(w_{0}w_{a}\)CDM by the CPL parameterisation shown in Equation 1), and recalling that I keep \(\Omega_{k}=0\), the dimensionless Hubble rate is given by \[E(z)=\left[\Omega_{\rm m}(1+z)^{3}+\Omega_{\rm DE}(1+z)^{3(1+w)}\right]^{\frac{1}{2}}, \tag{6}\] where \(\Omega_{\rm m}=\Omega_{\rm b}+\Omega_{\rm c}\), the sum of the baryon and cold dark matter densities, and \(\Omega_{\rm DE}\) is the dimensionless energy density of dark energy today.
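The shape of such an external likelihood can be illustrated schematically as below. This is a simplified sketch rather than the actual tdcosmo_ext package: the class attributes and the function `tdcosmo_log_likelihood` are hypothetical stand-ins, and only the generic Cobaya Likelihood interface (`get_requirements`, `logp`, `provider`) is assumed.

```python
from cobaya.likelihood import Likelihood


def tdcosmo_log_likelihood(D_od, D_os, D_ds, params):
    """Hypothetical stand-in for the hierarchical TDCOSMO distance likelihood."""
    return 0.0


class StrongLensingTimeDelays(Likelihood):
    # Illustrative deflector and source redshifts, one entry per lens
    z_lenses = [0.5]
    z_sources = [2.0]

    def get_requirements(self):
        # Ask the theory code (e.g. CAMB) for angular diameter distances
        zs = sorted(set(self.z_lenses + self.z_sources))
        return {"angular_diameter_distance": {"z": zs}}

    def logp(self, **params_values):
        total = 0.0
        for z_d, z_s in zip(self.z_lenses, self.z_sources):
            D_od = float(self.provider.get_angular_diameter_distance([z_d])[0])
            D_os = float(self.provider.get_angular_diameter_distance([z_s])[0])
            # Flat universe: comoving distances add, with chi = (1 + z) * D_A
            D_ds = ((1 + z_s) * D_os - (1 + z_d) * D_od) / (1 + z_s)
            total += tdcosmo_log_likelihood(D_od, D_os, D_ds, params_values)
        return total
```

Because the likelihood requests only distances from the provider, any Boltzmann code available in Cobaya can supply them, which is the design choice described above.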
For each cosmological model I considered, I sampled the posterior distributions of \(H_{0}\), \(\Omega_{\rm b}h^{2}\) and \(\Omega_{\rm c}h^{2}\), along with the relevant dark energy equation of state parameter(s). In each case, I also sampled the posterior distributions of the hyperparameters associated with the likelihood, and marginalised over them to obtain the posterior distributions on the cosmological parameters of interest. The computation of the angular diameter distances required by the TDCOSMO likelihood was done using the Boltzmann code CAMB (Lewis et al., 2000; Howlett et al., 2012), but I emphasise that I designed the Cobaya interface so that any theory code currently available in or added to Cobaya in the future can be used for this task. Lastly, I used the Parameterised Post-Friedmann framework in CAMB to allow \(w\) to cross \(-1\) (Fang et al., 2008). I validated my implementation of the likelihood package in Cobaya by comparing my \(\Lambda\)CDM results with the original TDCOSMO results, finding an excellent agreement in terms of constraints on both the cosmological and the nuisance parameters. This validation test, plus all of the code needed to reproduce the results and figures in this paper, is also publicly available.2 Footnote 2: [https://github.com/nataliehogg/slide](https://github.com/nataliehogg/slide). Besides the TDCOSMO and SLACS datasets, I also obtained constraints on the models with a dataset consisting of the Planck 2018 measurements of the CMB temperature, polarisation and lensing (Aghanim et al., 2020a,b); the BAO measurements from the 6dF Galaxy Survey (Beutler et al., 2011), the SDSS Main Galaxy Sample (Ross et al., 2015) and the SDSS DR12 consensus catalogue (Alam et al., 2017); the Pantheon catalogue of Type Ia supernovae (Scolnic et al., 2018); and the TDCOSMO + SLACS lenses, in order to compare with the constraints obtained just using the strong lensing time delay data and with those of the Planck collaboration (Aghanim et al., 2020). I will refer to this combination of data as the "full combination" from now on. The priors I used for the cosmological parameters are listed in Table 1. Following Birrer et al. (2020), I also used the Pantheon prior on \(\Omega_{\rm m}=\mathcal{N}(0.298,0.022)\) when using the TDCOSMO and SLACS data alone, though it is important to note that in my analysis \(\Omega_{\rm m}\) was not directly sampled, since it is treated as a derived parameter in CAMB, the posterior being obtained from those of \(\Omega_{\rm b}\) and \(\Omega_{\rm c}\). In the dynamical dark energy cases, I ensured that acceleration occurs by setting a prior on the dark energy equation of state such that \(w<-\frac{1}{3}\).

## 4 Results

### \(w\)CDM

In Figure 2, I show the one and two-dimensional marginalised posterior distributions of \(H_{0}\), \(w\) and \(\Omega_{\rm m}\) in a \(w\)CDM cosmology from the 40 TDCOSMO + SLACS lenses. The data provide only an upper bound on the dark energy equation of state, \(w<-1.75\) at 68% confidence, implying the presence of a phantom dark energy component; at face value, a surprising result. It is known that in some datasets, such as Planck 2018, large negative values of \(w\) correlate with large positive values of \(H_{0}\) (Escamilla et al., 2023). This is due to the so-called geometrical degeneracy, where \(H_{0}\), \(\Omega_{\rm m}\) and \(w\) can take various values which in combination lead to the same value for the angular diameter distance to the surface of last scattering and hence the same angular size of the sound horizon, provided the physical sound horizon size is kept fixed (Efstathiou & Bond, 1999). Since strong lensing time delays also rely on angular diameter distances to probe cosmology, I infer that a similar degeneracy exists here. This conclusion is supported by the clear correlation between \(H_{0}\) and \(w\) in Figure 2.

Figure 2: The one and two-dimensional marginalised posterior distributions of \(H_{0}\), \(w\) and \(\Omega_{\rm m}\) in a \(w\)CDM cosmology from the 40 TDCOSMO + SLACS lenses.

Thus, the strongly negative dark energy equation of state is likely driven by the high central value of \(H_{0}=78.4^{+8.3}_{-6.3}\) obtained in this case, which is nevertheless consistent with the \(\Lambda\)CDM value at \(1\sigma\). As expected, based on the results of Birrer et al. (2020), the marginalised posterior distribution of the matter density \(\Omega_{\rm m}\) is largely informed by the Pantheon prior. This also acts to somewhat ameliorate the aforementioned degeneracy.
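This degeneracy can be illustrated with a short numerical sketch: a more negative \(w\) partially compensates a larger \(H_{0}\) in the distances that lensing probes. The parameter combinations below are illustrative, not fits to any data.

```python
import numpy as np
from scipy.integrate import quad

def comoving_distance(z, H0, omega_m, w):
    """Comoving distance (Mpc) to redshift z in flat wCDM."""
    integrand = lambda zp: 1.0 / np.sqrt(omega_m * (1 + zp)**3
                                         + (1 - omega_m) * (1 + zp)**(3 * (1 + w)))
    return 299792.458 / H0 * quad(integrand, 0.0, z)[0]

d_ref = comoving_distance(2.0, 67.4, 0.3, -1.0)
for H0, w in [(70.0, -1.2), (74.0, -1.5), (78.0, -1.9)]:
    d = comoving_distance(2.0, H0, 0.3, w)
    print(f"H0 = {H0:5.1f}, w = {w:5.2f}: distance differs by {100 * (d / d_ref - 1):+.1f}%")
```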
Thus, the strongly negative dark energy equation of state is likely driven by the high central value of \(H_{0}=78.4^{+8.3}_{-6.3}\) obtained in this case - which is nevertheless consistent with the \(\Lambda\)CDM value at \(1\sigma\). As expected, based on the results of Birrer et al. (2020), the marginalised posterior distribution of the matter density \(\Omega_{\rm m}\) is largely informed by the Pantheon prior. This also acts to somewhat ameliorate the aforementioned degeneracy.

### \(w_{0}w_{a}\)CDM

In Figure 3, I show the one and two-dimensional marginalised posterior distributions of \(H_{0}\), \(w_{0}\), \(w_{a}\) and \(\Omega_{\rm m}\) in a \(w_{0}w_{a}\)CDM cosmology from the 40 TDCOSMO + SLACS lenses. The likelihood is again only able to provide upper bounds on the dark energy equation of state parameters, \(w_{0}<-1.86\) and \(w_{a}<0.102\). Again the value of \(H_{0}\) is larger than but still consistent with the \(\Lambda\)CDM measurement: \(H_{0}=79.6^{+7.5}_{-6.0}\).

In Figure 4, I show the two-dimensional marginalised posterior distributions of \(H_{0}\) and \(\Omega_{\rm m}\) in a \(\Lambda\)CDM cosmology (red), a \(w\)CDM cosmology (orange) and a \(w_{0}w_{a}\)CDM cosmology (yellow) from the 40 TDCOSMO + SLACS lenses. From this plot, it is clear that whilst the \(H_{0}\) values in the extended cosmologies are large, they are still consistent at \(1\sigma\) with the \(\Lambda\)CDM result.

### Full combination of data

In Figure 5, I show the two-dimensional marginalised posterior distributions of \(H_{0}\) and \(\Omega_{\rm m}\) in a \(\Lambda\)CDM cosmology (dark purple), a \(w\)CDM cosmology (medium purple) and a \(w_{0}w_{a}\)CDM cosmology (light purple) from the full combination of Planck 2018 + BAO + Pantheon SNIa + TDCOSMO + SLACS data. In this case, I did not use the Pantheon prior on \(\Omega_{\rm m}\), since the inclusion of the Pantheon dataset provides the same information as that prior. From this plot we can see that the values of \(H_{0}\) and \(\Omega_{\rm m}\) in the extended cosmologies are completely consistent with the \(\Lambda\)CDM values at \(1\sigma\) when using the full combination of data; the \(w\)CDM and \(w_{0}w_{a}\)CDM constraints are virtually identical. Furthermore, this combination of data inevitably provides a much stronger constraint on \(w\), \(w_{0}\) and \(w_{a}\) than the TDCOSMO + SLACS data alone, and removes any hint of a phantom dark energy component. In a \(w\)CDM cosmology, the dark energy equation of state is measured to be \(w=-1.025\pm 0.029\), a marginal increase in precision compared to the Planck 2018 + BAO + SNIa value of \(w=-1.028\pm 0.031\). Both of these measurements are consistent with a cosmological constant.

Figure 4: The two-dimensional marginalised posterior distributions of \(H_{0}\) and \(\Omega_{\rm m}\) in the three cosmologies studied from the 40 TDCOSMO + SLACS lenses.

Figure 3: The one and two-dimensional marginalised posterior distributions of \(H_{0}\), \(w_{0}\), \(w_{a}\) and \(\Omega_{\rm m}\) in a \(w_{0}w_{a}\)CDM cosmology from the 40 TDCOSMO + SLACS lenses.

Figure 2: The one and two-dimensional marginalised posterior distributions of \(H_{0}\), \(w\) and \(\Omega_{\rm m}\) in a \(w\)CDM cosmology from the 40 TDCOSMO + SLACS lenses.

In a \(w_{0}w_{a}\)CDM cosmology, \(w_{0}=-0.985^{+0.071}_{-0.091}\) and \(w_{a}=-0.18^{+0.33}_{-0.25}\), compared to \(w_{0}=-0.957\pm 0.08\) and \(w_{a}=-0.29^{+0.32}_{-0.26}\) from Planck 2018 + BAO + SNIa.
These constraints are again almost equivalent in precision and consistent with a cosmological constant. This can be clearly seen in Figure 6, where I show the two-dimensional marginalised posterior distributions of \(w_{0}\) and \(w_{a}\) from the TDCOSMO + SLACS data (red) and from the full combination of Planck 2018 + BAO + Pantheon SNIa + TDCOSMO + SLACS (purple). The dashed lines in this plot show the values of the dark energy equation of state parameters which correspond to a cosmological constant, \(w_{0}=-1\) and \(w_{a}=0\).

Lastly, I computed the \(\Delta\chi^{2}\equiv\chi^{2}-\chi^{2}_{\Lambda\rm CDM}\) for each case studied. The computed values are shown in Table 2. Whilst the \(\Delta\chi^{2}\) is negative for the \(w\)CDM cosmology obtained with both datasets, implying that the extended cosmology is favoured over \(\Lambda\)CDM, the significance of this decrease in \(\chi^{2}\) must be evaluated against the \(\chi^{2}\) distribution of the difference, since the \(w\)CDM cosmology has a greater number of degrees of freedom than \(\Lambda\)CDM. With one additional degree of freedom, and for a 95% level of significance, a \(|\Delta\chi^{2}|>3.841\) is required for the improvement in fit to be considered significant. This is clearly not the case here. From these results it is also evident that there is no significant preference for the \(w_{0}w_{a}\)CDM cosmologies over \(\Lambda\)CDM, since the \(\Delta\chi^{2}\) is zero for the TDCOSMO + SLACS data and still does not exceed 3.841 for the full combination of data (let alone the 5.991 threshold appropriate for two additional degrees of freedom).

## 5 Conclusions

In this work, I presented the first constraints on the equation of state of dark energy from the seven TDCOSMO lenses plus 33 SLACS lenses using the hierarchical likelihood provided by TDCOSMO. I wrote an external likelihood package for the Cobaya software, making this likelihood readily available for public use in combination with other cosmological likelihoods and theory codes. I replicated the original TDCOSMO results in a \(\Lambda\)CDM cosmology and then explored two extended cosmologies, finding that the TDCOSMO + SLACS data was not able to place strong constraints on the equation of state of dark energy, obtaining only upper bounds on \(w\), \(w_{0}\) and \(w_{a}\). The bounds I obtained all implied the presence of a phantom dark energy component, \(w<-1\). The use of the TDCOSMO likelihood in combination with the Planck 2018 likelihood, BAO data and the Pantheon SNIa catalogue yielded more precise constraints, with all the dark energy equation of state parameters consistent with a cosmological constant, \(w=-1\). I computed the \(\Delta\chi^{2}\) to evaluate the fit of each model, finding no preference for the extended cosmologies over the \(\Lambda\)CDM case.

In conclusion, while strong lensing time delays are beginning to provide a competitive (albeit lens-modelling-dependent) measurement of \(H_{0}\), it is clear that a larger dataset is needed before they can be useful when studying dark energy; perhaps on the order of hundreds or thousands of lenses (Shirialilou et al., 2020). Furthermore, an increase in precision on the \(H_{0}\) inference will naturally lead to a reduction in the effect of the geometrical degeneracy which is contributing to the hints of phantom dark energy in the TDCOSMO data. Fortunately, a number of current or near-future experiments, such as JWST, Roman, LSST and Euclid, are likely to provide such data in abundance (Collett, 2015).
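The 95% critical value quoted above comes straight from the \(\chi^{2}\) distribution; for completeness, this snippet reproduces it with SciPy, along with the corresponding threshold for the two extra parameters of \(w_{0}w_{a}\)CDM.

```python
from scipy.stats import chi2

# 95% critical values for the chi-squared difference test.
print(chi2.ppf(0.95, df=1))  # 3.841...: one extra parameter (wCDM vs LambdaCDM)
print(chi2.ppf(0.95, df=2))  # 5.991...: two extra parameters (w0waCDM vs LambdaCDM)
```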
## Acknowledgements I am grateful to Simon Birrer, Martin Millon and Judit Prat for valuable discussions about the TDCOSMO likelihood, and to Pierre Fleury for his comments on the manuscript. \begin{table} \begin{tabular}{l l c} \hline \hline Model & Data & \(\Delta\chi^{2}\) \\ \hline \(w\)CDM & TDCOSMO + SLACS & \(-0.5\) \\ \(w\)CDM & Full combination & \(-0.7\) \\ \(w_{0}w_{a}\)CDM & TDCOSMO + SLACS & \(0.0\) \\ \(w_{0}w_{a}\)CDM & Full combination & \(-2.0\) \\ \hline \hline \end{tabular} \end{table} Table 2: The \(\Delta\chi^{2}\) for each case studied. Figure 5: The two-dimensional marginalised posterior distributions of \(H_{0}\) and \(\Omega_{\rm m}\) in the three cosmologies studied from the full combination of Planck 2018 + BAO + Pantheon SNIa + TDCOSMO + SLACS. Figure 6: The two-dimensional marginalised posterior distributions of \(w_{0}\) and \(w_{a}\) from the TDCOSMO + SLACS data (red) and from the full combination of Planck 2018 + BAO + Pantheon SNIa + TDCOSMO + SLACS (purple). The dashed lines show the values of these parameters which correspond to a cosmological constant, \(w_{0}=-1\), \(w_{a}=0\). ## Data Availability All data associated with this article is publicly available.
2306.06535
On the global existence for the modified Camassa-Holm equation via the inverse scattering method
In this paper, we address the existence of global solutions to the Cauchy problem of the modified Camassa-Holm (mCH) equation, which is known as a model for the unidirectional propagation of shallow water waves. Based on the spectral analysis of the Lax pair, we apply the inverse scattering transform to rigorously analyze the mCH equation with zero background. By connecting the Cauchy problem to the Riemann-Hilbert (RH) problem, we establish a bijective map between potential and reflection coefficients within the $L^2$-Sobolev space framework. Utilizing a reconstruction formula and estimates on the time-dependent RH problem, we obtain a unique global solution to the Cauchy problem for the mCH equation.
Yiling Yang, Engui Fan, Yue Liu
2023-06-10T22:47:24Z
http://arxiv.org/abs/2306.06535v2
# On the global existence for the modified Camassa-Holm equation via the inverse scattering method

###### Abstract.

In this paper, we address the existence of global solutions to the Cauchy problem of the modified Camassa-Holm (mCH) equation, which is known as a model for the unidirectional propagation of shallow water waves. Based on the spectral analysis of the Lax pair, we apply the inverse scattering transform to rigorously analyze the mCH equation with zero background. By connecting the Cauchy problem to the Riemann-Hilbert (RH) problem, we establish a bijective map between potential and reflection coefficients within the \(L^{2}\)-Sobolev space framework. Utilizing a reconstruction formula and estimates on the time-dependent RH problem, we obtain a unique global solution to the Cauchy problem for the mCH equation.

**Keywords:** modified Camassa-Holm equation; inverse scattering transform; Cauchy projection operator; global solution.

**AMS Subject Classification 2020:** 35Q51; 35Q15; 37K15; 35Q35.

###### Contents

* 1 Introduction and main result
* 2 Direct scattering transform
* 2.1 Spectral analysis on the Lax pair
* 2.2 Reflection coefficient
* 3 Inverse scattering transform
* 3.1 The RH problems to the Cauchy problem
* 3.2 Solvability of the RH problems
* 3.3 Estimates on the solution of RH problems
* 4 The existence of global solutions
* 4.1 Time evolution of reflection coefficient
* 4.2 The proof of main result

## 1. Introduction and main result

In this study, we investigate the existence of global solutions to the following Cauchy problem associated with the modified Camassa-Holm (mCH) equation \[m_{t}+\left(m\left(u^{2}-u_{x}^{2}\right)\right)_{x}+\kappa u_{x}=0,\quad m=u-u_{xx},\ t>0,\ x\in\mathbb{R}, \tag{1.1}\] \[m(x,0)=m_{0}(x), \tag{1.2}\] where \(u(t,x)\) is the function in dimensionless space-time variables \((t,x)\), and \(\kappa\) is a positive constant characterizing the effect of the linear dispersion. The mCH equation (1.1) was first presented by Fokas [14] and by Fuchssteiner using recursion operators [16], and later found by Olver and Rosenau [25] via tri-Hamiltonian duality to the bi-Hamiltonian structure of the mKdV equation (see also [18], where it is referred to as the Fokas-Olver-Rosenau-Qiao equation). Recently, the mCH equation (1.1) was considered as a model for the unidirectional propagation of shallow-water waves of mild amplitude over a flat bottom [5], where the solution \(u\) is related to the horizontal velocity at a certain level of water, and \(\kappa>0\) is a parameter related to the critical shallow water speed. In the short-wave limit, the mCH equation (1.1) reduces to the short-pulse equation [17] \[v_{xt}=\frac{1}{3}(v^{3})_{xx}+\kappa v,\] which is a model for the propagation of ultra-short light pulses in silica optical fibers [28]; it is also an approximation of nonlinear wave packets in dispersive media in the limit of few cycles on the ultra-short pulse scale [7]. Moreover, the short-pulse equation and the nonlinear Schrodinger equation are both derived from Maxwell's equations, but the numerical simulations in [7] show that as the pulse length shortens, the nonlinear Schrodinger equation approximation becomes steadily less accurate, while the short-pulse equation provides an increasingly accurate approximation. It is noted that the mCH equation (1.1) for any \(\kappa\geq 0\) is completely integrable and admits a Lax pair [27, 29].
However, unlike the Camassa-Holm (CH) equation [4, 9, 15] and the Degasperis-Procesi (DP) equation [10], the mCH equation (1.1) with a positive parameter \(\kappa\) cannot be transformed into the mCH equation (1.1) with \(\kappa=0\) by a Galilean transformation. The local well-posedness of the Cauchy problem associated with the mCH equation (1.1) for an initial profile \(m_{0}\in H^{s}\), \(s>1/2\), was established in [17]. A blow-up mechanism for the mCH equation (1.1) is provided in [6]. However, unlike the CH equation case in [8], even if the initial potential \(m_{0}\) does not change sign, the second-derivative term \(u_{xx}\) in equation (1.1) may still blow up in finite time. On the other hand, the mCH equation (1.1) admits smooth soliton solutions [23] for wave speeds \(c\geq 2\kappa>0\); the orbital stability of those smooth solitons in the space \(H^{1}(\mathbb{R})\cap W^{1,4}(\mathbb{R})\) and their spectral stability are obtained without the condition of a positive Radon measure [19]. It is worth mentioning that the behavior of solitons with \(\kappa=0\) is distinct from that of the smooth solitons (\(\kappa>0\)) of the mCH equation (1.1). This leads to new kinds of singular solitons known as peakons [6]. In the case \(\kappa=0\), it was shown that the mCH equation (1.1) with a nonzero background (i.e. the initial value \(m_{0}(x)\to\gamma\neq 0\), as \(|x|\to\infty\)) may support smooth soliton solutions by the Backlund transformation [24] and the Riemann-Hilbert (RH) method [3]. Recently, the long-time asymptotic behavior of the mCH equation (1.1) with \(\kappa>0\) was obtained by using \(\bar{\partial}\)-steepest descent analysis [30]. The existence of global solutions to the Cauchy problem for the mCH equation (1.1) with \(\kappa=0\) and a nonzero background was studied in [31].

The primary objective of this paper is to utilize the \(L^{2}\)-Sobolev space bijectivity between the potential and the reflection coefficient to establish the global well-posedness of the Cauchy problem for the mCH equation (1.1). The main result is stated as follows (the proof framework is described in Figure 1).

**Theorem 1.1**.: _Assume that \(m_{0}\in H^{2,1}(\mathbb{R})\) is sufficiently small. Then there exists a unique global solution \(m\in C([0,+\infty),H^{2,1}(\mathbb{R}))\) to the Cauchy problem (1.1)-(1.2) with the initial value \(m(x,0)=m_{0}(x),\ \forall x\in\mathbb{R}.\) Furthermore, the map_ \[m_{0}\in H^{2,1}(\mathbb{R})\longrightarrow m\in C([0,+\infty),H^{2,1}(\mathbb{R}))\] _is locally Lipschitz continuous._

The key tool used to prove the result above is the inverse scattering theory, initially developed for the first-order spectral problem, and later applied to demonstrate global existence in integrable systems within the Schwartz space [1, 2]. Moreover, Zhou provided a rigorous framework to solve the Cauchy problem in weighted \(L^{2}\)-Sobolev spaces through solvability analysis of the RH problem in [33, 34]. Recently, this approach has found extensive application in proving the global well-posedness of integrable equations, including the nonlinear Schrodinger (NLS), derivative NLS, and mCH equations with a nonzero background [11, 12, 20, 31, 26]. We would like to point out that the smallness assumption \(m_{0}\in H^{2,1}(\mathbb{R})\) made in Theorem 1.1 excludes the blow-up condition in [6].
Figure 1. Bijective relation between the potential \(m\) and the reflection coefficients.

Furthermore, compared with our previous results in [31], extending the inverse scattering transform approach to the mCH equation (1.1) presents certain distinctions and difficulties. The most significant distinction arises from the fact that the RH problem corresponding to the mCH equation (1.1) lacks an explicit symmetry expression between \(z\) and \(-1/z\), leading to two substantial difficulties:

1. In contrast to the derivative NLS equation [21, 26], the transformation of the Jost function from the spectral parameter \(z\) to \(k=z-\frac{1}{z}\) poses challenges in constructing the RH problem. As a result, direct application of the Fourier transformation for estimating the RH problem is not feasible. However, this obstacle is successfully addressed by leveraging the special structure of the phase function \(\theta(z)\) defined in (3.1) and exploiting the symmetry of the reflection coefficient between \(z\) and \(-1/z\) (see Lemma 4.4 in Section 4.2).

2. The eigenvalues and resonances associated with the Lax pair (2.3) may be located anywhere in \(\mathbb{C}^{+}\cup\mathbb{R}\) and can be non-simple, leading to challenges in controlling the norm of the solution to the RH problem. Notably, the eigenvalues in \(\mathbb{C}^{+}\) only affect the estimation of the solution of the RH problem in Section 3 but do not impact its unique solvability. On the other hand, the presence of spectrum on \(\mathbb{R}\), known as resonance, introduces difficulties in estimating the RH problem. The study of resonances necessitates alternative methods for estimation, as exemplified in [32, 22]. To overcome these challenges, a small-norm condition on the initial value \(m_{0}\) becomes necessary to ensure the absence of eigenvalues and resonances (see Subsection 2.2).

The rest of the paper is structured as follows: In Section 2, we delve into the direct scattering transform, which maps the initial data \(m_{0}(x)\) to the reflection coefficient. In Section 3, we establish two RH problems associated with the mCH equation (1.1). We rigorously demonstrate their solvability in the space \(H^{s}(\mathbb{R}_{k})\), where \(s>1/2\), and obtain estimates on their solutions. In Section 4, we analyze the time evolution of the reflection coefficient and the RH problem to present the proof of our main result, Theorem 1.1.

**Notations.** We now introduce some notation used in this paper. The classical Pauli matrices \(\{\sigma_{j}\}_{j=1,2,3}\) are defined by \[\sigma_{1}:=\begin{pmatrix}0&1\\ 1&0\end{pmatrix},\quad\sigma_{2}:=\begin{pmatrix}0&-i\\ i&0\end{pmatrix},\quad\sigma_{3}:=\begin{pmatrix}1&0\\ 0&-1\end{pmatrix}. \tag{1.3}\] If \(I\) is an interval on the real line \(\mathbb{R}\) and X is a Banach space, then \(C_{b}(I,\mathrm{X})\) denotes the space of bounded continuous functions on \(I\) taking values in X.
It is equipped with the norm \[\|f\|_{C_{b}(I,\mathrm{X})}=\sup_{x\in I}\|f(x)\|_{\mathrm{X}}.\] We introduce the normed spaces:

* A weighted \(L^{p}(\mathbb{R})\) space is specified by \[L^{p,s}(\mathbb{R})=\left\{f(x)\in L^{p}(\mathbb{R})|\ \langle\cdot\rangle^{s}f(x)\in L^{p}(\mathbb{R})\right\};\]
* A weighted Sobolev space is defined by \[H^{l,s}(\mathbb{R})=\left\{f(x)\in L^{2}(\mathbb{R})|\ \langle\cdot\rangle^{s}\partial^{j}f(x)\in L^{2}(\mathbb{R}),\ \text{for}\ j=1,...,l\right\}.\]

For simplicity, the norms of \(f(x)\in L^{p}(\mathbb{R})\) and \(g(x)\in L^{p,s}(\mathbb{R})\) are abbreviated to \(\parallel f\parallel_{p}\) and \(\parallel g\parallel_{p,s}\), respectively. If a function \(f(z)\), \(z\in\mathbb{R}\), admits the symmetry \(f(z)=f(-1/z)\), then for \(k=z-1/z\), \[g(k)=f(z(k))\] is a well-defined function on \(\mathbb{R}\). We say \(f\in H^{l,s}(\mathbb{R}_{k})\) if \(g\in H^{l,s}(\mathbb{R})\). Finally, the letter \(C\) will be used to denote universal positive constants which may vary from line to line. To emphasize that the implied constant depends on some parameter \(\alpha\), we shall indicate this by \(C(\alpha)\).

## 2. Direct scattering transform

In this section, we provide the framework of the spectral analysis of the Lax pair to establish the direct scattering transform and address the RH problem with initial data \(m_{0}\in H^{2,1}(\mathbb{R})\).

### Spectral analysis on the Lax pair

For convenience, we fix \(\kappa=2\) in the mCH equation (1.1) without loss of generality, since under the simple transformation \[x=\tilde{x},\ \ t=\frac{2}{\kappa}\tilde{t},\ \ u(x,t)=\sqrt{\frac{\kappa}{2}}\tilde{u}(\tilde{x},\tilde{t}), \tag{2.1}\] the equation (1.1) becomes \[\tilde{m}_{\tilde{t}}+\left(\tilde{m}\left(\tilde{u}^{2}-\tilde{u}_{\tilde{x}}^{2}\right)\right)_{\tilde{x}}+2\tilde{u}_{\tilde{x}}=0,\ \ \ \tilde{m}=\tilde{u}-\tilde{u}_{xx}. \tag{2.2}\] Then the mCH equation (1.1) with \(\kappa=2\) admits the Lax pair [27, 29] \[\Phi_{x}=X\Phi,\ \ \ \ \Phi_{t}=T\Phi, \tag{2.3}\] where \[X=-\frac{ik}{4}\sigma_{3}+\frac{i\lambda m(x,t)}{2}\sigma_{2},\] \[T=\frac{ik}{2\lambda^{2}}\sigma_{3}+\frac{ik}{4}\left(u^{2}-u_{x}^{2}\right)\sigma_{3}-i\left(\frac{2iu-ku_{x}}{2\lambda}+\frac{\lambda}{2}\left(u^{2}-u_{x}^{2}\right)m\right)\sigma_{2}\] with \[k=k(z)=z-\frac{1}{z},\ \ \lambda=\lambda(z)=\frac{1}{2}(z+\frac{1}{z}),\] and \(z\in\mathbb{C}\) is a spectral parameter. Before making the direct scattering transform, we first rewrite equation (1.1) and its Lax pair in terms of a new space variable \(y\). From the conservation law of the equation (1.1), \[q_{t}+(q(u^{2}-u_{x}^{2}))_{x}=0,\] we introduce the coordinate transform \[dy=qdx-q(u^{2}-u_{x}^{2})dt, \tag{2.4}\] and obtain a reciprocal transformation \[y(x,t)=x-\int_{x}^{+\infty}\left(q(s,t)-1\right)ds, \tag{2.5}\] where \[q(x,t)=\sqrt{m(x,t)^{2}+1}. \tag{2.6}\] Then, in the variables \((y,t)\), the mCH equation (1.1) reads \[q_{t}+2u_{y}m=0, \tag{2.7}\] and the Lax pair (2.3) becomes \[\Phi_{y}=Y\Phi,\ \ \ \ \Phi_{t}=Q\Phi \tag{2.8}\] \[Y=\frac{1}{\sqrt{m^{2}+1}}X,\ \ \ \ Q=(u^{2}-u_{x}^{2})X+T. \tag{2.9}\] In the direct scattering transform, we first consider the spatial spectral problem in the Lax pair (2.8) at \(t=0\); the time variable is omitted as usual. For example, \(\Phi(z;0,y)\) is simply written as \(\Phi(z;y)\) or \(\Phi\). In addition, for \(m_{0}\in H^{2,1}\), the map \(x\to y(x,0)\) is a continuously differentiable bijection.
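As a quick sanity check on the spectral parameters just introduced (an illustration added here, not part of the original paper), note that \(k\) and \(\lambda\) satisfy the algebraic relation \(\lambda^{2}-k^{2}/4=1\), and that under the map \(z\mapsto-1/z\) the parameter \(k\) is invariant while \(\lambda\) changes sign; these facts underlie the \(z\leftrightarrow-1/z\) symmetries used throughout, and are verified symbolically below.

```python
import sympy as sp

z = sp.symbols("z", nonzero=True)
k = z - 1 / z                          # k(z) from the Lax pair
lam = sp.Rational(1, 2) * (z + 1 / z)  # lambda(z) from the Lax pair

print(sp.simplify(lam**2 - k**2 / 4))          # -> 1
print(sp.simplify(k.subs(z, -1 / z) - k))      # -> 0  (k invariant under z -> -1/z)
print(sp.simplify(lam.subs(z, -1 / z) + lam))  # -> 0  (lambda flips sign)
```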
Define a transformation \[\mu(z)\triangleq\mu(z;y)=F(y)^{-1}\Phi(z;y)e^{\frac{i}{4}(z-\frac{1}{z})y \sigma_{3}}, \tag{2.10}\] where \[F(y)=\sqrt{\frac{q_{0}+1}{2q_{0}}}\left(\begin{array}{cc}1&\frac{-im_{0}}{q _{0}+1}\\ \frac{-im_{0}}{q_{0}+1}&1\end{array}\right). \tag{2.11}\] It is thereby inferred that \[\mu_{y}=-\frac{i}{4}(z-\frac{1}{z})[\sigma_{3},\mu]+P\mu, \tag{2.12}\] where the Lie bracket \([A,B]\) is defined by \([A,B]=AB-BA\) and \[P=\frac{im_{0,x}}{2q_{0}^{3}}\sigma_{1}+\frac{m_{0}}{2zq_{0}^{2}}\left( \begin{array}{cc}-im_{0}&1\\ -1&im_{0}\end{array}\right).\] The equation (2.12) leads to two Volterra type integrals \[\mu_{\pm}(z;y)=I+\int_{\pm\infty}^{y}e^{-\frac{i}{4}(z-\frac{1}{z})(y-s) \hat{\sigma}_{3}}P(s,z)\mu_{\pm}(z;s)ds. \tag{2.13}\] Then \(\mu_{\pm}(z)\) admit two kinds of reduction conditions \[\mu_{\pm}(z)=\sigma_{2}\overline{\mu_{\pm}(\bar{z})}\sigma_{2}=\sigma_{1}\mu_ {\pm}(-z)\sigma_{1} \tag{2.14}\] and \[\mu_{\pm}(z)=F^{-2}\sigma_{2}\mu_{\pm}(-z^{-1})\sigma_{2}. \tag{2.15}\] Denote the matrix in column \[\mu_{\pm}(z;y)=\left(\mu_{\pm,1}(z;y),\mu_{\pm,2}(z;y)\right),\] where the subscript \(1\) and \(2\) indicate the first and second columns of \(\mu_{\pm}(z;y)\), respectively. The integral property of \(\mu_{\pm}(z;y)\) is given in the following proposition. **Proposition 2.1**.: _Let \(m_{0}\in H^{2,1}(\mathbb{R})\). Then \(\mu_{\pm}(z;y)\) uniquely exist such that \(\mu_{\pm}(z;y)-I\in L^{\infty}(\mathbb{R}^{\pm}\times\mathbb{R})\cap C_{b}( \mathbb{R}^{\pm},L^{2}(\mathbb{R}))\cap L^{2}(\mathbb{R}^{\pm}\times\mathbb{R})\), \(\partial_{z}\mu_{\pm}(z;y),\ k(z)\partial_{z}\mu_{\pm}(z;y)\in C_{b}(\mathbb{R} ^{\pm},L^{2}(\mathbb{R}))\). Furthermore, the non-diagonal terms of \(k(z)(\mu_{\pm}(z;y)-I)\) belong in \(C_{b}(\mathbb{R}^{\pm},L^{2}(\mathbb{R}))\)._ Proof.: Invoking the definition of variable \(y\) in (2.5), \(m_{0}\in H^{2,1}(\mathbb{R}_{x})\) is equivalent to \(m_{0}\in H^{2,1}(\mathbb{R}_{y})\). From the symmetry of \(\mu_{\pm}\) in (2.15), it is readily seen that \[\sup_{|z|\leq 1}|\mu_{\pm}(z)|=\sup_{|z|\geq 1}|F(y)^{-2}\sigma_{2}\mu_{\pm}(-z ^{-1})\sigma_{2}|,\] which implies that \[\inf_{y\in\mathbb{R}}|F(y)^{-2}|\sup_{|z|\geq 1}|\mu_{\pm}(z)| \leq\sup_{|z|\leq 1}|\mu_{\pm}(z)|\leq\|F^{-2}\|_{\infty}\sup_{|z|\geq 1}|\mu_{ \pm}(z)|,\] \[\int_{|z|\leq 1}|z^{-j}\mu_{\pm}(z)|^{2}dz\leq\int_{|z|\geq 1}\|F^ {-2}\|_{\infty}^{2}z^{2j-2}|\mu_{\pm}(z)|^{2}dz,\ j=0,1,2.\] Thus, it is sufficient to analyze the integral equation on (2.13) spaces \(L^{\infty}(\mathbb{R}^{\pm}\times\{|z|\geq 1\})\), \(C_{b}(\mathbb{R}^{\pm},L^{2}(\{|z|\geq 1\}))\) and \(L^{2}(\mathbb{R}^{\pm}\times\mathbb{R})\) which are abbreviated to \(L^{\infty}\), \(C_{b}\), \(L^{2}\) respectively. We denote \[n(y,z)=\mu_{+,1}(z;y)-e_{1}, \tag{2.16}\] where \(e_{1}=(1,0)^{T}\). Introduce the integral operator \(\mathcal{T}\) \[\mathcal{T}(f)(y,z)=-\int_{y}^{+\infty}K(y,s,z)f(s,z)ds, \tag{2.17}\] where the integral kernel \(K(y,s,z)\) is \[K(y,s,z)= \frac{im_{0,x}(s)}{2q_{0}^{3}(s)}\left(\begin{array}{cc}0&1\\ e^{\frac{i}{2}(z-1/z)(y-s)}&0\end{array}\right)\] \[+\frac{1}{2z}\frac{m_{0}(s)}{q_{0}^{2}(s)}\left(\begin{array}{ cc}-im_{0}(s)&1\\ -e^{\frac{i}{2}(z-1/z)(y-s)}&e^{\frac{i}{2}(z-1/z)(y-s)}im_{0}(s)\end{array} \right),\ x<s. 
\tag{2.18}\] Then the first column of (2.13) is changed to \[n=\mathcal{T}(e_{1})+\mathcal{T}(n), \tag{2.19}\] where \[\mathcal{T}(e_{1})=-\int_{y}^{+\infty}\left(\begin{array}{cc} \frac{-im_{0}^{2}(s)}{2zq_{0}^{2}(s)}\\ e^{\frac{i}{2}(z-1/z)(y-s)}\left(\frac{im_{0,x}(s)}{2q_{0}^{2}(s)}-\frac{m_{0} (s)}{2zq_{0}^{2}(s)}\right)\end{array}\right)ds,\] with estimates \[|\mathcal{T}(e_{1})|\leq(\|m_{0}\|_{2}^{2}+\|m_{0}\|_{1}+\|m_{0,x} \|_{1})/2,\] \[\int_{|z|\geq 1}|\mathcal{T}(e_{1})|^{2}dz\leq C(\int_{y}^{+ \infty}|m_{0}|^{2}ds+\int_{y}^{+\infty}|m_{0,x}|^{2}ds),\] \[\int_{\mathbb{R}^{+}}\int_{|z|\geq 1}|\mathcal{T}(e_{1})|^{2}dzdy \leq C(\|m_{0}\|_{2,1/2}+\|m_{0,x}\|_{2,1/2}).\] It follows from Lemma 5 in [30] that the integral operator \(\mathcal{T}\) maps \(L^{\infty}\cap C_{b}\cap L^{2}\) to itself with \[\|\mathcal{T}\|\leq C(\|m_{0}\|_{H^{1,1}}).\] In addition, \((I-\mathcal{T})^{-1}\) exists as a bounded operator on \(L^{\infty}\cap C_{b}\cap L^{2}\) also admitting \[\|(I-\mathcal{T})^{-1}\|\leq C(\|m_{0}\|_{H^{1,1}}).\] Thereby we conclude that \(n=(I-\mathcal{T})^{-1}\mathcal{T}(e_{1})\in L^{\infty}\cap C_{b}\cap L^{2}\). On the other hand, by denoting \(\mathcal{T}_{z}\) as a integral operator with the integral kernel \[\partial_{z}K(y,s,z)= \frac{-m_{0}(s)}{2z^{2}q_{0}^{2}(s)}\left(\begin{array}{cc}-im_ {0}(s)&1\\ 0&0\end{array}\right)-\frac{(1+1/z^{2})(y-s)m_{0,x}(s)e^{\frac{i}{2}(z-1/z)(y-s )}}{4q_{0}^{3}(s)}\left(\begin{array}{cc}0&0\\ 1&0\end{array}\right)\] \[-\frac{m_{0}(s)e^{\frac{i}{2}(z-1/z)(y-s)}}{4zq_{0}^{2}(s)}(2/z-i (1+1/z^{2})(y-s))\left(\begin{array}{cc}0&0\\ -1&im_{0}(s)\end{array}\right),\ x<s,\] take \(z\)-derivative in (2.19) and obtain \[\partial_{z}n=\partial_{z}\mathcal{T}(e_{1})+\mathcal{T}_{z}(n)+T(\partial_{ z}n), \tag{2.20}\] where \[\partial_{z}\mathcal{T}(e_{1})=\int_{y}^{+\infty}\left(\begin{array}{c}-im_ {0}^{2}(s)\\ \frac{-im_{0}^{2}(s)}{2z^{2}q_{0}^{2}(s)}\\ e^{\frac{i}{2}(z-1/z)(y-s)}\left(\frac{m_{0}(s)}{2z^{2}q_{0}^{2}(s)}+\frac{i}{2 }(1+1/z^{2})(y-s)\left(\frac{im_{0,x}(s)}{2q_{0}^{2}(s)}-\frac{m_{0}(s)}{2zq_{ 0}^{2}(s)}\right)\right)\end{array}\right)ds,\] with the estimate \[\|\partial_{z}\mathcal{T}(e_{1})\|_{C_{b}}\leq C(\|m_{0}\|_{H^{1,1}}).\] Noting that for any functions \(f(y),\ g(y,z)\), it is found that \[\int_{|z|\geq 1}\big{|}\int_{y}^{+\infty}f(s)g(s,z)ds\big{|}^{2} dz\leq\|f\|_{2}\|g\|_{2}^{2},\] \[\text{or}\ \int_{|z|\geq 1}\big{|}\int_{y}^{+\infty}f(s)g(s,z)ds \big{|}^{2}dz\leq\|f\|_{1}\|g\|_{C_{b}}.\] Thus, \[\|\mathcal{T}_{z}n\|_{C_{b}}\leq C(\|m_{0}\|_{H^{1,1}})\|n\|_{2},\] which shows that \(\partial_{z}n\) exists in \(C_{b}\) with \[\partial_{z}n=(I-\mathcal{T})^{-1}(\mathcal{T}_{z}n+\mathcal{T}(e_{1})). \tag{2.21}\] Denote \(n=(n_{1},n_{2})^{T}\). 
Via integration by parts, it is adduced that \[\frac{i}{2}(z-1/z)n_{2}\triangleq I_{1}+I_{2}+I_{3}, \tag{2.22}\] where \[I_{1} =\int_{+\infty}^{y}e^{\frac{i}{2}(z-1/z)(y-s)}\partial_{s}\left( \frac{im_{0,x}}{2q_{0}^{3}}-\frac{m_{0}}{2zq_{0}^{2}}\right)ds\] \[I_{2} =\int_{+\infty}^{y}e^{\frac{i}{2}(z-1/z)(y-s)}\left[\partial_{s} \left(\frac{im_{0,x}}{2q_{0}^{3}}-\frac{m_{0}}{2zq_{0}^{2}}\right)n_{1}+ \partial_{s}\left(\frac{im_{0}^{2}}{2zq_{0}^{2}}\right)n_{2}\right]ds\] \[I_{3} =\int_{+\infty}^{y}e^{\frac{i}{2}(z-1/z)(y-s)}\left[\left(\frac{ im_{0,x}}{2q_{0}^{3}}-\frac{m_{0}}{2zq_{0}^{2}}\right)\partial_{s}n_{1}+\frac{im_{0}^{2}} {2zq_{0}^{2}}\partial_{s}n_{2}\right]ds.\] The \(C_{b}\) norm of the \(I_{1}\) is controlled by \(\|\partial_{s}\left(\frac{im_{0,x}}{2q_{0}^{3}}-\frac{m_{0}}{2q_{0}^{2}}\right)\| _{2}\), namely, \(\|m_{0}\|_{H^{2}}\). And for \(I_{2}\), it follows that \[\|I_{2}\|_{C_{b}}\leq C(\|m_{0}\|_{H^{2,1}})\|n\|_{2}.\] From the definition of \(n\) in (2.16) and the equation (2.12), it holds that \[\partial_{s}n=\frac{i}{2}(z-1/z)\left(\begin{array}{c}0\\ n_{2}\end{array}\right)+\left(\begin{array}{c}\frac{-im_{0}^{2}}{2zq_{0}^{2} }(n_{1}+1)+\left(\frac{im_{0,x}}{2q_{0}^{3}}+\frac{m_{0}}{2zq_{0}^{2}}\right) n_{2}\\ \left(\frac{im_{0,x}}{2q_{0}^{3}}-\frac{m_{0}}{2zq_{0}^{2}}\right)(n_{1}+1)+ \frac{im_{0}^{2}}{2zq_{0}^{2}}n_{2}\end{array}\right).\] Therefore, \(I_{3}\) can be rewritten as \[I_{3}=\int_{+\infty}^{y}e^{\frac{i}{2}(z-1/z)(y-s)}\left[\frac{- im_{0}^{2}}{4zq_{0}^{2}}\left(\frac{im_{0,x}}{q_{0}^{3}}-\frac{m_{0}}{zq_{0}^{2}} \right)-\frac{m_{0}^{4}}{4z^{2}q_{0}^{4}}\right]ds\] \[+\int_{+\infty}^{y}e^{\frac{i}{2}(z-1/z)(y-s)}\left(\frac{im_{0, x}}{q_{0}^{3}}-\frac{m_{0}}{zq_{0}^{2}}\right)\left(\frac{-im_{0}^{2}}{2zq_{0}^{ 2}}n_{1}+\left(\frac{im_{0,x}}{2q_{0}^{3}}+\frac{m_{0}}{2zq_{0}^{2}}\right)n_{ 2}\right)ds\] \[+\int_{+\infty}^{y}e^{\frac{i}{2}(z-1/z)(y-s)}\frac{im_{0}^{2}}{2 q_{0}^{2}}\left[\frac{i(1-1/z^{2})}{2}n_{2}+\left(\frac{im_{0,x}}{2zq_{0}^{3}}- \frac{m_{0}}{2z^{2}q_{0}^{2}}\right)n_{1}+\frac{im_{0}^{2}}{2z^{2}q_{0}^{2}}n _{2}\right]ds.\] Thus it is inferred that \[\|I_{3}\|_{C_{b}}\leq C(\|m_{0}\|_{H^{2,1}})\|n\|_{2}.\] For the term \(\partial_{z}n\), obviously, \[\|k(\cdot)(\partial_{z}\mathcal{T}(e_{1}))_{1}\|_{C_{b}}\leq\|m_{0}\|_{2},\] and \[k(z)(\partial_{z}\mathcal{T}(e_{1}))_{2}=\int_{y}^{+\infty}e^{ \frac{i}{2}(z-1/z)(y-s)}\frac{i}{2}(1-1/z^{2})\frac{m_{0}(s)}{2zq_{0}^{2}(s)}ds\] \[+\int_{y}^{+\infty}e^{\frac{i}{2}(z-1/z)(y-s)}\frac{(z-1/z)(1+1/z^ {2})(y-s)}{4}\left(\frac{im_{0,x}(s)}{2q_{0}^{3}(s)}-\frac{m_{0}(s)}{2zq_{0}^{ 2}(s)}\right)ds.\] Similarly, the \(C_{b}\) norm of first integral in the right side of above equation is controlled by \(\|m_{0}\|_{1}\), and from integration by parts, the second integral in the right side of above equation is controlled by \(\|m_{0}\|_{H^{2,1}}\). And \[k(z)\mathcal{T}_{z}n=\int_{y}^{+\infty}\frac{i}{2}(1-1/z^{2}) \frac{-m_{0}(s)}{2zq_{0}^{2}(s)}\left(\begin{array}{cc}-im_{0}(s)&1\\ -e^{\frac{i}{2}(z-1/z)(y-s)}&im_{0}(s)e^{\frac{i}{2}(z-1/z)(y-s)}\end{array} \right)n(s,z)ds\] \[-\int_{y}^{+\infty}\frac{i}{2}(1-1/z^{2})\frac{m_{0}(s)e^{-i\frac {i}{2}(z-1/z)(y-s)}(1+1/z^{2})(y-s)}{4q_{0}^{2}(s)}\left(\begin{array}{cc}0&0 \\ -1&im_{0}(s)\end{array}\right)n(s,z)ds\] \[+\int_{y}^{+\infty}\frac{i}{2}(z-1/z)\frac{-(1+1/z^{2})(y-s)m_{0, x}(s)e^{\frac{i}{2}(z-1/z)(y-s)}}{4q_{0}^{3}(s)}\left(\begin{array}{cc}0&0\\ 1&0\end{array}\right)n(s,z)ds. 
\tag{2.23}\] The \(C_{b}\) norm of first two integrals in the right side of above equation are controlled by \(\|m_{0}\|_{2,1}\). For the last integral, integration by parts gives that \[\int_{y}^{+\infty}\frac{i}{2}(z-1/z)\frac{-(1+1/z^{2})(y-s)m_{0,x}( s)e^{\frac{i}{2}(z-1/z)(y-s)}}{4q_{0}^{3}(s)}n_{1}(s,z)ds\] \[= \int_{y}^{+\infty}(1+1/z^{2})e^{\frac{i}{2}(z-1/z)(y-s)}\partial_ {s}\left(\frac{(y-s)m_{0,x}(s)}{4q_{0}^{3}(s)}\right)n_{1}(s,z)ds\] \[\mu_{\pm}(z)=I+\frac{D_{1}}{z}+\mathcal{O}(z^{-2}),\quad\ z\to\infty, \tag{2.24}\] where \[D_{1}(y)=\frac{im_{0,x}}{(1+m_{0}^{2})^{3/2}}\sigma_{2}+\frac{i}{2}\int_{y}^{\pm \infty}\left(\frac{m_{0,x}^{2}}{q_{0}^{6}}+\frac{m_{0}^{2}}{q_{0}^{2}}\right)ds \sigma_{3}. \tag{2.25}\] By Abel's formula, \(\det\mu^{\pm}\equiv 1\). This asymptotics behavior together with (2.15) also imply that \[\mu_{\pm}(0;y)=F(y)^{-2}=\frac{1}{q_{0}(y)}I+\frac{im_{0}(y)}{q_{0}(y)}\sigma_ {1}. \tag{2.26}\] To analyze the property near \(z=\pm i\), we let \(U(\pm i)\) be a bounded closed neighborhood of \(z=\pm i\) in \(\mathbb{C}\setminus\{0\}\) and define a new transformation \[\mu^{0,\pm}(z)\triangleq\mu^{0,\pm}(z;x)=\Phi_{\pm}(z)e^{\frac{k}{2}x\sigma_{ 3}}, \tag{2.27}\] which leads to two Volterra type integrals \[\mu^{0,\pm}(z;x)=I+\int_{\pm\infty}^{x}e^{-\frac{i}{4}(z-\frac{1}{z})(x-s)\hat{ \sigma}_{3}}L_{0}(s,z)\mu^{0,\pm}(z;s)ds. \tag{2.28}\] Then \(\mu^{0,\pm}\) admit the following proposition. **Proposition 2.2**.: _Suppose \(m_{0}\in H^{2,1}(\mathbb{R})\). Then for \(\forall x\in\mathbb{R}\), \(\mu_{1}^{0,-}(\cdot;x)\) and \(\mu_{2}^{0,+}(\cdot;x),\) exist uniquely in \(L^{\infty}(U(i))\), and \(\mu_{1}^{0,+}(\cdot;x)\) and \(\mu_{2}^{0,-}(\cdot;x),\) exist uniquely in \(L^{\infty}(U(-i))\), respectively. Moreover, they satisfy the same limits,_ \[\lim_{z\to i\ in\ U(i)}\left(\mu_{1}^{0,-},\ \mu_{2}^{0,+} \right)=\lim_{z\to-i\ in\ U(-i)}\left(\mu_{1}^{0,+},\ \mu_{2}^{0,-}\right)=I, \tag{2.29}\] \[\lim_{z\to i\ in\ U(i)}\frac{\left(\mu_{1}^{0,-},\ \mu_{2}^{0,+} \right)-I}{z-i}=\lim_{z\to-i\ in\ U(-i)}\frac{\left(\mu_{1}^{0,+},\ \mu_{2}^{0,-}\right)-I}{z-i}=\mu^{0,(1)}, \tag{2.30}\] _where_ \[\mu^{0,(1)}=\left(\begin{array}{cc}0&-\frac{1}{2}(u_{0}+u_{0,x})\\ -\frac{1}{2}(u_{0}-u_{0,x})&0\end{array}\right).\] Proof.: We only present the proof of \(\mu_{1}^{0,+}\). From (2.28), it admits \[\mu_{1}^{0,+}=e_{1}+\mathcal{T}_{0}\mu_{1}^{0,+}, \tag{2.31}\] where \(\mathcal{T}_{0}\) is a integral operator on \(L^{\infty}(U(i))\) with integral kernel \[K_{0}(x,s,z)=\frac{m_{0}(s)}{4}(z+1/z)\left(\begin{array}{cc}0&1 \\ -e^{\frac{i}{2}(z-\frac{1}{z})(x-s)}&0\end{array}\right),\ x<s.\] Analogous to the proof of Proposition 2.1, \(I-\mathcal{T}_{0}\) is invertible with estimate \(\|I-\mathcal{T}_{0}\|\leq C(\|m_{0}\|,x,U(i))\). And it immediately comes that \(\lim_{z\to i\ in\ U(i)}\mu^{0}=I\). Furthermore, \[\frac{\mu_{1}^{0,+}-e_{1}}{z-i}=\frac{z+i}{4z}\int_{+\infty}^{x}m_{0}(s) \left(\begin{array}{cc}0&1\\ -e^{\frac{i}{2}(z-\frac{1}{z})(x-s)}&0\end{array}\right)\mu_{1}^{0,+}ds.\] Again by Lebesgue's dominated convergence theorem, \[\lim_{z\to i\ in\ U(i)}\frac{\mu_{1}^{0,+}-e_{1}}{z-i} =\frac{1}{2}\int_{+\infty}^{x}m_{0}(s)\lim_{z\to i\ in\ U(i)} \left(\begin{array}{cc}0&1\\ -e^{\frac{i}{2}(z-\frac{1}{z})(x-s)}&0\end{array}\right)\mu_{1}^{0,+}ds\] \[=\frac{1}{2}\int_{+\infty}^{x}m_{0}(s)\left(\begin{array}{cc}0 \\ -e^{-(x-s)}\end{array}\right)ds=-\frac{1}{2}(u-u_{x}).\] This completes the proof of Proposition 2.2. 
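The quantities \(u_{0}\pm u_{0,x}\) appearing in the limit (2.30) are obtained from \(m_{0}\) by inverting the Helmholtz operator in \(m=u-u_{xx}\), whose Fourier symbol is \(1+\xi^{2}\). The following sketch (an illustration on a periodic grid, not code from the paper) performs this inversion numerically.

```python
import numpy as np

def u_from_m(m, dx):
    """Recover u and u_x from m = u - u_xx on a uniform periodic grid.

    In Fourier space, m_hat = (1 + xi^2) * u_hat, so u_hat = m_hat / (1 + xi^2).
    """
    xi = 2.0 * np.pi * np.fft.fftfreq(m.size, d=dx)
    u_hat = np.fft.fft(m) / (1.0 + xi**2)
    u = np.fft.ifft(u_hat).real
    u_x = np.fft.ifft(1j * xi * u_hat).real
    return u, u_x

# Self-consistency check with a smooth, rapidly decaying profile.
x = np.linspace(-20.0, 20.0, 1024, endpoint=False)
u_exact = np.exp(-x**2)
xi = 2.0 * np.pi * np.fft.fftfreq(x.size, d=x[1] - x[0])
m = np.fft.ifft((1.0 + xi**2) * np.fft.fft(u_exact)).real  # exact m = u - u_xx
u_rec, _ = u_from_m(m, x[1] - x[0])
print(np.max(np.abs(u_rec - u_exact)))  # ~ machine precision
```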
The relations (2.10) and (2.27) lead to \[\mu_{\pm}(z)=F^{-1}(x)\mu^{0,\pm}(z)e^{\frac{i}{4}(z-\frac{1}{z})c_{\pm}(x)\sigma_{3}}, \tag{2.32}\] where \[c_{\pm}(x)=\int_{\pm\infty}^{x}(q_{0}-1)dy. \tag{2.33}\]

### Reflection coefficient

Since \(\Phi_{\pm}(z;y)\) are two fundamental matrix solutions of the Lax pair (2.3), there exists a linear relation between \(\Phi_{+}(z;y)\) and \(\Phi_{-}(z;y)\) for \(z\in\mathbb{R}\), namely \[\Phi_{-}(z;y)=\Phi_{+}(z;y)S(z), \tag{2.34}\] where \(S(z)\) is the scattering matrix \[S(z)=\left(\begin{array}{cc}a(z)&-\overline{b(\bar{z})}\\ b(z)&\overline{a(\bar{z})}\end{array}\right),\ \ \ \ \det[S(z)]=1.\] Combining with the transformation (2.10), the equation (2.34) becomes \[\mu_{-}(z)=\mu_{+}(z)e^{-\frac{i}{4}(z-\frac{1}{z})y\hat{\sigma}_{3}}S(z). \tag{2.35}\] From (2.14), (2.15) and (2.35), it is shown that \(S(z)\) has the following symmetry reductions \[S(z)=\overline{S(\bar{z}^{-1})}=\sigma_{3}S\left(-z^{-1}\right)\sigma_{3}. \tag{2.36}\] Furthermore, by (2.35), \(a(z)\) and \(b(z)\) can be expressed by \(\mu_{\pm}(z)\) at \(x=0\) as \[a(z)=\mu_{-}^{11}(z;0)\overline{\mu_{+}^{11}(z;0)}+\mu_{-}^{21}(z;0)\overline{\mu_{+}^{21}(z;0)}, \tag{2.37}\] \[\overline{b(z)}=\mu_{-}^{11}(z;0)\mu_{+}^{21}(z;0)-\mu_{-}^{21}(z;0)\mu_{+}^{11}(z;0). \tag{2.38}\] Hence \(a(z)\) is analytic in \(\mathbb{C}^{+}\) and continuous on \(\mathbb{R}\). From (2.24) and (2.37), we obtain the asymptotics of \(a(z)\) and \(b(z)\), \[a(z)=1+\mathcal{O}(z^{-1}),\ \ \ \ b(z)=\mathcal{O}(z^{-1}),\ \ \ \ z\to\infty. \tag{2.39}\] On the other hand, taking \(z\to i\) in (2.37) and combining the expansions in (2.32) and (2.30), we get the behavior of \(a(z)\) at \(z=i\), \[a(z)=e^{\frac{1}{2}\int_{\mathbb{R}}(q_{0}-1)dx}\left(1+\mathcal{O}\left((z-i)^{2}\right)\right),\ \ \text{as}\ z\to i. \tag{2.40}\]

The function \(a(z)\) may have zeros on \(\mathbb{C}^{+}\cup\mathbb{R}\); each such zero corresponds to an eigenvalue or a resonance of the spectral problem (2.3). From (2.13) and (2.35), it follows that \[a(z)=1+\int_{\mathbb{R}}\left(P\mu_{-}\right)_{11}dx, \tag{2.41}\] with \(P\) defined in (2.11). Thus \(\|a-1\|_{\infty}\) is controlled by the \(L^{\infty}\) norm of the Jost functions, so by Proposition 2.1, \(\|a-1\|_{\infty}\) is controlled by \(\|m_{0}\|_{H^{1,1}}\). When \(\|m_{0}\|_{H^{1,1}}\) is sufficiently small, \(a\) has no zeros on \(\mathbb{C}^{+}\cup\mathbb{R}\). We define the _reflection coefficients_ by \[r(z)=\frac{b(z)}{a(z)},\ \ \ \tilde{r}(z)=\frac{b(z)}{a^{*}(z)},\ \ \ \ z\in\mathbb{R}. \tag{2.42}\] From the symmetry of \(a\) and \(b\) in (2.36), \(r(z)\) and \(\tilde{r}(z)\) admit the symmetry reductions \[r(z)=\overline{r(z^{-1})}=r(-z^{-1}),\ \ \ \tilde{r}(z)=\overline{\tilde{r}(z^{-1})}=\tilde{r}(-z^{-1}).\] In addition, the asymptotic behavior of \(a\), \(b\) in (2.39) shows that \(r,\ \tilde{r}=\mathcal{O}(z^{-1}),\ z\to\infty\). The symmetry in (2.42) then leads to \(r(0)=\tilde{r}(0)=0\). The above symmetries of \(r\) and \(\tilde{r}\) also imply that \(r(z(k))\) and \(\tilde{r}(z(k))\) are well-defined as functions of \(k\) with \(k=z-1/z\).

**Proposition 2.3**.: _If the initial data \(m_{0}\in H^{2,1}(\mathbb{R})\) is such that the Lax pair (2.3) has no resonances or eigenvalues, then the reflection coefficients \(r,\ \tilde{r}\in H^{1,1}(\mathbb{R}_{k})\)._

Proof.: The proof is given by taking \(r\) as an example, since \(r\) and \(\tilde{r}\) have the same properties.
Noting that for any function \(f(z)\) satisfying \(f(z)=f(-1/z)\), by denoting \(z_{+}(k)=(k+\sqrt{k^{2}+4})/2\), \(z_{-}(k)=(k-\sqrt{k^{2}+4})/2\), it is shown that \[\int_{\mathbb{R}}|f(z)|^{p}dz=\int_{\mathbb{R}}\frac{|f(k)|^{p}}{1+z_{-}(k)^{-2}}dk+\int_{\mathbb{R}}\frac{|f(k)|^{p}}{1+z_{+}(k)^{-2}}dk=\int_{\mathbb{R}}|f(k)|^{p}dk.\] Together with \(|\partial_{k}f|=|\partial_{z}f||\partial z/\partial k|\leq C|\partial_{z}f|\), it is sufficient to prove that \(k(\cdot)r\), \(k(\cdot)r^{\prime}\in L^{2}(\mathbb{R})\). Under our assumption that there are no eigenvalues or resonances, \(1/a\) is bounded on \(\mathbb{R}\). Moreover, Proposition 2.1 gives that \[\|(\cdot)b\|_{2}\leq\|\mu_{-}^{11}\|_{L^{\infty}}\|(\cdot)\mu_{+}^{21}\|_{C_{b}}+\|(\cdot)\mu_{-}^{21}\|_{C_{b}}\|\mu_{+}^{11}\|_{L^{\infty}},\] which leads to \(zr(z)\in L^{2}(\mathbb{R})\). As for the \(z\)-derivative of \(r\), (2.42) also gives that \[r^{\prime}(z)=\frac{a(z)b^{\prime}(z)-a^{\prime}(z)b(z)}{a(z)^{2}},\] where \(a\), \(b\in L^{\infty}(\mathbb{R})\) and \((\cdot)a^{\prime},\ (\cdot)b^{\prime}\in L^{2}(\mathbb{R})\) via Proposition 2.1. Combining \((\cdot)b\), \((\cdot)b^{\prime}\in L^{2}(\mathbb{R})\), it follows that \((\cdot)b\in L^{\infty}(\mathbb{R})\). Then from \((\cdot)^{2}b^{\prime}\in L^{2}(\mathbb{R})\), we arrive at \((\cdot)^{j}r^{\prime}\in L^{2}(\mathbb{R})\) for \(j=0,1\). Finally, from \(r(z)=\overline{r(\bar{z}^{-1})}\), it follows that \(\langle k\rangle r\), \(\langle k\rangle r^{\prime}\in L^{2}(\mathbb{R})\). Thus we obtain the desired result of Proposition 2.3.

The above proposition also shows that the maps \(m_{0}\to a-1\), \(m_{0}\to a^{\prime}\) and \(m_{0}\to b\) are locally Lipschitz continuous from \(H^{2,1}(\mathbb{R})\) to \(L^{\infty}(\mathbb{R})\), \(L^{2,1}(\mathbb{R})\) and \(H^{1,1}(\mathbb{R})\), respectively. Therefore, we finally arrive at the following lemma.

**Lemma 2.1**.: _Assume that \(m_{0}\in H^{2,1}(\mathbb{R})\) is such that the Lax pair (2.3) has no resonances or eigenvalues. Then there exist two reflection coefficients \(r\) and \(\tilde{r}\) determined by \(m_{0}\), and the two maps:_ \[m_{0}\in H^{2,1}(\mathbb{R})\longrightarrow r\in H^{1,1}(\mathbb{R}_{k}),\text{ and}\] \[m_{0}\in H^{2,1}(\mathbb{R})\longrightarrow\tilde{r}\in H^{1,1}(\mathbb{R}_{k}),\] _are locally Lipschitz continuous._

Note that when \(m_{0}\equiv 0\), it follows that \(\tilde{r}=r\equiv 0\). Thus the following corollary is obtained from Lemma 2.1 directly.

**Corollary 2.1**.: _If the initial data \(m_{0}\in H^{2,1}(\mathbb{R})\) admits no eigenvalues or spectral singularities and \(\|m_{0}\|_{H^{2,1}}\leq\rho_{0}\) for some \(\rho_{0}>0\), then the reflection coefficients \(r,\ \tilde{r}\in H^{1,1}(\mathbb{R}_{k})\) with_ \[\|\tilde{r}\|_{H^{1,1}(\mathbb{R}_{k})},\ \|r\|_{H^{1,1}(\mathbb{R}_{k})}\leq C(\rho_{0})\|m_{0}\|_{H^{2,1}}. \tag{2.43}\]

## 3. Inverse scattering transform

In this section, we develop an analytic framework for the RH problems and determine their solvability based on a given reflection coefficient.

### The RH problems to the Cauchy problem

For convenience, denote the phase function \[\theta(z;y)=-\frac{i}{4}\left(z-\frac{1}{z}\right)y. \tag{3.1}\]
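A direct computation from (3.1) gives \(|e^{-2\theta(z;y)}|=e^{-\frac{y}{2}(1+1/|z|^{2})\,\mathrm{Im}\,z}\), so for \(y>0\) this factor decays exponentially in \(\mathbb{C}^{+}\), which is the analytic mechanism behind the estimates in the following subsections. Here is a short numerical verification of this identity (an illustration added for this discussion, not code from the paper).

```python
import numpy as np

def theta(z, y):
    """Phase function theta(z; y) from (3.1)."""
    return -0.25j * (z - 1.0 / z) * y

rng = np.random.default_rng(0)
for _ in range(3):
    z = complex(rng.normal(), abs(rng.normal()) + 0.1)  # a point in C^+
    y = abs(rng.normal()) + 0.1
    lhs = abs(np.exp(-2.0 * theta(z, y)))
    rhs = np.exp(-0.5 * y * (1.0 + 1.0 / abs(z) ** 2) * z.imag)
    print(f"{lhs:.12f}  {rhs:.12f}")  # the two columns agree
```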
In terms of the Jost functions \(\mu^{\pm}(z;y)\) and the function \(a(z)\), two piecewise analytic matrices are defined as follows, \[M_{l}(z;y)=\left\{\begin{array}{ll}\left(\frac{\mu_{1}^{-}(z;y)}{a(z)},\mu_{2}^{+}(z;y)\right),&\mbox{as $z\in\mathbb{C}^{+}$},\\ \\ \left(\mu_{1}^{+}(z;y),\frac{\mu_{2}^{-}(z;y)}{a^{*}(z)}\right),&\mbox{as $z\in\mathbb{C}^{-}$},\end{array}\right. \tag{3.2}\] \[M_{r}(z;y)=\left\{\begin{array}{ll}\left(\mu_{1}^{-}(z;y),\frac{\mu_{2}^{+}(z;y)}{a(z)}\right),&\mbox{as $z\in\mathbb{C}^{+}$},\\ \\ \left(\frac{\mu_{1}^{+}(z;y)}{a^{*}(z)},\mu_{2}^{-}(z;y)\right),&\mbox{as $z\in\mathbb{C}^{-}$},\end{array}\right. \tag{3.3}\] which solve the following RH problems, respectively.

**The RH problem 1**.:

1. _Analyticity:_ \(M_{l}(z)\) _is meromorphic in_ \(\mathbb{C}\setminus\mathbb{R}\)_;_
2. _Symmetry:_ \[M_{l}(z)=\sigma_{3}\overline{M_{l}(-\bar{z})}\sigma_{3}=\sigma_{2}\overline{M_{l}(\bar{z})}\sigma_{2}=\sigma_{3}M_{l}(0)^{-1}M_{l}(-1/z)\sigma_{3};\] (3.4)
3. _Jump condition:_ \(M_{l}\) _has continuous boundary values_ \([M_{l}]_{\pm}(z)\) _on_ \(\mathbb{R}\) _and_ \[[M_{l}]_{+}(z)=[M_{l}]_{-}(z)V_{l}(z),\ \ \ \ z\in\mathbb{R},\] _where_ \[V_{l}(z)=\left(\begin{array}{cc}1+|r(z)|^{2}&e^{2\theta(z)}\overline{r(z)}\\ e^{-2\theta(z)}r(z)&1\end{array}\right);\] (3.5)
4. _Asymptotic behavior:_ \(M_{l}(z)=I+\mathcal{O}(z^{-1}),\ \ \ z\to\infty\)_._

**The RH problem 2**.:

1. _Same Analyticity, Symmetry and Asymptotic behavior as in RHP 1;_
2. _Jump condition:_ \(M_{r}(z)\) _has continuous boundary values_ \([M_{r}]_{\pm}(z)\) _on_ \(\mathbb{R}\) _and_ \[[M_{r}]_{+}(z)=[M_{r}]_{-}(z)V_{r}(z),\ \ \ \ z\in\mathbb{R},\] (3.6) _where_ \[V_{r}(z)=\left(\begin{array}{cc}1&e^{2\theta(z)}\overline{\tilde{r}(z)}\\ e^{-2\theta(z)}\tilde{r}(z)&1+|\tilde{r}(z)|^{2}\end{array}\right).\] (3.7)

Using \(M_{r}(z)\) and \(M_{l}(z)\) instead of \(M_{r}(z;y)\) and \(M_{l}(z;y)\) underscores the fact that \(M_{r}\) and \(M_{l}\) are matrix functions of \(z\), with \(y\) serving as a parameter. Consequently, in accordance with Liouville's theorem, each of these two RH problems admits at most one solution, so \(M_{l}\) and \(M_{r}\) represent the unique solutions of RH problems 1 and 2, respectively. For convenience, we define \[M(z;y)=\left\{\begin{array}{ll}M_{l}(z;y),&\mbox{as $y\in\mathbb{R}^{+}$},\\ \\ M_{r}(z;y),&\mbox{as $y\in\mathbb{R}^{-}$}.\end{array}\right. \tag{3.8}\] Thus, from the asymptotic behaviors of the functions \(\mu_{\pm}\) and (2.40), we arrive at the following reconstruction formula.

**Lemma 3.1**.: _The reconstruction formula is given by_ \[q(y)=\frac{1}{M_{11}(0)}, \tag{3.9}\] _where_ \[x(y)=y+c_{+}(x)=y-\ln\left(\frac{M_{12}(i)+M_{22}(i)}{M_{11}(i)+M_{21}(i)}\right). \tag{3.10}\] _And as \(z\to\infty\), it is shown from (2.24) that_ \[\lim_{z\to\infty}z\left(M_{l}-I\right)=i\eta\sigma_{2}+\zeta^{(+)}\sigma_{3},\ \lim_{z\to\infty}z\left(M_{r}-I\right)=i\eta\sigma_{2}+\zeta^{(-)}\sigma_{3}, \tag{3.11}\] _where_ \[\eta(y)=\frac{m_{0,x}(y)}{(1+m_{0}(y)^{2})^{3/2}},\ \zeta^{(\pm)}(y)=\frac{i}{2}\int_{y}^{\pm\infty}\left(\frac{m_{0,x}(s)^{2}}{q(s)^{6}}+\frac{m_{0}(s)^{2}}{q(s)^{2}}\right)ds. \tag{3.12}\]

On the other hand, the following proposition reveals that the solutions reconstructed from the two RH problems via the above formulas are actually the same at \(y=0\).
**Proposition 3.1**.: _For \(y\in\mathbb{R}^{+}\), denote \(q_{l}\), \(\eta_{l}\) and \(\zeta_{l}\) as those recovered from \(M_{l}(z;y)\), and for \(y\in\mathbb{R}^{-}\), denote \(q_{r}\), \(\eta_{r}\) and \(\zeta_{r}\) as those recovered from \(M_{r}(z;y)\). Then \(\eta_{l}(0)=\eta_{r}(0)\), \(\zeta_{l}(0)=\zeta_{r}(0)\) and \(q_{l}(0)=q_{r}(0)\)._

Proof.: Denote \[\tilde{a}(z)=\left\{\begin{array}{ll}a(z),&\mbox{as $z\in\mathbb{C}^{+}$},\\ \\ a^{*}(z)^{-1},&\mbox{as $z\in\mathbb{C}^{-}$},\end{array}\right.\qquad\tilde{a}\to 1,\ |z|\to\infty. \tag{3.13}\] Then it immediately follows that \(M_{l}(z;y)=M_{r}(z;y)\tilde{a}(z)^{-\sigma_{3}}\). Thus the result is obtained from (2.39), (2.40), \(a(0)=1\) and Lemma 3.1.

### Solvability of the RH problems

In this subsection, we first prove the solvability of the RH problem 1 for \(M_{l}\) with \(y\in\mathbb{R}^{+}\), under the given reflection coefficient \(r\in H^{1,1}(\mathbb{R}_{k})\). We first give the following definition. For any function \(f(z)\in L^{p}(\mathbb{R})\), \(1\leq p<\infty\), the Cauchy operator \(\mathcal{C}\) is defined by \[\mathcal{C}f(z)=\frac{1}{2\pi i}\int_{\mathbb{R}}\frac{f(s)}{s-z}ds,\ z\in\mathbb{C}\setminus\mathbb{R}. \tag{3.14}\] The function \(\mathcal{C}f\) is analytic off the real line such that \(\mathcal{C}f(\cdot+i\gamma)\) is in \(L^{p}(\mathbb{R})\) for each \(0\neq\gamma\in\mathbb{R}\). When \(z\) approaches a point on the real line transversely from the upper or lower half plane, that is, if \(\gamma\to\pm 0\) in the following expression, the Cauchy operator \(\mathcal{C}\) becomes the Cauchy projection operator \(\mathcal{C}_{\pm}\): \[\mathcal{C}_{\pm}f(z)=\lim_{\gamma\to 0}\frac{1}{2\pi i}\int_{\mathbb{R}}\frac{f(s)}{s-(z\pm i\gamma)}ds,\ z\in\mathbb{R}, \tag{3.15}\] which admits \[(\mathcal{C}_{\pm}f)^{\wedge}(z)=\pm X_{\pm}(z)\hat{f}(z), \tag{3.16}\] where \(X_{\pm}\) denotes the characteristic function of \(\mathbb{R}^{\pm}\). For any \(f\in L^{p}(\mathbb{R})\), \(1\leq p<\infty\), the Cauchy integral \(\mathcal{C}f\) is analytic off the real line, decays to zero as \(|z|\to\infty\), and approaches \(\mathcal{C}_{\pm}f\) almost everywhere when a point \(z\in\mathbb{C}^{\pm}\) approaches a point on the real axis along any non-tangential contour from \(\mathbb{C}^{\pm}\) [13]. If \(1<p<\infty\), then there exists a positive constant \(C_{p}\) such that \[\parallel\mathcal{C}_{\pm}f\parallel_{L^{p}}\leq C_{p}\parallel f\parallel_{L^{p}}. \tag{3.17}\] Moreover, \(C_{2}=1\). When \(f\in L^{1}(\mathbb{R})\), as \(z\to\infty\), \(\mathcal{C}f(z)=\mathcal{O}(z^{-1})\). For \(z\in\mathbb{R}\), denote \(w=w_{+}+w_{-}\) with \[w_{+}(z;y)=\left(\begin{array}{cc}0&0\\ r(z)e^{-2\theta(z;y)}&0\end{array}\right),\ w_{-}(z;y)=\left(\begin{array}{cc}0&\bar{r}(z)e^{2\theta(z;y)}\\ 0&0\end{array}\right). \tag{3.18}\]
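Relation (3.16) translates into a very simple numerical recipe for the Cauchy projection operators: multiply the Fourier transform by the characteristic function of a half-line. The sketch below (illustrative only, on a periodic grid; the assignment of half-lines to \(\mathcal{C}_{\pm}\) depends on the Fourier convention used) checks the Plemelj identity \(\mathcal{C}_{+}f-\mathcal{C}_{-}f=f\), which is exact here for mean-zero data since the zero mode belongs to neither projection.

```python
import numpy as np

def cauchy_projections(f, dx):
    """Approximate C_+ f and C_- f on a uniform periodic grid via (3.16)."""
    xi = np.fft.fftfreq(f.size, d=dx)
    f_hat = np.fft.fft(f)
    c_plus = np.fft.ifft(np.where(xi > 0, f_hat, 0.0))
    c_minus = -np.fft.ifft(np.where(xi < 0, f_hat, 0.0))
    return c_plus, c_minus

x = np.linspace(-30.0, 30.0, 2048, endpoint=False)
f = np.sin(x) * np.exp(-x**2 / 8.0)  # smooth, decaying, mean-zero test function
cp, cm = cauchy_projections(f, x[1] - x[0])
print(np.max(np.abs((cp - cm) - f)))  # ~ machine precision
```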
In view of the Beals-Coifman theorem in [1], the solution of RH problem 1 is given by \[M(z;y)= I+\mathcal{C}(\mu w)(z;y),\ z\in\mathbb{C} \tag{3.19}\] and it exists if and only if there exists a solution \(\mu(z;y)\) of the Fredholm integral equation \[\mu(z;y)= I+\mathcal{C}_{w}(I)(z;y)+\mathcal{C}_{w}(\mu-I)(z;y), \tag{3.20}\] with \[\mathcal{C}_{w}f(z;y)=\mathcal{C}_{+}(fw_{-})(z;y)+\mathcal{C}_{-}(fw_{+})(z;y).\] A simple calculation gives that \(\|\mathcal{C}_{w}I\|_{2}\leq C(\|r\|_{2}).\) We rewrite (3.20) as \[\mu(z;y)-I=\mathcal{K}(I)(z;y)+\mathcal{K}(\mu-I)(z;y),\ z\in\mathbb{R}, \tag{3.21}\] where \(\mathcal{K}\) is an operator on \(L^{2}(\mathbb{R})\) such that for any matrix function \(f=(f_{1},f_{2})\in L^{2}(\mathbb{R})\), \[\mathcal{K}(f)=(\mathcal{A}(f_{2}),\mathcal{B}(f_{1})), \tag{3.22}\] \[\mathcal{A}(f_{2})=\mathcal{C}_{-}(re^{-2\theta}f_{2}),\ \ \ \ \mathcal{B}(f_{1})(z;y)=\mathcal{C}_{+}(\bar{r}e^{2\theta}f_{1}). \tag{3.23}\] \(\mathcal{K}\) can also be regarded as an operator on row functions in \(L^{2}(\mathbb{R})\). The following proposition is given to show the invertibility of \(I-\mathcal{K}\).

**Proposition 3.2**.: _Assume that \(r\in H^{s}(\mathbb{R})\), \(s>1/2\). Then \(I-\mathcal{K}\) is a bounded Fredholm operator: \(L^{2}(\mathbb{R})\to L^{2}(\mathbb{R})\) for \(y\in\mathbb{R}^{+}\) with index zero. Moreover, \((I-\mathcal{K})^{-1}\) exists and is also a bounded linear operator: \(L^{2}(\mathbb{R})\to L^{2}(\mathbb{R})\). And there exists a constant \(C\) such that_ \[\parallel(I-\mathcal{K})^{-1}\parallel\leq C. \tag{3.24}\]

Proof.: We take the first row of (3.20) as an example to give the proof and denote it as \((\mu_{11}-1,\mu_{12})\). Recall the definition of \(\theta\) in (3.1). Then \[|e^{-2\theta(z_{j})}|=e^{-\frac{y}{2}(1+1/|z_{j}|^{2})\mathrm{Im}z_{j}}=|e^{2\theta(\bar{z}_{j})}|. \tag{3.25}\] For any \(f=(f_{1},f_{2})\in L^{2}(\mathbb{R})\), \[\mathcal{K}^{2}f=\left(\mathcal{A}\mathcal{B}f_{1},\mathcal{B}\mathcal{A}f_{2}\right).\] Taking \(\mathcal{A}\mathcal{B}\) as an example, it is found that \[(\mathcal{A}\mathcal{B}(f_{1}^{\vee}))^{\wedge}(z)=\left(\mathcal{C}_{-}(\mathcal{C}_{+}(f_{1}^{\vee}w_{-})w_{+})\right)^{\wedge}(z)=\int_{\mathbb{R}}K(z,\lambda;y)f_{1}(\lambda)d\lambda,\] where \(K(z,\lambda;y)\) is a scalar function with \[K(z,\lambda;y)=-X_{-}(z)\int_{\mathbb{R}}X_{+}(s)(re^{-2\theta(\cdot;y)})^{\wedge}(z-s)(\bar{r}e^{2\theta(\cdot;y)})^{\wedge}(s-\lambda)ds.\] Since \[\|K(\cdot,\cdot;y)\|_{L^{2}(\mathbb{R}\times\mathbb{R})}\leq\|(re^{-2\theta(\cdot;y)})^{\wedge}\|_{2,s}\|(\bar{r}e^{2\theta(\cdot;y)})^{\wedge}\|_{2,s}\leq C(y)\|r\|_{H^{s}}^{2}\] for a positive constant \(C(y)\) which is finite for every \(y\in\mathbb{R}^{+}\), it follows that \(\mathcal{A}\mathcal{B}=\mathcal{C}_{-}(\mathcal{C}_{+}(\cdot w_{-})w_{+})\) is a Hilbert-Schmidt operator. This in turn implies that \(\mathcal{K}^{2}\) is compact. Therefore, \(I-\mathcal{K}^{2}\) is a Fredholm operator with index zero, and so is \(I-\mathcal{K}\). Therefore, to prove invertibility of \(I-\mathcal{K}\), it is sufficient to prove that \(I-\mathcal{K}\) is injective. If there exists a matrix function \(f\in L^{2}(\mathbb{R})\) such that \((I-\mathcal{K})f=0\), we denote the first row of \(f\) as \((f_{11},f_{12})\). Define \[g_{11}=\mathcal{C}(re^{-2\theta}f_{12}),\ g_{12}=\mathcal{C}(\bar{r}e^{2\theta}f_{11}). \tag{3.26}\]
Thus, it appears that for \(z\in\mathbb{R}\), \[[g_{11}]_{-}(z)=f_{11}(z),\qquad[g_{11}]_{+}(z)=f_{11}(z)+re^{-2\theta}f_{12}(z),\] \[[g_{12}]_{-}(z)=f_{12}(z)-\bar{r}e^{2\theta}f_{11}(z),\qquad[g_{12}]_{+}(z)=f_{12}(z).\] And the function \(g_{11}g_{11}^{*}+g_{12}g_{12}^{*}\) (where \(g^{*}(z):=\overline{g(\bar{z})}\)) is an analytic function on \(\mathbb{C}^{+}\). Applying the Cauchy-Goursat theorem gives that \[0=\int_{\mathbb{R}_{+}}g_{11}(z)g_{11}^{*}(z)+g_{12}(z)g_{12}^{*}(z)dz,\] where \(\mathbb{R}_{+}\) is used to denote \(\mathbb{R}\) when it is viewed as the boundary of \(\mathbb{C}^{+}\). On the other hand, \[0=\int_{\mathbb{R}_{+}}g_{11}(z)g_{11}^{*}(z)+g_{12}(z)g_{12}^{*}(z)dz=\int_{\mathbb{R}}|f_{11}(z)|^{2}+|f_{12}(z)|^{2}dz,\] which implies that \(\|f_{11}\|_{2}=\|f_{12}\|_{2}=0\). The second row of \(f\) yields the same result, which finally implies that \(f=0\), \(a.e.\) on \(\mathbb{R}\).

The last step is to prove that \(\|(I-\mathcal{K})^{-1}\|\) is uniformly bounded for \(y\in\mathbb{R}^{+}\). Similarly, we only estimate the action on row functions. For any matrix function \(h=(h_{11},h_{12})\in L^{2}(\mathbb{R})\), denote \[f^{(+)}=(f_{1}^{(+)},f_{2}^{(+)})=(I-\mathcal{K})^{-1}(\mathcal{C}_{-}h_{11},\mathcal{C}_{+}h_{12}),\] \[f^{(-)}=(f_{1}^{(-)},f_{2}^{(-)})=(I-\mathcal{K})^{-1}(\mathcal{C}_{+}h_{11},\mathcal{C}_{-}h_{12}).\] Then it holds that \(f^{(+)}-f^{(-)}=(I-\mathcal{K})^{-1}(-h_{11},h_{12})\). Denote the analytic functions on \(\mathbb{C}\setminus\mathbb{R}\) \[g^{(\pm)}_{11}=\mathcal{C}(re^{-2\theta}f^{(\pm)}_{2}),\ \ \ \ g^{(\pm)}_{12}=\mathcal{C}(\bar{r}e^{2\theta}f^{(\pm)}_{1}). \tag{3.27}\] For \(f^{(+)}\), the difference is that in this case, for \(z\in\mathbb{R}\), \[[g^{(+)}_{11}]_{-}(z)=f^{(+)}_{1}(z)-\mathcal{C}_{-}h_{11}(z),\] \[[g^{(+)}_{11}]_{+}(z)=f^{(+)}_{1}(z)-\mathcal{C}_{-}h_{11}(z)+re^{-2\theta}f^{(+)}_{2}(z),\] \[[g^{(+)}_{12}]_{-}(z)=f^{(+)}_{2}(z)-\mathcal{C}_{+}h_{12}(z)-\bar{r}e^{2\theta}f^{(+)}_{1}(z),\] \[[g^{(+)}_{12}]_{+}(z)=f^{(+)}_{2}(z)-\mathcal{C}_{+}h_{12}(z).\] Performing the same manipulations as above, we obtain that \[0=\int_{\mathbb{R}_{+}}g^{(+)}_{11}(z)(g^{(+)}_{11}+\mathcal{C}h_{11})^{*}(z)+(g^{(+)}_{12}+\mathcal{C}h_{12})(z)(g^{(+)}_{12})^{*}(z)dz\] \[=\int_{\mathbb{R}}|f^{(+)}_{1}(z)|^{2}+|f^{(+)}_{2}(z)|^{2}-(f^{(+)}_{2}(z)\overline{\mathcal{C}_{+}h_{12}(z)}+\mathcal{C}_{-}h_{11}(z)\overline{f^{(+)}_{1}(z)})dz,\] which leads to \[\|f^{(+)}\|^{2}_{2}=\int_{\mathbb{R}}|f^{(+)}_{1}(z)|^{2}+|f^{(+)}_{2}(z)|^{2}dz=\Big{|}\int_{\mathbb{R}}f^{(+)}_{2}(z)\overline{\mathcal{C}_{+}h_{12}(z)}+\mathcal{C}_{-}h_{11}(z)\overline{f^{(+)}_{1}(z)}dz\Big{|}\leq\|f^{(+)}\|_{2}\|h\|_{2}.\] Therefore, it is concluded that \(\|f^{(+)}\|_{2}\leq\|h\|_{2}\). For \(f^{(-)}\), note that for \(z\in\mathbb{R}\), \[[g^{(-)}_{11}]_{-}(z)=f^{(-)}_{1}(z)-\mathcal{C}_{+}h_{11}(z),\] and for \(z\in\mathbb{C}\setminus\mathbb{R}\), \[\mathcal{C}([g^{(-)}_{11}]_{-})(z)=\mathcal{C}(f^{(-)}_{1}-\mathcal{C}_{+}h_{11})(z),\qquad\text{with}\qquad\mathcal{C}([g^{(-)}_{11}]_{-})(z)=0\ \text{for}\ z\in\mathbb{C}^{+},\] which implies that \(\mathcal{C}_{-}(f^{(-)}_{1})(z)\equiv 0\) and \(f^{(-)}_{1}(z)=\mathcal{C}_{+}h_{11}(z)\). Thus, \(g^{(-)}_{11}\equiv 0\). Similarly, it also follows that \(f^{(-)}_{2}(z)=\mathcal{C}_{-}h_{12}(z)\). Then we arrive at \(\|f^{(-)}\|_{2}\leq\|h\|_{2}\). Therefore, it transpires that \[\|f^{(+)}-f^{(-)}\|_{2}=\|(I-\mathcal{K})^{-1}(-h_{11},h_{12})\|_{2}\leq 2\|h\|_{2},\] which in turn implies \(\|(I-\mathcal{K})^{-1}\|\leq 2\).
Thus we arrive at the desired result of Proposition 3.2.

As a consequence of this proposition, the solution of (3.20) exists with \[\mu(z;y)=I+(I-\mathcal{K})^{-1}(\mathcal{K}(I))(z;y), \tag{3.28}\] and \[\|\mu(z;y)-I\|_{2}\leq 2\|\mathcal{K}(I)\|_{2}\leq 2\|r\|_{2}. \tag{3.29}\] Therefore, it follows from (3.19) that \[\|M(z;y)-I\|_{2}=\|\mathcal{C}(w)(z;y)+\mathcal{C}((\mu-I)w)(z;y)\|_{2}\leq(2\|r\|_{\infty}+1)\|r\|_{2}. \tag{3.30}\] When \(y\in\mathbb{R}^{-}\), following the same process as for \(M_{l}\), \(M_{r}\) admits the similar property shown in the proposition below.

**Proposition 3.3**.: _If \(\tilde{r}\in H^{s}(\mathbb{R})\), \(s>1/2\), and \(y\in\mathbb{R}^{-}\), then there exists a unique \(M_{r}\) with the estimate_ \[\|\;M_{r}-I\;\|_{2}\leq\|\tilde{r}\|_{2}(2\|\tilde{r}\|_{\infty}+1). \tag{3.31}\]

_Remark 3.1_.: When eigenvalues exist and are simple, there also exists a unique solution for each of these two RH problems. Denote the solution of RHP 1 with eigenvalues by \(M\). It is inferred that \(|M-I|\) is controlled by \(\|r\|_{H^{s}}\) and the moduli of the eigenvalues. But unlike the case in [31], the locations of the eigenvalues in our case cannot be controlled by \(\|m_{0}\|_{H^{1,1}}\). It is then not easy to control \(|M-I|\) via the initial value \(m_{0}\), which brings difficulties in the estimates of the solution of (1.1). It is therefore necessary to use a small-norm assumption to exclude the existence of eigenvalues.

### Estimates on the solution of RH problems

The aim of this subsection is to give some basic estimates on the solution of the RH problems. Consider the \(y\)-derivative on both sides of equation (3.20), \[\partial_{y}\mu(z;y)=(\partial_{y}\mathcal{K})(I)(z;y)+(\partial_{y}\mathcal{K})(\mu-I)(z;y)+\mathcal{K}(\partial_{y}\mu)(z;y), \tag{3.32}\] where for any \(f\in L^{2}\), \[(\partial_{y}\mathcal{K})f(z;y)=\mathcal{C}_{+}(f\partial_{y}w_{-})(z;y)+\mathcal{C}_{-}(f\partial_{y}w_{+})(z;y)=-\frac{1}{2}\mathcal{C}_{+}(ik(\cdot)fw_{-})(z;y)+\frac{1}{2}\mathcal{C}_{-}(ik(\cdot)fw_{+})(z;y).\] It holds that \[\|(\partial_{y}\mathcal{K})(I)\|_{2}\leq\|k(\cdot)r\|_{2},\hskip 14.226378pt\|(\partial_{y}\mathcal{K})(\mu-I)\|_{2}\leq\|\mu-I\|_{2}\|k(\cdot)r\|_{\infty}.\] It is thereby inferred that \(\partial_{y}\mu\) exists with \(\partial_{y}\mu=(I-\mathcal{K})^{-1}((\partial_{y}\mathcal{K})(I)+(\partial_{y}\mathcal{K})(\mu-I))\), and \[\|\partial_{y}\mu\|_{2}\leq\|\mathcal{C}_{+}(k(\cdot)\bar{r}e^{2\theta})\|_{2}+\|\mathcal{C}_{-}(k(\cdot)re^{-2\theta})\|_{2}+4(\|\mathcal{C}_{+}(\bar{r}e^{2\theta})\|_{2}+\|\mathcal{C}_{-}(re^{-2\theta})\|_{2})\|k(\cdot)r\|_{\infty}. \tag{3.33}\] Therefore, we arrive at the \(y\)-derivative of \(M(z;y)\) with \[\partial_{y}M(z;y)=\mathcal{C}((\partial_{y}\mu)w)(z;y)+\frac{i}{2}\mathcal{C}(k(\cdot)\mu w)(z;y). \tag{3.34}\] Next, we provide estimates of \(M\) and its \(y\)-derivative at \(z=i\) and \(z=0\), which will be used later. Note that \(M\) has no jump at \(z=0\). Then \(M(0)\) is well-defined. \(M\) is analytic at \(z=i\) with expansion \[M(z)=M(i)+M^{i,(1)}(z-i)+\mathcal{O}((z-i)^{2}). \tag{3.35}\] The following proposition shows that \(M(i)\) and \(M(0)\) are uniformly bounded for \(y\in\mathbb{R}^{+}\).
**Proposition 3.4**.: _If \(r\in H^{1,1}(\mathbb{R}_{k})\) satisfies \(\|\langle\cdot\rangle r\|_{L^{2}(\mathbb{R}_{k})\cap L^{\infty}(\mathbb{R}_{k})}\leq\rho_{0}\) for some \(\rho_{0}>0\), then there exists a positive constant \(C(\rho_{0})\) such that_ \[|M(i)-I|,\ |M(0)-I| \leq C(\rho_{0})\|r\|_{L^{2}(\mathbb{R}_{k})\cap L^{\infty}(\mathbb{R}_{k})},\] \[|\partial_{y}M(i)|,\ |\partial_{y}M^{i,(1)}(y)| \leq C(\rho_{0})\|(\cdot)r\|_{L^{2}(\mathbb{R}_{k})\cap L^{\infty}(\mathbb{R}_{k})}.\] Proof.: Applying the Hölder inequality yields the result. For example, \[|M(i;y)-I| =\Big{|}\frac{1}{2\pi}\int_{\mathbb{R}}\frac{(\mu(s;y)-I)w(s;y)}{s-i}ds+\frac{1}{2\pi}\int_{\mathbb{R}}\frac{w(s;y)}{s-i}ds\Big{|}\] \[\leq(\|r\|_{\infty}\|\mu-I\|_{2}+\|r\|_{2})\|1/(\cdot-i)\|_{2}\] \[\leq C\|r\|_{2}(2\|r\|_{\infty}+1)\leq C(\rho_{0})\|r\|_{L^{2}(\mathbb{R}_{k})\cap L^{\infty}(\mathbb{R}_{k})}.\] The others can be estimated in the same way. This completes the proof of Proposition 3.4. The following corollary is obtained directly from the above proposition. **Corollary 3.1**.: _There exists a constant \(\epsilon_{0}>0\) such that when \(\|\langle\cdot\rangle r\|_{L^{2}(\mathbb{R}_{k})\cap L^{\infty}(\mathbb{R}_{k})}<\epsilon_{0}\), it follows that_ \[\Big{|}M_{11}(0)-1\Big{|}<1.\] ## 4. The existence of global solutions ### Time evolution of reflection coefficient According to the time spectral problem in the Lax pair (2.3) and the scattering relation (2.34), it is found that the time evolutions of \(a(z;t)\) and \(b(z;t)\) satisfy the equations \[\partial_{t}a(z;t)=0,\ \partial_{t}b(z;t)=-\frac{2iz(z^{2}-1)}{(z^{2}+1)^{2}}b(z;t),\ z\in\mathbb{R}, \tag{4.1}\] which yield \[a(z;t)=a(z;0),\ b(z;t)=e^{-\frac{2iz(z^{2}-1)t}{(z^{2}+1)^{2}}}b(z;0),\ z\in\mathbb{R}. \tag{4.2}\] Hence, one can define the time-dependent reflection coefficient by \[r(z;t)=e^{-\frac{2iz(z^{2}-1)t}{(z^{2}+1)^{2}}}r(z;0),\ z\in\mathbb{R}.\] We then have the estimate in the following lemma, which is obtained by a simple calculation. **Lemma 4.1**.: _When \(r(\cdot;0)\in H^{1,1}(\mathbb{R}_{k})\), it is inferred that \(r(\cdot;t)\in H^{1,1}(\mathbb{R}_{k})\), with_ \[\|r(\cdot;t)\|_{H^{1,1}(\mathbb{R}_{k})}\leq C(t)\|r(\cdot;0)\|_{H^{1,1}(\mathbb{R}_{k})}, \tag{4.3}\] _where \(C(t)>0\) may grow at most polynomially in \(t\) but remains finite for every \(t>0\). Moreover, if \(\tilde{r}(\cdot;0)\in H^{1,1}(\mathbb{R}_{k})\), then for every \(t\in\mathbb{R}^{+}\) it also holds that \(\tilde{r}(z;t)=e^{-\frac{2iz(z^{2}-1)t}{(z^{2}+1)^{2}}}\tilde{r}(z;0)\in H^{1,1}(\mathbb{R}_{k})\)._ Furthermore, we denote by \(M_{l}(z;t,y)\) the solution of the RH problem 1 under the time-dependent reflection coefficient \(r(\cdot;t)\), and by \(M_{r}(z;t,y)\) the solution of the RH problem 2 under the time-dependent reflection coefficient \(\tilde{r}(\cdot;t)\). From Propositions 3.2 and 3.3, the following proposition holds immediately. **Proposition 4.1**.: _Assume that \(r(\cdot;0),\ \tilde{r}(\cdot;0)\in H^{1,1}(\mathbb{R}_{k})\). Then \(M_{l}(z;t,y)\) and \(M_{r}(z;t,y)\) exist uniquely for every \(t\in\mathbb{R}^{+}\)._ For convenience, we define \[M(z;t,y)=\left\{\begin{array}{ll}M_{l}(z;t,y),&\mbox{as $y\in\mathbb{R}^{+}$},\\ &\\ M_{r}(z;t,y),&\mbox{as $y\in\mathbb{R}^{-}$}.\end{array}\right. \tag{4.4}\] The crux of the matter is whether the function reconstituted from the time-dependent RH problem via Lemma 3.1 is the solution of the mCH equation (1.1).
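As a quick consistency check (this computation is ours and is not part of the original argument), the explicit solution (4.2) follows from the linear ODE (4.1), since \[\partial_{t}\Big{(}e^{\frac{2iz(z^{2}-1)t}{(z^{2}+1)^{2}}}b(z;t)\Big{)}=e^{\frac{2iz(z^{2}-1)t}{(z^{2}+1)^{2}}}\Big{(}\partial_{t}b(z;t)+\frac{2iz(z^{2}-1)}{(z^{2}+1)^{2}}b(z;t)\Big{)}=0.\] For \(z\in\mathbb{R}\) the factor \(e^{-\frac{2iz(z^{2}-1)t}{(z^{2}+1)^{2}}}\) is unimodular, so \(|r(z;t)|=|r(z;0)|\) pointwise; this is why \(\|r(\cdot;t)\|_{L^{2}(\mathbb{R}_{k})\cap L^{\infty}(\mathbb{R}_{k})}\) is independent of \(t\), a fact used in the proof of Lemma 4.2 below. The same rational phase admits the partial-fraction decomposition \[2i\frac{z(z^{2}-1)}{(z^{2}+1)^{2}}=\frac{i}{z+i}+\frac{i}{z-i}+\frac{1}{(z+i)^{2}}-\frac{1}{(z-i)^{2}},\] which can be verified by combining the right-hand side over the common denominator \((z^{2}+1)^{2}\), and which is used in the proof of Proposition 4.3.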
In the rest of this subsection, we verify that the reconstruction formula remains valid for \(M_{l}(z;t,y)\) with \(y\in\mathbb{R}^{+}\) and for \(M_{r}(z;t,y)\) with \(y\in\mathbb{R}^{-}\). The first step is to prove the existence of \(\partial_{t}M(z;t,y)\) and \(\partial_{y}M(z;t,y)\). We give the details when \(y\in\mathbb{R}^{+}\). Taking the \(t\)-derivative on both sides of (3.20), we obtain \[\partial_{t}\mu(z;y)=\partial_{t}\mathcal{C}(w)(z;y)+(\partial_{t}\mathcal{K})(\mu-I)(z;y)+\mathcal{K}(\partial_{t}\mu)(z;y), \tag{4.5}\] where \[\|\partial_{t}\mathcal{C}(w)\|_{2}=\left\|\mathcal{C}\left(\frac{z(z^{2}-1)}{(z^{2}+1)^{2}}w\right)\right\|_{2}\leq C\|r\|_{2},\] \[\|(\partial_{t}\mathcal{K})(\mu-I)\|_{2}=\left\|\mathcal{K}\left(\frac{z(z^{2}-1)}{(z^{2}+1)^{2}}(\mu-I)\right)\right\|_{2}\leq C\|r\|_{\infty}\|\mu-I\|_{2}.\] So the function \(\partial_{t}\mu(z;y)\) exists uniquely in \(L^{2}(\mathbb{R})\) with \[\partial_{t}\mu(z;y)=(I-\mathcal{K})^{-1}\left(\partial_{t}\mathcal{C}(w)+(\partial_{t}\mathcal{K})(\mu-I)\right)(z;y),\] which establishes the existence of \(\partial_{t}M(z;t,y)\) with \[\partial_{t}M(z;t,y)=\mathcal{C}(\partial_{t}\mu w)(z;y)+\mathcal{C}((\mu-I)\partial_{t}w)(z;y)+\mathcal{C}(\partial_{t}w)(z;y). \tag{4.6}\] The existence of \(\partial_{y}M(z;t,y)\) is established by a similar analysis. Moreover, the following proposition is obtained in the same way as Proposition 3.4. **Proposition 4.2**.: _Suppose that \(r\in H^{1,1}(\mathbb{R}_{k})\) with \(\|r\|_{H^{1,1}(\mathbb{R}_{k})}\leq\rho_{0}\) for some \(\rho_{0}>0\). Then there exists a positive constant \(C(\rho_{0})\) such that_ \[|\partial_{t}M(i)|,\ |\partial_{t}\partial_{y}M(i)|,\ |\partial_{t}M^{i,(1)}(y)|,\ |\partial_{t}\partial_{y}M^{i,(1)}(y)|\leq C(\rho_{0})\|r\|_{H^{1,1}(\mathbb{R}_{k})}.\] In view of the symmetry of \(M\) in (3.4) and the fact that \(\det M=1\), we denote \[M(0)=\left(\begin{array}{cc}\beta_{0}&\eta_{0}\\ \eta_{0}&\beta_{0}\end{array}\right),\ \ M(i)=\left(\begin{array}{cc}f_{0}&\frac{\eta_{0}}{2f_{0}}\\ \frac{\beta_{0}-1}{\eta_{0}}f_{0}&\frac{\beta_{0}+1}{2f_{0}}\end{array}\right),\ \ M^{i,(1)}=\left(\begin{array}{cc}\frac{\beta_{0}-1}{\eta_{0}}g_{1}&g_{2}\\ g_{1}&\frac{\beta_{0}-1}{\eta_{0}}g_{2}\end{array}\right),\] where \(\eta_{0}\in i\mathbb{R}\), \(\beta_{0},\ f_{0},\ g_{1},\ g_{2}\in\mathbb{R}\). In particular, when \(\eta_{0}=0\), \((\beta_{0}-1)/\eta_{0}\) is understood to be \(0\). It also follows that \(M(i)\sigma_{3}M(i)^{-1}=\sigma_{3}M(0)^{-1}\). We recall the notation \[\eta=\lim_{z\to\infty}z(M(z)-I)_{12}.\] In addition, define \[\Psi(z;y,t)=M(z;y,t)e^{-\frac{i}{4}(z-1/z)y\sigma_{3}+2i\frac{z(z^{2}-1)}{(z^{2}+1)^{2}}t\sigma_{3}}, \tag{4.7}\] which satisfies the proposition below.
**Proposition 4.3**.: \(\Psi\) _defined in (4.7) satisfies the differential equations_ \[\Psi_{y}=A\Psi,\ \ \ \ \Psi_{t}=B\Psi, \tag{4.8}\] _where_ \[A= -\frac{iz}{4}\sigma_{3}+\frac{i}{2}\eta\sigma_{1}+\frac{i}{4z}(\beta_{0}^{2}+\eta_{0}^{2})\sigma_{3}+\frac{1}{2z}\eta_{0}\beta_{0}\sigma_{2},\] \[B= -\frac{1}{(z-i)^{2}}\left(\begin{array}{cc}\beta_{0}&-\eta_{0}\\ \eta_{0}&-\beta_{0}\end{array}\right)+\frac{i}{z-i}\left(\begin{array}{cc}\beta_{0}&-\eta_{0}\\ \eta_{0}&-\beta_{0}\end{array}\right)\] \[-\frac{1}{z-i}\left(\begin{array}{cc}2(\frac{\beta_{0}-1}{\eta_{0}}g_{2}f_{0}+\frac{\eta_{0}}{2f_{0}}g_{1})&-2f_{0}g_{2}-2\frac{\beta_{0}-1}{2f_{0}}g_{1}\\ 2\frac{\beta_{0}-1}{\beta_{0}+1}g_{2}f_{0}+\frac{\beta_{0}+1}{f_{0}}g_{1}&-2(\frac{\beta_{0}-1}{\eta_{0}}g_{2}f_{0}+\frac{\eta_{0}}{2f_{0}}g_{1})\end{array}\right)\] \[+\frac{1}{(z+i)^{2}}\left(\begin{array}{cc}\beta_{0}&-\eta_{0}\\ \eta_{0}&-\beta_{0}\end{array}\right)+\frac{i}{z+i}\left(\begin{array}{cc}\beta_{0}&-\eta_{0}\\ \eta_{0}&-\beta_{0}\end{array}\right)\] \[+\frac{1}{z+i}\left(\begin{array}{cc}-2(\frac{\beta_{0}-1}{\eta_{0}}g_{2}f_{0}+\frac{\eta_{0}}{2f_{0}}g_{1})&2\frac{\beta_{0}-1}{\beta_{0}+1}g_{2}f_{0}+\frac{\beta_{0}+1}{f_{0}}g_{1}\\ -2f_{0}g_{2}-2\frac{\beta_{0}-1}{2f_{0}}g_{1}&2(\frac{\beta_{0}-1}{\eta_{0}}g_{2}f_{0}+\frac{\eta_{0}}{2f_{0}}g_{1})\end{array}\right).\] _Moreover, denoting_ \[\tilde{u} =-\frac{g_{1}}{f_{0}}-2\frac{\beta_{0}-1}{\eta_{0}^{2}}f_{0}g_{2},\] \[\tilde{q} =\frac{1}{\beta_{0}},\ \ \ \ \tilde{m}=\frac{\eta_{0}}{\beta_{0}i},\] _the compatibility condition_ \[A_{t}+AB-B_{y}-BA=0\] _yields the mCH equation in the \((y,t)\) variables. In addition, combining the variable transformation in (3.10), we obtain_ \[x_{y}=\beta_{0},\ \ \ \ \tilde{u}_{x}=\frac{g_{1}}{f_{0}}-2\frac{\beta_{0}-1}{\eta_{0}^{2}}f_{0}g_{2}. \tag{4.9}\] Proof.: The definition of \(\Psi\) implies that the jump of \(\Psi\) is independent of \(y\) and \(t\). Consequently, \(\Psi_{y}\Psi^{-1}\) and \(\Psi_{t}\Psi^{-1}\) have no jump. We analyze \(\Psi_{y}\Psi^{-1}\) first. Equation (4.7) gives \[\Psi_{y}\Psi^{-1}=M_{y}M^{-1}-\frac{i}{4}\left(z-\frac{1}{z}\right)M\sigma_{3}M^{-1},\] which is a meromorphic function, with possible singularities at \(z=0\) and \(z=\infty\). As \(z\to\infty\), \[\Psi_{y}\Psi^{-1}=-\frac{i}{4}z\sigma_{3}+\frac{i}{2}\eta\sigma_{1}+\mathcal{O}(1/z),\] while as \(z\to 0\), \[\Psi_{y}\Psi^{-1}=\frac{i}{4z}M(0)\sigma_{3}M(0)^{-1}+\mathcal{O}(1).\] Therefore, the function \[\Psi_{y}\Psi^{-1}-\frac{i}{4z}M(0)\sigma_{3}M(0)^{-1}+\frac{i}{4}z\sigma_{3}-\frac{i}{2}\eta\sigma_{1}\] is holomorphic in \(\mathbb{C}\) and vanishes as \(z\to\infty\). Then, by Liouville's theorem, it vanishes identically, which leads to the result \(\Psi_{y}=A\Psi\). Similarly, \[\Psi_{t}\Psi^{-1}=M_{t}M^{-1}+2i\frac{z(z^{2}-1)}{(z^{2}+1)^{2}}M\sigma_{3}M^{-1}\] is a meromorphic function, with possible singularities at \(z=\pm i\). From the decomposition \[2i\frac{z(z^{2}-1)}{(z^{2}+1)^{2}}=\frac{i}{z+i}+\frac{i}{z-i}+\frac{1}{(z+i)^{2}}-\frac{1}{(z-i)^{2}},\] we obtain that \(\Psi_{t}=B\Psi\). Using the compatibility condition for the function \(\Psi\) results in the compatibility equation \(A_{t}+AB-B_{y}-BA=0\). Considering this compatibility equation at the singular points of both \(A\) and \(B\), we can derive the algebraic and differential relationships among the coefficients of \(A\) and \(B\). These relationships can be further reduced to the equation (2.7).
Moreover, when we combine the compatibility equation with the expression for the change of variables as presented in equation (3.10), it leads to the mCH equation (1.1). This completes the proof of Proposition 4.3. ### The proof of the main result The main goal of this subsection is to give the proof of Theorem 1.1. In view of the results in Section 3, we use \(M_{l}(z;t,y)\) to recover the solution \(m(t,x(y,t))\) for \(y\in\mathbb{R}^{+}\), and \(M_{r}(z;t,y)\) to recover the solution \(m(t,x(y,t))\) for \(y\in\mathbb{R}^{-}\), through Lemma 3.1. The first step is to give the proof of the lemma below. **Lemma 4.2**.: _Assume that the initial data \(m_{0}\in H^{2,1}(\mathbb{R})\) is such that \(\|m_{0}\|_{H^{2,1}}\) is small. Then there exists a unique global solution \(m(t,y)\) for every \(t\in\mathbb{R}^{+}\) to the Cauchy problem (1.1)-(1.2) for the mCH equation in \(C([0,+\infty);L^{\infty}(\mathbb{R}))\)._ Proof.: When \(m_{0}\in H^{2,1}(\mathbb{R})\), it follows from Lemma 2.1 that \(r(z;0)\in H^{1,1}(\mathbb{R}_{k})\). As shown in Proposition 4.1, there exists a unique solution \(M(z;t,y)\) defined in (4.4) for every \(t\in\mathbb{R}^{+}\), \(y\in\mathbb{R}\). As shown in Theorem 4.1 of [17], there exists a time \(T>0\) such that the initial-value problem (1.1) has a unique solution \(m(t,\cdot)\in C([0,T],H^{2}(\mathbb{R}))\). Hence, the function \(m\) reconstituted from \(M(z;t,y)\) in Lemma 3.1 must be unique. It remains to prove \(m(t,\cdot)\in C([0,+\infty);L^{\infty}(\mathbb{R}))\). It suffices to show that \(q(t,\cdot)=\sqrt{1+m(t,\cdot)^{2}}\in C([0,+\infty);L^{\infty}(\mathbb{R}))\). From Lemma 4.1, for every \(t\in[0,T]\), \(\|r(\cdot;t)\|_{L^{2}(\mathbb{R}_{k})\cap L^{\infty}(\mathbb{R}_{k})}\) is independent of \(t\). The symmetry of \(M(z;t,y)\) in (3.4) implies that \[M_{11}(0;t,y)=M_{22}(0;t,y)\in\mathbb{R},\ \ \ \ M_{12}(0;t,y)=M_{21}(0;t,y)\in i\mathbb{R}.\] Noting that \(\text{det}M\equiv 1\), we thus have \(|M_{11}(0;t,y)|\leq 1\). As shown in Corollary 3.1, when \(\|r(\cdot;0)\|_{L^{2}(\mathbb{R}_{k})\cap L^{\infty}(\mathbb{R}_{k})}<\epsilon_{0}\), it holds that for every \(t\in[0,T]\), \[\Big{|}M_{11}(0;t,y)-1\Big{|}<1.\] Therefore, from (3.9), it appears that \[1\leq|q(t,y)|<(1-|M_{11}(0;t,y)-1|)^{-1}<\infty. \tag{4.10}\] If there existed a maximal existence time \(T_{max}>0\) such that \(q(t,\cdot)\) only exists in \(C([0,T_{max}];L^{\infty}(\mathbb{R}))\), then it would have to hold that \(\lim_{t\to T_{max}-}\|q(t,\cdot)\|_{\infty}=\infty\), which contradicts the uniform bound (4.10). This final argument then completes the proof of Lemma 4.2. Because the estimates in the above section are carried out in the variable \(y\), the equivalence between the integral norms in \(\mathbb{R}_{x}\) and \(\mathbb{R}_{y}\) is first established in order to arrive at the \(L^{2}\)-estimates in the variable \(x\). **Lemma 4.3**.: _Assume that \(\|r(\cdot;0)\|_{L^{2}(\mathbb{R}_{k})\cap L^{\infty}(\mathbb{R}_{k})}<\epsilon_{0}\). Then for any function \(h(x(\cdot))\in L^{p}(\mathbb{R}_{y})\), where \(x(y)\) is given in (3.10) and \(1\leq p\leq\infty\), it follows that \(h\in L^{p}(\mathbb{R}_{x})\).
Furthermore, it is also inferred that_ \[C_{1}(\|r(\cdot;0)\|_{L^{2}(\mathbb{R}_{k})\cap L^{\infty}(\mathbb{R}_{k})})|y|\leq|x|\leq C_{2}(\|r(\cdot;0)\|_{L^{2}(\mathbb{R}_{k})\cap L^{\infty}(\mathbb{R}_{k})})|y|.\] _Therefore, \(h\in L^{p,s}(\mathbb{R}_{x})\) is equivalent to \(h\in L^{p,s}(\mathbb{R}_{y})\) for any \(s>0\)._ Proof.: The equivalence between \(L^{p}(\mathbb{R}_{x})\) and \(L^{p}(\mathbb{R}_{y})\) is obtained directly from (4.9) and (4.10) under \(\|r(\cdot;0)\|_{L^{2}(\mathbb{R}_{k})\cap L^{\infty}(\mathbb{R}_{k})}<\epsilon_{0}\). As shown in Lemma 4.1, for every \(t\in\mathbb{R}^{+}\), \(\|r(\cdot;t)\|_{L^{2}(\mathbb{R}_{k})\cap L^{\infty}(\mathbb{R}_{k})}\) is independent of \(t\). Recall the symmetry of \(M(z)\) that \[M(z)=\sigma_{3}\overline{M(-\bar{z})}\sigma_{3}.\] This implies that \(M_{11}(i)\), \(M_{22}(i)\in\mathbb{R}\) and \(M_{12}(i)\), \(M_{21}(i)\in i\mathbb{R}\). Since \(\det M\equiv 1\), the boundedness in Proposition 3.4 gives that the functions \(|M_{11}(i)+M_{21}(i)|\) and \(|M_{12}(i)+M_{22}(i)|\) have nonzero infimum for every \(t\in\mathbb{R}^{+},\ y\in\mathbb{R}\). In fact, taking \(|M_{11}(i)+M_{21}(i)|\), \(y\in\mathbb{R}^{+}\), as an example, \[1 =\left|M_{11}(i)M_{22}(i)-M_{12}(i)M_{21}(i)\right|\] \[\leq C|M(i)||M_{11}(i)+M_{21}(i)|,\] which leads to \[|M_{11}(i)+M_{21}(i)|\geq 1/C(\|r(\cdot;0)\|_{L^{2}(\mathbb{R}_{k})\cap L^{\infty}(\mathbb{R}_{k})})>0.\] Thus, it is deduced from (3.10) that \[|x-y|\leq C(\|r(\cdot;0)\|_{L^{2}(\mathbb{R}_{k})\cap L^{\infty}(\mathbb{R}_{k})}).\] This concludes the result in Lemma 4.3. _Remark 4.1_.: In fact, it follows from the definition of \(y\) in (2.5) that \[|x-y|\leq\int_{\mathbb{R}}(\sqrt{m^{2}(t,s)+1}-1)ds,\] where \(\int_{\mathbb{R}}(\sqrt{m^{2}(t,s)+1}-1)ds\) is a conserved quantity independent of \(t\). Thus, \(|x-y|\) can be controlled by \(\|m_{0}\|_{2}\). The following result from Fourier theory is useful in proving Theorem 1.1. **Lemma 4.4**.: _If \(f(z)\) is a complex-valued function defined on \(\mathbb{R}\) that admits the symmetry \(f(z)=f(-1/z)\), then \(f(z(k))\) is well-defined under the variable \(k=z-1/z\). Denote \(\tilde{f}(k)=f(z(k))\). (a) If \(\tilde{f}\in H^{1,1}(\mathbb{R}_{k})\), then_ \[\int_{\mathbb{R}}f(z)e^{-\frac{i}{2}(z-1/z)y}dz=\int_{\mathbb{R}}\tilde{f}(k)e^{-\frac{i}{2}ky}dk,\] _which implies that_ \[\int_{\mathbb{R}}f(z)e^{-\frac{i}{2}(z-1/z)y}dz\in H^{1,1}(\mathbb{R}_{y}).\] _(b) Denote by \(\mathcal{C}^{k}\) the Cauchy operator on the \(k\)-plane. Then \(\mathcal{C}_{\pm}(f)\) is also well-defined under the variable \(k\) and_ \[\mathcal{C}_{\pm}(f)(z)=\mathcal{C}^{k}_{\pm}(f)(k), \tag{4.11}\] _which yields that \(\|\mathcal{C}_{\pm}(f)\|_{L^{2}(\mathbb{R}_{z})}=\|\mathcal{C}^{k}_{\pm}(f)\|_{L^{2}(\mathbb{R}_{k})}\).
(c) If \(g\) also satisfies \(g(z)=g(-1/z)\) and \(\tilde{g}(k)=g(z(k))\in H^{1}(\mathbb{R}_{k})\), then_ \[\int_{\mathbb{R}^{+}}\|\mathcal{C}_{+}(fe^{2\theta})\|_{2}^{2}\|\mathcal{C}_{\pm}(ge^{\pm 2\theta})\|_{2}^{2}dy \leq C\|\tilde{f}\|_{H^{1/4}(\mathbb{R}_{k})}^{2}\|\tilde{g}\|_{H^{1/4}(\mathbb{R}_{k})}^{2},\] \[\int_{\mathbb{R}^{+}}y^{2}\|\mathcal{C}_{+}(fe^{2\theta})\|_{2}^{2}\|\mathcal{C}_{\pm}(ge^{\pm 2\theta})\|_{2}^{2}dy \leq C\|\tilde{f}\|_{H^{3/4}(\mathbb{R}_{k})}^{2}\|\tilde{g}\|_{H^{3/4}(\mathbb{R}_{k})}^{2}.\] Proof.: Recall that for \(k=z-1/z\), the map \(k\to z\) is given by \[z_{+}(k) =(k+\sqrt{k^{2}+4})/2:\ \mathbb{R}\to\mathbb{R}^{+},\] \[z_{-}(k) =(k-\sqrt{k^{2}+4})/2:\ \mathbb{R}\to\mathbb{R}^{-}.\] Therefore, it is readily seen that \[\int_{\mathbb{R}}f(z)e^{-\frac{i}{2}(z-1/z)y}dz =\int_{\mathbb{R}}\tilde{f}(k)e^{-\frac{i}{2}ky}\frac{1}{1+z_{+}(k)^{-2}}dk+\int_{\mathbb{R}}\tilde{f}(k)e^{-\frac{i}{2}ky}\frac{1}{1+z_{-}(k)^{-2}}dk\] \[=\int_{\mathbb{R}}\tilde{f}(k)e^{-\frac{i}{2}ky}dk.\] The assertion in (b) is obtained from the fact that \[\frac{1}{s-z}+\frac{1}{s^{2}}\cdot\frac{1}{-\frac{1}{s}-z}=\frac{1+\frac{1}{s^{2}}}{s-\frac{1}{s}-z+\frac{1}{z}}.\] For the claim in (c), we now give the details of the calculation for \(\|\mathcal{C}_{+}(fe^{2\theta})\|_{2}^{2}\|\mathcal{C}_{+}(ge^{2\theta})\|_{2}^{2}\). It is found that \[\int_{\mathbb{R}^{+}} \|\mathcal{C}_{+}(fe^{2\theta})\|_{2}^{2}\|\mathcal{C}_{+}(ge^{2\theta})\|_{2}^{2}dy=\int_{\mathbb{R}^{+}}\|\mathcal{C}_{+}^{k}(fe^{2\theta})\|_{L^{2}(\mathbb{R}_{k})}^{2}\|\mathcal{C}_{+}^{k}(ge^{2\theta})\|_{L^{2}(\mathbb{R}_{k})}^{2}dy\] \[=\int_{\mathbb{R}^{+}}\int_{\mathbb{R}}X_{+}(k)|\hat{\tilde{f}}(k+y/4\pi)|^{2}dk\int_{\mathbb{R}}X_{+}(k)|\hat{\tilde{g}}(k+y/4\pi)|^{2}dkdy\] \[=\int_{\mathbb{R}^{+}}\int_{y/4\pi}^{\infty}|\hat{\tilde{f}}(k)|^{2}dk\int_{y/4\pi}^{\infty}|\hat{\tilde{g}}(k)|^{2}dkdy\] \[\leq\left(\int_{\mathbb{R}^{+}}(\int_{y/4\pi}^{\infty}|\hat{\tilde{f}}(k)|^{2}dk)^{2}dy\right)^{1/2}\left(\int_{\mathbb{R}^{+}}(\int_{y/4\pi}^{\infty}|\hat{\tilde{g}}(k)|^{2}dk)^{2}dy\right)^{1/2}\] \[\leq\int_{\mathbb{R}^{+}}(\int_{0}^{k/4\pi}|\hat{\tilde{f}}(k)|^{4}dy)^{1/2}dk\int_{\mathbb{R}^{+}}(\int_{0}^{k/4\pi}|\hat{\tilde{g}}(k)|^{4}dy)^{1/2}dk\] \[=\int_{\mathbb{R}^{+}}|\hat{\tilde{f}}(k)|^{2}(k/4\pi)^{1/2}dk\int_{\mathbb{R}^{+}}|\hat{\tilde{g}}(k)|^{2}(k/4\pi)^{1/2}dk\leq C\|\hat{\tilde{f}}\|_{L^{2,1/4}(\mathbb{R}_{k})}^{2}\|\hat{\tilde{g}}\|_{L^{2,1/4}(\mathbb{R}_{k})}^{2},\] and \[\int_{\mathbb{R}^{+}}y^{2} \|\mathcal{C}_{+}(fe^{2\theta})\|_{2}^{2}\|\mathcal{C}_{\pm}(ge^{\pm 2\theta})\|_{2}^{2}dy=\int_{\mathbb{R}^{+}}\int_{y/4\pi}^{\infty}y|\hat{\tilde{f}}(k)|^{2}dk\int_{y/4\pi}^{\infty}y|\hat{\tilde{g}}(k)|^{2}dkdy\] \[\leq\left(\int_{\mathbb{R}^{+}}(\int_{y/4\pi}^{\infty}y|\hat{\tilde{f}}(k)|^{2}dk)^{2}dy\right)^{1/2}\left(\int_{\mathbb{R}^{+}}(\int_{y/4\pi}^{\infty}y|\hat{\tilde{g}}(k)|^{2}dk)^{2}dy\right)^{1/2}\] \[\leq\int_{\mathbb{R}^{+}}(\int_{0}^{k/4\pi}y^{2}|\hat{\tilde{f}}(k)|^{4}dy)^{1/2}dk\int_{\mathbb{R}^{+}}(\int_{0}^{k/4\pi}y^{2}|\hat{\tilde{g}}(k)|^{4}dy)^{1/2}dk\] \[=\frac{1}{3}\int_{\mathbb{R}^{+}}|\hat{\tilde{f}}(k)|^{2}(k/4\pi)^{3/2}dk\int_{\mathbb{R}^{+}}|\hat{\tilde{g}}(k)|^{2}(k/4\pi)^{3/2}dk\leq C\|\hat{\tilde{f}}\|_{L^{2,3/4}(\mathbb{R}_{k})}^{2}\|\hat{\tilde{g}}\|_{L^{2,3/4}(\mathbb{R}_{k})}^{2}.\] This completes the proof of Lemma 4.4.
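For completeness, the kernel identity invoked in the proof of (b) can be checked directly; the following verification is ours and is included only as a worked step. Writing \(s-\frac{1}{s}-z+\frac{1}{z}=(s-z)+\frac{s-z}{sz}=\frac{(s-z)(sz+1)}{sz}\), one computes \[\frac{1}{s-z}+\frac{1}{s^{2}}\cdot\frac{1}{-\frac{1}{s}-z}=\frac{1}{s-z}-\frac{1}{s(sz+1)}=\frac{s(sz+1)-(s-z)}{s(s-z)(sz+1)}=\frac{z(s^{2}+1)}{s(s-z)(sz+1)},\] while \[\frac{1+\frac{1}{s^{2}}}{s-\frac{1}{s}-z+\frac{1}{z}}=\frac{s^{2}+1}{s^{2}}\cdot\frac{sz}{(s-z)(sz+1)}=\frac{z(s^{2}+1)}{s(s-z)(sz+1)},\] so the two sides agree: the Cauchy kernel in \(z\), summed over the two preimages of \(k=z-1/z\), becomes the Cauchy kernel in \(k\) up to the Jacobian factor \(1+1/s^{2}\), which is exactly what (4.11) asserts.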
We shall now recover \(m\) from the solution \(M\) of the RH problem, give the \(L^{2}\)-estimate of \(m\), and establish our main result, Theorem 1.1. Proof of Theorem 1.1.: We only give the details of the proof of \(m(t,\cdot)\in H^{2,1}(\mathbb{R}_{y}^{+})\). Together with Lemma 4.3 and the fact that \(|\partial_{x}y|\) and \(|\partial_{x}^{2}y|\) are bounded, as shown in Lemma 4.2, it follows that \(m(t,\cdot)\in H^{2,1}(\mathbb{R}_{x}^{+})\). Recall the reconstruction formula in Lemma 3.1: \[\eta=\lim_{z\to\infty}zM_{12}. \tag{4.12}\] From (3.19), it is adduced that \[\eta=\frac{i}{2\pi}\int_{\mathbb{R}}(\mu_{11}-1)\bar{r}e^{2\theta}dz+\frac{i}{2\pi}\int_{\mathbb{R}}\bar{r}e^{2\theta}dz. \tag{4.13}\] For convenience, denote \[\eta_{1}=\int_{\mathbb{R}}(\mu_{11}-1)\bar{r}e^{2\theta}dz,\ \eta_{2}=\int_{\mathbb{R}}\bar{r}e^{2\theta}dz. \tag{4.14}\] Consequently, Lemma 4.4 gives that \(\eta_{2}\in H^{1,1}(\mathbb{R}_{y})\). For \(\eta_{1}\), invoking (3.20), it holds that \[\eta_{1}=\int_{\mathbb{R}}\mathcal{C}_{-}(re^{-2\theta}\mu_{12})\bar{r}e^{2\theta}dz=-\int_{\mathbb{R}}\mathcal{C}_{+}(\bar{r}e^{2\theta})re^{-2\theta}\mu_{12}dz, \tag{4.15}\] from which and the estimate in (3.29) we obtain that \[|\eta_{1}|\leq\|\mathcal{C}_{+}(\bar{r}e^{2\theta})\|_{2}\|\mu_{12}\|_{2}\|r\|_{\infty}\leq 2\|\mathcal{C}_{+}(\bar{r}e^{2\theta})\|_{2}\|r\|_{\infty}(\|\mathcal{C}_{+}(\bar{r}e^{2\theta})\|_{2}+\|\mathcal{C}_{-}(re^{-2\theta})\|_{2}).\] For the term \(\|\mathcal{C}_{+}(\bar{r}e^{2\theta})\|_{2}^{2}\) on the right-hand side of the above inequality, applying Lemma 4.4 yields \[\int_{\mathbb{R}^{+}}\|\mathcal{C}_{+}(\bar{r}e^{2\theta})\|_{2}^{4}dy\leq C\|\hat{r}\|_{L^{2,1/4}(\mathbb{R}_{k})}^{4},\] and \[\int_{\mathbb{R}^{+}}|y|^{2}\|\mathcal{C}_{+}(\bar{r}e^{2\theta})\|_{2}^{4}dy\leq C\|\hat{r}\|_{L^{2,3/4}(\mathbb{R}_{k})}^{4}.\] A similar calculation for \(\|\mathcal{C}_{+}(\bar{r}e^{2\theta})\|_{2}\|\mathcal{C}_{-}(re^{-2\theta})\|_{2}\) gives that \(\eta_{1}\in L^{2,1}(\mathbb{R}_{y}^{+})\). Taking the \(y\)-derivative in (4.15), we obtain \[\partial_{y}\eta_{1}= -\int_{\mathbb{R}}\mathcal{C}_{+}(\frac{i}{2}k(\cdot)\bar{r}e^{2\theta})re^{-2\theta}\mu_{12}dz-\int_{\mathbb{R}}\mathcal{C}_{+}(\bar{r}e^{2\theta})\frac{i}{2}k(z)re^{-2\theta}\mu_{12}dz\] \[-\int_{\mathbb{R}}\mathcal{C}_{+}(\bar{r}e^{2\theta})re^{-2\theta}\partial_{y}\mu_{12}dz,\] which implies that \[|\partial_{y}\eta_{1}|\leq \|r\|_{\infty}\left(\|\mathcal{C}_{+}(k(\cdot)\bar{r}e^{2\theta})\|_{2}\|\mu_{12}\|_{2}+\|\mathcal{C}_{+}(\bar{r}e^{2\theta})\|_{2}\|\mu_{12}\|_{2}+\|\mathcal{C}_{+}(\bar{r}e^{2\theta})\|_{2}\|\partial_{y}\mu_{12}\|_{2}\right)\] \[\leq 2\|r\|_{\infty}(\|\mathcal{C}_{+}(\bar{r}e^{2\theta})\|_{2}+\|\mathcal{C}_{-}(re^{-2\theta})\|_{2})\left(\|\mathcal{C}_{+}(k(\cdot)\bar{r}e^{2\theta})\|_{2}+\|\mathcal{C}_{+}(\bar{r}e^{2\theta})\|_{2}\right)\] \[+2\|r\|_{\infty}\|\mathcal{C}_{+}(\bar{r}e^{2\theta})\|_{2}(\|\mathcal{C}_{+}(k(\cdot)\bar{r}e^{2\theta})\|_{2}+\|\mathcal{C}_{-}(k(\cdot)re^{-2\theta})\|_{2})\] \[+8\|r\|_{\infty}\|\mathcal{C}_{+}(\bar{r}e^{2\theta})\|_{2}(\|\mathcal{C}_{+}(\bar{r}e^{2\theta})\|_{2}+\|\mathcal{C}_{-}(re^{-2\theta})\|_{2})\|k(\cdot)r\|_{\infty}.\] The last inequality is obtained from (3.33). We give the details of the estimate of the term \(\|\mathcal{C}_{+}(\bar{r}e^{2\theta})\|_{2}\|\mathcal{C}_{+}(k(\cdot)\bar{r}e^{2\theta})\|_{2}\). Similar arguments apply to the others.
Via Lemma 4.4, it holds that \[\int_{\mathbb{R}^{+}}\|\mathcal{C}_{+}(\bar{r}e^{2\theta})\|_{2}^{2}\|\mathcal{C}_{+}(k(\cdot)\bar{r}e^{2\theta})\|_{2}^{2}dy\leq C\|(\cdot)r\|_{H^{1/4}(\mathbb{R}_{k})}^{2}\|r\|_{H^{1/4}(\mathbb{R}_{k})}^{2},\] and \[\int_{\mathbb{R}^{+}}y^{2}\|\mathcal{C}_{+}(\bar{r}e^{2\theta})\|_{2}^{2}\|\mathcal{C}_{+}(k(\cdot)\bar{r}e^{2\theta})\|_{2}^{2}dy\leq C\|r\|_{H^{3/4}(\mathbb{R}_{k})}^{2}\|(\cdot)r\|_{H^{3/4}(\mathbb{R}_{k})}^{2}.\] This in turn implies that \(\partial_{y}\eta_{1}\in L^{2,1}(\mathbb{R}_{y}^{+})\). Therefore, we finally conclude that \(\eta\in H^{1,1}(\mathbb{R}_{y}^{+})\). By (4.13), the map \[r\in H^{1,1}(\mathbb{R}_{k})\longrightarrow\eta\in H^{1,1}(\mathbb{R}_{y}^{+})\] is locally Lipschitz continuous. Indeed, if there exists another \(r_{1}\in H^{1,1}(\mathbb{R}_{k})\) with a solution \(\mu(r_{1})\) of the corresponding equation (3.20) and \(\eta(r_{1})\) from (4.13), it is inferred that \[\eta(r_{1})-\eta(r)= \frac{i}{2\pi}\int_{\mathbb{R}}(\mu_{11}(r_{1})-\mu_{11}(r))\bar{r}_{1}e^{2\theta}dz+\frac{i}{2\pi}\int_{\mathbb{R}}(\mu_{11}(r)-1)(\bar{r}_{1}-\bar{r})e^{2\theta}dz\] \[+\frac{i}{2\pi}\int_{\mathbb{R}}(\bar{r}_{1}-\bar{r})e^{2\theta}dz.\] Then, arguing as in the above estimates, it follows that \[\|\eta(r_{1})-\eta(r)\|_{H^{1,1}(\mathbb{R}_{y}^{+})}\leq C(\max\{\|r\|_{H^{1,1}(\mathbb{R}_{k})},\|r_{1}\|_{H^{1,1}(\mathbb{R}_{k})}\})\|r-r_{1}\|_{H^{1,1}(\mathbb{R}_{k})}.\] On the other hand, consider \[\zeta^{(+)}=-\lim_{z\to\infty}z(M_{22}-1). \tag{4.16}\] Combining (3.19) and (3.20), it is adduced that \[\zeta^{(+)}=\frac{1}{2\pi i}\int_{\mathbb{R}}\mathcal{C}_{-}(re^{-2\theta})\bar{r}e^{2\theta}ds-\frac{1}{2\pi i}\int_{\mathbb{R}}(\mu_{22}-1)re^{-2\theta}\mathcal{C}_{+}(\bar{r}e^{2\theta})ds. \tag{4.17}\] An analogous calculation gives that \(\zeta^{(+)}\in H^{1,1}(\mathbb{R}^{+}_{y})\) and the map \[r\in H^{1,1}(\mathbb{R}_{k})\longrightarrow\zeta^{(+)}\in H^{1,1}(\mathbb{R}^{+}_{y})\] is locally Lipschitz continuous. Recall that \[\eta=\frac{m_{x}}{(1+m^{2})^{3/2}},\ \zeta^{(+)}=\frac{i}{2}\int_{y}^{+\infty}\left(\frac{m_{x}^{2}}{q^{6}}+\frac{m^{2}}{q^{2}}\right)ds.\] Then, by the boundedness of \(m\) and the Sobolev-Gagliardo-Nirenberg inequality, we finally obtain \(m(t,\cdot)\in H^{2,1}(\mathbb{R}^{+}_{y})\) and the Lipschitz continuity from \(r\in H^{1,1}(\mathbb{R}_{k})\) to \(m(t,\cdot)\in H^{1,1}(\mathbb{R}^{+}_{y})\). For \(y\in\mathbb{R}^{-}\), the estimates are the same. This gives the desired result in Theorem 1.1. **Acknowledgements** The work of Fan and Yang is partially supported by the National Science Foundation of China under grants 12271104, 51879045 and 12247182 and the China Postdoctoral Science Foundation. The work of Liu is partially supported by the Simons Foundation under grant 499875.
2310.05902
Impact of interface traps on charge noise, mobility and percolation density in Ge/SiGe heterostructures
Hole spins in Ge/SiGe heterostructure quantum dots have emerged as promising qubits for quantum computation. The strong spin-orbit coupling (SOC), characteristic of heavy-hole states in Ge, enables fast and all-electrical qubit control. However, SOC also increases the susceptibility of spin qubits to charge noise. While qubit coherence can be significantly improved by operating at sweet spots with reduced hyperfine or charge noise sensitivity, the latter ultimately limits coherence, underlining the importance of understanding and reducing charge noise at its source. In this work, we study the voltage-induced hysteresis commonly observed in SiGe-based quantum devices and show that the dominant charge fluctuators are localized at the semiconductor-oxide interface. By applying increasingly negative gate voltages to Hall bar and quantum dot devices, we investigate how the hysteretic filling of interface traps impacts transport metrics and charge noise. We find that the gate-induced accumulation and trapping of charge at the SiGe-oxide interface leads to an increased electrostatic disorder, as probed by transport measurements, as well as the activation of low-frequency relaxation dynamics, resulting in slow drifts and increased charge noise levels. Our results highlight the importance of a conservative device tuning strategy and reveal the critical role of the semiconductor-oxide interface in SiGe heterostructures for spin qubit applications.
L. Massai, B. Hetényi, M. Mergenthaler, F. J. Schupp, L. Sommer, S. Paredes, S. W. Bedell, P. Harvey-Collard, G. Salis, A. Fuhrer, N. W. Hendrickx
2023-10-09T17:43:16Z
http://arxiv.org/abs/2310.05902v1
# Impact of interface traps on charge noise, mobility and percolation density in Ge/SiGe heterostructures ###### Abstract Hole spins in Ge/SiGe heterostructure quantum dots have emerged as promising qubits for quantum computation. The strong spin-orbit coupling (SOC), characteristic of heavy-hole states in Ge, enables fast and all-electrical qubit control. However, SOC also increases the susceptibility of spin qubits to charge noise. While qubit coherence can be significantly improved by operating at sweet spots with reduced hyperfine or charge noise sensitivity, the latter ultimately limits coherence, underlining the importance of understanding and reducing charge noise at its source. In this work, we study the voltage-induced hysteresis commonly observed in SiGe-based quantum devices and show that the dominant charge fluctuators are localized at the semiconductor-oxide interface. By applying increasingly negative gate voltages to Hall bar and quantum dot devices, we investigate how the hysteretic filling of interface traps impacts transport metrics and charge noise. We find that the gate-induced accumulation and trapping of charge at the SiGe-oxide interface leads to an increased electrostatic disorder, as probed by transport measurements, as well as the activation of low-frequency relaxation dynamics, resulting in slow drifts and increased charge noise levels. Our results highlight the importance of a conservative device tuning strategy and reveal the critical role of the semiconductor-oxide interface in SiGe heterostructures for spin qubit applications. ## I Introduction Hole spins in germanium quantum dots (QDs) [1; 2; 3; 4] are promising qubits for semiconductor-based quantum computing [5]. The intrinsic spin-orbit coupling (SOC) enables fast and local qubit operations [2; 6; 7; 4], with single-qubit gate fidelities well above the fault-tolerant threshold [8]. In particular, strained germanium quantum wells (QWs) have enabled the operation of increasingly larger two-dimensional quantum dot arrays, with demonstrations of four-qubit logic [9], eight-QD analog quantum simulations [10], and multiplexed addressing of arrays with sixteen quantum dots [11]. However, the SOC also induces an interaction between the qubit state and uncontrolled charge fluctuators present in the semiconductor and gate stack [12; 13]. Recent work demonstrated that in most regimes of operation, qubit coherence is limited by charge noise [14]. For certain magnetic field orientations, the anisotropic characteristics of heavy hole states [15; 16; 17; 18] can enable operational regimes where the sensitivity to noise is suppressed [19; 20; 21; 22; 23], but, regardless of the approach chosen to decouple the qubit from noise, reducing charge noise at its source will eventually lead to an enhancement of the overall qubit performance. The origin of the dominant charge fluctuators is, however, still unclear and it is essential to get a better understanding of the location of these fluctuators to enable further optimization of the semiconductor and gate stack. To this end, we study the origin of the gate-induced hysteresis commonly observed in devices based on SiGe heterostructures [24; 25]. In the past, this hysteresis has also been utilized for reproducible tuning of QD arrays [26; 27]. We find that the hysteresis is caused by filling interface traps at the semiconductor-oxide interface.
Using Hall bar (HB) and QD devices fabricated in the same material stack, we measure the transport properties and charge noise environment of the Ge QW. As the mostly neutrally charged traps get populated by holes tunneling from the QW to the interface, the correspondingly increasing interface charge density and its spatial fluctuations strongly affect the hole gas properties in the QW. We compare different transport metrics as the voltage on the accumulation gate is decreased and find low-density mobility and percolation density to be affected in a strongly correlated manner. In contrast, peak mobility remains unaffected, proving that it is not an appropriate benchmark for devices operated at low densities such as spin qubits. We ultimately find that the population of interface traps has a negative impact on both low-density transport metrics and quantum dot charge noise. However, while changes in percolation density and low-density mobility are found to be persistent, the increase in charge noise decays over the timescale of days. This quantifies the detrimental effect that large negative gate voltages have on device stability, as often empirically observed. ## II Ge/SiGe heterostructure and device fabrication We fabricate Hall bar (Fig. 1a) and quantum dot (Fig. 1b) devices on a Ge/SiGe heterostructure. The heterostructure is composed of a strained germanium quantum well (sGe QW) embedded between two silicon-germanium buffer layers and grown using reduced-pressure chemical vapor deposition [29]. The sGe QW is buried 47 nm below the wafer surface, which is capped by a \(\sim\)1.5-nm-thick oxidized Si layer. Fig. 1d shows a transmission electron microscope (TEM) cross-section of the QW region. A schematic illustration of the full gate stack is presented in Fig. 1c. We create ohmic contacts to the QW by annealing Pt into the top SiGe barrier. A first layer of electrostatic gates (GL1, green in Fig. 1c) is defined on top of 7 nm of SiO\({}_{2}\) gate dielectric grown by plasma-enhanced atomic layer deposition (PE-ALD). The second layer of electrostatic gates (GL2, blue in Fig. 1c) is separated from GL1 by another 7 nm of SiO\({}_{2}\), resulting in a total spacing of 14 nm from the substrate surface. Two types of Hall bar devices are produced using the same fabrication process as the QD devices (see Methods), with the Hall bar top gate either defined in GL1 (HB\({}_{1}\)) or GL2 (HB\({}_{2}\)). The band alignment between the sGe and the SiGe layers defines an accumulation-mode quantum well for holes [28]. When an electric field is applied to the gate electrodes of the device, charges are loaded from the PtSiGe ohmic regions and a two-dimensional hole gas (2DHG) is accumulated, as illustrated in Fig. 1e. ## III Hall bar transport properties We study the magnetoresistance of Hall bar devices (Fig. 1a) at cryogenic temperatures as a function of the applied top gate voltage. After cooling the device down to \(\sim\)15 mK, we cyclically repeat the measurement protocol detailed in Fig. 2a (and Methods). Each measurement cycle starts by first applying an increasingly negative voltage \(V_{\text{g}}=V_{\text{min}}\) to the gate, and then stepping \(V_{\text{g}}\) from 0 V to \(V_{\text{min}}\). For every \(V_{\text{g}}\) in each cycle, we sweep \(B_{z}\) to measure the Hall carrier density \(p\) and Hall transport mobility \(\mu\). Furthermore, we extract the percolation density as an alternative benchmark of the hole channel quality.
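The extraction just described amounts to two standard single-carrier Hall relations, \(p=(e\,d\rho_{xy}/dB_{z})^{-1}\) and \(\mu=(e\,p\,\rho_{xx})^{-1}\), together with a percolation fit of the low-density conductance of the form \(\sigma_{xx}=A(p-p_{\rm p})^{1.31}\), with the two-dimensional percolation exponent fixed at 1.31, a conventional choice in such fits that we assume here. The sketch below is our minimal illustration of such a pipeline, with assumed array inputs in SI units; it is not the authors' analysis code.

```python
import numpy as np
from scipy.optimize import curve_fit

E = 1.602176634e-19  # elementary charge (C)

def hall_density_mobility(b_z, rho_xy, rho_xx):
    """Single-carrier Hall analysis at one gate voltage.

    The density follows from the slope of the Hall resistivity,
    rho_xy = B_z / (p e), and the mobility from the low-field
    longitudinal resistivity, mu = 1 / (p e rho_xx).
    """
    slope = np.polyfit(b_z, rho_xy, 1)[0]        # d(rho_xy)/dB (ohm/T)
    p = 1.0 / (E * slope)                        # carrier density (m^-2)
    mu = 1.0 / (p * E * np.mean(rho_xx))         # mobility (m^2 V^-1 s^-1)
    return p, mu

def percolation_density(p, sigma_xx, exponent=1.31):
    """Fit sigma_xx = A (p - p_p)^1.31 to extract the percolation density p_p."""
    def model(p, amp, p_p):
        return amp * np.clip(p - p_p, 0.0, None) ** exponent
    p0 = [sigma_xx.max() / p.max() ** exponent, 0.5 * p.min()]
    (amp, p_p), _ = curve_fit(model, p, sigma_xx, p0=p0)
    return p_p
```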
Figure 1: **Device layouts and Ge/SiGe heterostructure:** **a**, Schematic illustration of the measurement setup and Hall bars used for magnetoresistance measurements. The Hall bar gate is defined either in GL1 (green) or GL2 (blue) for HB\({}_{1}\) and HB\({}_{2}\), respectively. Nominally, the channel width is \(W=20\)\(\mu\)m and the length is \(L=100\)\(\mu\)m. We apply a source-drain bias \(V_{\text{SD}}\) and limit the measured longitudinal current \(I_{xx}\) with a serial impedance \(R=10\) M\(\Omega\). We measure the longitudinal and Hall voltages, \(V_{xx}\) and \(V_{xy}\), as a function of the gate voltage \(V_{\text{g}}\) and the out-of-plane magnetic field \(B_{z}\). **b**, False-coloured SEM-image (following the colour scheme of **c**) of a QD device similar to the one used for the QD measurements. The scale bar is 100 nm. The dashed red line corresponds to the cross-section depicted in **c**. We apply source and drain biases (\(V_{\text{S}}\) and \(V_{\text{D}}\)) and measure the differential current \(I_{\text{SD}}\). **c**, Cross-section of the Ge/SiGe heterostructure and gate stack of a QD device. The oxidized Si cap is coloured light blue to distinguish it from the grey PE-ALD SiO\({}_{2}\) oxide. **d**, Transmission electron microscopy (TEM) image of the sGe QW region. The scale bar is 20 nm. **e**, Schematic illustration of the valence band structure in the heterostructure when a negative gate voltage is applied. A 2DHG is accumulated in the sGe QW. The expected band offset between the sGe QW and the SiGe buffer is approximately 114 meV [28].

Focusing on HB\({}_{2}\) with an oxide thickness of \(\sim\)15.5 nm, we first study the impact of hysteresis on the turn-on voltage \(V_{\text{t.o.}}\). Fig. 2b shows all turn-on curves of the channel, for \(V_{\text{min}}\) decreasing from \(-0.15\) V (red) to \(-3\) V (blue). We define \(V_{\text{t.o.}}\) as the gate voltage \(V_{\text{g}}\) at which the measured longitudinal current \(I_{\text{xx}}\) reaches 90% of the maximum current (\(V_{\text{t.o.}}\coloneqq V_{\text{g}}|_{I_{\text{xx}}=0.9I_{\text{xx,max}}}\)) and plot it as a function of \(V_{\text{min}}\) in Fig. 2c. We identify five distinct regimes (see Section IV), delimited by the vertical dashed lines:

* Regime 0 - Depleted regime (\(-0.15\) V < \(V_{\text{min}}\)): the channel has not turned on yet;
* Regime 1 - Non-hysteretic regime (\(-0.34\) V < \(V_{\text{min}}\) < \(-0.15\) V): the channel turn-on voltage \(V_{\text{t.o.}}\) is independent of \(V_{\text{min}}\);
* Regime 2 - Screening regime, onset of hysteresis (\(-0.5\) V < \(V_{\text{min}}\) < \(-0.34\) V): \(V_{\text{t.o.}}\) begins to shift with \(V_{\text{min}}\);
* Regime 3 - Linear hysteretic regime (\(-1.45\) V < \(V_{\text{min}}\) < \(-0.5\) V): \(V_{\text{t.o.}}\) shifts proportionally to \(V_{\text{min}}\);
* Regime 4 - Saturated traps regime (\(V_{\text{min}}\) < \(-1.45\) V): \(V_{\text{t.o.}}\) asymptotically saturates to a finite value.

Next, we explore the transport properties of the channel in these different regimes. We measure the longitudinal and Hall resistivity, \(\rho_{\rm xx}\) and \(\rho_{\rm xy}\) respectively, as a function of \(B_{\rm z}\) and \(V_{\rm g}\). Fig. 2d shows an example of these data for three different \(V_{\rm g}\) for \(V_{\rm min}=-0.8\) V. We extract the mobility-density curve for each \(V_{\rm min}\) cycle (see Methods) as plotted in Fig. 2e. Additionally, we measure the percolation density \(p_{\rm p}\) for six distinct values of \(V_{\rm min}\).
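The 90% criterion defined above for \(V_{\rm t.o.}\) reduces to a few lines on a measured turn-on curve; the following is our illustrative sketch (array inputs assumed, with \(V_{\rm g}\) stepped from 0 V toward \(V_{\rm min}\)), not the authors' code.

```python
import numpy as np

def turn_on_voltage(v_g, i_xx, level=0.9):
    """Return V_t.o.: the first gate voltage in the sweep at which
    |I_xx| reaches `level` times its maximum value."""
    i = np.abs(np.asarray(i_xx))
    above = np.flatnonzero(i >= level * i.max())
    return np.asarray(v_g)[above[0]]
```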
We extract \(p_{\rm p}\) by fitting the longitudinal conductance \(\sigma_{xx}\) at low density to percolation theory [30; 31], as plotted in Fig. 2f (fitting procedure in Methods). We observe a clear change in the mobility-density curve and percolation density as \(V_{\rm min}\) is pushed towards more negative values, indicative of a change in the disorder potential impacting the channel. To this end, we extract and compare three different transport metrics (Fig. 2g): peak mobility \(\mu_{\rm peak}\) (top, triangles), low-\(p\) mobility \(\mu_{\bar{p}}\) (center, dots) at \(\bar{p}=10^{11}\) cm\({}^{-2}\) and percolation density \(p_{\rm p}\) (bottom, diamonds) as a function of \(V_{\rm min}\). The five regimes that we identified in the gate hysteresis behaviour are also reflected in the transport properties (vertical lines) and we will discuss their origin in Section IV. The ability to modify the transport properties of the channel by varying \(V_{\rm min}\) allows us to compare the different transport metrics. While peak Hall mobility is often used as a key benchmark for heterostructure quality, percolation density \(p_{\rm p}\) is more relevant for quantum materials where isolated charges are accumulated [30; 32]. Indeed, we observe that peak mobility is not representative of the low-density regime, as the trend of \(p_{\rm p}(V_{\rm min})\) is not mirrored by \(\mu_{\rm peak}(V_{\rm min})\). Unfortunately, percolation density is more difficult to measure accurately, due to the high channel and contact resistances in the low-\(p\) regime and the complicated fitting procedure. However, we find that \(p_{\rm p}\) and \(\mu_{\bar{p}}\) are strongly anti-correlated as \(V_{\rm min}\) is decreased, suggesting that a change in the former can be inferred from a measurement of the latter. We thus propose the mobility at fixed low density as an easy-to-measure metric for benchmarking quantum materials. ## IV Different hysteresis regimes In this section, we discuss the origin of the observed regimes in \(V_{\rm t.o.}(V_{\rm min})\) and the corresponding features in \(\mu_{\bar{p}}(V_{\rm min})\) and \(p_{\rm p}(V_{\rm min})\). Our observations can be explained by the presence of a triangular quantum well (TQW) [33; 34; 35] in the SiGe barrier above the QW (see Fig. 3a, right panel) and a spatially varying density of neutral in-gap charge traps at the SiGe-oxide interface.

Figure 2: **Hall bar measurement data and analysis for HB\({}_{2}\)**: **a**, Schematic diagram illustrating the measurement protocol. **b**, Channel turn-on curves for \(V_{\rm min}\) decreasing from \(-0.15\) V (red) to \(-3\) V (blue). The grey dashed line marks 90% of \(I_{\rm xx,max}\), used to extract the turn-on voltage. **c**, Extracted turn-on voltage \(V_{\rm t.o.}\) as a function of \(V_{\rm min}\). The dashed vertical lines separate the different regimes 0-4. **d**, Longitudinal resistivity \(\rho_{xx}\) (top) and Hall resistivity \(\rho_{xy}\) (bottom) as a function of the out-of-plane magnetic field \(B_{z}\), with \(V_{\rm min}=-0.8\) V. Different markers represent different \(V_{\rm g}\). The carrier density is extracted from the linear fit to \(\rho_{xy}(B_{z})\) (solid grey lines). **e**, Hall mobility \(\mu\) as a function of carrier density \(p\) extracted for every \(V_{\rm min}\). **f**, Longitudinal conductance \(\sigma_{xx}\) as a function of \(p\), for 6 different \(V_{\rm min}\).
The percolation density \(p_{\rm p}\) (diamonds) is extracted by fitting the solid data markers to percolation theory (see Methods). **g**, Different transport metrics as a function of \(V_{\rm min}\): peak mobility \(\mu_{\rm peak}\) (top), mobility \(\mu_{\bar{p}}\) at low density \(\bar{p}=10^{11}\) cm\({}^{-2}\) (middle) and percolation density \(p_{\rm p}\) (bottom). The dashed lines separate regimes 0-4. Error bars (shaded area) for \(p_{\rm p}\) are extracted by assessing the stability of the fit when extending the data range to include the transparent markers in **f** (details in Methods).

The existence of such interface traps is commonly observed in SiGe-SiO\({}_{2}\) interfaces [36; 37; 38], with typical interface trap densities of \(d_{\rm i.t.}\sim 10^{12}\) cm\({}^{-2}\). The exact physical origin of the charge trapping cannot be determined from the measured data, but potential mechanisms include lattice-mismatch-induced dislocations in the heavily strained Si cap [39] or Ge-rich clusters at the interface [32]. As \(V_{\rm min}\) is pushed more negative after the initial cooldown, these traps fill, resulting in a changing charge environment as detected by the transport measurements. Fig. 3a details the different processes occurring for the regimes introduced in Section III. _Regime 0 -_ The Fermi level of the contacts lies above the highest-energy QW state, such that no charge is accumulated in the device. _Regime 1 -_ The Ge QW ground state rises above the Fermi energy of the contacts and a 2DHG is accumulated in the channel. The electric field across the SiGe barrier is small enough for the TQW to remain inaccessible and no charge accumulates at the surface (left panel of Fig. 3a). As a result, the charge density in the QW increases linearly with the applied gate voltage (\(p_{\rm QW}\propto|V_{\rm min}|\)) and no hysteresis is observed. While the mobility \(\mu_{\bar{p}}\) at fixed density is independent of \(V_{\rm min}\) and initially limited either by fixed charges in the oxide or a spatial variation of the interface charge density after cooldown, \(\mu_{\rm peak}\) increases with \(|V_{\rm min}|\) as a result of improved screening against remote impurity scattering as \(p_{\rm QW}\) increases [35]. _Regime 2 -_ As the electric field strength across the SiGe barrier increases, the TQW starts to be populated by Fowler-Nordheim tunnelling (FNT) from the QW. From the TQW, charges will get trapped into in-gap interface states (middle panel of Fig. 3a). This accumulation of surface charge lowers the effective electric field across the SiGe barrier and stops the FNT in a self-regulated process [35]. As a result, decreasing \(V_{\rm min}\) will lead to an increase of the trapped charge density at the interface, \(p_{\rm i.t.}\), while the charge density in the QW stays saturated, \(p_{\rm QW}=p_{\rm QW,sat}\) (see Supplementary Fig. 1b). Any spatial fluctuations of the valence band edge across the Hall bar, induced e.g. by oxide or interface charges, will lead to a spread of the onset voltages for FNT (Supplementary Fig. 4a). This implies that regions with a deeper TQW will get charged more and become less deep, effectively smoothing out the potential fluctuations impacting the QW. The improvement of the low-density mobility with \(V_{\rm min}\) can therefore be attributed to a smoothing of the spatially varying disorder potential [40].
Regime 2 constitutes the gradual transition between regime 1 (density increasing solely in the QW) and regime 3 (increase of the trapped charge at the interface).

Figure 3: **Charge trapping mechanism and gate capacitance study**: **a**, Schematic illustration of the valence band energy and hole density in regimes 1-4 when \(V_{\rm g}=V_{\rm min}\). The characteristic behaviour of the charge densities in the QW (\(p_{\rm QW}\)), trapped at the interface (\(p_{\rm i.t.}\)) and in the TQW (\(p_{\rm TQW}\)) is indicated at the bottom. **b**, Mobility at low density \(\mu_{\bar{p},\rm HB_{\rm i}}\) measured in HB\({}_{1}\) (green dots) and HB\({}_{2}\) (blue dots) as a function of \(V_{\rm min}\). The yellow and orange shadings indicate the voltage ranges over which the gate-capacitance study is performed for transitions I and II, respectively. **c**, Correlation between the low-density mobility of each Hall bar as a function of the x-axis scaling factor \(R_{n}\) (see Methods) for transition \(n={\rm I}\) (yellow, top) and transition \(n={\rm II}\) (orange, bottom). The data are plotted for a density \(\bar{p}=10^{11}\) cm\({}^{-2}\) and the confidence intervals (grey bands) are calculated by repeating the procedure for different densities in the range \(\bar{p}=[0.7,2.0]\times 10^{11}\) cm\({}^{-2}\) (see Supplementary Fig. 1). The maximum correlations are corr(\(R_{\rm I}\)) = 0.98(2) and corr(\(R_{\rm II}\)) = 0.997(1). **d**, Illustration of the gate stack and heterostructure of HB1 (left) and HB2 (right) to scale, including a schematic of the effective planar capacitances between the gate, SiGe-oxide interface, and the QW.

_Regime 3 -_ After initial disorder potential fluctuations are smoothed, tunnelling to the surface will occur uniformly across the Hall bar. The maximum density in the QW is constant throughout this regime and all additional charge gets trapped in the SiGe-oxide interface traps, such that \(p_{\rm i.t.}\propto|V_{\rm min}|\). Due to the asymmetric tunnelling rates to the QW and the lack of a mobile channel to the ohmics, these charges remain trapped when the gate voltage is returned to 0 V. As a result, the turn-on voltage shifts linearly as \(V_{\rm min}\) is decreased and \(p_{\rm i.t.}\) increases linearly (see Methods and Supplementary Figs. 2,3). Transport metrics remain constant throughout this regime and are likely limited by disorder originating in the gate oxide, the QW, or the virtual substrate. _Regime 4 -_ As the charge density at the interface increases, all available interface traps are filled, resulting in the accumulation of a finite density \(p_{\rm TQW}\) in the triangular quantum well (right panel of Fig. 3a). By comparing the \(V_{\rm t.o.}(V_{\rm min})\) data to a one-dimensional Schrödinger-Poisson model, we estimate the density of the interface traps to be \(d_{\rm i.t.}\sim 10^{12}\) cm\({}^{-2}\) (see Methods and Supplementary Figs. 2,3), in agreement with values measured in similar heterostructures [36; 37; 38]. Carriers that tunnel into the TQW can no longer be trapped at the interface, and as \(|V_{\rm g}|\) is reduced, they either tunnel back into the QW or directly into the leads if the percolation threshold in the TQW is reached. Therefore, these carriers do not lead to any further hysteresis. Again, assuming a spatially fluctuating interface trap density, the gate hysteresis gradually saturates as \(p_{\rm i.t.}=p_{\rm i.t.,sat}\propto d_{\rm i.t.}\) is reached for different \(V_{\rm min}\) across the Hall bar.
Furthermore, at low density \(p_{\rm QW}\), a fluctuating potential landscape will be present, reflecting the spatially varying density of interface traps (Supplementary Fig. 4b), which are now highly populated and positively charged. This disorder potential will lead to the rapid degradation of the low-density transport metrics as observed in Fig. 2g. Conversely, at high \(p_{\rm QW}\), charges loaded into the TQW will offset the interface trap fluctuations such that peak mobility is preserved or even increases slightly with more negative \(V_{\rm min}\). We strengthen our hypothesis by comparing the transition between the different charge loading mechanisms for Hall bars with different gate oxide thicknesses: HB\({}_{1}\) and HB\({}_{2}\). The transitional regimes (2 and 4) are characterized by a change in low-\(p\) mobility \(\mu_{\bar{p},{\rm HB}_{i}}\), due to spatial fluctuations of the interface quality across the Hall bar as detailed above. Therefore, to compare the transition voltages for both HBs, we plot \(\mu_{\bar{p},{\rm HB}_{i}}\) in Fig. 3b and observe that related features in \(\mu_{\bar{p}}\) do not appear at the same \(V_{\rm min}\) due to the different gate stacks and the corresponding difference in gate capacitance. To quantify the ratio between the transition voltages of each HB, we separate each mobility trace into two parts, isolating the two transitions in the form of abrupt changes in mobility. First, transition I at the onset of FNT (regime 2, yellow in Fig. 3b), corresponding to a steep increase in mobility due to screening of the initial disorder potential. Second, transition II (regime 4, orange in Fig. 3b), corresponding to a decrease in mobility when the interface traps become fully saturated. Next, we extract the ratio between the transition voltages for each Hall bar, by separately finding the \(R_{n}\) that maximizes corr\((R_{n})\coloneqq\) corr\((\mu_{\bar{p},{\rm HB}_{1}}(V_{\rm min}),\mu_{\bar{p},{\rm HB}_{2}}(R_{n}\times V_{\rm min}))\) for transition \(n={\rm I}\) and \(n={\rm II}\) (see Methods for details). We find \(R_{\rm I}\neq R_{\rm II}\), as shown in Fig. 3c. To explain this difference in the ratio of the transition voltages for both Hall bars, we employ a planar capacitor model as illustrated in Fig. 3d. When no charge is loaded at the SiGe-oxide interface, the electric field across the SiGe barrier is equal in both Hall bars when the ratio between the applied gate voltages equals \(R_{\rm QW}=C_{\rm QW,HB_{2}}/C_{\rm QW,HB_{1}}\), with \(C_{\rm QW,HB_{i}}^{-1}=C_{\rm SiGe}^{-1}+C_{\rm SiO_{2},HB_{i}}^{-1}\) being the series capacitance of the SiGe and SiO\({}_{2}\) layers. Using nominal layer thicknesses and dielectric constant values from the literature [28], we find \(R_{\rm QW}=0.74\). This is in agreement with the extracted voltage ratio \(R_{\rm I}=0.71(3)\) for transition I, confirming that transition I occurs at a specific electric field in the SiGe barrier. This is consistent with our understanding that the onset of FNT occurs at a specific electric field, resulting in a triangular barrier defined by the band offset and the depth of the quantum well. In contrast, near transition II, the electric field across the SiGe is independent of \(V_{\rm min}\) as a result of the tunnelling equilibrium between the sGe QW and the SiGe TQW. Decreasing \(V_{\rm min}\) only leads to additional charge accumulation at the SiGe-oxide interface and increases the potential drop across the oxide layer.
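The two capacitance ratios used here and in the next paragraph can be reproduced with a simple plate-capacitor calculation. In the sketch below, the total oxide thicknesses (8.5 nm for HB\(_1\) and 15.5 nm for HB\(_2\), i.e. the PE-ALD SiO\(_2\) plus the \(\sim\)1.5 nm oxidized Si cap), the 47 nm gate-to-QW SiGe barrier, and the relative permittivities (\(\epsilon_{\rm SiO_2}\approx 3.9\), \(\epsilon_{\rm SiGe}\approx 15.4\)) are our assumed values, consistent with the text but not taken from the authors' Methods.

```python
EPS_SIO2, EPS_SIGE = 3.9, 15.4            # assumed relative permittivities
D_SIGE = 47e-9                            # SiGe barrier thickness (m)
D_OX = {"HB1": 8.5e-9, "HB2": 15.5e-9}    # assumed total oxide thickness (m)

def c_qw(d_ox):
    """Gate-to-QW capacitance per unit area (in units of eps_0/m):
    series combination of the oxide and the SiGe barrier."""
    return 1.0 / (d_ox / EPS_SIO2 + D_SIGE / EPS_SIGE)

r_qw = c_qw(D_OX["HB2"]) / c_qw(D_OX["HB1"])   # gate-to-QW capacitance ratio
r_sio2 = D_OX["HB1"] / D_OX["HB2"]             # oxide-capacitance ratio
print(f"R_QW = {r_qw:.2f}, R_SiO2 = {r_sio2:.2f}")
```

With these assumptions the script returns \(R_{\rm QW}\approx 0.74\) and \(R_{\rm SiO_{2}}\approx 0.55\), matching the values quoted in the text.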
The ratio of gate voltages for which the electric field in the oxide is equal for both Hall bars is determined by the capacitance ratio \(R_{\rm SiO_{2}}\approx C_{\rm SiO_{2},{\rm HB}_{2}}/C_{\rm SiO_{2},{\rm HB}_{1}}=0.55\). This is in agreement with the extracted gate voltage ratio for transition II, \(R_{\rm II}=0.54(1)\), indicating that this transition occurs at a defined electric field in the gate oxide and thus a corresponding fixed charge density at the SiGe-oxide interface, compatible with our understanding of saturating the interface traps. We also note that by thermally cycling the system from base \(T\sim 15\) mK to room temperature and back, the device can be completely reset, which does not happen by sweeping the gate to \(V_{\rm g}=0\) V. After thermal cycling, the turn-on voltage reverts to the original value (first red curve in Fig. 2b), indicative of a release of the trapped charges. ## V Charge noise Next, we perform charge noise measurements on a QD device (Fig. 1b), providing us with a local probe of the charge fluctuators that can limit hole spin qubit coherence [14]. We accumulate a single quantum dot under plunger gate P and observe clean, regular Coulomb peaks (CPs) in the measured source-drain current \(I_{\rm SD}\) (Fig. 4a). In addition, gates B1 and B2 can be used to control the tunnel coupling to the source and drain reservoirs, respectively. To observe the effects of gate hysteresis on charge noise, we employ a measurement protocol similar to that used for the Hall bars, where we measure the charge noise as we cyclically push the plunger gate voltage to more negative \(V_{\rm min}\), as detailed in Fig. 4a. After pushing the plunger gate voltage \(V_{\rm P}\) to \(V_{\rm min}\), we tune \(V_{\rm P}\) to locate the first measurable CP at \(V_{\rm P,CP}\) (see Methods and Supplementary Fig. 5a) and observe a hysteretic behaviour with \(V_{\rm P,CP}\) shifting linearly with \(V_{\rm min}\). Next, we assess the charge noise using the Coulomb peak tracking (CPT) method, where \(V_{\rm P}\) is repeatedly and synchronously swept across the CP. This method allows us to probe very low-frequency noise, and we track the CP position \(V_{\rm P,CP}(t)\) for 1.5 hours by fitting the individual traces to a Gaussian function, as shown in Fig. 4b (see Methods). The CP position fluctuates over time, as a result of nearby charge fluctuators capacitively coupled to the QD. In Fig. 4c, we compare \(V_{\rm P,CP}(t)\) for different \(V_{\rm min}\) and find that the amplitude of the fluctuations increases for more negative \(V_{\rm min}\). To quantify this effect, we take the fast Fourier transform of \(V_{\rm P,CP}(t)\) and extract the power spectral density (PSD) \(S_{V}\) for each \(V_{\rm min}\). Using the plunger gate lever arm \(\alpha_{\rm P}\approx 0.23\) (see Supplementary Fig. 5b), we convert the PSD to an energy scale and extract the noise spectral density \(S_{E}^{1/2}\) at \(f=10^{-2}\) Hz (Fig. 4d). As \(V_{\rm min}\) is decreased, the low-frequency noise \(S_{E,f=10^{-2}\rm{Hz}}^{1/2}\) increases by over an order of magnitude and then saturates, similarly to the low-density transport metrics. The observed trend of increasing noise and reduced stability of the Coulomb peaks is likely also linked to the filling of the SiGe-oxide interface traps. To get a better insight into the underlying physical mechanism, we fit every PSD trace \(S_{V}\) over the measured frequency range to a power law \(S_{0}/f^{\alpha}\) and compare the noise exponents \(\alpha\).
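The CPT analysis chain described above (Gaussian peak tracking, an FFT-based power spectral density, lever-arm conversion to energy units, and a \(S_{0}/f^{\alpha}\) fit) can be sketched compactly. The code below is our minimal illustration with assumed array inputs; only the lever arm \(\alpha_{\rm P}\approx 0.23\) is taken from the text, and it is not the authors' analysis code.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.signal import periodogram

def gaussian(v, a, v0, s, c):
    return a * np.exp(-((v - v0) ** 2) / (2 * s ** 2)) + c

def track_peak(v_axis, traces):
    """Fit each I_SD(V_P) trace with a Gaussian; return peak positions V_P,CP(t)."""
    v_cp = []
    for tr in traces:
        p0 = [tr.max() - tr.min(), v_axis[np.argmax(tr)], 1e-3, tr.min()]
        popt, _ = curve_fit(gaussian, v_axis, tr, p0=p0)
        v_cp.append(popt[1])
    return np.array(v_cp)

def energy_psd(v_cp, dt, alpha_p=0.23):
    """PSD of V_P,CP(t), converted to an energy scale via E = alpha_p * e * V_P,
    i.e. S_E [(eV)^2/Hz] = alpha_p^2 * S_V [V^2/Hz]."""
    f, s_v = periodogram(v_cp, fs=1.0 / dt)
    return f[1:], alpha_p ** 2 * s_v[1:]

def fit_power_law(f, s_e):
    """Least-squares fit of S(f) = S0 / f^alpha in log-log space."""
    slope, intercept = np.polyfit(np.log(f), np.log(s_e), 1)
    return np.exp(intercept), -slope   # S0, alpha
```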
We find that \(\alpha\) increases from \(\sim\)1.4 to \(\sim\)1.8 as \(V_{\rm{min}}\) is pushed more negative (Supplementary Fig. 6). A deviation from the expected \(1/f\) PSD can be caused by a few fluctuators interacting strongly with the quantum dot [41] or by a noisy relaxation process that leads to an Ornstein-Uhlenbeck behaviour [42] and a corresponding \(1/f^{2}\) PSD. In our case, we observe that the CP position exhibits a noisy drift that increases with \(V_{\rm{min}}\) and masks the underlying \(1/f\) noise at low frequencies, despite letting the system settle for \(\sim\)10 min after pushing \(V_{\rm{P}}\) to \(V_{\rm{min}}\). We believe that this charge offset drift [43] is caused by the slow relaxation of the charges accumulated at the interface, as a result of low tunnel rates to nearby charge traps or back to the QW. This leads to a slow drift with a \(1/f^{2}\) noise spectrum. In the penultimate measurement cycle (\(V_{\rm{min}}=-2.05\) V, diamond data point in Fig. 4d), we investigate how this low-frequency noise evolves over time. We extract \(S_{E,f=10^{-2}\rm{Hz}}^{1/2}\) as a function of the waiting time \(T\) after setting \(V_{\rm{P}}\) to \(V_{\rm{min}}\) and repeatedly take 2-hour-long CPT measurements over a time span of \(>30\) hours (full data in Supplementary Fig. 7). The results are shown in Fig. 4e and we observe that the low-frequency noise intensity decreases monotonically, approaching the lowest noise level measured initially at \(V_{\rm{min}}=-0.75\) V. The two outliers are caused by a large jump of the CP position \(V_{\rm{P,CP}}(t)\) during the CPT measurement. We then confirm that the increase in noise is gate voltage-induced and reproducible by pushing \(V_{\rm{P}}\) to \(V_{\rm{min}}=-2.15\) V and acquiring the leftmost data point in Fig. 4d. The charge noise increases to an intensity similar to the previous cycle. Since the characteristic time scale of the noise decay is of the order of a day, the increased noise power is visible only at very low frequencies (\(f<10^{-2}\) Hz) and cannot easily be observed using, e.g., the Coulomb peak flank (CPF) method (see Methods).

Figure 4: **Charge noise measurement and data analysis in QD**: **a**, Schematic illustration of the measurement protocol. We repeatedly perform charge noise measurements using the CPT method, as the plunger gate voltage is pushed more negatively. **b**, CPT charge noise measurement for \(V_{\rm{min}}=-1.05\) V, highlighting three \(I_{\rm{SD}}\) traces (dots) and the respective Gaussian fits (dashed white lines). The solid white line indicates the extracted \(V_{\rm{P,CP}}(t)\). **c**, CPT charge noise measurements for different \(V_{\rm{min}}\) values (white text). The solid line illustrates the extracted \(V_{\rm{P,CP}}(t)\), offset by \(-2\) mV for visibility. **d**, Charge noise spectral density \(S_{E,f=10^{-2}\rm{Hz}}^{1/2}\) extracted from the measured \(S_{V}\) as a function of \(V_{\rm{min}}\). The central values and the corresponding error bars plotted at each \(V_{\rm{min}}\) are respectively the means and the standard deviations of the noise in a frequency interval of \(\pm 5\%\) around \(f=10^{-2}\) Hz. **e**, Charge noise spectral density \(S_{E,f=10^{-2}\rm{Hz}}^{1/2}\) as a function of the time \(T\) passed since \(V_{\rm{P}}\) was set to \(V_{\rm{min}}=-2.05\) V (diamond marker in **d**). The plotted central values and the corresponding error bars are obtained as in **d**.
Additionally, in a separate cool down, we fix \(V_{\rm P}=V_{\rm P,CP}\) and cyclically push the voltage on barrier gate B1 to increasingly negative voltages \(V_{\rm B1,min}\). After each cycle, we tune \(V_{\rm B1}\) and \(V_{\rm B2}\) to recover similar and symmetric tunnel rates (Supplementary Fig. 5c). We observe that this predominantly requires a gate voltage correction on gate B1, as shown in Fig. 4f. This shows that the charge trap filling is a local effect, arising close to the pushed gate, and thus confirms that charge hysteresis and noise are linked to charge traps at the SiGe-oxide interface rather than to defects deeper down in the heterostructure stack. ## VI Conclusions We studied and modeled the voltage-induced hysteretic behaviour commonly observed in SiGe heterostructures, which can lead to difficulties in tuning larger quantum devices. We pinpoint its origin to the incremental filling of a spatially varying density of charge traps at the SiGe-oxide interface. We find that the population of traps is locally induced, as a result of the maximum electric field applied between gate electrodes and the QW. This is ultimately detrimental to the properties of the 2DHG in the few-carrier regime. In particular, we find that both the mobility at low density and the percolation density, as functions of the lowest applied gate voltage \(V_{\rm min}\), are fully anti-correlated and change as a result of the spatially fluctuating trap density across the Hall bar. In contrast, we observe that the peak mobility is mostly unchanged, revealing that it is a poor benchmark for the quality of quantum materials. Charge noise shows an increased initial \(1/f^{2}\) component at low frequencies, which recovers over a timescale of about a day. We attribute this to a noisy and slow relaxation process of the accumulated charges at the SiGe-oxide interface. While the increased charge noise level recovers over time, the induced charge disorder is persistent, as revealed by the percolation density and mobility measurements, and can lead to qubit variability across the device. The interface trap population is fully reset by a thermal cycle of the device, but not by returning the gate voltage to 0 V. These results stress the need for a conservative tuning strategy and highlight the importance of the SiGe-oxide interface quality for the realization of reproducible, stable, and high-quality germanium quantum devices.
2306.16726
Chern numbers of terminal threefolds
Let X be a smooth threefold. We show that if $X_i\dashrightarrow X_{i+1}$ is a flip which appears in the $K_X$-MMP, then $c_1(X_i)^3-c_1(X_{i+1})^3$ is bounded by a constant depending only on $b_2(X)$.
Paolo Cascini, Hsin-Ku Chen
2023-06-29T06:59:39Z
http://arxiv.org/abs/2306.16726v2
# On the Chern numbers of smooth complex threefolds ###### Abstract. We show that the Chern numbers of a smooth complex projective threefold are bounded by a constant which depends only on the topological type of the threefold, provided that the cubic form of the threefold has non-zero discriminant. ###### Contents * 1 Introduction * 2 Preliminaries * 2.1 Terminal threefolds * 2.2 Negativity lemma * 3 Cubic forms after flips * 3.1 Topology of flipping contractions * 3.2 Geometry of flips ## 1. Introduction The aim of this paper is to prove the following: **Theorem 1.1**.: _Let \(X\) be a smooth complex projective threefold and let \(F_{X}\) be its associated cubic form. Assume that \(\Delta_{F_{X}}\neq 0\)._ _Then all the Chern numbers of \(X\) are bounded by a number that depends only on the topology of the manifold underlying \(X\)._ In the Theorem, \(F_{X}\) denotes the cubic form defined by the cup product on \(H^{2}(X,\mathbb{Z})\) and \(\Delta_{F_{X}}\) denotes its discriminant. In particular, our result gives a partial positive answer to a question raised by Kotschick regarding which linear combinations of Chern numbers are determined up to finite ambiguity by the topology ([14, §1] and [15, §1.1]). Note that the same result does not hold for non-Kähler complex threefolds [16], nor for complex projective varieties of dimension greater than three [21]. On the other hand, the question has a positive answer in the case of Kähler varieties underlying a spin manifold [22]. In the case of a smooth projective threefold \(X\), the only Chern numbers are \(c_{1}^{3}(X)\), \(c_{1}c_{2}(X)\) and \(c_{3}(X)\). The last one coincides with the topological Euler characteristic of \(X\) and is therefore itself a topological invariant, while \(c_{1}c_{2}(X)=24\chi(\mathcal{O}_{X})\) is bounded in terms of the Betti numbers of \(X\). Thus, the main point is to bound \(c_{1}^{3}(X)=-K_{X}^{3}\): **Theorem 1.2**.: _Let \(X\) be a smooth complex projective threefold whose associated cubic form \(F_{X}\) has non-zero discriminant._ _Then \(|K_{X}^{3}|\) is bounded by a constant which depends only on \(F_{X}\), \(p_{1}(X)\), \(b_{1}(X)\), \(b_{2}(X)\) and \(b_{3}(X)\)._ Theorem 1.2 relies, in turn, on the following result on the behaviour of the cubic form under the minimal model program: **Theorem 1.3**.: _Let \(X\) be a smooth complex projective threefold and let \(F_{X}\) be the associated cubic form. Assume that \(\Delta_{F_{X}}\neq 0\), and let \(Y\) be the outcome of a sequence of steps of a \(K_{X}\)-MMP._ _Then \(\Delta_{F_{Y}}\neq 0\) and, up to an \(SL(b_{2}(Y),\mathbb{Z})\)-action, \(F_{Y}\) belongs to a finite set which depends only on \(F_{X}\) and \(b_{2}(X)\)._ _Moreover, \(p_{1}(Y)\in\operatorname{Hom}(H^{2}(Y,\mathbb{Q}),\mathbb{Q})\) belongs to a finite set which depends only on \(F_{X}\), \(p_{1}(X)\), \(b_{1}(X)\), \(b_{2}(X)\) and \(b_{3}(X)\)._ Theorem 1.3 gives a positive answer to Question 4 of [23], under a few extra assumptions. The proof of Theorem 1.3 relies on the Chen-Hacon factorisation of a terminal threefold flip or of a divisorial contraction onto a curve [7]. As a consequence of Theorem 1.3, we also obtain: **Corollary 1.4**.: _Let \(X\) be a projective smooth complex threefold of Kodaira dimension \(\kappa(X)\). Assume that the associated cubic form \(F_{X}\) of \(X\) has non-zero discriminant._ _Then_ 1. \(\kappa(X)\neq 1\)_;_ 2. _if_ \(\kappa(X)=2\)_,_ \(Y\) _is a minimal model of_ \(X\) _and_ \(Y\to Z\) _is the induced Iitaka fibration, then_ \(b_{2}(Z)=1\)_; and_ 3. _if_ \(\kappa(X)=-\infty\) _and_ \(Y\to Z\) _is the induced Mori fibre space, then_ \(\dim Z\neq 1\)_, and_ \(\dim Z=2\) _only when_ \(b_{2}(Z)=1\)_._ **Acknowledgements.** The first author would like to thank the National Center for Theoretical Sciences in Taipei and Professor J. A. Chen for their generous hospitality, where some of the work for this paper was completed. Some of the work was done when the second author was visiting Imperial College London. The second author thanks Imperial College London for their hospitality. The second author is supported by KIAS individual Grant MG088901. ## 2. Preliminaries ### Terminal threefolds We work over the complex numbers.
#### 2.1.1. Singularities of terminal threefolds Three-dimensional terminal singularities were classified by Reid [19] and Mori [17]. A three-dimensional Gorenstein terminal singularity is an isolated compound Du Val singularity, that is, an isolated hypersurface singularity defined by the equation \[f(x,y,z)+ug(x,y,z,u)=0\] such that \((f(x,y,z)=0)\subset\mathbb{A}^{3}\) defines a Du Val singularity. A three-dimensional non-Gorenstein terminal singularity belongs to one of the following six classes: \(cA/r\), \(cAx/2\), \(cAx/4\), \(cD/2\), \(cD/3\) and \(cE/2\) (see [20, (6.1)] for the explicit description of each class). Let \((P\in X)\) be a germ of a three-dimensional terminal point. The _Cartier index_ of \(X\) at \(P\) is the smallest positive integer \(r\) such that \(rK_{X}\) is Cartier near \(P\). In particular, if \(D\) is any Weil \(\mathbb{Q}\)-Cartier divisor on \(X\) then \(rD\) is also Cartier near \(P\). It is known (cf. [20, (6.4)]) that one can deform this singularity to get a family of terminal singularities, such that a general member of this family has only cyclic-quotient singularities. Assume that \((P\in X)\) deforms to \(k\) cyclic quotient points \(P_{1}\),..., \(P_{k}\). Since \(P_{i}\) is a three-dimensional terminal cyclic-quotient singularity, it is of type \(\frac{1}{r_{i}}(1,-1,b_{i})\) for some \(0<b_{i}\leq\frac{r_{i}}{2}\). **Definition 2.1**.: Notation as above. 1. The data \(\mathcal{B}(P\in X)\coloneqq\{(r_{i},b_{i})\}_{i=1}^{k}\) is called the _basket data_ of the singularity \((P\in X)\). 2. The number \(aw(P\in X)\coloneqq k\) is called the _axial weight_ of the singularity. 3. We denote \(\Xi(P\in X)\coloneqq r_{1}+...+r_{k}\). We refer to [7, Remark 2.1] for explicit values of these invariants. **Remark 2.2**.: In [12] Kollár and Mori define another invariant, called the _axial multiplicity_ of \((P\in X)\). If \(P\in X\) is not a \(cAx/4\) singularity, then the axial multiplicity coincides with the axial weight. If \(P\in X\) is a \(cAx/4\) point then the axial multiplicity is equal to \(aw(P\in X)+1\). **Definition 2.3**.: Let \(Y\to X\) be a divisorial contraction between terminal varieties, which contracts a divisor \(E\) to a point \(P\in X\). We say that \(Y\to X\) is a _\(w\)-morphism_ if \(a(X,E)=\frac{1}{r_{P}}\), where \(r_{P}\) is the Cartier index of \(K_{X}\) near \(P\). **Definition 2.4**.: The _depth_ of a terminal singularity \(P\in X\), denoted by \(dep(P\in X)\), is the minimal length of a sequence \[X_{m}\to X_{m-1}\rightarrow\cdots\to X_{1}\to X_{0}=X,\] such that \(X_{m}\) is Gorenstein and \(X_{i}\to X_{i-1}\) is a \(w\)-morphism for all \(1\leq i\leq m\). Given a terminal threefold \(X\), we define \[dep(X)\coloneqq\sum_{P}dep(P\in X),\] where the sum runs over the singular points of \(X\). Note that the existence of such a sequence follows from [9, Theorem 1.2]. **Lemma 2.5**.: _Let \(X\) be a terminal threefold and \(P\in X\) be a singular point._ _Then the following holds._ 1. _The basket data of_ \(P\in X\) _belongs to a finite set which depends only on_ \(dep(P\in X)\)_._ 2. _The Cartier index of_ \(X\) _at_ \(P\) _and the axial multiplicity of_ \(P\in X\) _are both bounded by_ \(2dep(P\in X)\)_._ Proof.: By [4, Lemma 3.2] we know that \(\Xi(P\in X)\leq 2dep(P\in X)\). Hence the basket data of \(P\in X\) belongs to a finite set which depends only on \(dep(P\in X)\). Now from [7, Remark 2.1] we know that the Cartier index of \(K_{X}\) near \(P\) and the axial multiplicity of \(P\in X\) are both bounded by \(\Xi(P\in X)\). This proves (2).
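The invariants of Definition 2.1 are simple enough to tabulate mechanically; the following is a small bookkeeping sketch (our illustration, with a made-up basket):

```python
# Bookkeeping sketch (ours, not from the paper) for the invariants of
# Definition 2.1, given a basket {(r_i, b_i)} of cyclic-quotient points.
from dataclasses import dataclass

@dataclass
class BasketData:
    points: list  # pairs (r_i, b_i) with 0 < b_i <= r_i / 2

    def axial_weight(self):
        """aw(P in X): the number of cyclic-quotient points."""
        return len(self.points)

    def xi(self):
        """Xi(P in X) = r_1 + ... + r_k."""
        return sum(r for r, _ in self.points)

# Made-up basket: two points of type 1/2(1,-1,1), one of type 1/5(1,-1,2)
B = BasketData([(2, 1), (2, 1), (5, 2)])
assert B.axial_weight() == 3 and B.xi() == 9
```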
Let \(X\) be a terminal threefold. As in [23, §3.1], we define the first Pontryagin class \(p_{1}(X)\in\operatorname{Hom}(H^{2}(X,\mathbb{Q}),\mathbb{Q})\) of \(X\) as \[p_{1}(X)=c_{1}(X)^{2}-2c_{2}(X).\] Note that if \(X\) is smooth, then \(p_{1}(X)\) only depends on the topology of the underlying manifold of \(X\) (e.g. see [18, §1.1]). #### 2.1.2. The singular Riemann-Roch formula Given a projective terminal threefold \(X\) and a Weil divisor \(D\) on \(X\), we have the following version of the Riemann-Roch formula due to Reid [20]: \[\chi(\mathcal{O}_{X}(D))=\chi(\mathcal{O}_{X})+\frac{1}{12}D(D-K_{X})(2D-K_{X})+\frac{1}{12}D.c_{2}(X)+\sum_{P\in\mathcal{B}(X)}\left(-i_{P}\frac{r_{P}^{2}-1}{12r_{P}}+\sum_{j=1}^{i_{P}-1}\frac{\overline{jb_{P}}(r_{P}-\overline{jb_{P}})}{2r_{P}}\right),\] where \(\mathcal{B}(X)=\{(r_{P},b_{P})\}\) is the basket data of \(X\) and \(i_{P}\) is an integer such that \(\mathcal{O}_{X}(D)\cong\mathcal{O}_{X}(i_{P}K_{X})\) near \(P\). In particular, if \(D=K_{X}\), we have \[K_{X}.c_{2}(X)=-24\chi(\mathcal{O}_{X})+\sum_{P\in\mathcal{B}(X)}\left(r_{P}-\frac{1}{r_{P}}\right).\] #### 2.1.3. Chen-Hacon factorisation We will extensively use the following result: **Theorem 2.6** ([7] Theorem 3.3).: _Assume that either \(X\dashrightarrow X^{\prime}\) is a flip over \(W\), or \(X\to W\) is a divisorial contraction to a curve such that \(X\) is not Gorenstein over \(W\)._ _Then there exists a diagram_ \[X\longleftarrow Y_{1}\dashrightarrow Y_{2}\dashrightarrow\cdots\dashrightarrow Y_{k}\longrightarrow X^{\prime}\] _over \(W\), such that_ _(1) \(Y_{1}\to X\) is a \(w\)-morphism,_ _(2) \(Y_{k}\to X^{\prime}\) is a divisorial contraction,_ _(3) \(Y_{1}\dashrightarrow Y_{2}\) is either a flip or a flop over \(W\) and_ _(4) \(Y_{i}\dashrightarrow Y_{i+1}\) is a flip over \(W\) for \(i>1\)._ _Moreover, \(X^{\prime}\to W\) is a divisorial contraction to a point if \(X\to W\) is divisorial._ **Remark 2.7**.: Notation as in the above theorem. 1. Assume that \(X\dashrightarrow X^{\prime}\) is a flip, \(C_{Y_{1}}\) is the flipping/flopping curve of \(Y_{1}\dashrightarrow Y_{2}\) and \(C_{X}\) is the image of \(C_{Y_{1}}\) on \(X\). Then \(C_{X}\) is a flipping curve of \(X\dashrightarrow X^{\prime}\). This fact follows from the construction of the diagram. 2. By [7, Proposition 3.5], we have \[dep(X)-1=dep(Y_{1})\geq dep(Y_{2})>...>dep(Y_{k}).\] In particular, \(k\leq dep(X)+1\). 3. \(dep(X^{\prime})<dep(X)\) by [7, Proposition 3.5 and the proof of Proposition 3.6]. **Lemma 2.8**.: _Assume that \(X\) is a smooth threefold and_ \[X=X_{0}\dashrightarrow X_{1}\dashrightarrow...\dashrightarrow X_{k}\] _is a sequence of steps of a \(K_{X}\)-MMP._ _Then \(dep(X_{i})<\rho(X)\) for all \(i\)._ Proof.: By [7, Proposition 2.15, Proposition 3.5, Proposition 3.6] we know that \(dep(X_{i+1})\leq dep(X_{i})+1\) if \(X_{i}\to X_{i+1}\) is a divisorial contraction, and \(dep(X_{i+1})<dep(X_{i})\) if \(X_{i}\dashrightarrow X_{i+1}\) is a flip. It follows that if \(m\) is the number of divisorial contractions in the MMP, then \(dep(X_{i})\leq m\) for all \(i\). Since \(m<\rho(X)\), we know that \(dep(X_{i})<\rho(X)\). ### Negativity lemma We recall the negativity lemma for flips: **Lemma 2.9**.: _Let \((X,D)\) be a log pair and assume that \(X\dashrightarrow X^{\prime}\) is a \((K_{X}+D)\)-flip._ _Then for any exceptional divisor \(E\) over \(X\), we have that \(a(E,X,D)\leq a(E,X^{\prime},D_{X^{\prime}})\) where \(D_{X^{\prime}}\) is the strict transform of \(D\) in \(X^{\prime}\). Moreover, the inequality is strict if the centre of \(E\) in \(X\) is contained in the flipping locus._ Proof.: The Lemma is a special case of [13, Lemma 3.38].
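Before moving on, we note that the singular Riemann-Roch formula of Section 2.1.2 above is a finite computation once the basket data and the relevant intersection numbers are known. A direct transcription as a sketch (our own encoding of the inputs, assuming \(0\leq i_{P}<r_{P}\)):

```python
# Sketch (our encoding) of the singular Riemann-Roch formula of Section
# 2.1.2, assuming 0 <= i_P < r_P for each basket point.
from fractions import Fraction

def chi_of_D(chi_O, D_D_K, D_c2, basket):
    """chi(O_X(D)) from chi(O_X), D.(D-K_X).(2D-K_X), D.c_2(X) and the
    basket, given as triples (r_P, b_P, i_P) with O_X(D) = O_X(i_P K_X)
    near P."""
    total = Fraction(chi_O) + Fraction(D_D_K, 12) + Fraction(D_c2, 12)
    for r, b, i in basket:
        total -= Fraction(i * (r * r - 1), 12 * r)
        for j in range(1, i):
            jb = (j * b) % r                      # the residue bar{j b_P}
            total += Fraction(jb * (r - jb), 2 * r)
    return total

def K_dot_c2(chi_O, basket):
    """K_X.c_2(X) = -24*chi(O_X) + sum_P (r_P - 1/r_P)."""
    return -24 * Fraction(chi_O) + sum(Fraction(r * r - 1, r)
                                       for r, _, _ in basket)
```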
**Corollary 2.10**.: _Let \((X,D)\) be a log pair and assume that \(X\dashrightarrow X^{\prime}\) is a \((K_{X}+D)\)-flip and \(C\subset X\) is an irreducible curve which is not a flipping curve._ _Then \((K_{X}+D)\cdot C\geq(K_{X^{\prime}}+D_{X^{\prime}})\cdot C_{X^{\prime}}\) where \(C_{X^{\prime}}\) and \(D_{X^{\prime}}\) are the strict transforms on \(X^{\prime}\) of \(C\) and \(D\) respectively. Moreover, the inequality is strict if \(C\) intersects the flipping locus non-trivially._ Proof.: Let \(W\) be a common resolution of \(X\) and \(X^{\prime}\), with induced morphisms \(\phi\colon W\to X\) and \(\phi^{\prime}\colon W\to X^{\prime}\), chosen so that the strict transform \(C_{W}\) of \(C\) on \(W\) is well defined. Then Lemma 2.9 implies that \(F\coloneqq\phi^{*}(K_{X}+D)-{\phi^{\prime}}^{*}(K_{X^{\prime}}+D_{X^{\prime}})\) is an effective divisor and is supported on exactly those exceptional divisors whose centres on \(X\) are contained in the flipping locus. Hence \[(K_{X}+D)\cdot C-(K_{X^{\prime}}+D_{X^{\prime}})\cdot C_{X^{\prime}}=(\phi^{*}(K_{X}+D)-{\phi^{\prime}}^{*}(K_{X^{\prime}}+D_{X^{\prime}}))\cdot C_{W}=F\cdot C_{W}\geq 0.\] The last inequality is strict if and only if \(C_{W}\) intersects \(F\) non-trivially, or equivalently, \(C\) intersects the flipping locus non-trivially. ## 3. Cubic forms after flips ### Topology of flipping contractions **Convention 3.1**.: Let \(R\) be a commutative ring. A _cubic form_ in \(R\) is a homogeneous polynomial \(F\in R[x_{0},\ldots,x_{n}]\) of degree three. We denote by \(\Delta_{F}\) its discriminant (e.g. see [3, §2.3]). Recall that if \(R\) is an algebraically closed field of characteristic zero then \((F=0)\subset\mathbb{P}^{n}\) is singular if and only if \(\Delta_{F}=0\). Following [3, Definition 2.9], given a cubic form \(G\in R[x_{1},\ldots,x_{n}]\) and elements \(b=(b_{1},\ldots,b_{n})\in R^{n}\) and \(a\in R\), we denote by \((a,b,G)\) the cubic form \[F(x_{0},\ldots,x_{n})\coloneqq ax_{0}^{3}+x_{0}^{2}\cdot\sum_{i=1}^{n}b_{i}x_{i}+G(x_{1},\ldots,x_{n})\in R[x_{0},\ldots,x_{n}].\] We say that two cubic forms \(F_{1},F_{2}\in R[x_{0},\ldots,x_{n}]\) are _equivalent_ if there exists \(T\in\operatorname{SL}(n+1,R)\) such that \(F_{1}(T\cdot x)=F_{2}(x)\). If the cubic form \(F\in R[x_{0},\ldots,x_{n}]\) is equivalent to \((a,b,G)\) for some cubic form \(G\in R[x_{1},\ldots,x_{n}]\) and elements \(b=(b_{1},\ldots,b_{n})\in R^{n}\) and \(a\in R\), then, for simplicity, we will just denote it by \[F\sim(a,b,G).\] Let \(X\) be a threefold. Given \(\sigma_{1}\), \(\sigma_{2}\in H^{2}(X,\mathbb{Z})\), we denote by \(\sigma_{1}\cdot\sigma_{2}\in H_{2}(X,\mathbb{Z})\) the image of \(\sigma_{1}\cup\sigma_{2}\) under the morphism \(H^{4}(X,\mathbb{Z})\to H_{2}(X,\mathbb{Z})\) defined by taking the cap product with the fundamental class of \(X\). We denote by \(F_{X}\) the cubic form defined by the cup product on \(H^{2}(X,\mathbb{Z})\). As in [3, Definition 2.12], we define \[S_{X}\coloneqq\sup\{|a|\in\mathbb{Z}\mid F_{X}\sim(a,b,G)\text{ for some }b\in\mathbb{Z}^{n}\text{ and }G\in\mathbb{Z}[x_{1},\ldots,x_{n}]\}.\] **Lemma 3.2**.: _Assume that \(X\to W\) is a birational morphism between varieties with rational singularities such that \(\rho(X/W)=1\)._ _Then \(b_{2}(X)=b_{2}(W)+1\)._ Proof.: The proof of [3, Lemma 2.16 (3)] works unchanged under our assumption.
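The smoothness criterion recalled in Convention 3.1 — \(\Delta_{F}=0\) if and only if \((F=0)\) is singular — can be tested symbolically in small cases. A sketch (our illustration; the nodal cubic below is a made-up example):

```python
# Symbolic check (our illustration) of the criterion in Convention 3.1:
# over an algebraically closed field of characteristic zero, Delta_F = 0
# iff the cubic hypersurface (F = 0) is singular.
import sympy as sp

x, y, z = sp.symbols('x y z')

def is_singular(F):
    """Is the plane cubic (F = 0) in P^2 singular?  By Euler's relation,
    a common zero of all partials automatically lies on (F = 0)."""
    grads = [sp.diff(F, v) for v in (x, y, z)]
    for chart in (x, y, z):                   # dehomogenise on each chart
        if sp.solve([g.subs(chart, 1) for g in grads], dict=True):
            return True
    return False

print(is_singular(x**3 + y**3 + z**3))   # False: the Fermat cubic is smooth
print(is_singular(x**3 + y**3 - x*y*z))  # True: nodal at (0 : 0 : 1)
```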
**Lemma 3.3**.: _Let \(\phi\colon X\to W\) be a small birational morphism between three-dimensional varieties with rational singularities and such that \(\rho(X/W)=1\)._ _Then there exists a cycle \(C\) which generates \(H_{2}(X/W,\mathbb{Z})\) and a subspace \(V\subset H_{2}(X,\mathbb{Z})\) such that \(H_{2}(X,\mathbb{Z})=\langle[C],V\rangle\) and for all \(\sigma\in H^{2}(W,\mathbb{Z})\) and \(\tau\in H^{2}(X,\mathbb{Z})\), we have that \(\tau\cdot\phi^{*}\sigma\in V\)._ Proof.: Assume that \(H_{2}(X/W,\mathbb{Z})=\langle[C_{1}],...,[C_{k}]\rangle\). If \(k=1\) we choose \(C=C_{1}\). Otherwise, since by Lemma 3.2 we have that \(b_{2}(X)=b_{2}(W)+1\), it follows that \(a[C_{k-1}]\equiv b[C_{k}]\) for some relatively prime integers \(a\) and \(b\). There are integers \(s\), \(t\) such that \(sa+tb=1\). If we let \(C^{\prime}_{k-1}=tC_{k-1}+sC_{k}\), then one can see that \(b[C^{\prime}_{k-1}]\equiv[C_{k-1}]\) and \(a[C^{\prime}_{k-1}]\equiv[C_{k}]\). Hence \(\langle[C_{1}],...,[C_{k}]\rangle=\langle[C_{1}],...,[C_{k-2}],[C^{\prime}_{k-1}]\rangle\). After repeating this process \(k-2\) times, we can find a cycle \(C\) such that \(\langle[C]\rangle=\langle[C_{1}],...,[C_{k}]\rangle\). Now let \([C]\), \([\Gamma_{1}]\),..., \([\Gamma_{n}]\) be a basis of the free part of \(H_{2}(X,\mathbb{Z})\), such that \(\mathrm{Supp}(\Gamma_{i})\) does not contain any component of \(exc(\phi)\) for all \(i\). Let \(A\) be an ample divisor on \(X\). Then we may assume that \(A\) intersects \(\Gamma_{i}\) transversally and \(A\) does not intersect \(\Gamma_{i}\cap exc(\phi)\) for all \(i\). In this case we know that \(A\cdot\Gamma_{i}=\phi_{*}A\cdot\phi_{*}\Gamma_{i}\) for all \(i\). Let \(V=\langle[\Gamma_{1}],...,[\Gamma_{n}],H_{2}(X,\mathbb{Z})_{tor}\rangle\) where \(H_{2}(X,\mathbb{Z})_{tor}\) is the torsion part of \(H_{2}(X,\mathbb{Z})\). Given \(\sigma\in H^{2}(W,\mathbb{Z})\) and \(\tau\in H^{2}(X,\mathbb{Z})\), we may write \(\tau\cdot\phi^{*}\sigma=\lambda[C]+\Gamma\) with \(\Gamma\in V\). Since \(A\cdot\Gamma=\phi_{*}A\cdot\phi_{*}\Gamma\), we have that \[\lambda A\cdot C+A\cdot\Gamma=A\cdot\tau\cdot\phi^{*}\sigma=\phi_{*}A\cdot\phi_{*}\tau\cdot\sigma=\phi_{*}A\cdot\phi_{*}\Gamma=A\cdot\Gamma.\] Thus, \(\lambda=0\) and our claim follows.
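The reduction step in the proof of Lemma 3.3, which replaces two generators satisfying \(a[C_{k-1}]\equiv b[C_{k}]\) by the single cycle \(tC_{k-1}+sC_{k}\) with \(sa+tb=1\), is the extended Euclidean algorithm in disguise. A minimal sketch (our illustration, modelling each class as an integer multiple of a fixed primitive class):

```python
# Sketch (ours) of the generator-merging step in the proof of Lemma 3.3,
# modelling each class [C_i] as an integer multiple n_i of a fixed
# primitive class e, so that a relation a*[C_{k-1}] = b*[C_k] holds.
def ext_gcd(a, b):
    """Return (g, s, t) with s*a + t*b = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, s, t = ext_gcd(b, a % b)
    return g, t, s - (a // b) * t

def single_generator(multiples):
    """Iteratively merge the last two classes into one, as in the proof."""
    n = multiples[0]
    for m in multiples[1:]:
        g, s, t = ext_gcd(n, m)   # the Bezout pair (s, t) of the proof
        n = s * n + t * m         # class of the merged cycle: g * e
        assert n == g
    return n

assert single_generator([6, 10, 15]) == 1   # gcd(6, 10, 15) = 1
```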
**Proposition 3.4**.: _Let \(\phi\colon X\to W\) be a small birational morphism between three-dimensional varieties with rational singularities and such that \(\rho(X/W)=1\)._ _Then there exists a basis \(\eta_{1},\ldots,\eta_{n}\) of \(H^{2}(X,\mathbb{Z})\) such that \(\eta_{1}\cdot\eta_{i}\cdot\eta_{j}=0\) if \((i,j)\neq(1,1)\) and \(\phi^{*}H^{2}(W,\mathbb{Q})=\langle\eta_{2},...,\eta_{n}\rangle_{\mathbb{Q}}\)._ _In particular, let \(F_{X}\) and \(F_{W}\) be the associated cubic forms of \(X\) and \(W\) respectively. Then \(F_{X}\sim(a,0,F_{W})\) for some \(a\in\mathbb{Z}\)._ Proof.: We may write \(H_{2}(X,\mathbb{Z})=\langle[C],V\rangle\) as in Lemma 3.3. By the universal coefficient theorem, there exists a basis \(\eta_{1}\),..., \(\eta_{n}\) of the free part of \(H^{2}(X,\mathbb{Z})\) such that \(\eta_{1}\cdot C=1\) and \(\eta_{1}\cdot\Gamma=0\) for all \(\Gamma\in V\). Replacing \(\eta_{i}\) by \(\eta_{i}-(\eta_{i}\cdot C)\eta_{1}\) we may assume that \(\eta_{i}\cdot C=0\) for \(i>1\), hence \(a_{i}\eta_{i}=\phi^{*}\sigma_{i}\) for some \(a_{i}\in\mathbb{N}\) and for some \(\sigma_{i}\in H^{2}(W,\mathbb{Z})\). Now Lemma 3.3 says that \(a_{i}a_{j}\eta_{i}\cdot\eta_{j}\in V\) if \((i,j)\neq(1,1)\) (we define \(a_{1}=1\) for convenience). Thus \(\eta_{1}\cdot\eta_{i}\cdot\eta_{j}=0\) for \((i,j)\neq(1,1)\). It follows that \(F_{X}=\eta_{1}^{3}x_{1}^{3}+F_{W}(x_{2},...,x_{n})\) or, equivalently, \(F_{X}\sim(\eta_{1}^{3},0,F_{W})\). **Corollary 3.5**.: _Assume that \(X\dashrightarrow X^{\prime}\) is a three-dimensional terminal flip._ _Then \(\Delta_{F_{X}}=0\) if and only if \(\Delta_{F_{X^{\prime}}}=0\)._ Proof.: Assume that \(X\dashrightarrow X^{\prime}\) is a flip over \(W\). Then Proposition 3.4 implies that \(F_{X}\sim(a,0,F_{W})\) and \(F_{X^{\prime}}\sim(a^{\prime},0,F_{W})\) for some integers \(a\) and \(a^{\prime}\). Thus \((F_{X}=0)\) has singularities if and only if \((F_{W}=0)\) has singularities, which is also equivalent to the fact that \((F_{X^{\prime}}=0)\) has singularities. This implies that \(\Delta_{F_{X}}=0\) if and only if \(\Delta_{F_{X^{\prime}}}=0\). **Lemma 3.6**.: _Let \(X\) be a threefold and let \(F_{X}\) be the cubic form associated to \(X\)._ _Then \(\Delta_{F_{X}}=0\) if and only if there exists \(\tau\in H^{2}(X,\mathbb{C})\) such that \(\tau^{2}=0\)._ Proof.: Fix a basis \(v_{1}\),..., \(v_{n}\) of \(H^{2}(X,\mathbb{Z})\). Then \[F_{X}=((x_{1},...,x_{n})(v_{1},...,v_{n})^{t})^{3}.\] If \(T\in SL(n,\mathbb{C})\), then \(T\cdot F_{X}=((x_{1},...,x_{n})T(v_{1},...,v_{n})^{t})^{3}\). Now \(\Delta_{F_{X}}=0\) if and only if the hypersurface \((F_{X}=0)\subset\mathbb{P}^{n-1}_{(x_{1},...,x_{n})}\) has singularities. Fix a point \(P=(a_{1},...,a_{n})\in(F_{X}=0)\). There exists a linear transformation \(T\in SL(n,\mathbb{C})\) such that \[((x_{1},...,x_{n})\cdot T)|_{x_{1}=a_{1},...,x_{n}=a_{n}}=(1,0,...,0).\] Now \(P\) is a singular point of \((F_{X}=0)\) if and only if \((T^{-1}(v_{1}))^{3}=(T^{-1}(v_{1}))^{2}(T^{-1}(v_{i}))=0\) for all \(i=2\),..., \(n\). This is equivalent to saying that \((T^{-1}(v_{1}))^{2}=0\). **Corollary 3.7**.: _Let \(Z\) be a threefold and let \(F_{Z}\) be the cubic form associated to \(Z\). Assume that \(Z\to V\) is a morphism such that either \(V\) is a curve or \(V\) is a surface such that \(b_{2}(V)>1\)._ _Then \(\Delta_{F_{Z}}=0\)._ Proof.: By Lemma 3.6, it is enough to show that there exists \(\tau\in H^{2}(Z,\mathbb{C})\) such that \(\tau^{2}=0\). Assume first that \(V\) is a curve. Then it is enough to take \(\tau\in H^{2}(Z,\mathbb{Z})\) to be the class corresponding to a general fibre of \(Z\to V\). Assume now that \(V\) is a surface with \(b_{2}(V)>1\). Let \(\sigma_{1},\sigma_{2}\in H^{2}(V,\mathbb{Z})\) be linearly independent elements. If \(\sigma_{2}^{2}=0\) then we can take \(\tau\) to be the pull-back of \(\sigma_{2}\) to \(Z\). Otherwise there exists \(\lambda\in\mathbb{C}\) such that \((\sigma_{1}+\lambda\sigma_{2})^{2}=0\) and we can take \(\tau\) to be the pull-back of this class. ### Geometry of flips **Convention 3.8**.: Let \(H_{0}\) be a (germ of a) Du Val surface and let \(\bar{H}\to H_{0}\) be the minimal resolution of \(H_{0}\). We denote \(\bar{\rho}(H_{0})\coloneqq\rho(\bar{H}/H_{0})\). Let \(\phi\colon H\to H_{0}\) be a partial resolution of \(H_{0}\), i.e., \(H\) also admits Du Val singularities and \(\bar{H}\) is also the minimal resolution of \(H\). Let \(D\) be an effective divisor on \(H\). We denote \[\delta_{H\to H_{0}}(D)\coloneqq\sum_{i=1}^{m}\operatorname{mult}_{\Gamma_{i}}D,\] where \(\Gamma_{1},\ldots,\Gamma_{m}\) are the \(\phi\)-exceptional divisors.
**Lemma 3.9**.: _Fix a positive integer \(n\). Let \(X\dashrightarrow X^{\prime}\) be a three-dimensional terminal flip over \(W\) and let \(W\to W_{0}\) be a birational morphism. Assume that_ 1. _there exist an analytic neighborhood_ \(U_{0}\subset W_{0}\) _and a Du Val section_ \(H_{0}\in|-K_{U_{0}}|\)_, such that the image of the flipping locus of_ \(X\dashrightarrow X^{\prime}\) _on_ \(W_{0}\) _is contained in_ \(U_{0}\)_, and both_ \(H\to H_{0}\) _and_ \(H^{\prime}\to H_{0}\) _are partial resolutions, where_ \(H\) _and_ \(H^{\prime}\) _are the proper transforms of_ \(H_{0}\) _on_ \(X\) _and_ \(X^{\prime}\) _respectively;_ 2. _there is a divisor_ \(D^{\prime}\subset X^{\prime}\) _which is contracted by the induced morphism_ \(X^{\prime}\to W_{0}\)_; and_ 3. \(\bar{\rho}(H_{0})\)_,_ \(\delta_{H^{\prime}\to H_{0}}(D^{\prime}|_{H^{\prime}})\) _and_ \(dep(X)\) _are all bounded by_ \(n\)_._ _Then_ 1. _for any flipped curve_ \(C^{\prime}\)_, we have that_ \(|D^{\prime}\cdot C^{\prime}|\) _is bounded by an integer which depends only on_ \(n\)_;_ 2. _for any flipping curve_ \(C\)_, if_ \(D\) _is the proper transform of_ \(D^{\prime}\) _on_ \(X\) _then_ \(|D\cdot C|\) _is bounded by an integer which depends only on_ \(n\)_; and_ 3. \(\delta_{H\to H_{0}}(D|_{H})\) _is bounded by an integer which depends only on_ \(n\)_._ Proof.: Since \(H_{0}\) is Du Val, inversion of adjunction implies that \((U_{0},H_{0})\) is canonical. Since \(H\to H_{0}\) is a partial resolution, it follows that if \(U_{X}\subset X\) is the pre-image of \(U_{0}\) on \(X\) and \(\psi\colon U_{X}\to U_{0}\) is the induced morphism, then \(K_{U_{X}}+H=\psi^{*}(K_{U_{0}}+H_{0})\) and, in particular, \(H\in|-K_{U_{X}}|\). Moreover, all the flipping curves of \(X\dashrightarrow X^{\prime}\) are contained in \(U_{X}\). Thus, \(H\cdot C=-K_{X}\cdot C\) for any flipping curve \(C\). Likewise, \(H^{\prime}\cdot C^{\prime}=-K_{X^{\prime}}\cdot C^{\prime}\) for any flipped curve \(C^{\prime}\). Let \(C^{\prime}\subset X^{\prime}\) be a flipped curve. Since \(H^{\prime}\cdot C^{\prime}=-K_{X^{\prime}}\cdot C^{\prime}<0\), we know that \(C^{\prime}=\Gamma^{\prime}_{k}\) for some \(1\leq k\leq q\), where \(\Gamma^{\prime}_{1},\ldots,\Gamma^{\prime}_{q}\) are all the exceptional divisors of \(H^{\prime}\to H_{0}\). Since \(\rho(\bar{H}/H_{0})\) is bounded by \(n\), the singularity type of \(H_{0}\) has only finitely many possibilities, so the intersection numbers \(\Gamma^{\prime}_{i}\cdot\Gamma^{\prime}_{j}\) are all bounded by some constant which depends only on \(n\) for all \(1\leq i,j\leq q\). Thus, \(|D^{\prime}\cdot C^{\prime}|=|(D^{\prime}|_{H^{\prime}}).\Gamma^{\prime}_{k}|\) is bounded. We can write \(D^{\prime}\equiv_{W}\lambda K_{X^{\prime}}\) for some rational number \(\lambda\). Notice that \(dep(X^{\prime})<dep(X)\leq n\) and so the Cartier indices of \(K_{X}\) and \(K_{X^{\prime}}\) are both bounded by \(2n\) by Lemma 2.5. Since \(|D^{\prime}\cdot C^{\prime}|\) is bounded and the Cartier index of \(K_{X^{\prime}}\) is bounded, it follows that \(|\lambda|\) is bounded by an integer which depends only on \(n\). Moreover, we have that \(D\equiv_{W}\lambda K_{X}\). Since \(|K_{X}\cdot C|<1\) by [1, Theorem 0] and \(\lambda\) is bounded, it follows that \(|D\cdot C|\) is bounded by an integer that depends only on \(n\). Now we prove (3). Assume first that there is exactly one flipping curve \(C\subset X\).
Then we can write \(D|_{H}=\sum a_{i}\Gamma_{i}+mC\), where \(m\) is a non-negative integer and the sum runs over all the exceptional curves of \(H\to H_{0}\) which are strict transforms on \(H\) of those curves \(\Gamma^{\prime}_{1},\ldots,\Gamma^{\prime}_{q}\) in \(H^{\prime}\) which are not flopped curves. We know that \(\delta_{H\to H_{0}}(D|_{H})=\sum_{i}a_{i}+m\) and \(\sum_{i}a_{i}\leq\delta_{H^{\prime}\to H_{0}}(D^{\prime}|_{H^{\prime}})\). Since the intersection number \[D\cdot C=D|_{H}\cdot C=\sum a_{i}\Gamma_{i}\cdot C+mC^{2}\] is bounded by (2), and \(\Gamma_{i}\cdot C\) and \(C^{2}\) are all bounded because \(\bar{\rho}(H_{0})\) is bounded, we know that \(m\) is bounded. Thus, \(\delta_{H\to H_{0}}(D|_{H})\) is bounded. In general we can run an analytic \(K_{X}\)-MMP over \(W\) and get a composition of flips \(X=X_{0}\dashrightarrow X_{1}\dashrightarrow...\dashrightarrow X_{k}=X^{\prime}\) such that the flipping locus of \(X_{i}\dashrightarrow X_{i+1}\) is irreducible for all \(i\). One can show that \(\delta_{H\to H_{0}}(D|_{H})\) is bounded by applying the above argument \(k\) times. **Lemma 3.10**.: _Fix a positive integer \(n\). Let \(Y\to X\) be a three-dimensional terminal divisorial contraction which contracts a divisor \(E\) to a smooth curve \(C\). Assume that_ 1. \(dep(Y)\leq n\)_;_ 2. _there exists a birational morphism_ \(X\to W_{0}\) _such that_ \(C\) _is contracted by this morphism;_ 3. _there exist an analytic neighbourhood_ \(U_{0}\subset W_{0}\) _which contains the image of_ \(C\) _on_ \(W_{0}\)_, and a Du Val section_ \(H_{0}\in|-K_{U_{0}}|\)_, such that_ \(\bar{\rho}(H_{0})\leq n\)_; and_ 4. _if_ \(H_{X}\) _and_ \(H_{Y}\) _are the strict transforms of_ \(H_{0}\) _on_ \(X\) _and_ \(Y\) _respectively, then_ \(H_{Y}\to H_{X}\to H_{0}\) _are partial resolutions and we have that_ \(C\subset H_{X}\)_._ _Then \(\delta_{H_{Y}\to H_{0}}(E|_{H_{Y}})\) is bounded by an integer which depends only on \(n\)._ Proof.: We will prove the statement by induction on \(dep(Y)\). Assume first that \(dep(Y)=0\). In this case \(Y\) is Gorenstein and \(X\) is smooth near \(C\) by [8, Theorem 4]. Fix \(P\in C\). If \(H_{X}\) is smooth at \(P\), then \(H_{Y}\cong H_{X}\) near \(P\). If \(H_{X}\) is singular near \(P\), then the neighbourhood \((P\in C\subset H_{X})\) is given by [10, Theorem 1.1]. Since \(\bar{\rho}(H_{X})\leq\bar{\rho}(H_{0})\leq n\), there are only finitely many possibilities and, therefore, \(\delta_{H_{Y}\to H_{0}}(E|_{H_{Y}})\) is bounded by an integer which depends only on \(n\). In general we have a factorisation as in Theorem 2.6. We know that \(Y^{\prime}\to X\) is a divisorial contraction to a point and \(Z_{k}\to Y^{\prime}\) is a divisorial contraction which contracts \(E_{Z_{k}}\) to \(C^{\prime}\), where \(E_{Z_{k}}\) is the strict transform of \(E\) on \(Z_{k}\) and \(C^{\prime}\) is the strict transform of \(C\) on \(Y^{\prime}\). **Claim**.: if \(E_{Z_{i}}\) and \(H_{Z_{i}}\) are the strict transforms of \(E\) and \(H_{X}\) on \(Z_{i}\) respectively, then for all \(i=1,\dots,k\) we have that \(\delta_{H_{Z_{i}}\to H_{0}}(E_{Z_{i}}|_{H_{Z_{i}}})\) is bounded by an integer depending only on \(n\). Assuming the claim, since \(\delta_{H_{Z_{1}}\to H_{0}}(E_{Z_{1}}|_{H_{Z_{1}}})\) is bounded by an integer depending only on \(n\), it follows that \(\delta_{H_{Y}\to H_{0}}(E|_{H_{Y}})\) is bounded by an integer depending only on \(n\).
We now prove the claim in several steps: **Step 1:**: if \(i>1\), or \(i=1\) and \(Z_{1}\dashrightarrow Z_{2}\) is a flip, then \(\delta_{H_{Z_{i}}\to H_{0}}(E_{Z_{i}}|_{H_{Z_{i}}})\) is bounded by a number which depends only on \(n\). Indeed, by [6, Lemma 3.4] we know that \(H_{Z_{i}}\to H_{0}\) is a partial resolution. It also follows by our assumptions and by Remark 2.7 that \(dep(Z_{i})<dep(Y)\leq n\). Thus, \(\delta_{H_{Z_{k}}\to H_{0}}(E_{Z_{k}}|_{H_{Z_{k}}})\) is bounded by the induction hypothesis. Now assume that \(\delta_{H_{Z_{i+1}}\to H_{0}}(E_{Z_{i+1}}|_{H_{Z_{i+1}}})\) is bounded by an integer depending only on \(n\); we want to show that \(\delta_{H_{Z_{i}}\to H_{0}}(E_{Z_{i}}|_{H_{Z_{i}}})\) is also bounded. We know that \(dep(Z_{i})<n\), hence \(\delta_{H_{Z_{i}}\to H_{0}}(E_{Z_{i}}|_{H_{Z_{i}}})\) is bounded by Lemma 3.9. **Step 2:**: if \(Z_{1}\dashrightarrow Z_{2}\) is a flop and no flopped curve is contained in \(H_{Z_{2}}\), then \(\delta_{H_{Z_{1}}\to H_{0}}(E_{Z_{1}}|_{H_{Z_{1}}})=\delta_{H_{Z_{2}}\to H_{0}}(E_{Z_{2}}|_{H_{Z_{2}}})\). Indeed, if \(H_{Z_{2}}\) does not intersect the flopped curve of \(Z_{1}\dashrightarrow Z_{2}\), then \(\delta_{H_{Z_{1}}\to H_{0}}(E_{Z_{1}}|_{H_{Z_{1}}})=\delta_{H_{Z_{2}}\to H_{0}}(E_{Z_{2}}|_{H_{Z_{2}}})\). If \(H_{Z_{2}}\) intersects the flopped curve, then since \(H_{Z_{2}}\equiv_{X}-K_{Z_{2}}\), we know that \(H_{Z_{2}}\) intersects the flopped curve trivially, so \(H_{Z_{2}}\) contains the flopped curve, contradicting our assumption. **Step 3:**: Let \(F=exc(Y^{\prime}\to X)\). Then \(\delta_{H_{Y^{\prime}}\to H_{0}}(F|_{H_{Y^{\prime}}})\) is bounded by a number which depends only on \(n\). Indeed, let \(\Xi_{1}\),..., \(\Xi_{m}\) be the irreducible components of \(F\cap H_{Y^{\prime}}\). Then for all \(2\leq j\leq m\) we have that \(\Xi_{j}\equiv\lambda_{j}\Xi_{1}\) for some positive rational number \(\lambda_{j}\), where the numerical equivalence is intended as cycles in \(Y^{\prime}\). Moreover, by interchanging \(\Xi_{1}\) and \(\Xi_{j}\) for some \(j\) we may assume that \(\lambda_{j}\geq 1\) for all \(j\). We can write \[F|_{H_{Y^{\prime}}}=a_{1}\Xi_{1}+...+a_{m}\Xi_{m}\equiv(a_{1}+\lambda_{2}a_{2}+...+\lambda_{m}a_{m})\Xi_{1}.\] Then \[0<(a_{1}+...+a_{m})(-F.\Xi_{1})\leq(a_{1}+\lambda_{2}a_{2}+...+\lambda_{m}a_{m})(-F.\Xi_{1})=-F^{2}.H_{Y^{\prime}}=F^{2}.K_{Y^{\prime}}=a(F,X)F^{3}\leq 4,\] where the last inequality follows from [11, Table 1, Table 2]. By Remark 2.7, we have that \(dep(Y^{\prime})\leq n\). Thus, Lemma 2.5 implies that the Cartier index of \(F\), being not greater than the Cartier index of \(K_{Y^{\prime}}\), is bounded by \(2n\). It follows that \[\delta_{H_{Y^{\prime}}\to H_{0}}(F|_{H_{Y^{\prime}}})=a_{1}+...+a_{m}\leq 8n.\] **Step 4:**: Let \(F_{Z_{j}}\) be the strict transform of \(F\) on \(Z_{j}\) for \(j=1,\ldots,k\). Then \(\delta_{H_{Z_{j}}\to H_{0}}(F_{Z_{j}}|_{H_{Z_{j}}})\) is bounded by a number which depends only on \(n\) for any \(j=2,\ldots,k\). By Step 3, we know that \(\delta_{H_{Y^{\prime}}\to H_{0}}(F|_{H_{Y^{\prime}}})\) is bounded. Since \(\bar{\rho}(H_{0})\leq n\), we have that both the singularities of \(H_{Y^{\prime}}\) and the birational morphism \(\phi\colon H_{Z_{k}}\to H_{Y^{\prime}}\) have only finitely many possibilities.
We know that \(F|_{H_{Y^{\prime}}}\) is supported on the exceptional locus of \(H_{Y^{\prime}}\to H_{0}\), hence, since \(F_{Z_{k}}|_{H_{Z_{k}}}\leq\phi^{*}(F|_{H_{Y^{\prime}}})\), it follows that \(\delta_{H_{Z_{k}}\to H_{0}}(F_{Z_{k}}|_{H_{Z_{k}}})\) is bounded by an integer which depends only on \(n\). Now since \(Z_{j}\dashrightarrow Z_{j+1}\) are all flips for \(j>1\), \(\delta_{H_{Z_{j}}\to H_{0}}(F_{Z_{j}}|_{H_{Z_{j}}})\) is bounded by Lemma 3.9, for all \(j>1\). **Step 5:**: If \(Z_{1}\dashrightarrow Z_{2}\) is a flop, then \(\delta_{H_{Z_{1}}\to H_{0}}(E_{Z_{1}}|_{H_{Z_{1}}})\) is bounded by a number which depends only on \(n\). By Step 1 we know that \(\delta_{H_{Z_{2}}\to H_{0}}(E_{Z_{2}}|_{H_{Z_{2}}})\) is bounded. Thus, by Step 2, we may assume that there exists a flopped curve \(C_{Z_{2}}\subset Z_{2}\) which is contained in \(H_{Z_{2}}\). Then since \(\delta_{H_{Z_{2}}\to H_{0}}(E_{Z_{2}}|_{H_{Z_{2}}})\) is bounded, it follows that \(E_{Z_{2}}\cdot C_{Z_{2}}\) is bounded. Let \(Z_{1}\to V\) and \(Z_{2}\to V\) be the flopping contractions. Then \(E_{Z_{2}}\equiv_{V}\lambda F_{Z_{2}}\) for some rational number \(\lambda\) which is bounded. It follows that \(E_{Z_{1}}\equiv_{V}\lambda F_{Z_{1}}\). Now let \(C_{Z_{1}}\) be a flopping curve of \(Z_{1}\dashrightarrow Z_{2}\) and let \(C_{Y}\) be the image of \(C_{Z_{1}}\) on \(Y\). Then \[K_{Y}\cdot C_{Y}+a(F_{Z_{1}},Y)F_{Z_{1}}\cdot C_{Z_{1}}=K_{Z_{1}}\cdot C_{Z_{1}}=0.\] By [1, Theorem 0], we know that \(0>K_{Y}\cdot C_{Y}>-1\). Moreover, \(a(F_{Z_{1}},Y)=1/r\), where \(r\) is the Cartier index of \(K_{Y}\) near a singular point of \(Y\). It follows that \(0<F_{Z_{1}}\cdot C_{Z_{1}}<r\leq 2n\) by Lemma 2.5. Since \(E_{Z_{1}}\equiv_{V}\lambda F_{Z_{1}}\) for some bounded \(\lambda\) and since the Cartier index of \(F_{Z_{1}}\) is bounded, it follows that \(E_{Z_{1}}\cdot C_{Z_{1}}\) is bounded by an integer which depends only on \(n\). Using the same argument as in the proof of Lemma 3.9 (3) (the flopping locus of \(Z_{1}\dashrightarrow Z_{2}\) is irreducible by [6, Lemma 3.11]), it follows that \(\delta_{H_{Z_{1}}\to H_{0}}(E_{Z_{1}}|_{H_{Z_{1}}})\) is bounded. Finally, Step 1 and Step 5 imply the claim. **Lemma 3.11**.: _Fix a positive integer \(n\). Let \(X\dashrightarrow X^{\prime}\) be a three-dimensional terminal flip over \(W\) such that \(dep(X)=n\)._ _Then for any flipped curve \(C^{\prime}\subset X^{\prime}\), we have that \(K_{X^{\prime}}\cdot C^{\prime}\) is bounded by an integer which depends only on \(n\)._ Proof.: If there is more than one flipping curve on \(X\), then we can run an analytic \(K_{X}\)-MMP over \(W\). Indeed, we can decompose the map \(X\dashrightarrow X^{\prime}\) into a sequence of analytic flips \[X=X_{1}\dashrightarrow X_{2}\dashrightarrow...\dashrightarrow X_{k-1}\dashrightarrow X_{k}=X^{\prime}\] such that the flipping locus of \(X_{i}\dashrightarrow X_{i+1}\) is irreducible for all \(i\) and any flipped curve of \(X\dashrightarrow X^{\prime}\) is the proper transform of a flipped curve of \(X_{i}\dashrightarrow X_{i+1}\) on \(X^{\prime}\) for some \(i\). By Corollary 2.10, we only need to prove that the statement holds for the flip \(X_{i}\dashrightarrow X_{i+1}\) for all \(i\). Thus, we may assume that the flipping locus of \(X\dashrightarrow X^{\prime}\) is irreducible. In this case, a general member \(H_{W}\in|-K_{W}|\) has Du Val singularities by [12, Theorem 2.2].
Moreover, the singularities of \(H_{W}\) depend only on the Cartier indices and the axial multiplicities of the singular points on \(X\). By Lemma 2.5, it follows that \(\bar{\rho}(H_{W})\) is bounded by an integer which depends only on \(n\). Consider the diagram as in Theorem 2.6. By Remark 2.7 we know that \(dep(Y_{i})<n\) for all \(i\). By induction on \(n\), we may assume that if \(Y_{i}\dashrightarrow Y_{i+1}\) is a flip, then for any flipped curve \(C_{i+1}\subset Y_{i+1}\) we have that \(K_{Y_{i+1}}\cdot C_{i+1}\) is bounded by an integer depending only on \(n\). Let \(E=exc(Y_{k}\to X^{\prime})\). We distinguish two cases: **Case 1:**\(C^{\prime}\) is not contained in the centre of \(E\) on \(X^{\prime}\). Let \(C_{Y_{k}}\) be the proper transform of \(C^{\prime}\) on \(Y_{k}\). Then \[K_{Y_{k}}\cdot C_{Y_{k}}=K_{X^{\prime}}\cdot C^{\prime}+a(E,X^{\prime})E\cdot C_{Y_{k}}\geq K_{X^{\prime}}\cdot C^{\prime}>0.\] This means that \(C_{Y_{k}}\) is the proper transform of a flipped curve of \(Y_{i}\dashrightarrow Y_{i+1}\) on \(Y_{k}\) for some \(i\). By the induction hypothesis and Corollary 2.10, we know that \(K_{Y_{k}}\cdot C_{Y_{k}}\) is bounded by an integer depending only on \(n\), hence so is \(K_{X^{\prime}}\cdot C^{\prime}\). **Case 2:**\(Y_{k}\to X^{\prime}\) is a divisorial contraction to \(C^{\prime}\). Notice that \(C^{\prime}\) is a smooth curve by [6, Corollary 3.3] and if \(H_{Y_{k}}\) is the proper transform of \(H_{W}\) on \(Y_{k}\) then \(H_{Y_{k}}\to H_{W}\) is a partial resolution by [6, Lemma 3.4]. By Lemma 3.10 we know that \(\delta_{H_{Y_{k}}\to H_{W}}(E|_{H_{Y_{k}}})\) is bounded by an integer which depends only on \(n\). Notice that \(Y_{k}\to X^{\prime}\) is generically a blow-up along \(C^{\prime}\). Since \(H_{Y_{k}}\to H_{W}\) is a partial resolution, if we denote by \(H^{\prime}\) the strict transform of \(H_{W}\) on \(X^{\prime}\) then \(\operatorname{mult}_{C^{\prime}}H^{\prime}=1\) and, in particular, there is exactly one component of \(E\cap H_{Y_{k}}\) which maps surjectively to \(C^{\prime}\). Let \(C_{Y_{k}}\) be this component; then \(|E\cdot C_{Y_{k}}|\) is bounded by an integer which depends only on \(n\), since \(\delta_{H_{Y_{k}}\to H_{W}}(E|_{H_{Y_{k}}})\) is bounded. Now \(K_{Y_{k}}\cdot C_{Y_{k}}=K_{X^{\prime}}\cdot C^{\prime}+E\cdot C_{Y_{k}}\). If \(K_{Y_{k}}\cdot C_{Y_{k}}\leq 0\), then \(K_{X^{\prime}}\cdot C^{\prime}\leq-E\cdot C_{Y_{k}}\) is bounded. If \(K_{Y_{k}}\cdot C_{Y_{k}}>0\), then \(C_{Y_{k}}\) is the proper transform of a flipped curve of \(Y_{i}\dashrightarrow Y_{i+1}\) for some \(i\). It follows that \(K_{Y_{k}}\cdot C_{Y_{k}}\) is bounded by the induction hypothesis and by Corollary 2.10. This implies that \(K_{X^{\prime}}\cdot C^{\prime}\) is bounded. **Lemma 3.12**.: _Let \(f\colon Y\to X\) be a divisorial contraction between projective terminal threefolds which contracts a divisor \(E\) to a smooth curve \(C\)._ _Then \(|K_{Y}^{3}-K_{X}^{3}|\) is bounded by an integer which depends only on \(g(C)\), \(K_{X}\cdot C\) and the basket data of \(Y\)._ Proof.: Since \(Y\to X\) is generically a blow-up along \(C\), we know that \(a(E,X)=1\). Hence \[K_{Y}^{3}-K_{X}^{3}=3E^{2}\cdot f^{*}K_{X}+E^{3}=-3K_{X}\cdot C+E^{3}.\] Thus, we only need to bound \(|E^{3}|\). Consider the following two exact sequences \[0\to\mathcal{O}_{Y}(-E)\to\mathcal{O}_{Y}\to\mathcal{O}_{E}\to 0\] and \[0\to\mathcal{O}_{Y}(K_{Y}-E)\to\mathcal{O}_{Y}(K_{Y})\to\mathcal{O}_{E}(K_{Y})\to 0.\] The singular Riemann-Roch formula (cf.
Section 2.1.2) yields \[\chi(\mathcal{O}_{E})=\chi(\mathcal{O}_{Y})-\chi(\mathcal{O}_{Y}(-E))=\frac{1}{12}(-5K_{X}\cdot C+6E^{3})+\frac{1}{12}E\cdot\mathrm{c}_{2}(Y)+\theta_{1}\] and \[\chi(\mathcal{O}_{E}(K_{Y}))=\chi(\mathcal{O}_{Y}(K_{Y}))-\chi(\mathcal{O}_{Y}(K_{Y}-E))=\frac{1}{12}K_{X}\cdot C+\frac{1}{12}E\cdot\mathrm{c}_{2}(Y)+\theta_{2},\] where \(\theta_{1}\) and \(\theta_{2}\) are constants which depend only on the basket data of \(Y\) and the integer \(i_{P}\) such that \(-E\sim i_{P}K_{Y}\) at any singular point \(P\in Y\). On the other hand, by the Kawamata-Viehweg vanishing theorem we have that \(R^{k}f_{*}(\mathcal{O}_{Y}(iK_{Y}-jE))=0\) if \(k>0\) and \(i-1-j\leq 0\). Thus, \(\chi(\mathcal{O}_{E})=\chi(f_{*}\mathcal{O}_{E})=\chi(\mathcal{O}_{C})\) and \(\chi(\mathcal{O}_{E}(K_{Y}))=\chi(f_{*}(\mathcal{O}_{E}(K_{Y})))\). Note that the push-forward of the second exact sequence is \[0\to\mathcal{O}_{X}(K_{X})\to\mathcal{O}_{X}(K_{X})\otimes f_{*}\mathcal{O}_{Y}(E)=\mathcal{O}_{X}(K_{X})\to f_{*}(\mathcal{O}_{E}(K_{Y}))\to 0,\] which implies \(\chi(\mathcal{O}_{E}(K_{Y}))=\chi(f_{*}(\mathcal{O}_{E}(K_{Y})))=0\), or \(\frac{1}{12}E\cdot c_{2}(Y)=-\frac{1}{12}K_{X}\cdot C-\theta_{2}\). Thus, \[\chi(\mathcal{O}_{C})=\chi(\mathcal{O}_{E})=\frac{1}{2}(-K_{X}\cdot C+E^{3})+\theta_{1}-\theta_{2},\] or \[E^{3}=K_{X}\cdot C+2\chi(\mathcal{O}_{C})+2(\theta_{2}-\theta_{1}).\] Since \(\theta_{1}\) and \(\theta_{2}\) take only finitely many possible values if the basket data of \(Y\) is given, \(|E^{3}|\) is bounded by an integer which depends only on \(g(C)\), \(K_{X}\cdot C\) and the basket data of \(Y\). **Lemma 3.13**.: _Fix a positive integer \(n\). Let \(Y\to X\) be a divisorial contraction to a point between terminal threefolds such that \(dep(Y)=n\)._ _Then \(|K_{Y}^{3}-K_{X}^{3}|\) is bounded by an integer which depends only on \(n\)._ Proof.: Let \(E\) be the exceptional divisor. Then \(K_{Y}^{3}-K_{X}^{3}=a^{3}E^{3}\) where \(a=a(E,X)\). By [11, Table 1, Table 2] we know that \(0<aE^{3}\leq 4\). Let \(r\) be the Cartier index of \(Y\); then \(E^{3}\geq\frac{1}{r^{3}}\), hence \(a\leq 4r^{3}\). Thus, \(0<a^{3}E^{3}<64r^{6}\). Since \(r\) is bounded by an integer which depends only on \(n\) by Lemma 2.5, we have that \(|a^{3}E^{3}|=|K_{Y}^{3}-K_{X}^{3}|\) is bounded by an integer which depends only on \(n\).
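The closed form for \(E^{3}\) obtained in the proof of Lemma 3.12 above makes the resulting change in \(K^{3}\) completely explicit. A small sketch (our illustration, with made-up numerical inputs):

```python
# Sketch (ours, with made-up inputs) of the explicit bound from the proof
# of Lemma 3.12: E^3 = K_X.C + 2*chi(O_C) + 2*(theta2 - theta1), and
# K_Y^3 - K_X^3 = -3*K_X.C + E^3 for a contraction to a smooth curve C.
from fractions import Fraction

def chern_number_change(KX_dot_C, genus, theta1, theta2):
    chi_C = 1 - genus                     # chi(O_C) for a curve of genus g
    E3 = (Fraction(KX_dot_C) + 2 * chi_C
          + 2 * (Fraction(theta2) - Fraction(theta1)))
    return -3 * Fraction(KX_dot_C) + E3   # = K_Y^3 - K_X^3

# Toy input: a rational curve with K_X.C = -1 and vanishing corrections
print(chern_number_change(-1, 0, 0, 0))   # 4
```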
**Lemma 3.14**.: _Fix a positive integer \(n\). Let \(X\dashrightarrow X^{\prime}\) be a three-dimensional terminal flip such that \(dep(X)=n\)._ _Then \(|K_{X}^{3}-K_{X^{\prime}}^{3}|\) is bounded by an integer which depends only on \(n\)._ Proof.: Consider the factorisation given by Theorem 2.6. We know that \(dep(Y_{i})<n\) for all \(i\). If \(Y_{i}\dashrightarrow Y_{i+1}\) is a flip, then by induction on \(n\), we may assume that \(|K_{Y_{i}}^{3}-K_{Y_{i+1}}^{3}|\) is bounded by an integer depending on \(n\). Also, if \(Y_{i}\dashrightarrow Y_{i+1}\) is a flop, then \(K_{Y_{i}}^{3}=K_{Y_{i+1}}^{3}\). Since \(k\leq n+1\), we know that \(|K_{Y_{1}}^{3}-K_{Y_{k}}^{3}|\) is bounded by an integer which depends only on \(n\). We know that \(Y_{1}\to X\) is a \(w\)-morphism, so \(|K_{Y_{1}}^{3}-K_{X}^{3}|\) is bounded by an integer which depends only on \(n\) by Lemma 3.13. Similarly, if \(Y_{k}\to X^{\prime}\) is a divisorial contraction to a point, then we can also bound \(|K_{Y_{k}}^{3}-K_{X^{\prime}}^{3}|\). Assume that \(Y_{k}\to X^{\prime}\) is a divisorial contraction to a curve \(C^{\prime}\); then \(C^{\prime}\) is a smooth rational curve. Since \(dep(Y_{k})<n\), the basket data of \(Y_{k}\) is bounded by Lemma 2.5. Hence \(|K_{Y_{k}}^{3}-K_{X^{\prime}}^{3}|\) is bounded by an integer which depends only on \(n\) by Lemma 3.11 and Lemma 3.12. Finally, since \(|K_{Y_{1}}^{3}-K_{X}^{3}|\), \(|K_{Y_{1}}^{3}-K_{Y_{k}}^{3}|\) and \(|K_{Y_{k}}^{3}-K_{X^{\prime}}^{3}|\) are all bounded, it follows that \(|K_{X}^{3}-K_{X^{\prime}}^{3}|\) is bounded by an integer which depends only on \(n\). **Proposition 3.15**.: _Fix a positive integer \(n\) and a cubic homogeneous polynomial \(F\) with \(\Delta_{F}\neq 0\)._ _Then there are finitely many cubic homogeneous polynomials \(F_{1}\),..., \(F_{k}\) such that if \(X\dashrightarrow X^{\prime}\) is a three-dimensional terminal flip with \(dep(X)=n\) and \(F_{X}\sim F\), then \(F_{X^{\prime}}\sim F_{i}\) for some \(i\)._ _Moreover, assume that \(L\in\operatorname{Hom}(H^{2}(X,\mathbb{Q}),\mathbb{Q})\) represents \(p_{1}(X)\). Then \(p_{1}(X^{\prime})\) belongs to a finite set which depends only on \(n\), \(F\), \(L\) and \(\chi(\mathcal{O}_{X})\)._ Proof.: Assume that \(X\dashrightarrow X^{\prime}\) is a flip over \(W\). By Proposition 3.4 we know that \(F_{X}\sim(a,0,F_{W})\) for some \(a\in\mathbb{Z}\). By [3, Theorem 3.1], the triple \((a,0,F_{W})\) belongs to a finite set. Proposition 3.4 also asserts that \(F_{X^{\prime}}\sim(a^{\prime},0,F_{W})\) for some \(a^{\prime}\in\mathbb{Z}\). In order to prove the first claim, we only need to show that the integer \(a^{\prime}\) is bounded by a constant which depends only on \(n\). Let \(r\) be the Cartier index of \(K_{X}\) and let \(\eta_{1}\),..., \(\eta_{n}\) be the basis of \(H^{2}(X,\mathbb{Z})\) in Proposition 3.4. Then we can write \([rK_{X}]=\lambda\eta_{1}+\tau\) for some \(\tau\in\langle\eta_{2},...,\eta_{n}\rangle\). We have \[K_{X}^{3}=\frac{\lambda^{3}}{r^{3}}a+\frac{1}{r^{3}}\tau^{3}.\] Let \(C\) be a flipping curve. Then we know that \(\tau\cdot C=0\) since \(\langle\eta_{2},...,\eta_{n}\rangle_{\mathbb{Q}}=\phi^{*}H^{2}(W,\mathbb{Q})\). Since \(|K_{X}\cdot C|<1\) by [1, Theorem 0], we know that \(|\lambda|<r\). Similarly, let \(r^{\prime}\) be the Cartier index of \(K_{X^{\prime}}\) and \(\eta_{1}^{\prime}\),..., \(\eta_{n}^{\prime}\) be the basis of \(H^{2}(X^{\prime},\mathbb{Z})\) in Proposition 3.4. If we write \([r^{\prime}K_{X^{\prime}}]=\lambda^{\prime}\eta_{1}^{\prime}+\tau^{\prime}\) for some \(\tau^{\prime}\in\langle\eta_{2}^{\prime},...,\eta_{n}^{\prime}\rangle\), then \[K_{X^{\prime}}^{3}=\frac{{\lambda^{\prime}}^{3}}{{r^{\prime}}^{3}}a^{\prime}+\frac{1}{{r^{\prime}}^{3}}{{\tau^{\prime}}^{3}}.\] Notice that \(\tau=\mu\phi^{*}\sigma\) and \(\tau^{\prime}={\mu^{\prime}}{\phi^{\prime}}^{*}\sigma^{\prime}\) for some rational numbers \(\mu\), \(\mu^{\prime}\) and for some \(\sigma\), \(\sigma^{\prime}\in H^{2}(W,\mathbb{Z})\), where \(\phi\colon X\to W\) and \(\phi^{\prime}\colon X^{\prime}\to W\) are the induced morphisms. We first show: **Claim**.: Let \(\theta_{1}\), \(\theta_{2}\in H^{2}(W,\mathbb{Z})\). Then \(\frac{\mu}{r}\sigma\cdot\theta_{1}\cdot\theta_{2}=\frac{\mu^{\prime}}{r^{\prime}}\sigma^{\prime}\cdot\theta_{1}\cdot\theta_{2}\). Indeed, we know that \(K_{X}\cdot\phi^{*}\theta_{1}\cdot\phi^{*}\theta_{2}=K_{X^{\prime}}\cdot\phi^{\prime*}\theta_{1}\cdot\phi^{\prime*}\theta_{2}\) and \(\eta_{1}\cdot\phi^{*}\theta_{1}\cdot\phi^{*}\theta_{2}=\eta_{1}^{\prime}\cdot\phi^{\prime*}\theta_{1}\cdot\phi^{\prime*}\theta_{2}=0\).
Therefore \(\frac{1}{r}\tau\cdot\phi^{*}\theta_{1}\cdot\phi^{*}\theta_{2}=\frac{1}{r^{\prime}}\tau^{\prime}\cdot\phi^{\prime*}\theta_{1}\cdot\phi^{\prime*}\theta_{2}\) and so \(\frac{\mu}{r}\sigma\cdot\theta_{1}\cdot\theta_{2}=\frac{\mu^{\prime}}{r^{\prime}}\sigma^{\prime}\cdot\theta_{1}\cdot\theta_{2}\), as claimed. We now show that \(\frac{\mu}{r}\sigma=\frac{\mu^{\prime}}{r^{\prime}}\sigma^{\prime}\). Indeed, let \(\sigma_{0}=\frac{\mu}{r}\sigma-\frac{\mu^{\prime}}{r^{\prime}}\sigma^{\prime}\); then \(\sigma_{0}\cdot\sigma_{0}\cdot\eta=0\) for any \(\eta\in H^{2}(W,\mathbb{C})\). If \(\sigma_{0}\neq 0\), then Lemma 3.6 implies that \(\Delta_{F_{W}}=0\), and hence \(\Delta_{F}=0\), a contradiction. Thus, we have that \[\frac{1}{r^{3}}\tau^{3}=\frac{\mu^{3}}{r^{3}}\sigma^{3}=\frac{{\mu^{\prime}}^{3}}{{r^{\prime}}^{3}}{{\sigma^{\prime}}^{3}}=\frac{1}{{r^{\prime}}^{3}}{{\tau^{\prime}}^{3}},\] which implies that \[K_{X}^{3}-K_{X^{\prime}}^{3}=\frac{\lambda^{3}}{r^{3}}a-\frac{{\lambda^{\prime}}^{3}}{{r^{\prime}}^{3}}a^{\prime},\] and, since \(|\lambda|<r\), it follows that \[|a^{\prime}|\leq{r^{\prime}}^{3}(|K_{X}^{3}-K_{X^{\prime}}^{3}|+|a|).\] By Lemma 2.5 and Lemma 3.14 we have that \(|K_{X}^{3}-K_{X^{\prime}}^{3}|\) and \(r^{\prime}\) are bounded by a constant which depends only on \(n\), which implies that \(|a^{\prime}|\) is bounded, as claimed. We now prove the second claim. Assume that \(p_{1}(X)\) is represented by \(L\in\operatorname{Hom}(H^{2}(X,\mathbb{Q}),\mathbb{Q})\). We want to show that \(p_{1}(X^{\prime})\) has only finitely many possibilities. We will view \(\eta_{1}\),..., \(\eta_{n}\) as a generating set of \(H^{2}(X,\mathbb{Q})\) and \(\eta_{1}^{\prime}\),..., \(\eta_{n}^{\prime}\) as a generating set of \(H^{2}(X^{\prime},\mathbb{Q})\). We can assume that \(\eta_{i}\) and \(\eta_{i}^{\prime}\) are pull-backs of the same element in \(H^{2}(W,\mathbb{Q})\) for all \(i>1\). It follows that \(\eta_{i}\cdot p_{1}(X)=\eta_{i}^{\prime}\cdot p_{1}(X^{\prime})\) for \(i>1\). Hence we only need to show that \(\eta_{1}^{\prime}\cdot p_{1}(X^{\prime})\) takes only finitely many possible values. Since \(\frac{\mu}{r}\sigma=\frac{\mu^{\prime}}{r^{\prime}}\sigma^{\prime}\), we know that \(\frac{1}{r}\tau\) and \(\frac{1}{r^{\prime}}\tau^{\prime}\) are pull-backs of the same element in \(H^{2}(W,\mathbb{Q})\), hence \(\frac{1}{r}\tau\cdot p_{1}(X)=\frac{1}{r^{\prime}}\tau^{\prime}\cdot p_{1}(X^{\prime})\). Now we know that \[\frac{\lambda}{r}\eta_{1}\cdot p_{1}(X)-\frac{\lambda^{\prime}}{r^{\prime}}\eta_{1}^{\prime}\cdot p_{1}(X^{\prime})=K_{X}\cdot p_{1}(X)-K_{X^{\prime}}\cdot p_{1}(X^{\prime})=K_{X}^{3}-K_{X^{\prime}}^{3}-2K_{X}.c_{2}(X)+2K_{X^{\prime}}.c_{2}(X^{\prime}).\] As above, by Lemma 2.5 and Lemma 3.14, we have that \(\lambda\), \(r\), \(r^{\prime}\) and \(K_{X}^{3}-K_{X^{\prime}}^{3}\) are all bounded by an integer which depends only on \(n\), and \(\eta_{1}\cdot p_{1}(X)=L(\eta_{1})\) is fixed. By the singular Riemann-Roch formula (cf. Section 2.1.2) we have that \(K_{X}.c_{2}(X)\) depends only on \(\chi(\mathcal{O}_{X})\) and the basket data of \(X\). The basket data has only finitely many possibilities by Lemma 2.5, hence \(K_{X}.c_{2}(X)\) is bounded. Similarly, we have that \(K_{X^{\prime}}\cdot c_{2}(X^{\prime})\) is bounded. Thus, \[\eta_{1}^{\prime}\cdot p_{1}(X^{\prime})=\frac{r^{\prime}}{\lambda^{\prime}}\left(\frac{\lambda}{r}L(\eta_{1})-K_{X}^{3}+K_{X^{\prime}}^{3}+2K_{X}.c_{2}(X)-2K_{X^{\prime}}.c_{2}(X^{\prime})\right)\] is bounded.
Since \({r^{\prime}}^{2}\eta_{1}^{\prime}\cdot p_{1}(X^{\prime})\) is an integer, it follows that \(\eta_{1}^{\prime}\cdot p_{1}(X^{\prime})\) takes only finitely many possible values, as claimed. Proof of Theorem 1.3.: Let \[X=X_{0}\dashrightarrow X_{1}\dashrightarrow...\dashrightarrow X_{k}=Y\] be a sequence of steps of a \(K_{X}\)-MMP and let \(F_{X_{i}}\) be the cubic form associated to \(X_{i}\). We know that \(F_{X_{0}}\sim F\) and \(\Delta_{F_{X_{0}}}\neq 0\). We want to show that \(F_{X_{i}}\) belongs to a finite set and \(\Delta_{F_{X_{i}}}\neq 0\) for all \(i\). First notice that \(k\leq 2\rho(X)\leq 2b_{2}(X)\) by [4, Lemma 3.1] and \(dep(X_{i})<\rho(X)\leq b_{2}(X)\) by Lemma 2.8. Fix \(1\leq i\leq k\). Assume that \(F_{X_{i-1}}\) belongs to a finite set and \(\Delta_{F_{X_{i-1}}}\neq 0\). If \(X_{i-1}\to X_{i}\) is a divisorial contraction, then \(F_{X_{i}}\) belongs to a finite set and \(\Delta_{F_{X_{i}}}\neq 0\) by [3, Theorem 3.1, Proposition 4.7, Proposition 4.8]. Assume that \(X_{i-1}\dashrightarrow X_{i}\) is a flip; then \(F_{X_{i}}\) belongs to a finite set and \(\Delta_{F_{X_{i}}}\neq 0\) by Corollary 3.5 and Proposition 3.15. Thus, by induction on the number of steps of the MMP we have that \(F_{Y}=F_{X_{k}}\) belongs to a finite set and \(\Delta_{F_{Y}}\neq 0\). Finally, the finiteness of the possible choices of \(p_{1}(Y)\) follows from [23, Proposition 8] and Proposition 3.15 (notice that \(\chi(\mathcal{O}_{X_{i}})=\chi(\mathcal{O}_{X})\leq 1+b_{1}(X)+b_{2}(X)+b_{3}(X)\) since terminal singularities are rational). Proof of Corollary 1.4.: Let \(X\dashrightarrow Z\) be the outcome of a \(K_{X}\)-MMP. By Theorem 1.3 we know that \(\Delta_{F_{Z}}\neq 0\). Notice that if \(\kappa(X)\geq 0\), then the Iitaka fibration of \(X\) induces a morphism \(Z\to V\), since the abundance conjecture holds for threefolds. Now Corollary 3.7 implies the statement. Proof of Theorem 1.2.: Let \[X=X_{0}\dashrightarrow X_{1}\dashrightarrow...\dashrightarrow X_{k}\] be a \(K_{X}\)-MMP. First notice that \(b_{2}(X_{i})\leq b_{2}(X)\) and by [5, Theorem 1.1] we have that \(b_{3}(X_{i})\) is bounded by an integer which depends only on \(b_{2}(X)\) and \(b_{3}(X)\). Also, Theorem 1.3 implies that the integer \(S_{X_{i}}\) (cf. §3.1) is bounded by a constant which depends only on \(b_{2}(X)\) and \(F_{X}\). Thus, by [3, Theorem 1.3], it follows that if \(X_{i}\to X_{i+1}\) is a divisorial contraction, then \(|K_{X_{i}}^{3}-K_{X_{i+1}}^{3}|\) is bounded by a constant which depends only on \(b_{2}(X)\), \(b_{3}(X)\) and \(F_{X}\). If \(X_{i}\dashrightarrow X_{i+1}\) is a flip, then \(|K_{X_{i}}^{3}-K_{X_{i+1}}^{3}|\) is bounded by a constant which depends only on \(b_{2}(X)\) by Lemma 2.8 and Lemma 3.14. If \(X\) is not uniruled, then \(K_{X_{k}}^{3}=vol(X,K_{X})\) is bounded by a constant which depends only on \(b_{2}(X)\) and \(b_{3}(X)\) by [3, Theorem 1.2]. If \(X\) is uniruled and \(X_{k}\) is Fano, then \(K_{X_{k}}^{3}\) is bounded by [2, Theorem 1.1]. Otherwise \(X_{k}\) has a conic bundle structure by Corollary 1.4. Since \(dep(X_{k})<\rho(X)\leq b_{2}(X)\) by Lemma 2.8, we know that \(\Xi(X_{k})\) is bounded by Lemma 2.5. Moreover, \(p_{1}(X_{k})\) has only finitely many possibilities by Theorem 1.3. Thus \(K_{X_{k}}^{3}\) is bounded by [23, Proposition 9]. In any case we know that \[|K_{X}^{3}|\leq\sum_{i=0}^{k-1}|K_{X_{i}}^{3}-K_{X_{i+1}}^{3}|+|K_{X_{k}}^{3}|\] is bounded by a constant which depends only on \(p_{1}(X)\), \(b_{1}(X)\), \(b_{2}(X)\), \(b_{3}(X)\) and \(F_{X}\).
Proof of Theorem 1.1.: As explained in the introduction, the Theorem follows immediately from Theorem 1.2.
2309.00919
Electron Spin Polarization as a Predictor of Chiroptical Activity in Helical Molecules
Chiral structures, breaking spatial inversion symmetry, exhibit non-zero chiroptical activity (COA) due to the interaction between their electric and magnetic responses under external electromagnetic fields, an effect that is otherwise absent in achiral systems. Non-magnetic chiral structures also exhibit Chiral Induced Spin Selectivity (CISS), where spin-polarization (SP) emerges without external magnetic influence. We have obtained a COA-SP connection for a model system of an electron constrained to a helix including spin-orbit coupling (SOC), and in the presence of an external electromagnetic field. Despite its simplicity, this model captures the relevant physics required to address the problem. In particular, our results reveal that the norm of the SP vector can be used as a predictor of COA. In addition to SOC and the breaking of space inversion, a non-vanishing SP requires the breaking of time-reversal symmetry (TRS), as demanded by Onsager's reciprocity. Beyond the relationship between SP and COA, we obtain the novel result that TRS breaking is also necessary to yield a non-vanishing contribution of the SOC to the COA.
Solmar Varela, Rafael Gutierrez, Gianaurelio Cuniberti, Ernesto Medina, Vladimiro Mujica
2023-09-02T11:55:29Z
http://arxiv.org/abs/2309.00919v1
# Electron Spin Polarization as a Predictor of Chiroptical Activity in Helical Molecules ###### Abstract Chiral structures, breaking spatial inversion symmetry, exhibit non-zero chiroptical activity (COA) due to the interaction between their electric and magnetic responses under external electromagnetic fields, an effect that is otherwise absent in achiral systems. Non-magnetic chiral structures also exhibit Chiral-Induced Spin Selectivity (CISS), where spin-polarization (SP) emerges without external magnetic influence. We have obtained a COA-SP connection for a model system of an electron constrained to a helix including spin-orbit coupling (SOC), and in the presence of an external electromagnetic field. Despite its simplicity, this model captures the relevant physics required to address the problem. In particular, our results reveal that the norm of the SP vector can be used as a predictor of COA. In addition to SOC and the breaking of space inversion, a non-vanishing SP requires the breaking of time-reversal symmetry (TRS), as demanded by Onsager's reciprocity. Beyond the relationship between SP and COA, we obtain the novel result that TRS breaking is also necessary to yield a non-vanishing contribution of the SOC to the COA. **Keywords:** CISS, chiroptical activity, spin-orbit coupling, perturbation theory Spin polarization (SP) in molecules and solids is usually understood as a response to an external magnetic field. The discovery that for chiral materials, molecules, solids, and interfaces, this magnetic response can be obtained even in the absence of external magnetic fields in processes involving electron transfer, electron transport, and bond polarization through chiral centers, has had profound consequences in fundamental physics, chemistry, and biology, as well as in important applications in spintronics, NMR, and Quantum Information Sciences (QIS)[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]. The theoretical description of this phenomenon, known as the Chiral-Induced Spin Selectivity (CISS) effect, requires first, the inclusion of spin-orbit coupling (SOC), and second, in addition to space-inversion symmetry breaking associated with chirality, the breaking of time-reversal symmetry (TRS) [12, 13, 14, 15]. These basic symmetry requirements do not exclude the fact, recently established in a number of theoretical studies, that a true comprehension of the physics of the CISS effect demands the inclusion of electron-phonon, spin-phonon, and electron-electron interactions in addition to non-adiabatic effects[16, 17, 18, 19, 20]. The CISS effect has been extensively investigated in experiments involving photoemission[21, 22, 23, 24], electron transport[25, 26, 27, 28], electron transfer[29, 30], and more recently, in electrochemistry and photoluminescence[31, 32, 33, 34, 35, 36, 37]. The interpretation of the CISS effect as a magnetic response linked to electron SP in chiral systems strongly suggests that there must exist a connection with the optical activity response observed in these systems when exposed to circularly polarized light [38, 39, 40, 41, 42, 31]. This connection arises from the interplay between the electric and magnetic induced dipole moments in chiral systems. For optically active molecules, the optical response is normally measured either by the rotational power, which expresses the angle of rotation of the polarization plane of the light, or through the Chiroptical Activity (COA), related to the difference in the intensity of absorption of polarized photons.
On the other hand, SP is defined as a three-dimensional vector containing the expectation values of the three Pauli spin matrices, which in the CISS mechanism is associated with electron transport, electron transfer, or bond polarization in chiral systems. Both SP and COA can be interpreted as responses of a chiral system to perturbations, but the underlying physical mechanisms for these two phenomena are fundamentally different, because COA requires photon absorption whereas CISS-SP does not necessarily involve photoexcitation. The latter might explain why the explicit connection between them has so far remained elusive [43, 44, 45]. We have used a model Hamiltonian of an electron confined to a helical box in the presence of SOC induced by the confining field of the inversion-asymmetric helix [46]. In the absence of SOC, the model was initially solved exactly by Tinoco et al. [47], and we extend it further by including an external electromagnetic field. Using a perturbative approach to solve the Schrödinger equation, we approximately obtain the two components of the spin-\(1/2\) spinors for this problem. We then proceed to calculate both the COA and the three components of the CISS-induced magnetic response, where the \(z\)-component corresponds to the average value of the \(\sigma_{z}\) Pauli matrix, assuming electron motion along the \(z\)-axis. Despite the apparent simplicity of our model, it is the first explicit analytical solution to a long-standing problem in the field of CISS-induced molecular SP and how it can be connected to COA. Our model is also related to the description of the phenomenon of field-mediated chirality transfer [41]. The conclusion is that the interaction with circularly polarized radiation carries information about both SP and COA, an important result that we are only beginning to understand and that might have important consequences in the description of SP using chiral photons in molecules with emerging topological features in the electronic structure, such as spin textures [48]. Our model also affords an important digression into the inclusion of controlled schemes to take into account TRS breaking. This is a fundamental aspect of any formalism used to describe SP, because SP cannot occur in a system where TRS is preserved, as implied by the Onsager relations and the onset of Kramers' degeneracy [49, 50]. We have used a simple realization of a Büttiker probe to show that TRS, and hence spin polarization, is extremely sensitive to decoherence, a fact that permits folding within a single concept the qualitative effects of electron correlation, electron-phonon interaction, spin textures, and non-adiabatic effects, which have been invoked to explain the anomalously high value of the spin-orbit coupling that is apparently required to reproduce the experimental values of spin polarization [16, 17, 51]. In fact, depending on how TRS is broken, either through a decoherence probe or by changing the boundary conditions of the helix, and hence the relative spin populations, we can use this result to reproduce important asymmetries in the enantiospecific photon absorption that have been observed in complex chiral solids and interfaces [39, 40]. The inclusion of TRS breaking is also essential in understanding the dependence of both SP and COA on the geometrical factors of the helix, e.g. its length, which has been analyzed experimentally [44, 45, 52, 53, 54]. 
## 1 Results and discussion For optically active molecules, the chiroptical properties are usually quantified in terms of the anisotropic dissymmetry factor \(g_{CD}\), which characterizes the COA. It can be written in terms of the extinction coefficients \(\varepsilon_{-}(\varepsilon_{+})\) for left- (right-) polarized incident light, corresponding to the optical transition between the \(n\)-th and \(m\)-th electronic states, as: \[g_{CD}=2\frac{(\varepsilon_{-}-\varepsilon_{+})}{(\varepsilon_{-}+\varepsilon _{+})}=\frac{4R_{nm}}{|\boldsymbol{\mu}_{nm}|^{2}+|\mathbf{m}_{nm}|^{2}}. \tag{1}\] Here, \(R_{nm}\) is the rotatory strength, and \(\boldsymbol{\mu}_{nm}\) and \(\mathbf{m}_{nm}\) are the transition electric dipole moment and the transition magnetic dipole moment, respectively. The rotatory strength \(R_{nm}\), also known as Rosenfeld's tensor, is defined as the imaginary part of the scalar product of the two transition moments: \[R_{nm} = \text{Im}(\boldsymbol{\mu}_{nm}\cdot\boldsymbol{m}_{mn}). \tag{2}\] This definition is the usual one in the dipole approximation, which excludes quadrupole and higher-order contributions [55]. For an isotropic system, \(R_{nm}\) can be written as \(R_{nm}=\left(1/3\right)\left(R_{11}+R_{22}+R_{33}\right)_{nm}\), with \(\left(R_{ii}\right)_{nm}\) being the tensor components relative to the direction of propagation of light; e.g., \(\left(R_{33}\right)_{nm}\) refers to light incident along the \(z\)-direction [47]. Because organic molecules are primarily composed of low-atomic-number elements, atomic spin-orbit coupling (ASOC) is considered to be very weak; hence the usual definition of the rotatory strength in eq. (2) omits the contribution of the electronic spin and any influence of ASOC on the circular dichroism (CD) spectrum. Nonetheless, the discovery of the CISS effect presents an opportunity to investigate the role of an effective SOC that can be substantially different from the bare ASOC involved in spectroscopy.
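Eqs. (1) and (2) translate directly into a few lines of code. The following is a minimal Python sketch of how the dissymmetry factor can be evaluated once the transition moments are known; the numerical values of the moments below are illustrative placeholders, not outputs of the model.

```python
import numpy as np

# Toy transition moments (arbitrary units); in the model they would be built
# from matrix elements between the perturbed spinor states.
mu_nm = np.array([0.8 + 0.1j, 0.0 + 0.3j, 0.2 + 0.0j])      # electric dipole
m_mn = np.array([0.01 - 0.02j, 0.03 + 0.0j, 0.0 + 0.01j])   # magnetic dipole

# Rosenfeld rotatory strength, Eq. (2): R = Im(mu_nm . m_mn)
R = np.imag(np.dot(mu_nm, m_mn))

# Dissymmetry factor, Eq. (1)
g_CD = 4.0 * R / (np.linalg.norm(mu_nm)**2 + np.linalg.norm(m_mn)**2)
print(f"R = {R:.4e}, g_CD = {g_CD:.4e}")
```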
The inclusion of spin effects requires a redefinition of \(R_{nm}\) to consider the total magnetic moment \(\mathbf{J}\), adding the spin contribution \(\mathbf{m}_{s}\) to the orbital one, i.e., \(\mathbf{J}=\mathbf{m}+\mathbf{m}_{s}\). We then consider the Hamiltonian for an electron with charge \(e\) and rest mass \(m_{e}\) as \(\mathcal{H}=H^{0}+H^{\prime}\), where \(H^{0}\) and \(H^{\prime}\) refer to the non-perturbed system and the perturbation, respectively. The non-perturbed Hamiltonian includes the kinetic energy and the SOC via a Rashba-like term, \(H_{SO}=\mathbf{\sigma}\cdot(\mathbf{p}\times\mathbf{\alpha})\), where \(\sigma_{i}\) are the Pauli matrices, \(\mathbf{p}\) is the linear momentum operator, and \(\mathbf{\alpha}\) is a parameter of the model that includes the electric field and a coupling constant that controls the magnitude of the SOC. The SOC in a tight-binding model of a real molecule is connected to the ASOC (\(\sim\)6 meV for carbon atoms) and can be enhanced by the orbital overlap between neighboring \(\pi\)-orbitals, resulting in an effective intrinsic SOC of the order of meV (like in carbon nanotubes [56]), which depends explicitly on the chirality and geometry of the molecule [57, 58, 59, 60]. An effective SOC in a helical model can also be related to the confining molecular electrostatic field \(\mathbf{E}_{helix}\), which generates a SOC whose magnitude is given by \(\mathbf{\alpha}=(e\hbar/4m_{e}^{2}c^{2})\mathbf{E}_{helix}\), where the electric field has helical symmetry [61, 46]. In both scenarios, the electron is subject to an effective momentum-dependent spin-orbit magnetic field \(\mathbf{B_{SO}}\), which interacts with its spin. We assume here that \(\mathbf{B_{SO}}\) is oriented perpendicular to the longitudinal axis of the helix (see Fig. 1**a**).

Fig. 1: **Helical system, energy spectrum, and the Bloch space.** **a**, Structure of the helix in the molecular coordinate system \((\hat{x},\hat{y},\hat{z})\) in the presence of an external time-dependent electromagnetic field with frequency \(\omega\). The structural parameters are given by the radius \(a\), the pitch \(2\pi b\), and the \(\varphi\) angle such that \(0\leq\varphi\leq 2\pi K\), with \(K\) the number of turns. An effective SO magnetic field \(\mathbf{B}_{SO}\) is considered perpendicular to the helix axis. **b**, Energies of electrons in units of \(\hbar\omega_{0}\) on a counter-clockwise helix (\(\zeta=+1\)) as a function of the SOC strength, with \(K=2\). Each energy represents the two Kramers' pairs, i.e. \(E^{-,+}_{\tilde{n},+}=E^{+,+}_{\tilde{n},-}\) and \(E^{+,+}_{\tilde{n}+1,+}=E^{-,+}_{\tilde{n}+1,-}\). For convenience, we have defined the frequencies \(\omega_{SO}=2\alpha a/\hbar(a^{2}+(b/2\pi)^{2})\) and \(\omega_{0}=\hbar/m(a^{2}+(b/2\pi)^{2})\) [46]. **c**-**f**, Schematic representation of the vector \(\Psi\) and the corresponding states \(\psi^{\nu}_{s}\) for the enantiomer \(\zeta=+1\) on the Bloch sphere, showing the role of the SOC, and of the dephasing (changes in the \(\delta\) phase) and voltage (changes in the \(z\) coefficient) probes.

The perturbation \(H^{\prime}\) includes an external electromagnetic radiation of frequency \(\omega\). An important ingredient of our model is that the stationary wavefunctions \(\Psi_{\tilde{n}}^{\zeta}\), necessary to describe the system to first order in the perturbation [55], can be constructed as linear combinations of eigenstates of \(H^{0}\), \(\psi_{\tilde{n},s}^{\nu,\zeta}\). These eigenfunctions \(\psi_{\tilde{n},s}^{\nu,\zeta}\) have been derived in a previous work [46], and the different quantum numbers correspond to the direction of electron propagation \(\nu=+1(-1)\), the label for spin components \(s=\pm 1\), and the helicity of the state \(s\nu\), i.e., the projection of the spin angular momentum on the direction of propagation. The label \(\zeta=\pm 1\) corresponds to the two enantiomers, and \(n\) labels the energy channels \(\tilde{n}=(n-K)/2K\), \(n=1,2,3,\dots\) Explicitly, the wavefunctions for the two enantiomers are given by: \[\Psi_{\tilde{n}}^{+}=ze^{i\delta}(\psi_{\tilde{n}+1,+}^{+,+}+\psi_{\tilde{n},- }^{+,+})-qe^{i\eta}(\psi_{\tilde{n}+1,-}^{-,+}+\psi_{\tilde{n},+}^{-,+}), \tag{3}\] \[\Psi_{\tilde{n}}^{-}=ze^{i\delta}(\psi_{\tilde{n}+1,+}^{-,-}+\psi_{\tilde{n}, -}^{-,-})-qe^{i\eta}(\psi_{\tilde{n}+1,-}^{+,-}+\psi_{\tilde{n},+}^{+,-}), \tag{4}\] where the coefficients \(ze^{i\delta}\) and \(qe^{i\eta}\) have been added to explore the influence of decoherence on the model by changing the phases and amplitudes of the wavefunctions. These wavefunctions satisfy the same boundary conditions used in Tinoco's landmark article [47], i.e., for a helix of length \(2\pi K\) we have \(\Psi_{\tilde{n}}^{\zeta}|_{\varphi=0}=\Psi_{\tilde{n}}^{\zeta}|_{\varphi=2 \pi K}=0\). The eigenfunctions also obey Kramers' degeneracy, which implies that states with equal helicity are degenerate. 
The inclusion of SOC opens a gap \(\Delta\) separating states with \(s\nu=+1\) from those with \(s\nu=-1\), but Kramers' degeneracy is preserved unless TRS is broken (see Fig. 1**b** for the eigenvalue spectrum). ### Bloch sphere and Expectation Values In general, the superposition of states can be represented as Bloch vectors residing on the Bloch sphere, where the state \(\Psi_{\tilde{n}}^{\zeta}\) can be written as \(\left|\Psi\right\rangle=\cos\left(\theta^{\prime}/2\right)\left|0\right\rangle+ e^{i\varphi^{\prime}}\sin\left(\theta^{\prime}/2\right)\left|1\right\rangle\). A similar representation can be made for the states \(\psi_{s}^{\nu}\), where, for simplicity, we omit the labels \(\tilde{n}\) and \(\zeta\). The influence of SOC and TRS on the states on the Bloch sphere for \(\tilde{n}=1\) and \(\zeta=+1\) is illustrated in Figure 1**c**-**f**. In the simplest case, when SOC is zero and TRS is preserved (Fig. 1**c**), the angle of inclination \(\theta\) of the spinors \(\psi_{s}^{\nu}\) with respect to the vertical \(z\)-axis is equal to zero. Each of these states represents a pure state in the direction of \(\left|0\right\rangle\), resulting in \(\Psi\) being in the direction of \((\left|0\right\rangle+\left|1\right\rangle)/\sqrt{2}\). The degree of polarization can be defined as \(P=\sqrt{P_{x}^{2}+P_{y}^{2}+P_{z}^{2}}\), where \(P_{i}\) represents the expectation value of the Pauli matrix \(\sigma_{i}\) in the state \(\left|\Psi\right\rangle\) [62]. Hence, in this case, \(P=1\), since \(P_{x}=1\) and \(P_{y}=P_{z}=0\). The effect of SOC is depicted in Figure 1**d**. The SO-induced magnetic field \(\mathbf{B}_{SO}\) causes all the states \(\psi_{\tilde{n},s}^{\nu,\zeta}\) to rotate by an angle \(\theta\), and the spinor \(\Psi_{\tilde{n}}^{\zeta}\) to tilt by an angle \(\theta^{\prime}\) with respect to the molecular axis, allowing us to interpret the impact of SOC as a relaxation of the state vector along the direction of the SO field. This rotation results in a decrease in \(P_{x}\) and an increase in \(P_{z}\), causing \(P\) to be less than 1. The inclination angle \(\theta^{\prime}\) of the spinor \(\Psi\) can be modulated by decoherence in the presence of SOC, as shown in Figures 1**e**-**f**. While a change in the \(\delta\) phase (or \(\eta\)) does not affect the orientation of the states \(\psi\), a change in the \(z\) (or \(q\)) coefficient breaks the degeneracy of states with the same helicity (Kramers' pairs), resulting in an additional rotation of the states that adds to the \(\theta\) angle and is therefore also reflected in \(\theta^{\prime}\). In other words, breaking TRS by a change of the amplitudes in the wavefunction acts as an additional contribution to the magnitude of the SO interaction, causing the angle \(\theta\) of certain states to increase (or decrease) due to this effect. ### Time Reversal Symmetry, Voltage Probes and Decoherence Effects TRS plays an important role in this work because Onsager's reciprocity relation precludes the possibility of having non-zero SP in the linear regime unless TRS is explicitly broken. The fact that in the original experiments on electron SP in gas-phase molecules the measured polarization was very low [62] can be understood as a natural consequence of the fact that in the gas phase TRS is difficult to break unless multi-photon effects are included [63]. 
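As a concrete illustration of the polarization measure used above, the short sketch below computes the vector \(\mathbf{P}=(\langle\sigma_{x}\rangle,\langle\sigma_{y}\rangle,\langle\sigma_{z}\rangle)\) and its norm for an arbitrary two-component spinor; the example state is the \((\left|0\right\rangle+\left|1\right\rangle)/\sqrt{2}\) superposition discussed in the text.

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def polarization(psi):
    """Spin-polarization vector P_i = <psi|sigma_i|psi> for a 2-component spinor."""
    psi = psi / np.linalg.norm(psi)
    return np.real([psi.conj() @ s @ psi for s in (sx, sy, sz)])

# Example: the state (|0> + |1>)/sqrt(2) discussed in the text
psi = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2)
P = polarization(psi)
print("P =", P, " |P| =", np.linalg.norm(P))   # -> P = [1, 0, 0], |P| = 1
```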
The coefficients \(ze^{i\delta}\) and \(qe^{i\eta}\) in Eqs. (3,4) have been introduced to simulate spin-insensitive Büttiker probes, allowing for the exploration of decoherence effects on the model, which introduces a disruption of symmetry in both \(\nu\) directions and breaks TRS [12]. These Büttiker probes can serve as dephasing probes, which preserve unitarity, where only complex phases are changed [64], or as voltage probes, where coefficients are modified, leading to non-unitary evolution [65, 66, 67]. It can be seen that when \(ze^{i\delta}=qe^{i\eta}=1\), TRS is preserved. In what follows, we will consider the combined influence of SOC and of the breaking of TRS in our model on both the chiroptical response, characterized by the anisotropic dissymmetry factor \(g_{CD}\), and the spin polarization vector \(\mathbf{P}=(P_{x},P_{y},P_{z})\). We will show, as one of our central results, that under certain conditions the norm of the polarization vector \(P\) is a predictor of chiroptical activity. ### The Effect of SOC and TRS Breaking on the Rotatory Strength and COA A central element in establishing the connection between COA and SP is to generalize the definition of the Rosenfeld tensor to include the spin contributions. The rotatory strength \(R_{nm}\) (and its components) can then be expressed in the general form: \[R_{nm}=\frac{3e^{2}}{4m_{e}c}\left[\left(\frac{a^{2}b}{a^{2}+b^{2}}\right)r_{ nm}+2\hbar S_{nm}\right], \tag{5}\] where \(r_{nm}\) is a dimensionless function that contains the contribution of the orbital magnetic moment and depends on the quantum numbers \(n\) and \(m\) and the number of turns \(K\). The spin contribution to the total magnetic moment is included in the coefficient \(S_{nm}\), and the SOC is hidden in the wavefunctions with which the transition matrix elements are built (see Supplementary Material for more details). Figure 2 shows the influence of the SOC strength as well as of decoherence on the rotatory strength \(R\) for the transition from the fundamental state \(n=1\) to the first excited state \(m=2\). Figure 2**a** first illustrates Tinoco's case, where TRS is preserved. Breaking TRS via a dephasing probe (a change in the relative phase \(\delta-\eta\)) in the presence of SOC shifts the spectrum, but by an amount which is independent of the SOC magnitude. On the other hand, the introduction of a voltage probe, implying a change in the amplitude \(z\) in the wavefunction, results in a boundary condition determining a non-zero SP, analogous to a magnetic tip in a junction. This manifests in varying rotatory strengths (see Fig. 2**b**). The tensor components \(R_{11}\) and \(R_{22}\) for the first transition, corresponding to incident radiation along the \(\mathbf{B}_{SO}\) direction (perpendicular to the molecular axis), are the only ones affected by the decoherence induced by the voltage probe. This change affects the components in a way that leaves the average \(R\) almost unchanged.

Figure 2: **Rotatory strength with SO coupling and decoherence.** Rotatory strength \(R\) and its components \(R_{11}\) for the transition from \(n=0\) to \(m=1\) as a function of the magnitude of the SO coupling. **a** shows Tinoco's case, where TRS is preserved, and **b** shows the effect of decoherence induced with a voltage probe. The characteristic values of the helix used are \(a=0.075\) nm, \(2\pi b=3.54\) nm, and \(K=1\), for positive chirality \(\zeta=+1\).

The CD spectrum of the helix for an isotropic system, for the first transitions from the fundamental state, is shown in Fig. 3 when TRS is preserved. This spectrum remains the same with and without SO coupling (see Fig. 3**a**), indicating that the electronic spin effect is negligible when TRS is preserved. This result can be interpreted as a manifestation of Bardarson's theorem [50]. Both the dephasing and voltage probe effects are observable in the spectrum as a change in the magnitude 
of the CD intensity (panels **b** and **c** of Fig. 3, respectively). But, while dephasing is evidenced as an asymmetry in the CD spectra of the two enantiomers, the effect of the voltage probe generates the same change in the magnitude of the CD intensity for both enantiomers, in addition to a shift of the maximum absorption peaks, so that the spectra of the two enantiomers are mirror images, even in the presence of SOC.

Figure 3: **Circular dichroism spectrum with SOC and decoherence.** The CD spectrum for a helix is shown as a function of the incident radiation wavelength \(\lambda\) for a helix with preserved TRS (**a**) with SO coupling; (**b**) with a dephasing probe; and (**c**) with a voltage probe, for both enantiomers \(\zeta\). The CD spectrum of the helix for an isotropic system, for the transitions from the fundamental state \(n=1\) to \(m\), was defined in terms of the rotatory strength in the form \(\text{CD}=\frac{A}{\Gamma\sqrt{\pi}}\sum_{m\neq 1}R_{1m}\exp\left[-\left(\frac{\lambda- \lambda_{m1}}{\Gamma}\right)^{2}\right]\), with \(\Gamma\) the half-width of the Gaussian, \(\lambda\) the incident radiation wavelength, and \(\lambda_{m1}=\lambda_{m}-\lambda_{1}\) [68].

In contrast, the impact of the voltage probe on the CD components \(\text{CD}_{ii}\), defined relative to the direction of incidence of the radiation field, is shown in Fig. 4. Inducing decoherence (without SOC) affects the spectrum in such a way that the three CD components show mirror images (Fig. 4**a**-**c**). However, the coexistence of the SOC with decoherence generates an asymmetry in \(\text{CD}_{11}\) and \(\text{CD}_{22}\) for the two enantiomers (see Fig. 4**d**-**e**), in contrast to \(\text{CD}_{33}\), which is symmetrical. This asymmetry is related to the orientation of the helix with respect to the direction of the incident radiation. In this way, the voltage probe simulates a magnetic probe as in the experiments, which moves the local field of the helix, changing the momentum and modifying the spin population injected into the molecule [39, 40, 54].

Figure 4: **Circular dichroism components for oriented helix with SOC and decoherence.** Panels **a**-**c** illustrate the effect of the voltage probe on the \(\text{CD}_{ii}\) components without SO coupling for incident radiation in the \(x\), \(y\), and \(z\) directions, with \(i=1,2,3\) respectively. **d**-**f**, Effect of decoherence coupled with the SO interaction on the CD components, revealing an asymmetry in the \(\text{CD}_{11}\) and \(\text{CD}_{22}\) components for both enantiomers (solid lines). We have used \(\delta=\eta=0\), and \(q=1\).
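The Gaussian-broadened CD spectrum quoted in the caption of Fig. 3 is straightforward to evaluate numerically. The sketch below implements that expression; the rotatory strengths, transition wavelengths, and line width are illustrative placeholders rather than outputs of the helical model.

```python
import numpy as np

def cd_spectrum(lmbda, R_1m, lambda_m1, A=1.0, Gamma=10.0):
    """Gaussian-broadened CD spectrum built from the rotatory strengths R_{1m},
    following the expression quoted in the Fig. 3 caption."""
    lmbda = np.asarray(lmbda)[:, None]       # wavelengths, shape (N, 1)
    R = np.asarray(R_1m)[None, :]            # rotatory strengths, shape (1, M)
    l0 = np.asarray(lambda_m1)[None, :]      # transition wavelengths, shape (1, M)
    g = np.exp(-((lmbda - l0) / Gamma) ** 2)
    return (A / (Gamma * np.sqrt(np.pi))) * np.sum(R * g, axis=1)

# Toy example: two transitions with opposite-sign rotatory strengths (nm, arb. units)
wl = np.linspace(150.0, 400.0, 500)
spec = cd_spectrum(wl, R_1m=[1.0, -0.4], lambda_m1=[220.0, 300.0], Gamma=15.0)
```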
### Spin Polarization as a Predictor of Chiroptical Activity As mentioned in the introductory part, one of the main motivations for this work is to establish an explicit connection between the SP induced by the CISS effect and the COA. Before entering into the details of this connection, one should emphasize a fundamental difference: while COA requires photon absorption, SP can occur as a ground-state magnetic response associated with electron transport, electron transfer, and bond polarization, which occurs in the absence of external magnetic fields and also in photo-induced ET reactions. The relationship presented here is related to a very interesting interplay between SP (connected to the \(P_{z}\) component) and spin coherence [69]. Here, we will concentrate on the simpler case of the comparison between ground-state SP and COA, because it offers important insights into the connection between the two phenomena and into the interpretation of recent experiments by Waldeck et al. [31] about a correlation between the length dependence of both magnitudes and the possible use of SP as a predictor of COA. Since the COA is a scalar and the SP is a vector quantity, we have considered as a plausible scalar predictor the norm of the polarization vector \(P\), which carries information about the global SP and is independent of the coordinate system. Our conjecture is also inspired by the fact that the polarization vector is simply related to the spin component of the total magnetic moment by the equation \(\mathbf{m}_{s}=C\mathbf{P}\), where \(C\) is proportional to the gyromagnetic ratio. This connection of the CISS-related response to the magnetic moment is very significant, because it emphasizes the nature of this effect as a magnetic response to electron transport. It should be stressed that in our model, transport is mimicked by changing the boundary conditions with a parameter \(\epsilon\) defining the stationary states in Eqs. (3,4). In this case, we consider \(\Psi_{\tilde{n}}^{\zeta}|_{\varphi=0+\epsilon}\neq\Psi_{\tilde{n}}^{\zeta}|_{ \varphi=2\pi K}\). From this point of view, our system is equivalent to an electron in a box with adjustable boundary conditions. Figure 5 displays the graphical relationship between \(P\), calculated for the ground state \(n=1\), and the dissymmetry factor \(g_{CD}\), corresponding to the transition \(n=1\to m=2\), as a function of the number of turns of the helix \(K\), in the presence of SOC when: i) TRS is preserved (panels **a** and **b**); ii) TRS is broken by decoherence and by changing the boundary conditions (panels **c** and **d**). Despite the fact that both phenomena have very different physical origins, there is a clear correspondence that originates from the fact that both quantities are sensitive to the CISS-induced magnetic response. That the SP is the key predictor is also apparent from the analysis of the plot of the total average magnetic moment for the ground state (Fig. 5**e**) as a function of the length of the helix. 
Although this magnitude also increases with length in the presence of SOC and TRS breaking, it does not show the saturation behavior of both \(P\) and \(g_{CD}\), which indicates that it is dominated by the orbital angular momentum contribution.

Figure 5: **Length dependence of SP and COA.** Norm of the polarization vector \(P\) as a function of \(K\) for the fundamental state (\(n=1\)) (**a**) when TRS is preserved, and (**c**) when TRS is broken by boundary-induced decoherence, in the presence of SOC. Dissymmetry factor \(g_{CD}\) as a function of \(K\) for the transition from the ground state with \(n=1\) to the first excited state \(m=2\) (**b**) when TRS is preserved, and (**d**) when TRS is broken using a voltage probe with SOC. Total average magnetic moment for the ground state as a function of the helix length for (**e**) Tinoco's case, and (**f**) when TRS is broken by a voltage probe in the presence of SOC. We have used \(\delta=\eta=0\), and \(q=1\).

It is important to notice that the predictor, the SP, exhibits a rather small variation as a function of length, mostly because of the constraints imposed by TRS in the gas phase. The numerical proportionality between the two magnitudes is simply calculated to be \(P\approx c\times 10^{-6}\,g_{CD}\). This type of relationship, one of our central results, should survive the transition to more realistic models and also to chiral interfaces, where the breaking of TRS arises either from non-linear conditions or because the system under consideration has open boundaries. Figures 6**a** and **b** show the behavior of the magnetic moment expectation value for the ground state \(|\mathbf{m}_{11}|\) due to the SOC, compared to spinless electrons, and the magnitude of the transition magnetic dipole moment \(|\mathbf{m}_{12}|\) for the first transition, respectively. While both magnitudes have a monotonic behavior when TRS is preserved with SOC, interference effects that arise from the inclusion of decoherence can lead to the non-monotonic behavior of the transition moment. In any event, it is clear that these magnitudes do not have the same predictive power for COA as the SP, despite the fact that they depend on length.

Figure 6: **Length dependence of magnetic moment.** **a**, Magnetic moment expectation value for the ground state as a function of the length \(K\). **b**, Transition magnetic dipole moment as a function of \(K\) for the transition from \(n=1\) to \(m=2\). We have used \(\delta=\eta=0\), and \(q=1\).

## 2 Conclusions and Final Remarks Using a very simple model of an electron in a helix, including the spin degree of freedom and spin-orbit coupling, we have explored a fundamental connection between optical activity and the CISS effect in chiral systems. We have also investigated the fundamental role of time-reversal symmetry in strongly modulating both the spin polarization and the chiroptical response, as well as the influence of spin-orbit coupling on the optical activity, which involves a strongly asymmetric enantiomeric response, not observed in simple chiral molecules in the gas phase or in solution, but that has been found in chiral solids and interfaces. All our findings seem to point to a deeper connection that necessarily involves the CISS-modulated magnetic response and the fact that the chiroptical activity involves a dephasing of the electric and magnetic fields as they propagate through the chiral structure. This dephasing should be connected to the fact that the onset of the CISS effect triggers an intramolecular magnetic response that needs to be explicitly included in the spin polarization of the system. We are currently exploring this fundamental connection and also the extension of our model to describe the photon-induced spin polarization in electron transfer processes. **Supplementary information.** **Acknowledgments.** The authors acknowledge fruitful discussions with Prof. Jesus M. Ugalde. S.V. acknowledges the support given by the Eleonore-Trefftz-Programm and the Dresden Junior Fellowship Programme by the Chair of Materials Science and Nanotechnology at the Dresden University of Technology, and the W.M. Keck Foundation through the grant "Chirality, spin coherence and entanglement in quantum biology." R.G. and G.C. 
acknowledge the support of the German Research Foundation (DFG) within the project Theoretical Studies on Chirality-Induced Spin Selectivity (CU 44/55-1), the transCampus Research Award Disentangling the Design Principles of Chiral-Induced Spin Selectivity (CISS) at the Molecule-Electrode Interface for Practical Spintronic Applications (Grant No. tCRA 2020-01), and the trans-Campus Programme Interplay between vibrations and spin polarization in the CISS effect of helical molecules (Grant No. tC2023-03). E.M. acknowledges funding from project POLI17945 of USFQ and the Dresden Fellowship Programme. V.M. acknowledges the support of Ikerbasque, the Basque Foundation for Science, the German Research Foundation for a Mercator Fellowship within the project Theoretical Studies on Chirality-Induced Spin Selectivity (CU 44/55-1), and the W.M. Keck Foundation through the grant "Chirality, spin coherence and entanglement in quantum biology." ## Author contributions ... ## Competing interests The authors declare no competing interests.
2301.04385
Strange and charm contributions to the HVP from C* boundary conditions
We present preliminary results for the determination of the leading strange and charm quark-connected contributions to the hadronic vacuum polarization contribution to the muon's g-2. Measurements are performed on the RC* collaboration's QCD ensembles, with 3+1 flavors of O(a) improved Wilson fermions and C* boundary conditions. The HVP is computed on a single value of the lattice spacing and two lattice volumes at unphysical pion mass. In addition, we compare the signal-to-noise ratio for different lattice discretizations of the vector current.
Anian Altherr, Lucius Bushnaq, Isabel Campos, Marco Catillo, Alessandro Cotellucci, Madeleine Dale, Patrick Fritzsch, Roman Gruber, Javad Komijani, Jens Lücke, Marina Krstić Marinković, Sofie Martins, Agostino Patella, Nazario Tantalo, Paola Tavella
2023-01-11T10:16:18Z
http://arxiv.org/abs/2301.04385v1
# Strange and charm contributions to the HVP from C\({}^{\star}\) boundary conditions ###### Abstract: We present preliminary results for the determination of the leading strange and charm quark-connected contributions to the hadronic vacuum polarization contribution to the muon's \(g-2\). Measurements are performed on the RC\({}^{\star}\) collaboration's QCD ensembles, with \(3+1\) flavors of \(O(a)\) improved Wilson fermions and C\({}^{\star}\) boundary conditions. The HVP is computed on a single value of the lattice spacing and two lattice volumes at unphysical pion mass. In addition, we compare the signal-to-noise ratio for different lattice discretizations of the vector current. ## 1 Introduction The anomalous magnetic moment of the muon is one of the quantities receiving a great deal of attention in relation to new physics searches. The combined result of BNL's E821 experiment [1] and the first run of the E989 experiment at Fermilab [2] reaches a precision of 0.35 ppm and shows a 4.2 \(\sigma\) tension with the Standard Model prediction [3], if one does not include recent lattice determinations, most notably the result by the BMW collaboration [4]. The next runs of the E989 experiment and the upcoming experiments at J-PARC [5] and CERN [6] aim to further reduce the experimental uncertainty. Theoretically, the dominant source of uncertainty is the leading hadronic vacuum polarization. The most precise result for \(a_{\mu}^{\rm LO,HVP}\) is obtained using dispersion relations and the experimental data for the cross section of \(e^{+}e^{-}\) to hadrons. Currently, the precision of the dispersive approach is about 0.6% [3]. Independent results can be obtained within the lattice framework, which does not require experimental inputs and has started to produce competitive results for the muon's \(g-2\). The most precise result from lattice simulations is the one from the BMW collaboration, which has a precision of about 0.8% [4]. The target precision on the HVP for the next few years is of a few per mille. To achieve this precision, it is necessary to include the strong and electromagnetic isospin-breaking corrections, which contribute at the percent level. In this work, we present preliminary results for the leading connected contributions to the HVP from strange and charm quarks. This is the first and necessary step of a long-term research project aiming to evaluate the full HVP diagram, including the isospin-breaking effects as well as the disconnected terms. The novelty of our approach is the use of C\({}^{\star}\) boundary conditions, which allow for defining QED on the lattice with a local and gauge-invariant formulation. The configurations used for this work have been generated by the RC\({}^{\star}\) collaboration using the openQ*D-1.1 code [7]. The lattice setup and the methods for the observable are described in sections 2 and 3. Our preliminary results are presented in section 4. ## 2 Lattice setup We perform measurements on two QCD ensembles generated by the RC\({}^{\star}\) collaboration. The configurations are produced at the SU(3) symmetric point, i.e. \(m_{u}=m_{d}=m_{s}\simeq(m_{u}^{phys}+m_{d}^{phys}+m_{s}^{phys})/3\), by using the Lüscher-Weisz action for the SU(3) field and \(O(a)\) improved Wilson fermions. The ensembles are generated with periodic boundary conditions in time and C\({}^{\star}\) boundary conditions in the spatial directions, i.e. 
all the fields are periodic up to charge conjugation: \[U_{\mu}(x+L_{k}\hat{k})=U_{\mu}^{*}(x),\qquad\psi_{f}\left(x+L_{k}\hat{k} \right)=C^{-1}\overline{\psi}_{f}^{T}(x),\qquad\overline{\psi}_{f}\left(x+L_ {k}\hat{k}\right)=-\psi_{f}^{T}(x)C. \tag{1}\] The action parameters, lattice sizes, and pion masses are shown in Table 1. More details about the tuning of the parameters in the simulations, the scale setting, and the calculation of the meson masses are given in Ref. [8]. In particular, the values of the lattice spacing in Table 1 are determined from the auxiliary scale \(t_{0}\) with the reference value of the CLS determination \((8t_{0})^{1/2}=0.415\) fm [9]. The two ensembles are generated with the same bare parameters but different lattice volumes. This gives us the possibility to get a first idea of the finite-volume effects. To obtain the results shown in section 4 we use respectively 200 and 108 independent configurations for the ensembles A400a00b324 and B400a00b324. ## 3 Methods for the hadronic vacuum polarization In the time-momentum representation (TMR) [10], the leading HVP contribution to \(a_{\mu}=(g_{\mu}-2)/2\) is given by the convolution \[a_{\mu}^{\rm HVP}=\left(\frac{\alpha}{\pi}\right)^{2}\sum_{t=0}^{\infty}G(t) \tilde{K}(t;m_{\mu}), \tag{2}\] where \(G(t)\) is the spatially summed correlator of two electromagnetic currents \[G(t)=-\frac{1}{3}\sum_{k=1,2,3}\sum_{\bar{x}}\left\langle V_{k}(x)V_{k}(0) \right\rangle, \tag{3}\] and \(\tilde{K}(t;m_{\mu})\) is the QED kernel, for which we use the expression in Appendix B of [11]. There are two commonly used discretizations of the vector current in lattice QCD: the local vector current \[V_{\mu,f}^{l}\left(x\right)=\bar{\psi}_{f}\left(x\right)\gamma_{\mu}\psi_{f} \left(x\right), \tag{4}\] and the point-split or conserved one, defined by \[V_{\mu,f}^{c}\left(x\right)=\frac{1}{2}\Big{[}\bar{\psi}_{f}\left(x+\hat{\mu }\right)\left(1+\gamma_{\mu}\right)U_{\mu}^{\dagger}(x)\psi_{f}\left(x\right) -\bar{\psi}_{f}\left(x\right)\left(1-\gamma_{\mu}\right)U_{\mu}(x)\psi_{f} \left(x+\hat{\mu}\right)\Big{]}, \tag{5}\] where we use the label \(f\) to denote the vector current operator of a single flavor. By inserting the expression of the current in the expectation value in equation (3) and considering all the possible Wick contractions between the fields, one obtains two different types of contributions: the connected terms, which are flavor diagonal, and the disconnected diagonal and off-diagonal (\(f^{\prime}\neq f\)) terms, \[\left\langle V_{k}\left(x\right)V_{k}\left(0\right)\right\rangle=\sum_{f}q_{f} ^{2}\times[\text{connected diagram}]+\sum_{f,f^{\prime}}q_{f}\,q_{f^{\prime}}\times[\text{disconnected diagram}]\,. \tag{6}\] In the following, we will focus only on the connected terms. The local vector current in equation (4) is neither conserved nor improved on the lattice. If we consider only the connected contractions, it renormalizes independently for each flavor \(f\) [12, 13]: \[V_{\mu,f}^{R}=Z_{V}^{m_{f}}\left(V_{\mu,f}^{l}+ac_{V}\,\partial_{\nu}T_{\mu \nu,f}\right), \tag{7}\] where \(T_{\mu\nu,f}=-\bar{\psi}_{f}\,\frac{1}{2}[\gamma_{\mu},\gamma_{\nu}]\psi_{f}\) is the tensor current, \(c_{V}\) is a constant, and \(m_{f}\) is the mass of the valence quark with flavor \(f\). The current in equation (5) is instead conserved on the lattice but still requires \(O(a)\) improvement. 
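Once the correlator and the kernel are available as arrays on the lattice time slices, the TMR convolution of Eq. (2) reduces to a single weighted sum. A minimal sketch, assuming the kernel values have been precomputed (e.g., from the expressions of Ref. [11]):

```python
import numpy as np

ALPHA = 1.0 / 137.035999  # fine-structure constant

def a_mu_tmr(G, K):
    """Time-momentum representation, Eq. (2): a_mu = (alpha/pi)^2 * sum_t G(t) K(t).
    G: correlator G(t) on the available time slices;
    K: the QED kernel K~(t; m_mu) evaluated on the same slices."""
    G = np.asarray(G)
    K = np.asarray(K)
    return (ALPHA / np.pi) ** 2 * np.sum(G * K)
```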
In this work, we do not consider any improvements at the observable level, thus we neglect the term proportional to the tensor current in equation (7). In this case, we see from equation (7) that the local vector current for a flavor \(f\) renormalizes multiplicatively through the mass-dependent renormalization factor \(Z_{V}^{m_{f}}\). We describe our method to determine \(Z_{V}^{m_{f}}\) in section 4.2. The choice of the local or conserved currents at the source and sink points of the quark propagator leads to different discretizations of the correlator \(G(t)\) in the TMR, which share the same continuum limit once the renormalization constants are taken into account.

\begin{table} \begin{tabular}{c|c c c c c c c} \hline Ensemble & V & \(\beta\) & \(\kappa_{u,d,s}\) & \(\kappa_{c}\) & \(c_{sw,\rm SU(3)}\) & \(a\) [fm] & \(m_{\pi^{\pm}}\) [MeV] \\ \hline A400a00b324 & \(64\times 32^{3}\) & 3.24 & 0.1344073 & 0.12784 & 2.18859 & 0.05393(24) & 398.5(4.7) \\ B400a00b324 & \(80\times 48^{3}\) & 3.24 & 0.1344073 & 0.12784 & 2.18859 & 0.05400(14) & 401.9(1.4) \\ \hline \end{tabular} \end{table} Table 1: Parameters of the ensembles used in this work. The lattice spacings and pion masses have been computed in Ref. [8].

### Signal-to-noise ratio Before performing the measurements, we study the effect of the discretization of the current on the signal-to-noise ratio of the correlator. By using the two expressions of the current in equations (4) and (5), it is indeed possible to define three types of correlator: the local-local (\(ll\)), the conserved-conserved (\(cc\)), and the mixed one (\(cl\)). For instance, for the local-local correlator, the expression to be evaluated is the following: \[G_{f}^{ll}(t)_{conn}= \frac{1}{3}\sum_{k=1,2,3}\sum_{\vec{x}}q_{f}^{2}\ \mathrm{tr}\left[\gamma_{k}D_{f}^{-1}(x|0)\gamma_{k}D_{f}^{-1}(0|x)\right], \tag{8}\] with \(D^{-1}(x|0)\) being the quark propagator from \(0\) to \(x\). For these measurements, we use \(60\) configurations and \(10\) point sources per configuration. The aim is to understand which choice is the most convenient in terms of signal-to-noise ratio and computational cost. With the conserved-conserved correlator, we do not need to determine the renormalization factor. However, we expect a noisier result when using the conserved current, due to the fluctuations of the gauge field. Moreover, employing the conserved current both at the sink and source points requires \(3\) additional inversions of the Dirac operator per point source, one for each spatial direction \(\hat{k}=1,2,3\). Figure 1 shows the three different correlators of the light quark measured on the A400a00b324 ensemble. The left panel shows the correlators plotted against time, and the right panel illustrates the relative statistical noise of \(G^{cl}(t)\) and \(G^{cc}(t)\) compared to the local-local correlator. As shown, the conserved-local correlator is only slightly (\(5\%\) to \(10\%\)) noisier than the local-local one; the conserved-conserved correlator is instead much noisier. Taking into account that its computational cost is also four times larger, the conserved-conserved correlator is not a good choice to achieve the overall target precision.

Figure 1: Example of comparison of the correlators \(G^{kk}(t)\), with \(k=c,l\) (left), and the relative errors (right) for the light quark. \(c\) and \(l\) denote the conserved and the local discretization of the vector current.
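A comparison like the one in the right panel of Fig. 1 can be produced with a simple bootstrap over gauge configurations. The following sketch is illustrative; the array shapes and the number of bootstrap samples are assumptions, not the analysis code actually used:

```python
import numpy as np

rng = np.random.default_rng(0)

def relative_error(samples, n_boot=500):
    """Bootstrap estimate of the relative statistical error of the
    configuration-averaged correlator; samples has shape (n_cfg, n_t)."""
    n_cfg = samples.shape[0]
    means = np.array([samples[rng.integers(0, n_cfg, n_cfg)].mean(axis=0)
                      for _ in range(n_boot)])
    return means.std(axis=0) / np.abs(samples.mean(axis=0))

# A ratio plot like Fig. 1 (right) would then follow from, e.g.,
# relative_error(G_cl_samples) / relative_error(G_ll_samples)
```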
The other two correlators are equivalent choices unless the uncertainty in \(Z_{V}\) becomes significant; in that case, the conserved-local correlator has the advantage of being less sensitive to the precision of the renormalization factor, since the latter appears only once in this correlator. In section 4 we will show results for both local-local and conserved-local correlators, pointing out the significant difference in the charm contribution due to large discretization effects. ## 4 Strange and charm quark-connected contribution ### Tuning procedure To evaluate the leading-order strange and charm quark-connected contributions to the HVP, it is necessary to perform the continuum limit and the extrapolation to the physical pion mass, and to take into account all the systematics. In this work, we consider only one value for the lattice spacing and pion mass and two different volumes. Before evaluating the correlator in equation (8), we tune the hopping parameters \(\kappa_{f}\) of the valence quarks. We choose the values of \(\kappa_{s}\) and \(\kappa_{c}\) by matching the physical values of the masses of the mesons \(\phi\) and \(J/\psi\) [14], \[m_{\phi}^{phys}=1019.461(20)\ \mathrm{MeV},\qquad m_{J/\psi}^{phys}=3096.900(6)\ \mathrm{MeV}, \tag{9}\] with our lattice results, obtained respectively from the two-point functions of the interpolators \[\mathcal{O}_{s}=\bar{s}\gamma_{\mu}s,\qquad\mathcal{O}_{c}=\bar{c}\gamma_{\mu}c. \tag{10}\] In this matching procedure we are neglecting the disconnected terms and the QED corrections, which enter into the physical masses and are instead missing in our calculations. In Tables 2 and 3 we show the different choices of \(\kappa_{s/c}\) and the results for the effective masses of the vector mesons \(s\bar{s}\) and \(c\bar{c}\) for both ensembles.

\begin{table} \begin{tabular}{c c c|c c c} \hline \(\kappa_{s}\) & \(am_{V}\,(s\bar{s})\) & \(m_{V}\,(s\bar{s})\) [MeV] & \(\kappa_{c}\) & \(am_{V}\,(c\bar{c})\) & \(m_{V}\,(c\bar{c})\) [MeV] \\ \hline 0.134407 & 0.2644(50) & 967(19) & 0.12784 & 0.8540(5) & 3125(14) \\ 0.1343 & 0.2731(24) & 999(10) & 0.12794 & 0.8463(5) & 3097(14) \\ 0.13422 & 0.2808(22) & 1027(9) & 0.12800 & 0.8418(5) & 3080(14) \\ \hline \end{tabular} \end{table} Table 2: Ensemble A400a00b324: mass of the vector mesons for several choices of the hopping parameters in the valence sector. Values in MeV are obtained by using the reference value \((8t_{0})^{1/2}\) = 0.415 fm [8].

\begin{table} \begin{tabular}{c c c|c c c} \hline \(\kappa_{s}\) & \(am_{V}\,(s\bar{s})\) & \(m_{V}\,(s\bar{s})\) [MeV] & \(\kappa_{c}\) & \(am_{V}\,(c\bar{c})\) & \(m_{V}\,(c\bar{c})\) [MeV] \\ \hline 0.134407 & 0.2522(33) & 923(13) & 0.12784 & 0.8536(7) & 3123(14) \\ 0.134220 & 0.2715(22) & 993(9) & 0.12794 & 0.8458(9) & 3095(14) \\ 0.134152 & 0.2794(19) & 1022(8) & & & \\ \hline \end{tabular} \end{table} Table 3: Ensemble B400a00b324: mass of the vector mesons for several choices of the hopping parameters in the valence sector. Values in MeV are obtained by using the reference value \((8t_{0})^{1/2}\) = 0.415 fm [8].

We plot the masses of the vector mesons as functions of the inverse of the corresponding hopping parameters \(\kappa_{s/c}^{-1}\), which are linear in the bare masses of the valence quarks \(s\) and \(c\). In Fig. 2 we show this dependence for both the strange (left) and charm (right) quarks. The purple bands in the plots correspond to the physical masses in equation (9) converted to lattice units.

Figure 2: Masses of the vector mesons \(s\bar{s}\) (left) and \(c\bar{c}\) (right) as functions of the inverse of the hopping parameter. The purple bands and their central values represent the physical masses of the mesons \(\phi\) (left) and \(J/\psi\) (right) converted to lattice units.
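Since \(am_{V}\) is approximately linear in \(\kappa^{-1}\), the tuning amounts to a linear fit and an inversion. The sketch below illustrates this with the strange-quark central values of Table 2; it ignores the quoted uncertainties and is meant only to show the procedure:

```python
import numpy as np

# Measured vector-meson masses (lattice units) vs. hopping parameter,
# strange-quark entries of Table 2 (ensemble A400a00b324)
kappa = np.array([0.134407, 0.1343, 0.13422])
am_V = np.array([0.2644, 0.2731, 0.2808])

# am_V is (approximately) linear in 1/kappa, which is linear in the bare quark mass
slope, intercept = np.polyfit(1.0 / kappa, am_V, 1)

# Target: physical phi mass converted to lattice units with a = 0.05393 fm
a_fm, hbarc = 0.05393, 197.3269804           # hbar*c in MeV fm
am_target = 1019.461 * a_fm / hbarc          # ~0.279 in lattice units

kappa_tuned = 1.0 / ((am_target - intercept) / slope)
print(f"kappa_s^tun ~ {kappa_tuned:.6f}")
```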
### Renormalization constants Evaluating \(a_{\mu}^{HVP,s/c}\) from the local-local or conserved-local correlators requires determining the renormalization factor \(Z_{V}^{m_{f}}\) and the improvement coefficient introduced in (7). In this work, we do not consider any improvement terms and evaluate the mass-dependent renormalization factor \(Z_{V}^{m_{f}}\) of the local current from the ratio [15] \[R(t)=\frac{\sum_{\vec{x},k}\left\langle V_{k,f}^{c}\left(x\right)V_{k,f}^{l} \left(0\right)\right\rangle}{\sum_{\vec{x},k}\left\langle V_{k,f}^{l}\left(x \right)V_{k,f}^{l}\left(0\right)\right\rangle}\,. \tag{11}\] At small \(t\), this quantity is affected by the different discretization effects of the two currents. At large times, \(R(t)\) saturates and we can determine \(Z_{V}^{m_{f}}\) by fitting the plateau region to a constant. An example of such a fit is shown in Fig. 3. We applied the same method for both ensembles and for both \(Z_{V}^{m_{s}}\) and \(Z_{V}^{m_{c}}\). In Table 4 we show the fit ranges and the values obtained for \(Z_{V}^{m_{s/c}}\) at the tuned hopping parameters \(\kappa_{s/c}^{tun}\). The errors are determined with the bootstrap procedure.

\begin{table} \begin{tabular}{c|c c|c c} \hline Ensemble & fit range & \(Z_{V}^{m_{s}}\) & fit range & \(Z_{V}^{m_{c}}\) \\ \hline A400a00b324 & [15,24] & 0.6712(7) & [24,30] & 0.6066(2) \\ B400a00b324 & [15,24] & 0.6707(5) & [23,31] & 0.6066(4) \\ \hline \end{tabular} \end{table} Table 4: Mass-dependent renormalization factor obtained from the ratio method defined in Eq. (11).

Figure 3: Determination of the renormalization constant \(Z_{V}^{m_{s}}\) for the local vector current with \(\kappa_{s}^{tun}=0.13422\) on the A400a00b324 ensemble.

### Results The evaluation of the leading HVP contribution to \(a_{\mu}\) requires an integration over the Euclidean time: \[a_{\mu}^{\text{HVP},f}=\left(\frac{\alpha}{\pi}\right)^{2}\sum_{t=0}^{\infty}G^ {f}\left(t\right)\tilde{K}(t;m_{\mu}). \tag{12}\] One of the problems related to this task is that the signal deteriorates with the lattice time \(t\), due to the exponentially increasing errors of the correlator. Another related issue comes from the finite size of the box: the integration domain is restricted to \([0,T/2]\), so we have to extrapolate the correlator to infinite time. In addition, the correlator is affected by finite-volume effects (FVE) due to the finite temporal (\(T\)) and spatial (\(L\)) extents. Concerning the finite-volume effects, it has been found [16] that for given \(L\) the leading finite-\(L\) corrections are the exponentials \(e^{-m_{\pi}L}\), \(e^{-m_{\pi}\sqrt{2}L}\) and \(e^{-m_{\pi}\sqrt{3}L}\). Similarly, the leading contribution arising from finite \(T\) is \(e^{-m_{\pi}T}\). As a consequence, the finite-\(T\) effects are higher-order corrections, since usually in the simulations \(T=2L\). These results have been derived for a periodic torus in four dimensions and are affected by the choice of the boundary conditions. In our setup, the boundary condition in the time direction is periodic, so the results for finite-\(T\) corrections found in Ref. [16] still apply. However, we use C\({}^{\star}\) boundary conditions in all three spatial directions, which means that the finite-\(L\) corrections are in general different. 
Some studies have shown that in pure QCD, C\({}^{\star}\) boundary conditions lead to small improvements of the FVE, with a leading correction \(e^{-m_{\pi}\sqrt{2}L}\) [17]. A detailed numerical study of the finite-volume effects for ensembles with C\({}^{\star}\) boundary conditions will be carried out in future work. In this work, we make a direct comparison of the results for the integrand \(G\left(t\right)\tilde{K}(t,m_{\mu})\) and \(a_{\mu}^{\text{LO,HVP}}\) on the two available QCD ensembles. To control the large-time behavior of the correlator, we use the following quantity: \[G_{\text{constructed}}(t)=\begin{cases}G(t)&(t\leq t_{0,\text{cut}})\\ G_{\text{1-exp}}(t)&(t>t_{0,\text{cut}})\end{cases} \tag{13}\] where \(t_{0,\rm cut}\) is a properly chosen cut-off and \(G_{\rm 1-exp}(t)\) denotes the exponential extrapolation of the correlator at large times, \[A\exp(-t\cdot m_{\rm eff}). \tag{14}\] The two parameters \(m_{\rm eff}\) and \(A\) are the effective mass and the amplitude obtained through a fit to the correlator. For the masses, we use the results reported in Tables 2 and 3 for the tuned hopping parameters. The parametrization with a single exponential is a crude approximation that introduces some systematics, since we are neglecting the excited states contributing to the correlator. We plan to use a more accurate model for the tail of the correlator in future works. The plots in Fig. 4 show the integrands for both the charm and strange quark contributions and for the two ensembles. The lattice data for the charm contribution are sufficiently precise and do not require any extrapolation or improvement. In the case of the strange contribution, the tail of the integrand is approximated as described above. The results of the integration are listed in Table 5. We estimate \(a_{\mu}^{\rm LO,HVP}\) using the two different discretizations of the correlator: conserved-local and local-local. The strange contribution is not affected by the choice of the correlator; the results are indeed compatible within the current uncertainties for both ensembles. By contrast, the contribution from the charm quark is particularly sensitive to the choice of discretization. The finite-size effects are negligible for the charm quark contribution and lead instead to a difference of about \(2\sigma\) for the strange quark.

\begin{table} \begin{tabular}{c|c|c|c} \hline Ensemble & Type & \(a_{\mu}^{s}\times 10^{-10}\) & \(a_{\mu}^{c}\times 10^{-10}\) \\ \hline A400a00b324 & \begin{tabular}{c} _ll_ \\ _cl_ \\ \end{tabular} & \begin{tabular}{c} 46.7(7) \\ 46.2(7) \\ \end{tabular} & \begin{tabular}{c} 7.83(8) \\ 6.18(7) \\ \end{tabular} \\ \hline B400a00b324 & \begin{tabular}{c} _ll_ \\ _cl_ \\ \end{tabular} & \begin{tabular}{c} 48.5(7) \\ 48.0(7) \\ \end{tabular} & \begin{tabular}{c} 7.81(9) \\ 6.16(7) \\ \end{tabular} \\ \hline \end{tabular} \end{table} Table 5: Results for \(a_{\mu}^{s,c}\) in units of \(10^{-10}\), determined using the TMR and two different discretizations of the observable: local-local and conserved-local.

Figure 4: Comparison of the integrands for the strange (left) and charm (right) contributions between the two ensembles. The tail of the integrands for the strange contribution is approximated by a single exponential. 
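The construction of Eqs. (13)-(14) and the subsequent integration of Eq. (12) can be sketched as follows; the fit producing \(A\) and \(m_{\rm eff}\), and the kernel array, are assumed to be available from earlier steps:

```python
import numpy as np

def construct_correlator(G, t_cut, A, m_eff):
    """Eq. (13): keep the measured correlator up to t_cut and replace the
    noisy tail by the single-exponential model of Eq. (14)."""
    t = np.arange(len(G))
    return np.where(t <= t_cut, G, A * np.exp(-m_eff * t))

# a_mu then follows from the kernel convolution as in Eq. (12):
# a_mu = (alpha / np.pi)**2 * np.sum(construct_correlator(G, t_cut, A, m_eff) * K)
```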
### Partial errors budget The errors in Table 5 are the quadratic sum of the statistical and of part of the systematic errors, as follows. The uncertainties taken into account are the statistical errors from the correlators, the lattice spacing and \(Z_{V}\), and the systematics from the choices of the cut-off (for the strange quark) and of the fit range used to determine \(A\) and \(m_{\rm eff}\). The statistical error is determined by using the bootstrap method. Since \(Z_{V}\) appears as a multiplicative factor in front of the whole expression, we employ standard error propagation for it. The values of the lattice spacing are determined by using the reference scale \((8t_{0})^{1/2}=0.415\) fm as an absolute value, without taking into account the systematics coming from the uncertainty on this scale. Thus, our current error on \(a\) is only a partial, statistical uncertainty. The dependence on the lattice spacing is in the QED kernel \(\tilde{K}(t,m_{\mu})\): we numerically propagate the partial error \(\delta a\) by repeating the evaluation of \(a_{\mu}\) for \(N\) values of the lattice spacing drawn from a normal distribution \({\cal N}(a,\delta a)\). We use at least \(N=100\) for each result. Finally, we repeat the calculation for several values of the fit range and the cut-off and apply a weighted averaging procedure to get the total systematics. We remark that there are still several unaccounted-for uncertainties. We are currently missing the systematics introduced by the single-exponential extrapolation of the correlator and by the use of the reference scale \(t_{0}\) without an error in the determination of the lattice spacing. In this work we did not perform a quantitative numerical study of the finite-size effects, and we measured at one value of the lattice spacing and pion mass; thus we have not yet performed an extrapolation of the results to the continuum and to the physical point. ## 5 Conclusions and outlooks We have measured the connected contributions to the leading hadronic vacuum polarization from strange and charm quarks, in a setup with C\({}^{\star}\) boundary conditions in the three spatial directions. We performed the analysis on two ensembles with different volumes, indicating that the finite-size effects are under control. As expected, we find that the charm contribution is considerably affected by the choice of the correlator, due to its sensitivity to discretization effects. In addition, we have shown that a more precise determination of the lattice spacing is needed to reach the target precision. Our plans for future work include the evaluation of the isospin-breaking effects as well as of the disconnected terms, and a quantitative study of the finite-size effects. ## Acknowledgments We acknowledge access to Piz Daint at the Swiss National Supercomputing Centre, Switzerland, under ETHZ's share with the project IDs s1101, eth8 and go22. Financial support by the SNSF (Project No. 200021_200866) is gratefully acknowledged. L.B., S.M., and M.K.M. received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 813942. M.D. received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No. 765048. A.C.'s and J.L.'s research is funded by the Deutsche Forschungsgemeinschaft Project No. 417533893/GRK-2575 "Rethinking Quantum Field Theory".
2308.07387
DISBELIEVE: Distance Between Client Models is Very Essential for Effective Local Model Poisoning Attacks
Federated learning is a promising direction to tackle the privacy issues related to sharing patients' sensitive data. Often, federated systems in the medical image analysis domain assume that the participating local clients are \textit{honest}. Several studies report mechanisms through which a set of malicious clients can be introduced that can poison the federated setup, hampering the performance of the global model. To overcome this, robust aggregation methods have been proposed that defend against those attacks. We observe that most of the state-of-the-art robust aggregation methods are heavily dependent on the distance between the parameters or gradients of malicious clients and benign clients, which makes them prone to local model poisoning attacks when the parameters or gradients of malicious and benign clients are close. Leveraging this, we introduce DISBELIEVE, a local model poisoning attack that creates malicious parameters or gradients such that their distance to benign clients' parameters or gradients is low respectively but at the same time their adverse effect on the global model's performance is high. Experiments on three publicly available medical image datasets demonstrate the efficacy of the proposed DISBELIEVE attack as it significantly lowers the performance of the state-of-the-art \textit{robust aggregation} methods for medical image analysis. Furthermore, compared to state-of-the-art local model poisoning attacks, DISBELIEVE attack is also effective on natural images where we observe a severe drop in classification performance of the global model for multi-class classification on benchmark dataset CIFAR-10.
Indu Joshi, Priyank Upadhya, Gaurav Kumar Nayak, Peter Schüffler, Nassir Navab
2023-08-14T18:09:58Z
http://arxiv.org/abs/2308.07387v1
# DISBELIEVE: Distance Between Client Models is Very Essential for Effective Local Model Poisoning Attacks ###### Abstract Federated learning is a promising direction to tackle the privacy issues related to sharing patients' sensitive data. Often, federated systems in the medical image analysis domain assume that the participating local clients are _honest_. Several studies report mechanisms through which a set of malicious clients can be introduced that can poison the federated setup, hampering the performance of the global model. To overcome this, robust aggregation methods have been proposed that defend against those attacks. We observe that most of the state-of-the-art robust aggregation methods are heavily dependent on the distance between the parameters or gradients of malicious clients and benign clients, which makes them prone to local model poisoning attacks when the parameters or gradients of malicious and benign clients are close. Leveraging this, we introduce DISBELIEVE, a local model poisoning attack that creates malicious parameters or gradients such that their distance to benign clients' parameters or gradients is low but, at the same time, their adverse effect on the global model's performance is high. Experiments on three publicly available medical image datasets demonstrate the efficacy of the proposed DISBELIEVE attack, as it significantly lowers the performance of the state-of-the-art _robust aggregation_ methods for medical image analysis. Furthermore, compared to state-of-the-art local model poisoning attacks, the DISBELIEVE attack is also effective on natural images, where we observe a severe drop in the classification performance of the global model for multi-class classification on the benchmark dataset CIFAR-10. Keywords: Federated Learning, Model Poisoning Attacks, Deep Learning ## 1 Introduction The success of deep models for medical image analysis [13] greatly depends on the availability of sufficient training data. Strict privacy protocols and limited availability of time and resources pose challenges in collecting sizeable medical image datasets [12]. Although different medical institutions may be willing to collaborate, strict privacy protocols governing patients' information restrict data sharing. Federated learning (FL) offers a promising solution that allows different institutions to share information about their models without revealing personal information about the patients [20, 18, 6]. Federated learning is a machine learning paradigm that trains a single shared global model by collaboratively learning from different local models on distributed systems without sharing the data. A federated learning setup involves multiple clients and a global server [18]. The global server initializes the global model and sends the parameters to the clients. The clients then train their local models on the data present locally. Once the local models are trained, the parameters are sent to the global model for aggregation. The global model then uses an _aggregation algorithm_ to aggregate all the parameter updates and transmits the updated parameters back to the clients, and the cycle repeats until convergence. The federated learning setup allows the clients to preserve the privacy of their data. The success of a federated learning system depends largely on the aggregation algorithm used. For example, _Federated Averaging_ [18] is an aggregation algorithm in which all the parameters accumulated at the global server from different clients are averaged. 
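For concreteness, a minimal sketch of one such aggregation round is given below; flattened parameter vectors and optional data-size weights are assumed, which is a simplification of a full FL implementation:

```python
import numpy as np

def federated_averaging(client_params, client_sizes=None):
    """One aggregation round of Federated Averaging: the global parameters
    are the (optionally data-size weighted) average of the client parameters.
    client_params: list of 1-D parameter vectors, one per client."""
    stacked = np.stack(client_params)          # shape (n_clients, n_params)
    if client_sizes is None:
        return stacked.mean(axis=0)
    w = np.asarray(client_sizes, dtype=float)
    return (w[:, None] * stacked).sum(axis=0) / w.sum()
```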
However, not all clients would act truthfully in real-world scenarios, and there may be some _byzantine_ clients. A client is said to be byzantine if it acts maliciously, either intentionally due to the presence of an adversary or unintentionally due to faulty equipment or hardware issues [26]. Studies report that even a single byzantine worker can seriously threaten FL systems [4]. A malicious byzantine worker controlled by an adversary who knows the client's data and model parameters can induce _local poisoning attacks_ to degrade the performance of the global model in an FL system. A local poisoning attack in an FL system is a process through which the training of the global model is adversely affected by either data perturbation or perturbation of model parameters (or gradients) at the local client's side. These attacks are termed _local data poisoning attacks_ or _local model poisoning attacks_, respectively. Several studies indicate that standard aggregation methods, for instance federated averaging, fail in the presence of a byzantine client and reduce the performance of the global model. Therefore, to defend against attacks by byzantine clients, the global server uses _robust aggregation algorithms_ [26, 25]. This research studies the efficacy of state-of-the-art robust aggregation methods for FL systems for medical image analysis and highlights their vulnerability to local model poisoning attacks. We observe that the state-of-the-art robust aggregation methods heavily rely on the distance between malicious and benign client model parameters (or gradients). We argue that model poisoning attacks can exist in which the parameters or gradients of malicious clients remain close in Euclidean space to those of benign clients, thereby circumventing the existing state-of-the-art robust aggregation methods.

**Research Contribution:** We introduce the DISBELIEVE attack, which demonstrates the limitation of state-of-the-art robust aggregation methods for FL on medical images in defending against local model poisoning attacks. The novelty of the proposed attack lies in the fact that it maximizes the objective loss function while ensuring that the Euclidean distance between the malicious parameters and benign parameters is kept marginal. As a result, the attacker can optimally reduce the global model's performance without being detected by the aggregation algorithms. Experiments on three publicly available datasets of different medical image modalities confirm the efficacy of the DISBELIEVE attack in significantly reducing the classification performance of the global model (by up to 28%). We also benchmark two current state-of-the-art local model poisoning attack methods and demonstrate that the proposed DISBELIEVE attack is stronger, leading to higher performance degradation. Lastly, we demonstrate that the DISBELIEVE attack also works effectively on natural images, as similar trends are reported on the CIFAR-10 dataset.

## 2 Related Work

### Robust Aggregation Algorithms

Robust aggregation algorithms are defense methods that prevent malicious clients from significantly affecting parameter updates and global model performance. KRUM [3] is among the earliest methods for robust aggregation and proposes that for each communication round, only one of the clients is selected as an honest participant, and updates from the other clients are discarded. The client chosen as honest is the one whose parameters are closest in Euclidean space to a chosen number of its neighbors.
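A minimal sketch of this selection rule follows; it assumes flattened parameter vectors and uses the standard KRUM score (the sum of squared distances to the n - f - 2 closest peers, where f is the assumed number of byzantine clients). Function and variable names are illustrative.

```python
import numpy as np

def krum_select(updates, f):
    """Return the single update that KRUM treats as honest.

    updates: (n, d) array, one flattened parameter vector per client.
    f: assumed number of byzantine clients.
    """
    n = len(updates)
    diffs = updates[:, None, :] - updates[None, :, :]
    sq_dists = np.sum(diffs ** 2, axis=-1)           # pairwise squared L2
    scores = []
    for i in range(n):
        others = np.delete(sq_dists[i], i)           # drop distance to self
        scores.append(np.sort(others)[: n - f - 2].sum())
    return updates[int(np.argmin(scores))]           # lowest score wins
```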
On the other hand, Trimmed Mean [26] assumes malicious clients to have extreme parameter values and proposes to avoid malicious clients by selecting parameters around the median. Recently, the Distance-based Outlier Suppression (DOS) [1] algorithm was proposed to defend against byzantine attacks in FL systems for medical image analysis. DOS proposes to detect malicious clients using COPOD, a state-of-the-art outlier detection algorithm [15]. Subsequently, it assigns less weight to the parameters from those malicious clients. Specifically, it uses Euclidean and cosine distances between parameters from different clients and computes an outlier score for each client. Later, these scores are converted to weights by normalizing them using a softmax function. We note that all these state-of-the-art robust aggregation algorithms assume that malicious clients' parameters (or gradients) are significantly different from benign clients' parameters (or gradients). However, we hypothesize that an attack can be introduced such that the parameters (or gradients) of malicious and benign clients are only marginally different, while it can still severely degrade the global model's performance.

### Attacks in Federated Learning

There are various kinds of attacks in a federated learning paradigm, such as _inference attacks, reconstruction attacks, and poisoning attacks_ [5, 16, 11]. In inference attacks, the attacker can extract sensitive information about the training data from the learned features or parameters of the model, thus causing privacy issues. Reconstruction attacks, on the other hand, try to generate the training samples using the leaked model parameters [5]. GANs [7] have successfully extracted private information about a client's data even when model parameters are obscured through the use of differential privacy [9]. Poisoning attacks in a federated learning paradigm can be categorized as _data poisoning attacks_ or _model poisoning attacks_. Both attacks are designed to alter the behavior of the malicious client's model [17]. In data poisoning attacks, the attacker tries to manipulate the training data by changing the ground truth labels or carefully poisoning the existing data [23]. In model poisoning attacks, the attacker aims to alter the model parameters or gradients before sending them to the global server [17]. In this research, we design a model poisoning attack that can bypass state-of-the-art robust aggregation algorithms such as DOS, Trimmed Mean, and KRUM. We evaluate the performance of existing state-of-the-art model poisoning attacks such as the LIE attack [2] and the Min-Max attack [19]. We note that the LIE attack forces the malicious parameters (or gradients) to be bounded in a range \((\mu-z\sigma,\mu+z\sigma)\), where \(\mu\) and \(\sigma\) are the mean and standard deviation of the malicious clients' parameters, and \(z\) is a parameter that sets the lower and upper bounds for deviation around the mean [2]. On the other hand, Min-Max adds a deviation to the parameters or gradients and then scales them such that their distance from any non-malicious parameter is less than the maximum distance between two benign updates. However, instead of relying on the standard deviation to approximate the range across which malicious clients' parameters (or gradients) can be manipulated, the proposed attack computes the malicious parameters (or gradients) by maximizing the classification loss (as opposed to minimizing it) to degrade the global model's performance.
Additionally, we propose to approximate the range across which the parameters (or gradients) can be perturbed by evaluating the distance between the malicious clients' parameters (or gradients) in Euclidean space.

Figure 1: **Intuition behind our proposed local model poisoning attack:** (a) Green nodes represent the parameters of benign clients, pink nodes represent the parameters of malicious clients, the yellow node represents the mean of the malicious clients' parameters (i.e., the average of the pink nodes' parameters), and the red node represents the malicious parameters (of model \(M\)). We ensure that the shift of model \(M\)'s parameters from the mean is less than the threshold \(P_{dist}\), where \(P_{dist}\) is the maximum distance between any two attacked clients' parameters. (b) Green nodes represent the gradients of benign clients, pink nodes represent the malicious clients' gradients, the yellow node represents the mean of the malicious clients' gradients (i.e., the average of the pink nodes' gradients), the blue node represents the gradient of the trained malicious model \(M\), and the red node represents the gradient of the malicious model \(M\) after scaling. We ensure that after scaling the gradients, their distance from the mean of the gradients is less than the threshold \(G_{dist}\), where \(G_{dist}\) is the minimum distance between any two attacked clients' gradients.

```
1: Calculate the mean of parameters: \(\mu^{param}=\frac{1}{f}\sum_{i=1}^{f}W_{i}^{mal}\)
2: Set the threshold value: \(P_{dist}=Max_{i,k\in f,\,i\neq k}\,||W_{i}^{mal}-W_{k}^{mal}||_{2}^{2}\)
3: Combine all the training data from malicious clients
4: Initialize the malicious model \(M\) with parameters \(\mu^{param}\)
5: Train \(M\) with \(Loss=-Loss_{class}\) until: \(||W_{model}^{mal}-\mu^{param}||_{2}^{2}\leq P_{dist}\)
6: Return \(W_{model}^{mal}\)
```
**Algorithm 1** DISBELIEVE Attack on Parameters

```
1: Calculate the mean of parameters and gradients: \(\mu^{param}=\frac{1}{f}\sum_{i=1}^{f}W_{i}^{mal}\), \(\mu^{grad}=\frac{1}{f}\sum_{i=1}^{f}Grad_{i}^{mal}\)
2: Set the threshold value: \(G_{dist}=Min_{i,k\in f,\,i\neq k}\,||Grad_{i}^{mal}-Grad_{k}^{mal}||_{2}^{2}\)
3: Combine all the training data from malicious clients
4: Initialize the malicious model \(M\) with parameters \(\mu^{param}\)
5: Train \(M\) with \(Loss=-Loss_{class}\)
6: \(Grad_{model}^{mal}\leftarrow\) gradients of \(M\)
7: \(start\gets 0.001\), \(end\gets 1000\)
8: while \(|start-end|>0.01\) do
9:     \(sf\leftarrow(start+end)/2\)
10:    \(Grad_{new}^{mal}=sf*Grad_{model}^{mal}/||Grad_{model}^{mal}||\)
11:    \(diff=||Grad_{new}^{mal}-\mu^{grad}||_{2}^{2}\)
12:    if \(diff>G_{dist}\) then \(start=sf\) else \(end=sf\)
13: end while
14: Return \(Grad_{new}^{mal}\)
```
**Algorithm 2** DISBELIEVE Attack on Gradients

## 3 Proposed Method

Formally, we assume a total of \(n\) federated learning clients, out of which \(f\) clients (\(1<f<n/2\)) have been compromised such that, rather than improving the global model's accuracy, the compromised clients work towards decreasing its performance. We further assume that all the attackers corresponding to different malicious clients are working together, or that a single attacker controls all the malicious clients. The attacker thus has access to all the malicious clients' model parameters and training data. Our goal is to create malicious parameters or gradients that can bypass the robust aggregation algorithms and reduce the performance of the global model.
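Algorithms 1 and 2 above translate into short optimization loops. The sketch below is a minimal PyTorch-style rendering of both, ahead of the detailed walk-through in Sections 3.1 and 3.2; `model`, `loader`, and the optimizer settings are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F
from torch.nn.utils import parameters_to_vector, vector_to_parameters

def disbelieve_on_parameters(model, loader, mu_param, p_dist, lr=1e-3):
    """Algorithm 1: from the mean of the attacked clients' parameters,
    train to *maximize* the classification loss while staying within
    P_dist (squared L2) of that mean."""
    vector_to_parameters(mu_param, model.parameters())
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for x, y in loader:
        w_prev = parameters_to_vector(model.parameters()).detach().clone()
        loss = -F.cross_entropy(model(x), y)      # negated loss = ascent
        opt.zero_grad()
        loss.backward()
        opt.step()
        w = parameters_to_vector(model.parameters())
        if torch.sum((w - mu_param) ** 2) > p_dist:
            vector_to_parameters(w_prev, model.parameters())  # roll back
            break
    return parameters_to_vector(model.parameters()).detach()

def scale_malicious_gradient(grad_mal, mu_grad, g_dist, start=0.001, end=1000.0):
    """Binary search of Algorithm 2: scale the unit malicious gradient so
    its squared distance to the gradient mean stays below G_dist.
    The update rule mirrors Algorithm 2 line by line."""
    unit = grad_mal / grad_mal.norm()
    sf = (start + end) / 2
    while abs(start - end) > 0.01:
        sf = (start + end) / 2
        diff = torch.sum((sf * unit - mu_grad) ** 2)
        if diff > g_dist:
            start = sf
        else:
            end = sf
    return sf * unit
```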
In this direction, this research introduces a model poisoning attack (the DISBELIEVE attack) that creates a single malicious model (\(M\)) with access to the parameters, gradients, and training data of all the \(f\) clients. \(M\) serves as a proxy for the \(f\) clients and aims at pushing the output of the global model away from the distribution of the ground truth labels. To be specific, the malicious model (\(M\)) is trained to generate malicious parameters or gradients by minimizing the loss \(L_{model}=-L_{class}\), as opposed to benign clients, where the loss \(L_{model}=L_{class}\) is minimized. Here, \(L_{class}\) refers to the cross-entropy loss. Once the malicious parameters (or gradients) are computed, \(M\) forwards these malicious values to all the \(f\) clients, which then transmit them to the global model. Note that all the \(f\) clients receive the same malicious parameters (or gradients) from \(M\). Our work leverages the shortcomings of robust federated learning aggregation algorithms such as KRUM [3] and DOS [1], which are based on the assumption that malicious parameters or gradients are significantly different from the parameters or gradients of benign clients in Euclidean space. Therefore, to reduce the defense capabilities of these aggregation algorithms, it is essential to perturb the parameters (or gradients) so that their Euclidean distance from the benign clients' parameters (or gradients) does not become significant. This can be ensured if the Euclidean distance between the malicious parameters (or gradients) and the mean of the benign clients' parameters (or gradients) remains bounded. Due to the normal distribution of the data, it is safe to assume that the mean of the parameters (or gradients) of the clients controlled by the attacker is close to the mean of the benign clients' parameters (or gradients) in Euclidean space [2]. The local model poisoning attack can be introduced on model parameters or gradients [1, 2]. However, the critical difference between parameters and gradients is that gradients have direction and magnitude, whereas parameters only have magnitude. Hence, we propose different attacks on parameters and gradients. Details on the strategies for attacking parameters and gradients are provided in Section 3.1 and Section 3.2, respectively. The attacker initially chooses the clients it wants to attack and accumulates the chosen clients' model parameters, gradients, and training data. Subsequently, the attacker computes the mean of the chosen (attacked) clients' model parameters (\(\mu^{param}\)) and gradients (\(\mu^{grad}\)) and initializes a new malicious model \(M\) with these mean values.

\[\mu^{param}=\frac{1}{f}\sum_{i=1}^{f}W_{i}^{mal}\qquad\qquad\mu^{grad}=\frac{1}{f}\sum_{i=1}^{f}Grad_{i}^{mal}\]

Here, \(W_{i}^{mal}\) and \(Grad_{i}^{mal}\) refer to the model parameters and gradients of the \(i^{th}\) malicious client, respectively.

### DISBELIEVE attack on Parameters

The initialized malicious model \(M\) is trained on the accumulated training data to minimize the loss function \(L_{model}=-L_{class}\) until the Euclidean distance between the malicious model's (\(M\)) parameters and the mean values is less than the maximum distance between any two attacked clients' parameters.
\[||W_{model}^{mal}-\mu^{param}||_{2}^{2}\leq P_{dist}\quad\text{where}\quad P_{dist}=Max_{i,k\in f,\,i\neq k}\,||W_{i}^{mal}-W_{k}^{mal}||_{2}^{2}\]

Here, \(W_{model}^{mal}\) refers to the malicious parameters after training the malicious model \(M\), and \(P_{dist}\) refers to a threshold. The threshold \(P_{dist}\) is critical to ensure a successful attack, as it controls how far the malicious parameters can be from the mean of the parameters in Euclidean space. In the proposed attack, we suggest setting this value to the maximum Euclidean distance between any two malicious clients' parameters. Intuitively, this is a reliable upper bound on how far the malicious parameters may deviate within a fixed bounded Euclidean region around the mean (see Figure 1a). The pseudo-code for the attack is given in Algorithm 1.

### DISBELIEVE attack on Gradients

For attacking gradients, as described in Algorithm 2, we train the malicious model \(M\) with the same loss function, \(Loss=-Loss_{class}\), but without any thresholding. Once the model \(M\) is trained, we accumulate the malicious gradients (\(Grad_{model}^{mal}\)) and scale them by a scaling factor \(sf\) to make sure that their distance from the mean of the malicious clients' gradients (\(\mu^{grad}\)) is smaller than the minimum distance between any two malicious clients' gradients (\(G_{dist}\)) (see Figure 1b).

\[G_{dist}=Min_{i,k\in f,\,i\neq k}\,||Grad_{i}^{mal}-Grad_{k}^{mal}||_{2}^{2}\]

To find the optimum scaling factor (\(sf\)), we use binary search [19]. We initialize a start value of 0.001 and an end value of 1000. An optimal \(sf\) is computed using the divide-and-conquer binary search algorithm between these values, which makes sure that after scaling the unit gradient vector, its distance to the mean of the gradients (\(\mu^{grad}\)) is less than \(G_{dist}\):

\[||sf*\frac{Grad_{model}^{mal}}{||Grad_{model}^{mal}||}-\mu^{grad}||_{2}^{2}\leq G_{dist}\]

For the attack on gradients, the minimum distance (\(G_{dist}\)) is preferred over the maximum distance (\(P_{dist}\)) used when attacking parameters. This preference arises because maximizing the objective loss function results in gradients pointing in the direction opposite to that of the benign gradients. By using the minimum distance, we can prevent malicious gradients from becoming outliers.

## 4 Experiments

### Datasets

**CheXpert-Small**: CheXpert [10] is a large publicly available dataset containing over 200,000 chest X-ray images from 65,240 patients. However, consistent with the experimental protocol used by the state-of-the-art DOS [1], we use the smaller version of CheXpert, also known as CheXpert-small, which contains 191,456 chest X-ray images. The dataset contains 13 pathological categories. A single observation from the dataset can have multiple pathological labels, and each sample's pathological label is classified as either negative or positive. Consistent with the state-of-the-art aggregation method DOS [1], we preprocess all the images by rescaling them to 224\(\times\)224 pixels using the torchxrayvision library.

**Ham10000**: Ham10000 [24], or HAM10k, is a publicly available benchmark dataset containing dermatoscopic images of common pigmented skin lesions. It is a multi-class dataset with seven diagnostic categories and 10000 image samples. As suggested in [1], we use this dataset to evaluate model performance in non-iid settings, where each image is resized to 128\(\times\)128.
**Breakhis**: The Breakhis dataset [22] is a public breast cancer histopathological database that contains microscopic images of breast cancer tissues. The dataset contains 9109 images from 82 different patients. The images are available at magnification scales of 40X, 100X, 200X, and 400X. Each image is 700\(\times\)460 pixels, and we rescale each image to 32\(\times\)32 for our classification task. We use this dataset for binary classification of 400X-magnified microscopic images, where we classify the cancer present in the images as either benign or malignant.

**CIFAR-10**: CIFAR-10 [14] is a popular computer vision dataset that contains 60000 natural images of size 32\(\times\)32. The dataset contains ten classes, and each class has 6000 images. 50000 images are reserved for training, and 10000 images are used for testing.

### Experimental Setup and Implementation Details

The experimental setup used in this research is consistent with the experimental protocols suggested in [1]. Accordingly, we use the CheXpert-Small [10] and Ham10k [24] datasets for parameter-based attacks. The CheXpert-small dataset is used to train a ResNet-18 [8] model with a batch size of 16 for 40 communication rounds, with the number of local epochs set to 1, whereas the Ham10k dataset is trained on a custom model with two convolutional layers and three fully connected layers with a batch size of 890 for 120 communication rounds, with the number of local epochs set to 3. For both datasets, the number of clients is fixed at 10, the number of attackers is fixed at 4, and the learning rate is set to 0.01. To preserve the privacy of clients and their data, federated learning setups usually share gradients instead of model parameters. Hence, we also evaluate our attack for gradient aggregation on the Breakhis dataset [22]. Furthermore, to assess the generalization ability of the proposed DISBELIEVE attack on natural images, we also evaluate it on the CIFAR-10 dataset with a gradient aggregation strategy at the global server. For experiments on the Breakhis dataset, a VGG-11 [21] model is trained for binary classification. Training occurs for 200 communication rounds with a batch size of 128 and a learning rate of 0.0001. For the CIFAR-10 dataset, we use the VGG-11 [21] model with ten output classes for 500 communication rounds with a batch size of 1000 and a learning rate of 0.001. The Adam optimizer was used for both datasets. The total number of clients and attackers for both datasets is fixed at 10 and 3, respectively.

Figure 3: Performance of different attacks on Breakhis (top row) and CIFAR-10 (bottom row) datasets under different gradient aggregation methods. Left to right (in order): AUC scores when attacks are made on DOS, Trimmed Mean and KRUM.

Figure 2: Performance of different attacks on Ham10k (top row) and CheXpert (bottom row) datasets under different parameter aggregation methods. Left to right (in order): AUC scores when attacks are made on DOS, Trimmed Mean and KRUM.

## 5 Results and Discussions

### Baselines

The DISBELIEVE attack is evaluated against three state-of-the-art defense methods: DOS [1], Trimmed Mean [26], and KRUM [3]. Comparisons are also made with prominent attacks, including LIE [2] and Min-Max [19], under different defense methods. Under any defense, AUC scores are highest in the absence of attacks. The LIE attack slightly reduces AUC scores while remaining relatively weak due to parameter bounding.
Conversely, introducing noise and scaling parameters makes the Min-Max attack more potent, consistently reducing AUC scores more significantly across various aggregation methods.

### Vulnerability of State-of-the-art Defense Methods

The proposed DISBELIEVE attack reveals the vulnerability of the current state-of-the-art robust aggregation algorithms (Trimmed Mean [26], KRUM [3], and DOS [1]) to local model poisoning attacks. We empirically validate that our proposed local model poisoning attack (the DISBELIEVE attack) can successfully circumvent all three state-of-the-art robust aggregation algorithms (refer to Figure 2 and Figure 3). For both parameter and gradient aggregation, the DISBELIEVE attack consistently reduces the global model's area under the curve (AUC) scores on all three benchmark medical image datasets. Furthermore, to assess the effectiveness of the proposed DISBELIEVE attack on natural images apart from the specialized medical images, we additionally run the DISBELIEVE attack on a popular computer vision dataset, CIFAR-10. For natural images, we also find (refer to Figure 3) that the DISBELIEVE attack reduces the global model's AUC score for the different state-of-the-art aggregation algorithms DOS, Trimmed Mean, and KRUM. Tables 1 and 2 show that when subjected to the DISBELIEVE attack, the AUC scores fall drastically for all datasets compared to the AUC scores in the case of no attack. Therefore, these results demonstrate the vulnerability of state-of-the-art robust aggregation methods to the proposed local model poisoning attack.

| Dataset | Attack | DOS | Trimmed Mean | KRUM |
|---|---|---|---|---|
| Ham10k | No Attack | 0.72 | 0.75 | 0.70 |
| Ham10k | LIE Attack | 0.70 | 0.74 | 0.70 |
| Ham10k | Min-Max Attack | 0.61 | 0.68 | 0.58 |
| Ham10k | Ours | 0.52 | 0.70 | 0.51 |
| CheXpert | No Attack | 0.71 | 0.71 | 0.70 |
| CheXpert | LIE Attack | 0.69 | 0.71 | 0.65 |
| CheXpert | Min-Max Attack | 0.59 | 0.70 | 0.59 |
| CheXpert | Ours | 0.44 | 0.52 | 0.43 |

Table 1: Area Under the Receiver Operating Characteristic Curve (AUC) scores with different types of poisoning attacks on model parameters

| Dataset | Attack | DOS | Trimmed Mean | KRUM |
|---|---|---|---|---|
| Breakhis | No Attack | 0.81 | 0.78 | 0.83 |
| Breakhis | LIE Attack | 0.84 | 0.77 | 0.79 |
| Breakhis | Min-Max Attack | 0.50 | 0.74 | 0.72 |
| Breakhis | Ours | 0.50 | 0.75 | 0.50 |
| CIFAR-10 | No Attack | 0.83 | 0.84 | 0.81 |
| CIFAR-10 | LIE Attack | 0.64 | 0.71 | 0.60 |
| CIFAR-10 | Min-Max Attack | 0.50 | 0.60 | 0.50 |
| CIFAR-10 | Ours | 0.50 | 0.78 | 0.50 |

Table 2: Area Under the Receiver Operating Characteristic Curve (AUC) scores with different types of poisoning attacks on model gradients

### Superiority of DISBELIEVE attack over State-of-the-art Local Model Poisoning Attacks

The state-of-the-art robust aggregation algorithm for medical images, DOS, was previously evaluated only against additive Gaussian noise, scaled parameter attacks, and label flipping attacks. We additionally benchmark the performance of two state-of-the-art model poisoning attacks, namely Min-Max [19] and LIE [2], on all three medical image datasets (refer to Figure 2 and Figure 3). The results establish the superiority of the proposed DISBELIEVE attack over state-of-the-art model poisoning attacks on different medical image datasets.
Under DOS and KRUM aggregation, the DISBELIEVE attack reduces the global model's AUC score by a more significant margin than both Min-Max and LIE for all datasets. In the case of Trimmed Mean, the DISBELIEVE attack is comparable to the Min-Max attack on the Ham10k (parameter aggregation) and Breakhis (gradient aggregation) datasets, and better than both the Min-Max and LIE attacks on the CheXpert (parameter aggregation) dataset. On the natural image dataset (CIFAR-10), we likewise observe that the DISBELIEVE attack performs better than LIE and Min-Max under the DOS and KRUM defenses. Tables 1 and 2 compare the state-of-the-art model poisoning attacks and the proposed DISBELIEVE attack under different state-of-the-art robust aggregation algorithms for parameter and gradient aggregation, respectively.

## 6 Conclusion and Future Work

This research highlights the vulnerability of state-of-the-art robust aggregation methods for federated learning on medical images. Results obtained on three public medical datasets reveal that distance-based defenses fail once the attack is designed to ensure that the distance between malicious and honest clients' parameters or gradients is bounded by the maximum or minimum distance between the parameters or gradients of any two attacked clients, respectively. Moreover, we demonstrate that the proposed DISBELIEVE attack is also effective on natural images, beyond domain-specific medical images. In the future, we plan to design a robust aggregation algorithm for federated learning in medical images that can withstand the proposed local model poisoning attack.

**Acknowledgment.** This work was done as a part of the IMI BigPicture project (IMI945358).
2303.14937
LEURN: Learning Explainable Univariate Rules with Neural Networks
In this paper, we propose LEURN: a neural network architecture that learns univariate decision rules. LEURN is a white-box algorithm that results in univariate trees and makes explainable decisions at every stage. In each layer, LEURN finds a set of univariate rules based on an embedding of the previously checked rules and their corresponding responses. Both the rule finding and the final decision mechanisms are weighted linear combinations of these embeddings, hence the contributions of all rules are clearly formulated and explainable. LEURN can select features, extract feature importance, provide semantic similarity between a pair of samples, be used in a generative manner, and give a confidence score. Thanks to a smoothness parameter, LEURN can also controllably behave like decision trees or vanilla neural networks. Besides these advantages, LEURN achieves comparable performance to state-of-the-art methods across 30 tabular datasets for classification and regression problems.
Caglar Aytekin
2023-03-27T06:34:42Z
http://arxiv.org/abs/2303.14937v1
# LEURN: Learning Explainable Univariate Rules with Neural Networks

###### Abstract

In this paper, we propose LEURN: a neural network architecture that learns univariate decision rules. LEURN is a white-box algorithm that results in univariate trees and makes explainable decisions at every stage. In each layer, LEURN finds a set of univariate rules based on an embedding of the previously checked rules and their corresponding responses. Both the rule finding and the final decision mechanisms are weighted linear combinations of these embeddings, hence the contributions of all rules are clearly formulated and explainable. LEURN can select features, extract feature importance, provide semantic similarity between a pair of samples, be used in a generative manner, and give a confidence score. Thanks to a smoothness parameter, LEURN can also controllably behave like decision trees or vanilla neural networks. Besides these advantages, LEURN achieves comparable performance to state-of-the-art methods across 30 tabular datasets for classification and regression problems.

## 1 Introduction

Although there is an immense amount of work on explainable artificial intelligence, a human-explainable white-box neural network still does not exist. The efforts in explainable artificial intelligence have focused on saliency maps [44, 53, 56, 41, 11, 15, 54, 30, 13], approximation by explainable methods [21, 25, 42, 52, 16, 51], or hybrid models [31, 37, 32, 33, 39, 2, 29, 49, 50]. Many of these works employ some form of decision trees in order to reach explainability. This is due to the commonly held understanding that decision trees are more explainable than neural networks, since they extract clear rules on input features. An interesting fact pointed out by several studies is that neural networks can be equivalently represented as decision trees [5, 55, 7, 34, 46]. This seems to conflict with trying to explain neural networks with decision trees, as they are decision trees themselves. However, the nature of the decision trees extracted by neural networks differs from commonly used ones in two aspects: they are multivariate, and their number of branches grows exponentially with depth. An exponentially growing number of branches hurts global explainability, as it may even become infeasible to store the tree. Although one may still focus on local (sample-wise) explanations, multivariate rules are much harder to explain than univariate ones, as they mix features. Finally, with increasing neural network depth, it becomes even harder to make sense of these rules, as the contribution of each rule is not clear. Motivated by the above observations, in this paper we propose a special neural network architecture (LEURN) that provides an exact univariate decision tree where each rule contributes linearly to the next rule selection and the final decision. In the first block, LEURN directly learns univariate rules and outputs an embedding of the rules and the responses to them. Then a linear layer is applied on this embedding in order to find the next rule. In successive blocks, all embeddings from previous blocks are concatenated and given as input to a linear layer. The final layer takes all previous embeddings as input and consists of a single linear layer and an activation chosen by the application: sigmoid for binary classification, softmax for multiclass classification, and identity for regression.
Thus, both rule selection and the final decision are white-box processes with clear, linear contributions from previous rules. Besides human explainability, the proposed architecture provides additional properties such as feature importance, feature selection, pairwise semantic similarity, generative usage, and confidence scoring.

## 2 Related Work

### Neural Networks for Tabular Data

We reviewed the difference between commonly used decision trees and neural-network-extracted ones in the previous section. This difference also shows itself in the performance of neural networks and decision trees on tabular data. In particular, extensive comparisons of deep learning methods and tree-based methods were made in [43], [8] and [19]. The outcomes tend to favor tree-based methods in terms of accuracy, training speed, and robustness to dataset size. Following our discussion in the previous section, the performance difference should come either from the multivariate rules or from the exponentially growing branches, as these are the only differences between neural network trees and common ones. This highly motivates us to evaluate LEURN on tabular data, as it removes one of these possibilities by resulting in univariate trees. Hence, we believe the results of this evaluation are crucial for understanding whether the performance gap comes from exponentially growing branches. Motivated by the above, our application focus is structured data; hence we review deep-learning-based methods for tabular data next. The majority of works investigate feature transformation [18][47][57][10], transfer learning [27][40], self-supervised learning [6][4][20][45], attention [4][20][45], regularizations [23], univariate decisions [1][36] or explicitly tree-like models [22][35] in order to improve deep learning performance on tabular data.

**Feature Transformation:** The approach in [47] suggests transforming tabular data into two-dimensional embeddings, which can be thought of as an image-like format. These embeddings are then utilized as the input for traditional convolutional neural networks. [57] and [10] also take a similar approach. [18] finds that embeddings of numerical features are beneficial for deep learning methods.

**Transfer Learning:** In [27], the authors highlight a key advantage of neural models over others: their ability to learn reusable features and to be fine-tuned in new domains. The authors show this using upstream data. [40] also investigates pre-training strategies for tabular data.

**Attention:** [4] uses sequential attention to select relevant features at each decision step. TabTransformer [20] also makes use of self-attention. SAINT [45] applies attention differently: both on rows (samples) and columns (features). [26] is another method that utilizes both row and column attention.

**Self-Supervised Learning:** [4][20][45][6] provide ways to incorporate self-supervised learning with unlabeled data to achieve better performance, especially in the small-dataset-size regime.

**Regularization:** [23] investigates the effects of regularization combinations on MLPs' tabular data performance and finds that with the correct regularization, MLPs can outperform many alternatives.

**Univariate Decisions:** Neural Additive Models (NAMs) [1] are a type of ensemble consisting of multiple multilayer perceptrons (MLPs), each MLP being responsible for one input feature, thus making univariate decisions.
The output values are summed and then passed through a logistic sigmoid function for binary classification, or simply summed for regression. [36] proposes a new sub-category of NAMs that uses a basis decomposition of shape functions. The basis functions are shared among all features and are learned together for a specific task, making the model more scalable for large datasets with high-dimensional, sparse features.

**Differentiable Explicit Tree Methods:** Although MLPs with piece-wise linear activation functions are already differentiable implicit trees [5], there have been efforts to create other special architectures that result in explicit trees, as follows. [22] proposes a gating mechanism for feature representation with attention-based feature selection and builds on differentiable non-linear decision trees. In [35], the authors present NODE, a deep learning architecture for tabular data that combines the strengths of ensembles of oblivious decision trees and end-to-end gradient-based optimization with multi-layer representation learning.

We find that univariate decisions and differentiable tree methods are promising in terms of interpretability and in line with our work. Yet, the extracted decision trees in the literature are either soft [35] or non-linear [22], and the univariate decisions made in [1][36] are still not explainable. On the contrary, LEURN provides an explainable and exact univariate decision tree. Out of the above-reviewed methods, the approach in [35] (NODE) is the most similar to ours, hence we find it important to highlight some key differences here. In NODE, each layer contains multiple trees, where each tree is trained via differentiable feature selection and decision making via entmax. Note that the decisions and feature selections made are soft, and there is no exact corresponding tree. In our proposed method, we make hard decisions via a quantized \(tanh\), which results in exact univariate decision trees. Another difference is that our feature selection is not explicit, but implicit through learnable feature importances. In NODE, the main computational power is spent on feature selection, and thresholds for the selected features are directly learned. Instead, in our work, we spend the main computational power on learning thresholds/rules as weighted combinations of a special embedding of previous responses and previous thresholds. Most importantly, our method is able to explain exactly why a threshold was selected in a particular layer via the linear contributions of previous thresholds and their responses. Moreover, our method has additional properties such as providing semantic similarity, generative usage, confidence scoring, etc.

### Post-processing Explainers

The linear contributions of previous rules to other rules and to the final decision in LEURN are similar to the additive feature attributions mentioned in [28]. LIME [38] and SHAP [28] stand out as popular approaches, as they approximate additive feature importances per sample. These approaches are not novel neural network architectures; rather, they extract approximate explanations from existing neural networks. Thus, their additive feature importances are local approximations. LEURN differs from these approaches because it is a novel neural network architecture and not a post-processing approximate explainer. Moreover, LEURN's additive contributions are exact, not approximations, as they are a built-in feature of the architecture.
Finally, LIME's and SHAP's additive contributions are only evident for features at the decision level, whereas LEURN's additive contributions apply at every processing stage, both to the intermediate rules used for finding the next rule and to the final decision.

## 3 Proposed Method

### Decision Tree Analysis of Vanilla Neural Networks

A vanilla neural network with \(n\) layers can be formulated as in Eq. 1.

\[NN(\mathbf{x})=\mathbf{W}_{n-1}^{T}\sigma(\mathbf{W}_{n-2}^{T}\sigma(...\mathbf{W}_{1}^{T}\sigma(\mathbf{W}_{0}^{T}\mathbf{x}+\mathbf{\beta}_{0})+\mathbf{\beta}_{1}...)+\mathbf{\beta}_{n-2})+\mathbf{\beta}_{n-1}\tag{1}\]

In Eq. 1, \(\mathbf{W}_{i}\) and \(\mathbf{\beta}_{i}\) are, respectively, the weight matrix and bias vector of the network's \(i^{th}\) layer, \(\sigma\) is an activation function, and \(\mathbf{x}\) is the input. Let us consider that \(\sigma\) is a piece-wise linear activation. Then, a layer makes decisions based on whether linear combinations of input features are larger than a set of thresholds. This fact was used to extract decision trees from neural networks in [5, 55, 7, 34, 46]. The extracted decision trees differ from commonly used ones in two aspects, which we review as follows.

### Exponentially Growing Branches

Neural networks extract trees of exponentially growing width due to shared rules embedded as layer weights. In each layer \(i\), the input's response to a rule is checked per filter \(j\), and the effective operation depends on which region of the activation the response falls into. This results in \(k^{\sum_{i}m_{i}}\) possible processing paths, where \(k\) is the total number of linear regions in the activation and \(m_{i}\) is the number of filters in layer \(i\). This massive tree size hurts explainability in the global sense, as it prevents seeing the whole decision mechanism at once. With today's very deep architectures, it even becomes infeasible to store neural-network-extracted trees. But this property does not prevent local explainability, where the goal is to understand the decision mechanism of the neural network per sample. At this point, we would also like to mention that exponential partitioning via shared rules has close connections to extrapolation, and thus generalization [5], hence this may be a favorable property to keep.

### Multivariate Decisions

The decision rules of neural networks are multivariate. This is obvious, as the rule checked per filter is whether a linear combination of all features is larger than the value indicated by the negative bias for that filter. The set of multivariate rules extracted by vanilla neural networks is difficult to make sense of, as the rules mix features. This is especially true for a large number of features, which is usually the case for modern neural networks. Moreover, multivariate decisions make the identification of redundant rules very difficult. As stated in [5], neural-network-extracted decision trees may contain redundant rules, and the real depth/width may actually be much smaller than the exponential formulation provided in the previous subsection suggests. However, checking whether a set of multivariate rules encapsulates another set is very difficult compared to univariate rules. Thus, in this work we wish to avoid this property of neural networks.

### LEURN: Learning Univariate Rules by Neural Networks

In this section, we propose a special neural network architecture that results in univariate rules while keeping the generalization abilities of neural networks. We will refer to this architecture as LEURN.
The main idea in LEURN is to learn univariate rules, in the form of thresholds in each layer, to be directly applied to the batch-normalized (\(BN\)) neural network input \(\mathbf{x}\). In layer \(0\), we directly learn a rule vector \(\mathbf{\tau}_{0}\), the elements of which are separate rules for each input variable. To find the rule vectors for the next layers, we employ the following. For a layer \(i\), we first find an indicator vector \(\mathbf{s}_{i}\) which indicates the category of each input feature \(j\) with respect to the rule \(\tau_{ij}\). This is achieved via a quantized \(tanh\) activation, which extracts thresholds around \(\mathbf{\tau}_{i}\) and outputs a unique value for each category whose boundaries are indicated by these thresholds. Then, we find an embedding \(\mathbf{e}_{i}\) by element-wise multiplication of the indicator vector \(\mathbf{s}_{i}\) with the activated threshold vector \(tanh(\mathbf{\tau}_{i})\). This jointly encodes the thresholds used in the layer and how the input responded to them. The next thresholds \(\mathbf{\tau}_{i+1}\) are learned by a linear layer (FC) applied on the concatenated embeddings from previous layers \(\mathbf{e}_{0:i}\). LEURN's rule learning is formulated in Eq. 2 and visualized in Fig. 1.

\[\begin{split}\mathbf{s}_{i}&=qtanh(BN(\mathbf{x})+\mathbf{\tau}_{i})\\ \mathbf{e}_{i}&=\mathbf{s}_{i}\,tanh(\mathbf{\tau}_{i})\\ \mathbf{\tau}_{i+1}&=\mathbf{W}_{i}^{T}\mathbf{e}_{0:i}+\mathbf{\beta}_{i}\end{split}\tag{2}\]

The output of the neural network \(\mathbf{y}\) is calculated from all embeddings as follows.

\[\mathbf{y}=\gamma(\mathbf{W}_{n-1}^{T}\mathbf{e}_{0:n-1}+\mathbf{\beta}_{n-1})\tag{3}\]

In Eq. 3, \(\gamma\) is the final activation of the neural network. This can be sigmoid or softmax for binary or multi-label classification, or simply the identity for regression problems. In summary, LEURN makes univariate decisions via the \(\tau\) rules and the quantized \(tanh\), with the number of branches equal to the number of \(tanh\) quantization regions. For every branch, a new rule or the final decision is found via a weighted linear combination of embeddings, so the contribution of each rule and response is additive.

#### 3.2.1 Properties

Next, we make a few observations about LEURN.

**Equivalent Univariate Decision Trees**: LEURN results in univariate decision trees. For an \(n\)-dimensional input, in each layer there are \(k^{n}\) possible indicator vectors, which correspond to \(k^{n}\) branches per layer, where \(k\) is the number of regions in the quantized \(tanh\). Each branch is separated into another \(k^{n}\) branches in the next layer. Many of these branches are very unlikely to be utilized during training due to the limited amount of data, so this rule-sharing property helps generalization at inference time, as the rules in unseen categories are made up from the seen ones. Note that this final property is a general property of vanilla neural networks as well.

**Explainability**: LEURN is more explainable than vanilla neural networks. In vanilla neural networks, the rules in the equivalent decision tree are in the form of multivariate inequalities, whereas in LEURN they are in the form of univariate inequalities. Univariate inequalities are easier for humans to make sense of. LEURN is in this sense a white box, which makes explanations as in the following hypothetical example: the model checked and found that the price/earnings ratio (PER) of a company is smaller than 20 and the operating income increase (OII) was more than 5\(\%\).
Based on this outcome, the model then checked and found that the PER of the company is larger than 15 and the OII was less than 10\(\%\). Therefore, the model decided to invest 1k$ in the company. The contribution of each rule check to the final invested amount was as follows: PER being smaller than 20: +1000$, operating income increase being more than 5\(\%\): +1000$, PER being larger than 15: -500$, operating income increase being less than 10\(\%\): -500$. At a higher granularity of explainability, LEURN also provides how the PER\(<\)20 and OII\(>\)5 rules linearly contributed to finding the PER\(>\)15 and OII\(<\)10 rules.

**Easy Architecture Search**: The last linear layer has a fixed number of output units defined by the problem, and the remaining linear layers also have a fixed number of output units, equal to the neural network input dimension. This makes neural architecture search easier, as there are no hyperparameters in the form of the number of features in a layer. The only architecture-related hyperparameters are the depth of the network and the number of quantization regions for \(tanh\).

**Feature Selection**: LEURN can do feature selection as follows. Each embedding element is directly related to a particular input feature. The embedded information is critical in finding the next rules and in the output of the neural network. Hence, if the absolute value of the unit in \(\mathbf{W}_{i}\) that corresponds to a particular feature is zero or close to zero, it is a clear indicator that the feature is uninformative and hence not selected in that layer.

**Feature Importance**: LEURN can extract global feature importance. The last layer's input is all the embeddings used throughout the neural network. Feature importance can then be measured simply by checking the weighted (via \(\mathbf{W}_{n-1}\)) contribution of each embedding element (hence the related feature and rule) to the classification, averaged over the training set.

**Pairwise Semantic Similarity**: LEURN can measure semantic similarity between two samples by using any popular distance metric on the embeddings of these samples.

**Generative Usage**: LEURN can be directly used in a generative manner. A training sample can be fed to the neural network and its univariate rules extracted. This simply defines the category that the sample belongs to, in the form of upper and lower limits for each input feature. Then, one can generate a different sample from the same category by sampling randomly from the univariate rules per feature. This is very difficult to do with vanilla neural networks, as it is harder to sample from multivariate inequalities.

**Parametrized Decision Smoothness**: Finally, LEURN provides controlled smoothness in model predictions based on the number of quantization regions \(k\) in the \(qtanh\) function. As \(k\) grows, decision boundaries become smoother; this hyperparameter is useful for providing alternatives to different datasets based on their properties.

Figure 1: LEURN's Rule Learning

## 4 Experimental Results

### Preliminary Experiments on Toy Data

#### 4.1.1 Parametrized Decision Smoothness

In [19], the authors stated three key differences between neural-network-based methods and tree-based methods: decision boundary smoothness, robustness to uninformative features, and rotation invariance. In this section, we experiment on the behaviour of LEURN on a toy dataset and observe that with different \(qtanh\) quantization regions \(k\), LEURN behaves differently in terms of the above three aspects.
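As a compact reference for the experiments that follow, the sketch below renders the rule-learning step of Eq. 2 as a PyTorch module. The paper does not spell out the exact quantizer, so we assume \(k\) uniform bins on the \(tanh\) output with a straight-through gradient; the module boundaries and names are likewise our assumptions.

```python
import torch
import torch.nn as nn

class LEURNBlock(nn.Module):
    """One LEURN layer (Eq. 2): respond to thresholds tau with a quantized
    tanh, embed (rule, response), and predict the next thresholds from
    all embeddings so far."""
    def __init__(self, dim, ctx_dim, k):
        super().__init__()
        self.k = k
        self.fc = nn.Linear(ctx_dim, dim)    # maps e_{0:i} to tau_{i+1}

    def qtanh(self, t):
        y = torch.tanh(t)
        # assumed quantizer: k uniform bins on (-1, 1), mapped to bin centers
        idx = torch.clamp(torch.floor((y + 1) / 2 * self.k), max=self.k - 1)
        q = (idx + 0.5) * 2 / self.k - 1
        return y + (q - y).detach()          # straight-through estimator

    def forward(self, x_bn, tau, e_prev=None):
        s = self.qtanh(x_bn + tau)           # indicator of the rule category
        e = s * torch.tanh(tau)              # joint rule/response embedding
        e_all = e if e_prev is None else torch.cat([e_prev, e], dim=-1)
        return self.fc(e_all), e_all         # next thresholds, running context
```

Here `ctx_dim` must equal the width of the concatenated embeddings feeding the layer (i.e., `(i + 1) * dim` for layer `i`), and `x_bn` is the batch-normalized input.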
For all experiments in this section, we use the Half Moon dataset with 10000 training samples and 1000 validation samples. Accuracies are reported as the best validation performance within a training run. First, we investigate decision boundary smoothness. For all LEURNs, we used a fixed depth of \(d=2\) and experimented with \(qtanh\) quantization regions in the following set: \(k\in\{2,5,10,20\}\). As can be observed from Fig. 2, as the number of quantization regions grows, the decision boundary becomes smoother. Note that this result is trivially evident without this experiment, but we still provide these figures for completeness. Second, we rotate the Half Moon dataset with angles in the following set: \(\{0,11.25,22.5,45\}\). We average the results of 10 trainings for each quantization region in \(\{2,5,10,20\}\). To provide a challenging case, we fixed the network depth to 1 in this experiment. As can be seen from Table 1, as the number of quantization regions grows, LEURN becomes more robust to rotation, which is vanilla-neural-network-like behaviour according to [19]. Smaller quantization regions struggle to be robust to rotation, similar to decision-tree behaviour, again according to [19]. Finally, we experiment on uninformative features. We add 10 additional randomly generated features to the Half Moon dataset in order to provide a challenging case with dominating uninformative features. We used a fixed depth of 3 in this experiment so that all variants are able to achieve a perfect score without uninformative features. We observed that all LEURN variants with different quantization regions can successfully handle uninformative features by achieving a perfect score on the validation set. We conclude from these three experiments that LEURN provides smoother boundaries (MLP-like) with rotation-invariant performance for higher quantization regions, and sharper boundaries with rotation-variant performance for lower quantization regions (DT-like). In all cases, LEURN was found to be robust to uninformative features.

#### 4.1.2 Global Feature Importance

We repeat our last experiment from the previous section with \(k=2,d=2\). We measure feature importance as the contribution to the final classification. The process is as follows. The input of the last layer is a concatenation of the embeddings used throughout LEURN. Each embedding unit corresponds to information from a particular feature. Thus, the product of each embedding unit with its corresponding weight unit in the last layer is treated as that feature's importance. As there are multiple embeddings per feature, we first sum the contributions, then take the absolute value of the sum to get the final importance value. Note that these values are calculated over the training set. With this method, we find that the informative features in the last experiment are given at least 4.34 times more importance than the uninformative ones. Feature importance can also be calculated in all intermediate fully connected layers, and it can also be calculated locally, i.e., for a particular sample instead of over the entire training set. A short code sketch of this computation is given below.

| \(k\) | 2 | 5 | 10 | 20 |
|---|---|---|---|---|
| \(\mu\) | 96.79 | 98.44 | 98.76 | 99.01 |
| \(\sigma\) | 3.22 | 1.91 | 1.64 | 1.26 |

Table 1: Means and standard deviations of LEURNs with different \(qtanh\) quantization regions, across different rotations.

Figure 2: Decision boundaries of learned LEURN models with different \(qtanh\) quantization regions \(k\), for the Half Moon dataset.
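The sketch below implements the importance computation of Section 4.1.2; the array names and the per-sample averaging order are our assumptions about an otherwise straightforward calculation.

```python
import numpy as np

def global_feature_importance(E, w_last, feat_of_unit):
    """E: (num_samples, num_units) embeddings fed to the last layer.
    w_last: (num_units,) last-layer weights for the class logit.
    feat_of_unit: (num_units,) input-feature index of each embedding unit.
    Returns one importance value per input feature."""
    contrib = E * w_last                      # weighted contribution per unit
    n_feat = int(feat_of_unit.max()) + 1
    importance = np.zeros(n_feat)
    for f in range(n_feat):
        per_sample = contrib[:, feat_of_unit == f].sum(axis=1)  # sum the feature's units
        importance[f] = np.abs(per_sample).mean()               # |sum|, averaged over samples
    return importance
```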
#### 4.1.3 Semantic Similarity, Generative Usage and Confidence Scores

In this section, we use the RBF kernel response of a pair of LEURN embeddings as a similarity score between the corresponding pair of samples. We use the Half Moon dataset with several \(k\) and \(d\) hyperparameter choices. In Fig. 3, we select a reference sample from the training dataset, indicated by the green dot, and visualize semantically similar regions with red dots. We set the opacity parameter to the similarity value, so stronger red dots indicate samples more similar to the reference. Smaller \(k\) and \(d\) result in more distinct and local categories, where everything in the category is strongly similar to the reference and similarity stops sharply at category boundaries. On the contrary, larger \(k\) and \(d\) result in distributed and non-local similarities. One can select the desired behaviour based on the application. As LEURN generates univariate trees, there is a distinct category for each sample; this category can easily be found by storing the thresholds employed throughout LEURN and finding the closest upper and lower boundaries among them. Note that these thresholds are determined by the \(\tau\) vectors and the \(qtanh\) quantization regions. Every sample within these boundaries has exactly 0 embedding distance, and hence similarity 1, to the reference sample. An example of generated samples can be observed in Fig. 4. We observe that as \(k\) and \(d\) grow, the regions get tighter. This is expected, as both \(k\) and \(d\) partition the space into more regions. At first, this may seem to conflict with the results of Fig. 3; however, we remind the reader that most red dots in Fig. 3 do not have exactly 1 similarity, and are thus (although semantically similar) not in exactly the same category as the reference sample. Moreover, some regions may vary depending on the category that each training finds due to random initialization. The embedding similarity can also be used to assign confidence scores to predictions on the test set. We simply find the maximum similarity that a test sample has to the training dataset in the nearest-neighbour sense. This value is directly used as a confidence score. In Fig. 5, we visualize confidence scores by setting the confidence score as the opacity. Similar to our other observations in this section, lower \(k\) and \(d\) values result in larger regions of high confidence, whereas higher \(k\) and \(d\) values result in tighter categories and thus lower confidence scores in extrapolated regions. A short sketch of both computations follows the figure captions below.

Figure 4: Generated samples from the region that the reference sample (green) falls into.

Figure 3: Embedding similarities to the reference sample for LEURN models with different \(qtanh\) quantization regions \(k\) and depths \(d\), for the Half Moon dataset.
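The following sketch shows the two computations just described; the RBF bandwidth `gamma` is an illustrative choice, as the paper does not report its value.

```python
import numpy as np

def rbf_similarity(e_a, e_b, gamma=1.0):
    """RBF-kernel similarity between two LEURN embeddings (Section 4.1.3)."""
    return float(np.exp(-gamma * np.sum((e_a - e_b) ** 2)))

def confidence_score(e_test, E_train, gamma=1.0):
    """Nearest-neighbour confidence: maximum similarity of a test
    embedding to any training embedding."""
    sq_dists = np.sum((E_train - e_test) ** 2, axis=1)
    return float(np.exp(-gamma * sq_dists.min()))
```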
Then, the contribution of that threshold in the network is added to the contribution of the previous threshold that defines upper/lower limits. This provides a much clearer and human readable explanation. Note that it also keeps original processing with no alteration, it is just another way to reformulate the neural network/decision tree. Finally, we by-pass the threshold finding mechanisms in the neural network that results into redundant thresholds to further simplify the tree, while keeping it equivalent to the original one. Finally, we distribute the contribution of the biases to each rule equally. Next, we test explainability module on two cases: Half-Moon and Adult Census Income (OpenML [48]\(\#1590\)) datasets. #### Toy Data Next, we explain the decisions of a LEURN trained for the Half Moon dataset with \(k=2\) and \(d=2\). The output of the explanation described in the above paragraph is visualized in Fig. 6 which is the direct output of the explainer module. The corresponding univariate rules utilized in each layer of LEURN are visualized in Fig. 7. As it can be observed from Fig. 6 and Fig. 7, in the first layer, network checks rules for input features separately. Note that these rules \((x,1.018),(y,0.498)\) are same for any sample/category and these are the only directly learned thresholds in the network. Based on the position of our sample (indicated by green dot in Fig. 7) w.r.t to these rules, network decides to check other rules: \((x,0.181),(y,-0.490)\) respectively. This decision is sensible, since from the position of our sample, it is not clear to which class it belongs to with previous thresholds. The new thresholds tighten the boundary, but still are not sufficient due to a few still-included blue samples in the bounded region. So finally, \((x,-0.998)\) rule is found which is enough to accurately classify our sample. We also check the contributions of each sensible threshold in the final classification score (last section of Fig. 6). Note that positive score means orange class here. \(x<0.181\) results into a negative contribution in class orange. This is meaningful, because most samples at this region belongs to blue class. \(y<-0.490\) results into a positive contribution in class orange. This is meaningful, because most samples at this region belongs to orange class. \(x>-0.998\) results into a positive contribution in class orange. This is meaningful, because all samples at this region belongs to orange class. It is also interesting to examine how the previous thresholds contributed to finding the new thresholds. Let us examine the case in Layer 1. \(x<0.181\) pushes to check \((x,-0.009)\) rule whereas \(y<-0.490\) pushes to check \((x,-0.989)\) rule. Note that both are separately enough to classify our sample. A limitation of the method in terms of explainability is that the rules applied on the input features and their linear contribution to either finding the next rule or making the final decision still needs human effort for interpreta Figure 5: Confidence scores of samples Figure 6: Explanation of LEURN for Half-Moon Dataset tion/speculation. For example, there is no explicit explanation given by the network for why \(y<-0.490\) results into a positive contribution in class orange, but we interpret/speculate it. Still, LEURN simplifies the interpretability problem to an unprecedentedly easy level for humans. 
Next, we test the explainability module on two cases: the Half Moon and Adult Census Income (OpenML [48] \(\#1590\)) datasets.

#### Toy Data

We now explain the decisions of a LEURN model trained on the Half Moon dataset with \(k=2\) and \(d=2\). The output of the explanation described in the above paragraph is visualized in Fig. 6, which is the direct output of the explainer module. The corresponding univariate rules utilized in each layer of LEURN are visualized in Fig. 7. As can be observed from Fig. 6 and Fig. 7, in the first layer, the network checks rules for the input features separately. Note that these rules \((x,1.018),(y,0.498)\) are the same for any sample/category, and these are the only directly learned thresholds in the network. Based on the position of our sample (indicated by the green dot in Fig. 7) with respect to these rules, the network decides to check other rules: \((x,0.181),(y,-0.490)\), respectively. This decision is sensible, since from the position of our sample it is not clear, with the previous thresholds, which class it belongs to. The new thresholds tighten the boundary, but are still not sufficient, due to a few still-included blue samples in the bounded region. Finally, the rule \((x,-0.998)\) is found, which is enough to accurately classify our sample.

We also check the contributions of each sensible threshold to the final classification score (last section of Fig. 6). Note that a positive score means the orange class here. \(x<0.181\) results in a negative contribution to class orange. This is meaningful, because most samples in this region belong to the blue class. \(y<-0.490\) results in a positive contribution to class orange. This is meaningful, because most samples in this region belong to the orange class. \(x>-0.998\) results in a positive contribution to class orange. This is meaningful, because all samples in this region belong to the orange class.

It is also interesting to examine how the previous thresholds contributed to finding the new thresholds. Let us examine the case in Layer 1. \(x<0.181\) pushes to check the \((x,-0.009)\) rule, whereas \(y<-0.490\) pushes to check the \((x,-0.989)\) rule. Note that both are separately enough to classify our sample.

A limitation of the method in terms of explainability is that the rules applied on the input features, and their linear contributions to either finding the next rule or making the final decision, still need human effort for interpretation/speculation. For example, there is no explicit explanation given by the network for why \(y<-0.490\) results in a positive contribution to class orange; we interpret/speculate it. Still, LEURN simplifies the interpretability problem to an unprecedentedly easy level for humans. We believe the best use of LEURN is to assist a human expert in the application field, where the expert is required to verify and make sense of the explanations that LEURN makes in the form of Fig. 6.

Figure 6: Explanation of LEURN for the Half Moon dataset.

Figure 7: Utilized univariate rules in every layer.

### Adult Census Income Dataset

In this section, we train LEURN on the Adult Census Income dataset. The dataset contains samples with features of an adult's background, and the task is to predict whether the adult earns more than \$50000. In Table 2, we provide the rules and explanations LEURN finds for a sample. We also compare the additive feature contributions of LEURN and SHAP. Note that SHAP is applied to the trained LEURN model.

First, we analyze individual feature contributions. The sample is classified as earning more than \$50000, and the largest contributing positive factor was the executive-managerial job title, which makes excellent sense. Other contributing factors were: education number, Bachelor's degree education, and married-civilian-spouse marital status, which are sensible. Note that the first two are repeated features in the dataset, and both are given high importance, which is consistent. An interesting observation is that LEURN finds redundant rules for the final weight (fnlwgt) and capital gain features, as the rule intervals that the sample belongs to exceed the minimum and maximum values of these features in the dataset. The contributions coming from these redundant rules may be considered a bias of the particular category. Finally, we observe a non-sensible contribution coming from capital loss. Although the capital loss interval here points towards the no-loss case, the model assigns a negative contribution, which arguably does not make sense. It is important to note here that presenting these rules to humans is a very important feature, as non-sensible decisions such as this particular one become directly visible during monitoring. Although in this particular case this does not alter the final decision, it may be important in some other scenarios.

As the LEURN scores are directly drawn from each feature's and rule's additive contribution in the final layer, they are exact. With this in mind, one would expect that SHAP would extract similar importances per feature. However, it is clearly evident from Table 2 that SHAP explanations deviate considerably from these exact contributions.

\begin{table} \begin{tabular}{|c|c|c|c|} \hline Feature & LEURN Rule & LEURN Scores & SHAP Scores \\ \hline \hline workclass & State-gov & 0.0131 & -0.2027 \\ \hline education & Bachelors & 0.1897 & -0.2514 \\ \hline marital-status & Married-civ-spouse & 0.1719 & 0.7643 \\ \hline occupation & Exec-managerial & 0.2967 & 0.1796 \\ \hline relationship & Husband & 0.0985 & 0.2535 \\ \hline race & White & 0.0117 & -0.0982 \\ \hline sex & Male & 0.0162 & 0.1020 \\ \hline native-country & USA & 0.1193 & -0.9945 \\ \hline age & \(71.75>X>66.79\) & 0.1234 & 0.9786 \\ \hline fnlwgt & \(14254151.00>X>-612719168.00\) & -0.0034 & -0.0244 \\ \hline education num & \(13.13>X>12.83\) & 0.2005 & 0.5779 \\ \hline capital-gain & \(7433545.31>X>-1413678.12\) & -0.0882 & -0.2000 \\ \hline capital loss & \(238.01>X>-10163.00\) & -0.0609 & -0.0912 \\ \hline hours per week & \(40.52>X>39.91\) & -0.0342 & 0.0619 \\ \hline \end{tabular} \end{table} Table 2: LEURN Rules, LEURN Scores and SHAP Scores assigned to the features of a sample from the Adult Census Income dataset
Some key differences we see from Table 2 are that SHAP finds the native country being USA to be one of the biggest negative factors, whereas in reality the contribution from that feature is positive. SHAP also shows that a Bachelor's degree, a state government job, and being white are negative indicators, whereas in LEURN's exact contributions they are positive. Another interesting observation is that SHAP gives more importance to being married to a civilian spouse than to the executive-managerial job title, which does not make sense. We observe similar behaviours across the dataset.

### Comparison to State-of-the-Art in Tabular Data

In this section, we provide comparisons to popular methods for tabular data. The methods, datasets, and evaluation procedure follow [45], as we use their code for retrieving the datasets, preprocessing, and splits. Therefore, the performance numbers of the competitor methods are taken directly from [45]. The compared methods are Random Forests [9], Extra Trees [17], k-NN [3], LightGBM [24], XGBoost [12], CatBoost [14], TabNet [4], TabTransformer [20] and SAINT [45]. In Tables 3, 4 and 5, one can find comparisons of LEURN with state-of-the-art methods. Following [45], we have used datasets that are available in OpenML [48], and used OpenML identifiers in the comparison tables. The scores are given in area under the receiver operating characteristic curve, accuracy, and root mean square error for binary classification, multiclass classification, and regression problems, respectively.

\begin{table} \begin{tabular}{||c|c c c c c c c c c c c|} \hline Method / OpenML ID & 188 & 1596 & 4541 & 40664 & 40685 & 40687 & 40975 & 41166 & 41169 & 42734 & Average \\ \hline \hline RandomForest & 0.653 & 0.953 & 0.607 & 0.951 & 0.999 & 0.697 & 0.967 & 0.671 & 0.358 & 0.743 & 0.760 \\ \hline ExtraTrees & 0.653 & 0.946 & 0.595 & 0.951 & 0.999 & 0.697 & 0.956 & 0.648 & 0.341 & 0.736 & 0.752 \\ \hline KNeighborsDist & 0.442 & 0.965 & 0.491 & 0.925 & 0.997 & 0.720 & 0.893 & 0.620 & 0.205 & 0.685 & 0.694 \\ \hline KNeighborsUnif & 0.422 & 0.963 & 0.489 & 0.910 & 0.997 & 0.739 & 0.887 & 0.605 & 0.189 & 0.693 & 0.689 \\ \hline LightGBM & 0.667 & 0.969 & 0.611 & 0.984 & 0.999 & 0.716 & 0.981 & 0.721 & 0.356 & 0.754 & 0.776 \\ \hline XGBoost & 0.612 & 0.928 & 0.611 & 0.984 & 0.999 & 0.730 & 0.984 & 0.707 & 0.356 & 0.752 & 0.766 \\ \hline CatBoost & 0.667 & 0.871 & 0.604 & 0.986 & 0.999 & 0.730 & 0.962 & 0.692 & 0.376 & 0.747 & 0.763 \\ \hline MLP & 0.388 & 0.915 & 0.597 & 0.992 & 0.997 & 0.682 & 0.984 & 0.707 & 0.378 & 0.733 & 0.737 \\ \hline TabNet & 0.259 & 0.744 & 0.517 & 0.665 & 0.997 & 0.275 & 0.871 & 0.599 & 0.243 & 0.630 & 0.580 \\ \hline TabTransformer & 0.660 & 0.715 & 0.601 & 0.947 & 0.999 & 0.697 & 0.965 & 0.531 & 0.352 & 0.744 & 0.721 \\ \hline SAINT & 0.680 & 0.946 & 0.606 & 1.000 & 0.999 & 0.735 & 0.997 & 0.701 & 0.377 & 0.752 & 0.779 \\ \hline LEURN & 0.644 & 0.963 & 0.595 & 0.995 & 0.997 & 0.768 & 0.994 & 0.654 & 0.343 & 0.746 & 0.769 \\ \hline \end{tabular} \end{table} Table 4: Comparison to state-of-the-art in Multi-Class Classification Datasets

\begin{table} \begin{tabular}{||c|c c c c c c c c c c c|} \hline Method / OpenML ID & 31 & 44 & 1017 & 1111 & 1487 & 1494 & 1590 & 4134 & 42178 & 42733 & Average \\ \hline \hline RandomForest & 0.778 & 0.986 & 0.798 & 0.774 & 0.910 & 0.928 & 0.908 & 0.868 & 0.840 & 0.670 & 0.846 \\ \hline ExtraTrees & 0.764 & 0.986 & 0.811 & 0.748 & 0.921 & 0.935 & 0.903 & 0.856 & 0.831 & 0.659 & 0.841 \\ \hline KNeighborsDist & 0.501 & 0.873 & 0.722 & 0.517 & 0.741 & 0.868 & 0.684 & 0.808 & 0.755 & 0.576 & 0.705 \\ \hline KNeighborsUnif & 0.489 & 0.847 & 0.712 & 0.516 & 0.734 & 0.865 & 0.669 & 0.790 & 0.764 & 0.578 & 0.696 \\ \hline LightGBM & 0.751 & 0.989 & 0.807 & 0.803 & 0.911 & 0.923 & 0.930 & 0.860 & 0.853 & 0.683 & 0.851 \\ \hline XGBoost & 0.761 & 0.989 & 0.781 & 0.802 & 0.903 & 0.915 & 0.931 & 0.864 & 0.854 & 0.681 & 0.848 \\ \hline CatBoost & 0.788 & 0.987 & 0.838 & 0.818 & 0.914 & 0.931 & 0.930 & 0.858 & 0.856 & 0.686 & 0.860 \\ \hline MLP & 0.705 & 0.980 & 0.745 & 0.709 & 0.913 & 0.932 & 0.910 & 0.818 & 0.841 & 0.647 & 0.820 \\ \hline TabNet & 0.472 & 0.978 & 0.422 & 0.718 & 0.625 & 0.677 & 0.917 & 0.701 & 0.830 & 0.603 & 0.694 \\ \hline TabTransformer & 0.764 & 0.980 & 0.729 & 0.763 & 0.884 & 0.913 & 0.907 & 0.809 & 0.841 & 0.638 & 0.823 \\ \hline SAINT & 0.790 & 0.991 & 0.843 & 0.808 & 0.919 & 0.937 & 0.921 & 0.853 & 0.857 & 0.676 & 0.859 \\ \hline LEURN & 0.772 & 0.985 & 0.817 & 0.810 & 0.915 & 0.930 & 0.912 & 0.858 & 0.848 & 0.649 & 0.850 \\ \hline \end{tabular} \end{table} Table 3: Comparison to state-of-the-art in Binary Classification Datasets

For all trainings, we perform automatic hyperparameter selection as follows. LEURN has three hyperparameters: depth (\(d\)), tanh quantization region number (\(k\)), and dropout rate (\(r\)). We define the search intervals \(d\in\{0,1,2,5,10\}\), \(k\in\{1,2,5,10\}\), \(r\in\{0,0.1,0.3,0.5,0.7,0.9\}\). During the hyperparameter search, we first set \(k=1,r=0.9\) (the most regularized case) and find the best depth. We sweep \(d\) from smallest to largest and stop the search when the performance metric becomes worse. Next, with \(d=d_{best}\) and \(r=0.9\) set, we sweep \(k\) from smallest to largest and stop when there is no improvement. Finally, we set \(d=d_{best}\) and \(k=k_{best}\), and sweep \(r\) from largest to smallest, stopping when there is no improvement. The main idea of the above process is to start from the most regularized case and check for performance improvement as the regularization is softened in a controllable way. The search order of \(d,k,r\) is empirical. When the performance metric is checked at each stage, we perform 5 trainings with random training and validation sets, where the best performance is found via the best validation error. Once the hyperparameter search is complete, we perform 20 trainings with the selected hyperparameters on random training, validation, and test splits. We save the best-performing model on the validation data and report the test performance averaged over the 20 trainings. The split ratios follow [45]. As one can observe, LEURN is comparable and sometimes favorable to state-of-the-art methods, while having explainability advantages.
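The staged search described above can be written down compactly. The sketch below is illustrative only: `train_eval(d, k, r)` stands in for the 5-training validation procedure (a higher return value means better), and the stopping rule simply halts a sweep as soon as the metric stops improving.

```python
def staged_search(train_eval):
    ds, ks, rs = [0, 1, 2, 5, 10], [1, 2, 5, 10], [0.9, 0.7, 0.5, 0.3, 0.1, 0.0]

    def sweep(values, score_at, best_so_far=None):
        # Accept values while the metric keeps improving, then stop.
        best_v, best_m = values[0], best_so_far
        for v in values:
            m = score_at(v)
            if best_m is not None and m <= best_m:
                break
            best_v, best_m = v, m
        return best_v, best_m

    d, m = sweep(ds, lambda d: train_eval(d, 1, 0.9))        # most regularized case
    k, m = sweep(ks, lambda k: train_eval(d, k, 0.9), m)     # soften quantization
    r, m = sweep(rs, lambda r: train_eval(d, k, r), m)       # soften dropout
    return d, k, r
```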
## 5 Conclusion

We have introduced LEURN: Learning explainable univariate rules with neural networks. We have shown that LEURN makes human-explainable decisions by its special design, which results in learning rules with additive contributions. Several other advantages of LEURN were highlighted and tested on a toy dataset. LEURN was tested on 30 public tabular datasets, and it was found comparable to state-of-the-art methods.
2305.06065
Apollonius Problem and Caustics of an Ellipsoid
In the paper we discuss the Apollonius Problem on the number of normals of an ellipse passing through a given point. It is known that the number depends on the position of the given point with respect to a certain astroida. The intersection points of the astroida and the ellipse are used to study the case when the given point is on the ellipse. The problem is then generalized to 3-dimensional space, namely to ellipsoids. The number of concurrent normals in this case is known to depend on the position of the given point with respect to the caustics of the ellipsoid. If the given point is on the ellipsoid, then the number of normals depends on the position of the point with respect to the intersections of the ellipsoid with its caustics. The main motivation of this paper is to find parametrizations and classify all possible cases of these intersections.
Yagub N. Aliyev
2023-05-10T11:31:59Z
http://arxiv.org/abs/2305.06065v3
# Apollonius problem and caustics of an ellipsoid

###### Abstract.

In the paper we discuss the Apollonius Problem on the number of normals of an ellipse passing through a given point. By following the footsteps of Apollonius, it is shown that the number depends on the position of the given point with respect to a certain astroida. The special case when the point is on the ellipse is studied using the intersection points of the astroida and the ellipse. The problem is then generalized to 3-dimensional space, namely to ellipsoids. The number in this case is shown to depend on the position of the given point with respect to the caustics of the ellipsoid. If the given point is on the ellipsoid, then the number of normals depends on the position of the point with respect to the intersections of the ellipsoid with its caustics.

MSC2020: Primary 53A05, Secondary 53A04

How many normals can one draw from a point to an ellipse? In the current paper we will try to solve this problem and its generalization to 3 dimensions, using the methods of calculus, which were not around when Apollonius of Perga (c. III-II centuries BC) first asked and answered this question in his famous work _Conics_ [3]. Their number is not the only interesting question about these normals. For example, a theorem proved by Joachimsthal in 1843 states that if \(AB_{1}\), \(AB_{2}\), \(AB_{3}\), and \(AB_{4}\) are these normals, then the points \(B_{1},\ B_{2},\ B_{3}\), and the point diametrically opposite to \(B_{4}\) with respect to the center \(O\) of the ellipse, are concyclic [25] (see also Sect. 17.2 in [12], [21], [13], [15], [30]). There are more results related to this fact in [11], Sect. 17.7.3. The problem about the number of normals, which Apollonius referred to as _the shortest_ and sometimes _the longest line segments_, appeared in the fifth book of Apollonius, which survived only in Arabic translation [31]. For an outline of the solution of Apollonius, one can check [38], Chapter VII, pp. 260-261. There is a lively discussion of this problem also in pages 131-135 of [35], [22]. The problem was also mentioned by V.I. Arnold in his paper [4], Chapter IV, and in the related popular lecture [5], available online both as a brochure and as a YouTube video.

**Apollonius problem for the plane.** Let the ellipse be defined by

\[\frac{x^{2}}{a^{2}}+\frac{y^{2}}{b^{2}}=1, \tag{1}\]

where we assume that \(a>b>0\). Let us take an arbitrary point \(A(X,Y)\) in the plane of the ellipse. We want to find points \(B(x,y)\) on the ellipse such that \(AB\) is perpendicular to the tangent of the ellipse at \(B\). The slope of this tangent line is \(y^{\prime}=-\frac{b^{2}x}{a^{2}y}\), and therefore \(\frac{y-Y}{x-X}=\frac{a^{2}y}{b^{2}x}\). From this we obtain the equation of a rectangular hyperbola \(y=\frac{xY}{cX-(c-1)x}\), where \(c=\frac{a^{2}}{b^{2}}\). The intersection points \(B_{1},\ B_{2},\ B_{3}\), and \(B_{4}\) of this hyperbola with the ellipse give us the required normals \(AB_{1}\), \(AB_{2}\), \(AB_{3}\), and \(AB_{4}\). In his solution, Apollonius also used this hyperbola, which is now known as the Apollonius hyperbola [11], Sect. 17.5.5.6. The asymptotes of the hyperbola are \(x=\frac{a^{2}X}{a^{2}-b^{2}}\) and \(y=\frac{b^{2}Y}{b^{2}-a^{2}}\). One of the branches of this hyperbola passes through the center of the ellipse, and therefore there are at least 2 intersection points with the ellipse. The other branch may or may not intersect the ellipse.
In the cases when \(X=0\) and \(Y=0\), the hyperbola degenerates to a pair of perpendicular lines \(x=0,\ y=\frac{b^{2}Y}{b^{2}-a^{2}}\) and \(x=\frac{a^{2}X}{a^{2}-b^{2}},\ y=0\), respectively. Let us denote by \(n(A)\) the total number of intersections of the hyperbola with the ellipse. Since the intersection points are the solutions of a fourth-order equation, \(n(A)\) cannot exceed 4. Let us find the points \(A\) where \(n(A)\) jumps from 4 to 2. This happens when the Apollonius hyperbola is tangent to the ellipse, i.e., the slopes are equal at the intersection point: \(-\frac{x}{cy}=\frac{cXY}{(cX-(c-1)x)^{2}}\). Using this and the equation of the ellipse, we obtain

\[\frac{x}{a}=\frac{a}{a^{2}-b^{2}}\left(\sqrt[3]{\frac{b^{2}Y^{2}X}{a^{2}}}+X\right),\ \frac{y}{b}=\frac{b}{b^{2}-a^{2}}\left(\sqrt[3]{\frac{a^{2}X^{2}Y}{b^{2}}}+Y\right),\]

which, when used back in the equation of the ellipse, after some simplifications gives

\[\sqrt[3]{a^{2}X^{2}}+\sqrt[3]{b^{2}Y^{2}}=\sqrt[3]{\left(a^{2}-b^{2}\right)^{2}}. \tag{2}\]

This is the equation of an _astroida_ in the \(X,Y\) coordinates. In the interior region of the astroida, \(n(A)=4\). Outside of the astroida, \(n(A)=2\). On the astroida itself, \(n(A)=3\), except at the vertices of the astroida \(\left(\pm\frac{a^{2}-b^{2}}{a},0\right)\) and \(\left(0,\pm\frac{a^{2}-b^{2}}{b}\right)\), where again \(n(A)=2\). This is essentially what was done by Apollonius, which is a remarkable achievement, taking into account the mathematical tools available at the time. In [11], Sect. 17.7.4, it was mentioned that this astroida is the evolute of the ellipse, and therefore drawing normals to the ellipse can be done by drawing tangent lines of the astroida.

Let us now suppose that the point \(A(X,Y)\) is on the ellipse: \(X=x,\ Y=y\). Since the Apollonius hyperbola automatically passes through \(A(X,Y)\), one of the normals disappears, because one of the points \(B_{1}\), \(B_{2}\), \(B_{3}\), and \(B_{4}\) coincides with \(A\). For the points of the ellipse (1) inside the astroida (2), \(n(A)=3\). For the points of the ellipse (1) outside the astroida (2), \(n(A)=1\). For the intersection points \(N_{1},\ N_{2},\ N_{3}\), and \(N_{4}\) of the ellipse (1) and the astroida (2), \(n(A)=2\). The coordinates of these points can be easily determined: \((\pm x_{0},\pm y_{0})\) and \((\pm x_{0},\mp y_{0})\), where

\[x_{0}=\sqrt{\frac{a^{4}(a^{2}-2b^{2})^{3}}{(a^{2}-b^{2})(a^{2}+b^{2})^{3}}},\ y_{0}=\sqrt{\frac{b^{4}(2a^{2}-b^{2})^{3}}{(a^{2}-b^{2})(a^{2}+b^{2})^{3}}}.\]

Thus we proved the following

**Theorem 1**.: _For the ellipse (1) and the astroida (2), the following cases are possible:_

1. _If_ \(a^{2}>2b^{2}\)_, then the points_ \((\pm x_{0},\pm y_{0})\) _and_ \((\pm x_{0},\mp y_{0})\) _separate the ellipse into 4 arcs, on which_ \(n(A)=3\) _and_ \(n(A)=1\) _alternately._
2. _If_ \(a^{2}\leq 2b^{2}\)_, then for all the points of the ellipse (1),_ \(n(A)=1\)_._

Noting this, we can say that the Apollonius problem for the number of concurrent normals of an ellipse is completely solved.
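The count \(n(A)\) is easy to reproduce numerically. Writing the foot of a normal as \(B=(a\cos t,\ b\sin t)\), the condition that \(A-B\) is parallel to the normal direction \((\cos t/a,\ \sin t/b)\) reduces to \(aX\sin t-bY\cos t-(a^{2}-b^{2})\sin t\cos t=0\); counting the sign changes of the left-hand side over one period gives \(n(A)\) for points in general position (away from the astroida, where roots are double). A short sketch:

```python
import numpy as np

def n_of_A(a, b, X, Y, samples=200000):
    # Number of normals from (X, Y) to x^2/a^2 + y^2/b^2 = 1, counted as
    # sign changes of f(t) = a X sin t - b Y cos t - (a^2 - b^2) sin t cos t.
    t = np.linspace(0.0, 2.0 * np.pi, samples, endpoint=False)
    f = a * X * np.sin(t) - b * Y * np.cos(t) - (a**2 - b**2) * np.sin(t) * np.cos(t)
    return int(np.sum(f * np.roll(f, -1) < 0))

# With a, b = 2, 1: a point near the centre (inside the astroida) gives 4,
# a point far outside gives 2.
print(n_of_A(2.0, 1.0, 0.1, 0.05), n_of_A(2.0, 1.0, 3.0, 2.0))
```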
There is also a three-dimensional variant of this problem, where one takes a point \(A(X,Y,Z)\) outside of the plane of the ellipse \(\frac{x^{2}}{a^{2}}+\frac{y^{2}}{b^{2}}=1,\ z=0\), and counts the number of lines \(AB\) such that \(B(x,y,0)\) is on the ellipse and \(AB\) is perpendicular to the tangent of the ellipse at the point \(B\). But this variant is easily reduced to the planar case. Consider the projection \(A^{\prime}(X,Y,0)\) of \(A\) onto the plane \(z=0\). If \(A^{\prime}B\) is a normal of the ellipse, then by the Theorem of the Three Perpendiculars, \(AB\) is also perpendicular to the tangent of the ellipse at the point \(B\). Therefore, \(n(A)\) is 2, 3, or 4, depending on the position of the point \(A\) with respect to the cylindrical surface defined by the same equation (2) as the astroida.

Apollonius did not mention any practical uses for his results, except that these normals, corresponding to minimal and maximal distances, are worth investigating for their own sake, and that, in contrast to the tangents (see Appendix A), the normals were not studied much by the earlier mathematicians. Because of this connection with the extremal distances, there can be applications in optics, wavefronts, mathematical billiards, etc. One of the applications of these results in astronomy can be a possible explanation for the presence of 4 images of a distant quasar, whose light is being bent around an elliptical Einstein ring formed by two galaxies 3.4 billion light-years away [23].

Figure 2: A possible application of the Apollonius Problem. Image credit: ESA/Hubble & NASA, T. Treu; Acknowledgment: J. Schmidt. Public domain. Used with permission.

**Apollonius problem for space.** Let us now consider the three-dimensional generalization of this problem. How many concurrent normals of an ellipsoid are there? In this form the problem was studied in [26]. The answer will be given in the next section. Chapter III, Sect. E2 and E3 of [32] contains a detailed discussion of the history and many references for this generalization and the previous planar case. Let an ellipsoid be defined by

\[\frac{x^{2}}{a^{2}}+\frac{y^{2}}{b^{2}}+\frac{z^{2}}{c^{2}}=1, \tag{3}\]

where we assume that \(a>b>c>0\). Let us take an arbitrary point \(A(X,Y,Z)\) and find the number \(n(A)\) of points \(B(x,y,z)\) on the ellipsoid such that \(AB\) is the normal line of the plane tangent to the ellipsoid at \(B\). Since the normal vector of the plane tangent to the ellipsoid at \(B(x,y,z)\) is \(\mathbf{N}=\left(\frac{x}{a^{2}},\frac{y}{b^{2}},\frac{z}{c^{2}}\right)\),

\[\frac{x-X}{\frac{x}{a^{2}}}=\frac{y-Y}{\frac{y}{b^{2}}}=\frac{z-Z}{\frac{z}{c^{2}}}=-t,\]

where \(t\) is a parameter. From this we find the parametric representation of the _Apollonius curve_

\[\mathbf{r}(t)=\left(\frac{a^{2}X}{a^{2}+t},\frac{b^{2}Y}{b^{2}+t},\frac{c^{2}Z}{c^{2}+t}\right),\]

whose intersections with the ellipsoid give the base points of the normals through \(A\). The asymptotes of this curve are the lines

\[\mathbf{r}_{1}(t)=\left(t,\frac{b^{2}Y}{b^{2}-a^{2}},\frac{c^{2}Z}{c^{2}-a^{2}}\right),\]

\[\mathbf{r}_{2}(t)=\left(\frac{a^{2}X}{a^{2}-b^{2}},t,\frac{c^{2}Z}{c^{2}-b^{2}}\right),\]

\[\mathbf{r}_{3}(t)=\left(\frac{a^{2}X}{a^{2}-c^{2}},\frac{b^{2}Y}{b^{2}-c^{2}},t\right).\]

If \(X=0\), \(Y=0\), or \(Z=0\), then the Apollonius curve splits into a line, which served earlier as an asymptote of the curve, and a hyperbola:

\[\mathbf{r}(t)=\mathbf{r}_{1}(t),\ \mathbf{r}(t)=\left(0,\frac{b^{2}Y}{b^{2}+t},\frac{c^{2}Z}{c^{2}+t}\right);\]

\[\mathbf{r}(t)=\mathbf{r}_{2}(t),\ \mathbf{r}(t)=\left(\frac{a^{2}X}{a^{2}+t},0,\frac{c^{2}Z}{c^{2}+t}\right);\]

\[\mathbf{r}(t)=\mathbf{r}_{3}(t),\ \mathbf{r}(t)=\left(\frac{a^{2}X}{a^{2}+t},\frac{b^{2}Y}{b^{2}+t},0\right),\]

respectively. The Apollonius curve passes through the center of the ellipsoid when \(t=\pm\infty\), and goes to infinity when \(t=-a^{2},-b^{2},-c^{2}\). Therefore, there are at least 2 intersections (at the maximal and minimal distances from \(A\)) with the ellipsoid.
On the other hand, these intersections are determined by

\[\left(\frac{aX}{a^{2}+t}\right)^{2}+\left(\frac{bY}{b^{2}+t}\right)^{2}+\left(\frac{cZ}{c^{2}+t}\right)^{2}=1, \tag{4}\]

which is a sixth-order equation with respect to \(t\), and therefore cannot have more than 6 real solutions. As before, let us denote the number of normals through \(A\) by \(n(A)\). We want to find the points \(A\) where \(n(A)\) jumps from 2 to 4, or from 4 to 6. This happens when the Apollonius curve is tangent to the ellipsoid, i.e., \(\mathbf{r}^{\prime}(t)=\left(-\frac{a^{2}X}{(a^{2}+t)^{2}},-\frac{b^{2}Y}{(b^{2}+t)^{2}},-\frac{c^{2}Z}{(c^{2}+t)^{2}}\right)\) is orthogonal to \(\mathbf{N}=\left(\frac{X}{a^{2}+t},\frac{Y}{b^{2}+t},\frac{Z}{c^{2}+t}\right)\). This can be expressed as \(\mathbf{r}^{\prime}(t)\cdot\mathbf{N}=0\), or as

\[\frac{a^{2}X^{2}}{(a^{2}+t)^{3}}+\frac{b^{2}Y^{2}}{(b^{2}+t)^{3}}+\frac{c^{2}Z^{2}}{(c^{2}+t)^{3}}=0. \tag{5}\]

The equations (4) and (5) define the surface known as the _Caustics of an Ellipsoid_, also known as the _focal surface_, _surface of centers_, _evolute of an ellipsoid_, or _Cayley's astroida_ [14]. Cayley used the name _Centro-surface of an Ellipsoid_, and the equations (4) and (5), which appear on p. 358 of [14], were obtained using the fact that the points of this surface are the centers of principal curvatures of the ellipsoid (3) (see also p. 218 in [33]). For the definition of the principal curvatures, see, for example, p. 158, [18]. A. Cayley's graph of the surface appears on p. 330 of [14]. One can also find many other images depicting this surface in various papers and books; see for example pp. 49-53 in [8], p. 154 in Ch. 7 of [9], [16], [24], p. 218 in [7], p. 257 in [39], p. 49 in [27], [40]. The part of the surface where the two sheets corresponding to the minimal and maximal curvatures intersect was shown and mentioned in pages 37 and 109, respectively, of [6]. S.K. Lando gave two popular lectures about the caustics, available online, one with a demonstration of the surface at the end [29]. Another representation of the surface, together with some applications of it in astronomy and physics, appeared in [37] (see also [36]). The idea of using more general caustics in cosmology is due to Ya. B. Zel'dovich (see [41] and the references therein).

There are many visualizations of this surface as a physical model. Before the dawn of computer graphics and 3D printers, handmade models and sculptures represented the best medium for such mathematical objects [18]. In [28], there is a description of a model made out of gypsum by the student H.A. Schwarz in the Arts Faculty (later Prof. in Univ. Berlin), which is also mentioned in Sect. 197 (p. 282), [17] (see also p. 198, [32]). A stereographic photo of one such model by an unknown artist/maker from the same time period is shown in Figure 4, [19]. Two more models of this surface can be found in The Collection of Mathematical Models and Instruments in The University of Göttingen [20] (see Fig. 5). Similar models for the centers of curvature of paraboloids were described in [34].

Note that, in general, it is not easy to eliminate the parameter \(t\) from the equations (4) and (5) to get an explicit equation for the caustics. But if, for example, \(c=0\), then the equations (4) and (5) are transformed to

\[\left(\frac{aX}{a^{2}+t}\right)^{2}+\left(\frac{bY}{b^{2}+t}\right)^{2}=1,\ \frac{a^{2}X^{2}}{(a^{2}+t)^{3}}+\frac{b^{2}Y^{2}}{(b^{2}+t)^{3}}=0,\]

from which one can easily eliminate the parameter \(t\) and obtain the equation (2) for the astroida. This gives us another solution for the planar case considered in the previous section.
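Returning to the general triaxial case, equation (4), after clearing denominators, is a degree-6 polynomial in \(t\), so \(n(A)\) can be checked numerically by counting its real roots. A small sketch (for a generic point \(A\), where all roots are simple):

```python
import numpy as np

def n_of_A(a, b, c, X, Y, Z, tol=1e-8):
    # Clear denominators in eq. (4):
    # (aX)^2 (b^2+t)^2 (c^2+t)^2 + (bY)^2 (a^2+t)^2 (c^2+t)^2
    #   + (cZ)^2 (a^2+t)^2 (b^2+t)^2 = ((a^2+t)(b^2+t)(c^2+t))^2
    pa, pb, pc = (np.poly1d([1, s**2]) for s in (a, b, c))
    lhs = ((a*X)**2 * (pb*pc)**2 + (b*Y)**2 * (pa*pc)**2 + (c*Z)**2 * (pa*pb)**2)
    sextic = (pa*pb*pc)**2 - lhs
    return int(np.sum(np.abs(sextic.roots.imag) < tol))

# With a, b, c = 3, 2, 1: a point close to the centre lies inside both caustics
# and gives 6 normals; a distant point gives 2.
print(n_of_A(3.0, 2.0, 1.0, 0.1, 0.1, 0.1), n_of_A(3.0, 2.0, 1.0, 5.0, 4.0, 3.0))
```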
Similarly, if \(b=c\), then one can introduce a new variable \(Y^{\prime}\), such that \((Y^{\prime})^{2}=Y^{2}+Z^{2}\), and then the equations (4) and (5) can be written as

\[\left(\frac{aX}{a^{2}+t}\right)^{2}+\left(\frac{bY^{\prime}}{b^{2}+t}\right)^{2}=1,\ \frac{a^{2}X^{2}}{(a^{2}+t)^{3}}+\frac{b^{2}(Y^{\prime})^{2}}{(b^{2}+t)^{3}}=0,\]

from which again the parameter \(t\) is easily eliminated to get

\[\sqrt[3]{a^{2}X^{2}}+\sqrt[3]{b^{2}(Y^{\prime})^{2}}=\sqrt[3]{(a^{2}-b^{2})^{2}},\]

or

\[\sqrt[3]{a^{2}X^{2}}+\sqrt[3]{b^{2}(Y^{2}+Z^{2})}=\sqrt[3]{(a^{2}-b^{2})^{2}},\]

which is a surface of revolution generated by rotating the astroida (2) around the \(x\) axis (see Fig. 6).

Figure 4: Stereograph Card, Unknown artist/maker, Centro-Surface. Ellipsoid. about 1860. Gift of Weston J. and Mary M. Naef, Getty Museum Collection. Open Content program. No copyright. Used with permission.

Figure 5: Curvature centre point surface models: 239 and 242. Curvature centre point surface of the triaxial ellipsoid. Gypsum; curved surfaces are unified. (Göttinger Sammlung mathematischer Modelle und Instrumente, Georg-August-Universität Göttingen)

**Caustics of an Ellipsoid in GeoGebra and Maple.** In this section, a method of generating the surface based on Cartesian coordinates will be described. The formulas for the Gaussian curvature and the mean curvature of an ellipsoid are given in Corollary 13.41, p. 413 in [1] (see also Chapter 4, [10]):

\[K=\frac{1}{\left(abc\left(\frac{x^{2}}{a^{4}}+\frac{y^{2}}{b^{4}}+\frac{z^{2}}{c^{4}}\right)\right)^{2}},\ H=\frac{|x^{2}+y^{2}+z^{2}-a^{2}-b^{2}-c^{2}|}{2(abc)^{2}\left(\frac{x^{2}}{a^{4}}+\frac{y^{2}}{b^{4}}+\frac{z^{2}}{c^{4}}\right)^{\frac{3}{2}}}.\]

The principal curvatures \(k_{1}\) and \(k_{2}\) are the roots of the quadratic equation \(x^{2}-2Hx+K=0\) (Corollary 13.26, p. 400, in [1]):

\[k_{1}=H-\sqrt{H^{2}-K},\ k_{2}=H+\sqrt{H^{2}-K}.\]

The corresponding radii of curvature are \(R_{1}=\frac{1}{k_{1}}\) and \(R_{2}=\frac{1}{k_{2}}\), and the respective centers of curvature \(C_{1}(x_{1},y_{1},z_{1})\) and \(C_{2}(x_{2},y_{2},z_{2})\) can be determined using the formula

\[C_{1}(x_{1},y_{1},z_{1})=(x,y,z)-R_{1}\cdot\frac{\mathbf{N}}{|\mathbf{N}|},\ C_{2}(x_{2},y_{2},z_{2})=(x,y,z)-R_{2}\cdot\frac{\mathbf{N}}{|\mathbf{N}|},\]

where, as before, \(\mathbf{N}=\left(\frac{x}{a^{2}},\frac{y}{b^{2}},\frac{z}{c^{2}}\right)\). The GeoGebra Activity demonstrating the surface can be found at [https://www.geogebra.org](https://www.geogebra.org), but it takes several minutes for the applet to open. The Maple Learn document can be found at [https://learn.maplesoft.com](https://learn.maplesoft.com). The images created using GeoGebra and Maple 2022 are shown in Figure 7 and Figure 8, respectively.

In the case of an ellipsoid of revolution, for example, when \(b=c\), one of the caustics becomes a surface of revolution, shown in Figure 6, while the other caustic degenerates to a line segment on the axis of symmetry of the surface shown in Figure 6, between its vertices. The number of normals of the ellipsoid (3) for the points \(A\) on this line segment is infinite (\(n(A)=\infty\)), except at the endpoints of this line segment, where \(n(A)=2\). For the other points of the space, the situation is identical to the planar case considered in Sect. 2.
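The same recipe translates directly into a few lines of code. This sketch is ours (not taken from [14]); it samples both sheets of the centro-surface from the curvature formulas above, with a small guard for the umbilical points where \(H^{2}-K\to 0\) and the two sheets meet.

```python
import numpy as np

def centro_surface(a, b, c, nu=200, nv=100):
    # Sample both sheets of the caustic of x^2/a^2 + y^2/b^2 + z^2/c^2 = 1
    # as centers of principal curvature, following the formulas above.
    u, v = np.meshgrid(np.linspace(0, 2*np.pi, nu),
                       np.linspace(1e-3, np.pi - 1e-3, nv))
    x, y, z = a*np.cos(u)*np.sin(v), b*np.sin(u)*np.sin(v), c*np.cos(v)
    w = x**2/a**4 + y**2/b**4 + z**2/c**4
    K = 1.0 / (a*b*c*w)**2                                   # Gaussian curvature
    H = np.abs(x**2 + y**2 + z**2 - a**2 - b**2 - c**2) / (2*(a*b*c)**2 * w**1.5)
    disc = np.sqrt(np.maximum(H**2 - K, 0.0))                # 0 at umbilics
    k1, k2 = H - disc, H + disc                              # principal curvatures
    n = np.stack([x/a**2, y/b**2, z/c**2])                   # outward normal N
    n = n / np.linalg.norm(n, axis=0)
    P = np.stack([x, y, z])
    return P - n/k1, P - n/k2                                # the two sheets

C1, C2 = centro_surface(5.0, 4.0, 3.0)   # e.g. the ellipsoid of Fig. 11 (top center)
```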
**The intersections of an ellipsoid and its caustics with the coordinate planes.** The intersections of the ellipsoid (3) and its caustics with the coordinate planes \(x=0,\ y=0,\ z=0\) are the following curves, shown in Figure 10:

1. Ellipse \((a\cos t,b\sin t,0)\) (black),
2. Ellipse \((a\cos t,0,c\sin t)\) (yellow),
3. Ellipse \((0,b\cos t,c\sin t)\) (red),
4. Astroida \(\left(\frac{a^{2}-b^{2}}{a}\cos^{3}t,\frac{a^{2}-b^{2}}{b}\sin^{3}t,0\right)\) (pink),
5. Astroida \(\left(\frac{a^{2}-c^{2}}{a}\cos^{3}t,0,\frac{a^{2}-c^{2}}{c}\sin^{3}t\right)\) (light blue),
6. Astroida \(\left(0,\frac{b^{2}-c^{2}}{b}\cos^{3}t,\frac{b^{2}-c^{2}}{c}\sin^{3}t\right)\) (purple),
7. Ellipse \(\left(\frac{a^{2}-c^{2}}{a}\cos t,\frac{b^{2}-c^{2}}{b}\sin t,0\right)\) (green),
8. Ellipse \(\left(\frac{a^{2}-b^{2}}{a}\cos t,0,\frac{b^{2}-c^{2}}{c}\sin t\right)\) (dark blue),
9. Ellipse \(\left(0,\frac{a^{2}-b^{2}}{b}\cos t,\frac{a^{2}-c^{2}}{c}\sin t\right)\) (orange).

Figure 7: The centers corresponding to the smaller (left, \(R_{2}\)) and greater (right, \(R_{1}\)) principal radii of curvature. The surfaces intersect (center). Created using GeoGebra.

Figure 8: The ellipsoid (yellow) and its caustics (red and blue) with transparency applied. Created using Maple 2022.

Let us now find the intersections of Ellipses 1, 2, 3, Astroidas 4, 5, 6, and Ellipses 7, 8, 9 with each other, respectively.

* If \(a^{2}\geq 2b^{2}\), then Ellipse 1 and Astroida 4 intersect at \((\pm x_{0},\pm y_{0},0)\) and \((\pm x_{0},\mp y_{0},0)\), where \[x_{0}=\sqrt{\frac{a^{4}(a^{2}-2b^{2})^{3}}{(a^{2}-b^{2})(a^{2}+b^{2})^{3}}},\ y_{0}=\sqrt{\frac{b^{4}(2a^{2}-b^{2})^{3}}{(a^{2}-b^{2})(a^{2}+b^{2})^{3}}},\]
* If \(a^{2}\geq 2c^{2}\), then Ellipse 2 and Astroida 5 intersect at \((\pm x_{1},0,\pm z_{1})\) and \((\pm x_{1},0,\mp z_{1})\), where \[x_{1}=\sqrt{\frac{a^{4}(a^{2}-2c^{2})^{3}}{(a^{2}-c^{2})(a^{2}+c^{2})^{3}}},\ z_{1}=\sqrt{\frac{c^{4}(2a^{2}-c^{2})^{3}}{(a^{2}-c^{2})(a^{2}+c^{2})^{3}}},\]
* If \(b^{2}\geq 2c^{2}\), then Ellipse 3 and Astroida 6 intersect at \((0,\pm y_{2},\pm z_{2})\) and \((0,\pm y_{2},\mp z_{2})\), where \[y_{2}=\sqrt{\frac{b^{4}(b^{2}-2c^{2})^{3}}{(b^{2}-c^{2})(b^{2}+c^{2})^{3}}},\ z_{2}=\sqrt{\frac{c^{4}(2b^{2}-c^{2})^{3}}{(b^{2}-c^{2})(b^{2}+c^{2})^{3}}},\]

Figure 9: The number of normals in the regions of space separated by the caustics of the ellipsoid. Half of the caustics is hidden to make the inner regions visible.
* Ellipse 1 and Ellipse 7 do not intersect,
* If \(b^{2}\geq 2c^{2}\), then Ellipse 2 and Ellipse 8 intersect at \((\pm x_{3},0,\pm z_{3})\) and \((\pm x_{3},0,\mp z_{3})\), where \[x_{3}=\sqrt{\frac{a^{2}(a^{2}-b^{2})^{2}(2c^{2}-b^{2})}{(a^{2}-c^{2})(2a^{2}c^{2}-a^{2}b^{2}-b^{2}c^{2})}},\] \[z_{3}=\sqrt{\frac{c^{2}(c^{2}-b^{2})^{2}(2a^{2}-b^{2})}{(c^{2}-a^{2})(2a^{2}c^{2}-a^{2}b^{2}-b^{2}c^{2})}},\]
* If \(2b^{2}\geq a^{2}\geq 2c^{2}\), then Ellipse 3 and Ellipse 9 intersect at \((0,\pm y_{4},\pm z_{4})\) and \((0,\pm y_{4},\mp z_{4})\), where \[y_{4}=\sqrt{\frac{b^{2}(b^{2}-a^{2})^{2}(2c^{2}-a^{2})}{(b^{2}-c^{2})(2b^{2}c^{2}-a^{2}b^{2}-a^{2}c^{2})}},\] \[z_{4}=\sqrt{\frac{c^{2}(c^{2}-a^{2})^{2}(2b^{2}-a^{2})}{(c^{2}-b^{2})(2b^{2}c^{2}-a^{2}b^{2}-a^{2}c^{2})}}.\]
* If \(a^{2}+c^{2}\geq 2b^{2}\), then Astroida 4 and Ellipse 7 intersect at \((\pm x_{5},\pm y_{5},0)\) and \((\pm x_{5},\mp y_{5},0)\), where \[x_{5}=\sqrt{\frac{(a^{2}-c^{2})^{3}(2b^{2}-a^{2}-c^{2})^{3}}{a^{2}(b^{2}-a^{2})(a^{2}+b^{2}-2c^{2})^{3}}},\ y_{5}=\sqrt{\frac{(b^{2}-c^{2})^{3}(2a^{2}-b^{2}-c^{2})^{3}}{b^{2}(a^{2}-b^{2})(a^{2}+b^{2}-2c^{2})}},\]
* Astroida 5 and Ellipse 8 are tangent to each other at the points \((\pm x_{6},0,\pm z_{6})\) and \((\pm x_{6},0,\mp z_{6})\), where \[x_{6}=\sqrt{\frac{(b^{2}-c^{2})^{3}}{c^{2}(a^{2}-c^{2})}},\ z_{6}=\sqrt{\frac{(a^{2}-b^{2})^{3}}{a^{2}(a^{2}-c^{2})}},\] These points also divide Astroida 5 and Ellipse 8 into parts which belong to different caustics. One can check that these points are on, inside, and outside the ellipsoid (3) if \(\frac{1}{a^{2}}+\frac{1}{c^{2}}=\frac{3}{b^{2}}\), \(<\frac{3}{b^{2}}\), and \(>\frac{3}{b^{2}}\), respectively.
* If \(a^{2}+c^{2}\leq 2b^{2}\), then Astroida 6 and Ellipse 9 intersect at \((0,\pm y_{7},\pm z_{7})\) and \((0,\pm y_{7},\mp z_{7})\), where \[y_{7}=\sqrt{\frac{(b^{2}-a^{2})^{3}(2c^{2}-b^{2}-a^{2})^{3}}{b^{2}(c^{2}-b^{2})(b^{2}+c^{2}-2a^{2})^{3}}},\] \[z_{7}=\sqrt{\frac{(c^{2}-a^{2})^{3}(2b^{2}-c^{2}-a^{2})^{3}}{c^{2}(b^{2}-c^{2})(b^{2}+c^{2}-2a^{2})^{3}}}.\]

Figure 10: The intersections of a triaxial ellipsoid and its caustics with the coordinate planes. The colors are explained in the text. The tangency points of the two caustics and some of the intersection points are also shown.

One can experiment with these intersection points by moving the sliders in the GeoGebra Activity [https://www.geogebra.org/3d/nnnvbxfw](https://www.geogebra.org/3d/nnnvbxfw).
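As a quick numeric sanity check of the first case above (the other cases work the same way after relabelling the axes), one can verify that \((x_{0},y_{0},0)\) satisfies both the equation of Ellipse 1 and the astroida equation (2):

```python
a, b = 3.0, 1.0                      # any choice with a^2 >= 2 b^2
den = (a**2 - b**2) * (a**2 + b**2)**3
x0 = (a**4 * (a**2 - 2*b**2)**3 / den)**0.5
y0 = (b**4 * (2*a**2 - b**2)**3 / den)**0.5
print(x0**2/a**2 + y0**2/b**2)                        # 1.0 -> on Ellipse 1
print((a**2 * x0**2)**(1/3) + (b**2 * y0**2)**(1/3)   # both sides of (2):
      - ((a**2 - b**2)**2)**(1/3))                    # ~0  -> on Astroida 4
```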
**The number of normals for the points of an ellipsoid.** As for an ellipse, if the point \(A\) is on the ellipsoid (3), then the number of normals \(n(A)\) through \(A\) decreases by 1, because in this case one of the points \(B_{1},B_{2},\ldots,B_{6}\) coincides with \(A\). The description of the regions on an ellipsoid where \(n(A)\) jumps from 1 to 3, or from 3 to 5, is not as trivial as in the planar case. It seems that the problem in this setting escaped the attention of previous authors. In the remaining part of the paper we will describe all possible cases of the position of the regions with different values of \(n(A)\). For this, we need to determine the intersections of the ellipsoid and its caustics (see Figure 8). These intersections are curves on the ellipsoid (3), and it seems that there are no simple parametrizations for these curves. But using the intersections of these curves with the coordinate planes, which we found in the previous section, one can categorize all possible cases. In Fig. 11, some examples of the positions of the regions of the ellipsoid with different values of \(n(A)\) are shown, where these intersections have many different shapes and positions.

**Theorem 2**.: _For the ellipsoid (3) and its caustics defined by (4) and (5), the following cases are possible:_

1. _If_ \(a^{2}<2c^{2}\)_, then there are no intersections of the caustics with the ellipsoid (3),_
2. _If_ \(b^{2}<2c^{2}\leq a^{2}\)_, then only one of the caustics intersects the ellipsoid (3),_
3. _If_ \(b^{2}\geq 2c^{2}\)_, then both of the caustics intersect the ellipsoid (3)._

_In all cases, for the points of the ellipsoid (3) lying outside of the two caustics \(n(A)=1\), for the points of the ellipsoid (3) lying in only one of these caustics \(n(A)=3\), for the points of the ellipsoid (3) lying in both of these caustics \(n(A)=5\), and for the intersection points of the ellipsoid (3) and these caustics \(n(A)=2\) or \(4\), except some of the points of the ellipsoid (3), where the caustics intersect each other or where these caustics intersect the coordinate planes._

Depending on whether \(a^{2}\leq 2b^{2}\) or \(a^{2}>2b^{2}\), the red caustic passes through the ellipsoid (3), or the ellipsoid (3) passes through this caustic. A more detailed categorization of the cases of intersection of the ellipsoid and its caustics can be done based on the signs of the expressions \(\frac{1}{a^{2}}+\frac{1}{c^{2}}-\frac{3}{b^{2}}\) and \(a^{2}+c^{2}-2b^{2}\).

**Conclusion.** In this paper, the Apollonius problems for 2 dimensions (ellipse) and 3 dimensions (ellipsoid) were discussed. The number of concurrent normals of an ellipse (an ellipsoid) depends on the position of the point of concurrency with respect to the caustics of the ellipse (the ellipsoid). The cases when the point of concurrency is on the ellipse (the ellipsoid) required the study of several different cases of intersections of the caustics with the given ellipse (ellipsoid). It would be interesting to generalize the results to 4 (see [27]) and higher dimensions.

**Appendix.** The tangent lines of an ellipse, and the tangent lines and planes of an ellipsoid, are much easier to study than the normals. For completeness, the problem on the number of concurrent tangent lines (planes) of an ellipse (ellipsoid) will be discussed here. Let us take a point \(A(X,Y)\) outside of the ellipse (1) and find points \(B(x_{1},y_{1})\) and \(C(x_{2},y_{2})\) on the ellipse (1) such that \(AB\) and \(AC\) are tangent lines of the ellipse (1) at \(B\) and \(C\), respectively. Since \(AB\) and \(AC\) have the same slopes as the ellipse (1) at the points \(B\) and \(C\), respectively, we have \(\frac{y-Y}{x-X}=-\frac{b^{2}x}{a^{2}y}\), which can be written as \(\frac{x(x-X)}{a^{2}}+\frac{y(y-Y)}{b^{2}}=0\). This is the equation of an ellipse through the points \(O\) and \(A\), with center at \(\left(\frac{X}{2},\frac{Y}{2}\right)\) and semiaxes parallel to the semiaxes of the original ellipse. Its intersections with the original ellipse (1) are on the line \(\frac{xX}{a^{2}}+\frac{yY}{b^{2}}=1\), which is obtained by subtracting the equations of the two ellipses.
The coordinates of the points \(B\) and \(C\) are then determined by

\[x_{1}=a\cdot\frac{\frac{X}{a}-\frac{Y}{b}\sqrt{\left(\frac{X}{a}\right)^{2}+\left(\frac{Y}{b}\right)^{2}-1}}{\left(\frac{X}{a}\right)^{2}+\left(\frac{Y}{b}\right)^{2}},\ y_{1}=b\cdot\frac{\frac{Y}{b}+\frac{X}{a}\sqrt{\left(\frac{X}{a}\right)^{2}+\left(\frac{Y}{b}\right)^{2}-1}}{\left(\frac{X}{a}\right)^{2}+\left(\frac{Y}{b}\right)^{2}},\]

\[x_{2}=a\cdot\frac{\frac{X}{a}+\frac{Y}{b}\sqrt{\left(\frac{X}{a}\right)^{2}+\left(\frac{Y}{b}\right)^{2}-1}}{\left(\frac{X}{a}\right)^{2}+\left(\frac{Y}{b}\right)^{2}},\ y_{2}=b\cdot\frac{\frac{Y}{b}-\frac{X}{a}\sqrt{\left(\frac{X}{a}\right)^{2}+\left(\frac{Y}{b}\right)^{2}-1}}{\left(\frac{X}{a}\right)^{2}+\left(\frac{Y}{b}\right)^{2}}.\]

Figure 11: The ellipsoid (green) and its caustics (red and blue) with transparency applied. Created using Maple 2022. \((a,b,c)=(4.9,4.4,4)\) (top left); \((5,4,3)\) (top center); \((4,3,2)\) (top right); \((4,3,1.5)\) (bottom left); \((5,4.7,1.5)\) (bottom center); \((5,3,1)\) (bottom right).

If we denote by \(t(A)\) the total number of tangent lines of the ellipse passing through \(A\), then \(t(A)=2\) outside of the ellipse, \(t(A)=0\) inside of the ellipse, and \(t(A)=1\) on the ellipse. Similarly, for the ellipsoid (3), if \(A\) is outside of the ellipsoid (3), then the points \(B\), such that \(AB\) is tangent to the ellipsoid (3), can be determined by intersecting (3) with another ellipsoid \(\frac{x(x-X)}{a^{2}}+\frac{y(y-Y)}{b^{2}}+\frac{z(z-Z)}{c^{2}}=0\). All these intersection points are on the plane \(\frac{xX}{a^{2}}+\frac{yY}{b^{2}}+\frac{zZ}{c^{2}}=1\). The number of tangent lines \(tl(A)\) and tangent planes \(tp(A)\) of the ellipsoid (3) which pass through \(A\) is infinite (\(tl(A)=tp(A)=\infty\)) for exterior points of the ellipsoid (3), \(tl(A)=tp(A)=0\) for interior points, and \(tl(A)=\infty\) and \(tp(A)=1\) for the points of the ellipsoid (3) itself.
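As a numeric check of these formulas (our sketch, not part of the original text): the computed points lie on the ellipse (1), and \(A\) lies on the tangent line at each of them, i.e. \(\frac{x_{1}X}{a^{2}}+\frac{y_{1}Y}{b^{2}}=1\), and likewise for \(C\).

```python
a, b, X, Y = 2.0, 1.0, 3.0, 2.0          # A outside: (X/a)^2 + (Y/b)^2 > 1
s = (X/a)**2 + (Y/b)**2
r = (s - 1.0)**0.5
for sgn in (-1.0, +1.0):                  # sgn = -1 gives B, sgn = +1 gives C
    x = a * (X/a + sgn * (Y/b) * r) / s
    y = b * (Y/b - sgn * (X/a) * r) / s
    print(x**2/a**2 + y**2/b**2,          # 1.0: the point is on the ellipse (1)
          x*X/a**2 + y*Y/b**2)            # 1.0: A is on the tangent line there
```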
2307.07502
Inflationary cross-correlations of a non-minimal spectator and their soft limits
Light spectator fields may not be dynamically relevant for the inflationary phase of the early universe, but they can still induce interesting imprints on cosmological observables. In this paper, we compute the cross-correlations of the inflationary perturbations, both scalar and tensor, with the fluctuations of a non-minimally interacting spectator field using the in-in formalism and investigate the consistency relations associated with such cross-correlations. In particular, the scalar consistency relation is derived semi-classically by generalizing the consistency relation obtained earlier for cosmic magnetic fields. Notably, we find that the direct coupling between the inflaton and the spectator solely determines the local non-linearity parameter associated with the scalar cross-correlation during slow-roll inflation, regardless of the specific form of the Lagrangian for the spectator field. Further, we calculate the tensor correlation with spectator fluctuations, explore the associated soft limits, and demonstrate the violation of the conventional tensor consistency relation with a non-minimal derivative coupling. Our analysis stresses that the violation of tensor consistency relations does not necessarily imply the superhorizon evolution of tensor modes. Instead, such violations can arise due to the non-minimal derivative coupling of the spectator field to gravity. Finally, we discuss the wider implications of our results in the context of cosmological soft theorems.
P. Jishnu Sai, Rajeev Kumar Jain
2023-07-14T17:48:52Z
http://arxiv.org/abs/2307.07502v2
# Inflationary cross-correlations of a non-minimal spectator and their soft limits ###### Abstract Light spectator fields may not be dynamically relevant for the inflationary phase of the early universe, but they can still induce interesting imprints on cosmological observables. In this paper, we compute the cross-correlations of the inflationary perturbations, both scalar and tensor, with the fluctuations of a non-minimally interacting spectator field using the in-in formalism and investigate the consistency relations associated with such cross-correlations. In particular, the scalar consistency relation is derived semi-classically by generalizing the consistency relation obtained earlier for cosmic magnetic fields. Notably, we find that the direct coupling between the inflaton and the spectator solely determines the local non-linearity parameter associated with the scalar cross-correlation during slow-roll inflation, regardless of the specific form of the Lagrangian for the spectator field. Further, we calculate the tensor correlation with spectator fluctuations, explore the associated soft limits, and demonstrate the violation of the conventional tensor consistency relation with a non-minimal derivative coupling. Our analysis stresses that the violation of tensor consistency relations does not necessarily imply the superhorizon evolution of tensor modes. Instead, such violations can arise due to the non-minimal derivative coupling of the spectator field to gravity. Finally, we discuss the wider implications of our results in the context of cosmological soft theorems. ## 1 Introduction The inflationary epoch in the very early universe provides a natural framework for understanding the large scale homogeneity and isotropy of our observed universe and the origin of primordial density perturbations which induce the temperature anisotropies in the cosmic microwave background (CMB) and later give rise to the formation of large scale structures in the universe [1, 2, 3, 4, 5, 6, 7]. Inflationary cosmology also presents itself as an interesting avenue to probe the primaeval interactions of quantum fields that may be dynamical during inflation but need not necessarily drive inflation. The quantum fluctuations of such fields are often assumed to be Gaussian in nature and, thus, completely described by the two-point correlation function or the power spectrum [8, 9, 10]. However, interactions among different fields or with gravity are only imprinted in the higher-order correlation functions and therefore, primordial non-Gaussianities (NG) are usually considered a novel measure of quantum interactions during inflation [11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21]. If the energy scale of inflation is very high, precise measurements of NG can provide interesting insights into quantum interactions at energy scales which are far beyond the reach of any laboratory experiments in the near future [22, 23, 24]. In order to probe the underlying physics of inflation and to gain further insights into the nature of primordial field interactions, the study of higher-order NG correlators has received enormous attention over the years within the community, leading to the development of numerous theorems and identities associated with these correlators [25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41]. 
Among these, cosmological soft theorems hold particular significance; they usually indicate a relation between an \((n+1)\)- and an \(n\)-point correlation function in the limit in which one of the modes is _soft_, i.e. its momentum is very small as compared to the others [42, 43, 44, 45]. These soft theorems are often related to a non-linearly realised symmetry of the action of cosmological perturbations, which might be spontaneously broken by the state of the underlying theory. These symmetries are closely connected to cosmological adiabatic modes, which are crucial for understanding the statistics of primordial fluctuations in the early universe [46, 47, 48, 49, 50]. An adiabatic mode is a cosmological perturbation that, on superhorizon scales, appears locally identical to a gauge mode and can be absorbed through a coordinate transformation [45, 51]. Their existence is significant for various reasons, including constraining the number of degrees of freedom present during inflation and their utility in deriving cosmological soft theorems. One of the most well-known cosmological soft theorems in single field inflation is the Maldacena consistency relation (CR) [25], which relates the bispectrum of the comoving curvature perturbation \(\zeta\) to the power spectrum in the squeezed limit as

\[\lim_{k_{1}\to 0}\frac{1}{P_{\zeta}(k_{1})}\langle\zeta_{\mathbf{k}_{1}}\zeta_{\mathbf{k}_{2}}\zeta_{\mathbf{k}_{3}}\rangle=-(2\pi)^{3}\delta^{(3)}(\mathbf{k}_{1}+\mathbf{k}_{2}+\mathbf{k}_{3})\;\left(\frac{\partial\ln[k_{2}^{3}P_{\zeta}]}{\partial\ln k_{2}}\right)P_{\zeta}(k_{2}), \tag{1}\]

where \(P_{\zeta}\) is the power spectrum. This CR is derived using the background wave approach, in which the long-wavelength \(\zeta\), which is conserved outside the horizon, can be absorbed by an appropriate redefinition of coordinates. Similarly, one can also calculate the soft theorems associated with the bispectrum \(\langle\gamma_{\mathbf{k}_{1}}\gamma_{\mathbf{k}_{2}}\gamma_{\mathbf{k}_{3}}\rangle\) of tensor perturbations \(\gamma\), as well as the cross-correlations between the curvature and tensor perturbations such as \(\langle\gamma_{\mathbf{k}_{1}}\zeta_{\mathbf{k}_{2}}\zeta_{\mathbf{k}_{3}}\rangle\) and \(\langle\zeta_{\mathbf{k}_{1}}\gamma_{\mathbf{k}_{2}}\gamma_{\mathbf{k}_{3}}\rangle\) [52, 53, 54, 55]. In addition to the semi-classical Maldacena formalism, cosmological soft theorems can be derived in different ways, such as using Ward-Takahashi identities [48, 56], Slavnov-Taylor identities [38, 39], the operator-product expansion [57], and the wave functional technique [58].
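For a pure power-law spectrum, the logarithmic derivative on the right-hand side of eq. (1) reduces to the familiar spectral tilt \((n_{s}-1)\). The following one-line symbolic check (a sketch using sympy, with an illustrative amplitude \(A\)) makes this explicit:

```python
import sympy as sp

k, A, ns = sp.symbols('k A n_s', positive=True)
P = A * k**(ns - 4)                                    # power law: k^3 P ~ k^{n_s - 1}
tilt = sp.simplify(k * sp.diff(sp.log(k**3 * P), k))   # d ln(k^3 P) / d ln k
print(tilt)   # n_s - 1, so the squeezed bispectrum is -(n_s - 1) P(k1) P(k2)
```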
In effective field theories, it is natural to expect the presence of additional degrees of freedom during inflation besides the inflaton [24, 59]. If their impact on the overall background dynamics remains insignificant, they are commonly referred to as the _light degrees of freedom3_ or the spectator fields [60, 61, 62]. While the background expansion is driven by the energy density of the inflaton, the quantum fluctuations of the spectator during inflation leave remarkable imprints on the spectra of primordial perturbations. For this reason, imprints of light spectator fields on cosmological observables, both during and after inflation, have been discussed extensively in the literature, as they can provide new insights into the physics of the early universe. Some of the prominent examples of spectator fields are the curvaton, axions, and primordial gauge fields.

For instance, the curvaton is a light scalar field during inflation which generates curvature perturbations at late times, after the inflaton field has decayed. Moreover, the initial isocurvature perturbations of the curvaton are converted to the adiabatic curvature perturbation after inflation, when the curvaton density becomes a significant fraction of the total energy density [63, 64, 65]. Besides being consistent with the power spectrum constraints from the CMB, the curvaton also induces a large amount of NG, which makes it distinguishable from conventional single-field inflation [66, 67, 68, 69, 70, 71]. A broadly similar conclusion applies to most spectator scenarios: their NG signal is very different from that of single-field models; however, it crucially depends on the underlying dynamics of these models. To study the bispectra of primordial perturbations in these models, it is important to study the cross-correlations between the fluctuations of such light fields and the inflationary scalar and tensor perturbations. Similar cross-correlations have already been explored in the literature for the case of a primordial gauge field with the inflationary curvature and tensor perturbations [72, 73, 74, 75, 76, 77, 78, 79, 80]. Importantly, the squeezed limit of such correlators gives rise to a new set of CRs. Interestingly, ref. [74] proposed a new, simple semi-classical derivation of the CR for the cross-correlation between the scalar metric perturbation and two powers of the magnetic field, for the kinetic coupling scenario that included a direct coupling between the inflaton and the gauge field. One of the key inputs for this derivation was the inherent nature of the spectator field, specifically the conformal invariance of the gauge field in the absence of direct coupling, which played a crucial role in deriving such a CR4.

Footnote 4: We are particularly indebted to Martin S. Sloth for several valuable discussions and private communications on these topics.

To explore the significance of the nature of the spectator field in deriving such CRs, we consider a directly coupled light scalar spectator field \(\sigma\), with the following Lagrangian

\[S_{\sigma}=\int d^{4}x\sqrt{-g}\,\lambda(\phi)\mathcal{L}_{\sigma}\, \tag{2}\]

where \(\lambda(\phi)\) is the direct coupling between the inflaton \(\phi\) and \(\sigma\). In lower-dimensional effective theories arising from UV complete constructions, it is both expected and natural to encounter such direct couplings to the dilaton field or to the moduli of the internal dimensions. Therefore, this direct coupling is also referred to as a dilatonic coupling [81]. Such direct couplings have also been studied in models of inflationary magnetogenesis, wherein \(\mathcal{L}_{\sigma}\) is identified with the gauge field Lagrangian [82, 83, 84, 85, 86, 87, 88, 89, 90]. In general, we can choose \(\mathcal{L}_{\sigma}\) in many different ways, such as with minimal, non-minimal, conformal, non-conformal, or even derivative couplings. This motivates us to set up a model of a non-minimally interacting spectator which also contains a non-trivial derivative coupling with gravity. We find that this derivative coupling plays a very crucial role in understanding the scalar and tensor CRs and the soft limits of their corresponding bispectra.
Using the in-in formalism, we compute the full bispectrum of the scalar correlator and discuss in detail its squeezed limit. For the scalar cross correlation, the associated bispectrum in the squeezed limit is completely determined by the overall dilatonic coupling of the spectator field and does not depend on the explicit structure of its Lagrangian. Further, we derive a semi-classical CR associated with this scalar correlator and find that it agrees with the soft limit of the full bispectrum. We discuss various conditions under which the scalar CR can be violated. Besides the scalar bispectrum, we also compute the cross-correlation of the tensor mode with the spectator field and investigate its soft limit. Interestingly, the conventional tensor CR does not agree with the soft limit of the tensor cross-correlation. Usually, the violation of these CRs is associated with the non-adiabatic nature of the scalar and tensor fluctuations on superhorizon scales. However, in our case, we find that the violation arises due to the non-minimal derivative coupling of the spectator field. The violation of these fundamental CRs may also indicate a violation of the equivalence principle (EP) on cosmological scales. Thus, we use our setup to highlight the boundaries of the universality of the tensor CRs. This paper is organised as follows: In the following section, we discuss our scenario for the spectator field and obtain the solutions for its Fourier modes. In section 3, we compute the cross-correlation of the primordial curvature perturbation with the spectator perturbations and derive the associated soft theorem. In section 4, we present a similar calculation for the tensor correlation, its soft limit and the corresponding soft theorem. Finally, in section 5, we conclude our results and discuss the implications of scalar and tensor CRs. In the two appendices A and B, we present the calculations of the energy-momentum exchange relation for the spectator and the evaluation of various integrals which appear in the scalar bispectrum, respectively. Throughout this paper, we work in natural units with \(\hbar=c=1\), and the Planck mass \(M_{\rm Pl}^{2}=1/8\pi G\) is set to unity. Our metric convention is \((-,+,+,+)\). ## 2 Dynamics of a non-minimally interacting spectator field It is crucial to employ certain simplified models to examine the subtleties of the cross-correlations between inflationary perturbations and spectator fields. In this section, we shall introduce a toy model for a non-minimally interacting spectator field \(\sigma\). This particular set-up incorporates the non-minimal and derivative coupling of the spectator to gravity, through the Ricci scalar and the Ricci tensor, and a direct coupling with the inflaton field \(\phi\). The action for \(\sigma\) for such a scenario can be written as [91] \[S_{\sigma}=-\frac{1}{2}\int d^{4}x\sqrt{-g}\;\lambda(\phi)\left[\Big{(}g^{\mu \nu}+\alpha R^{\mu\nu}\Big{)}\partial_{\mu}\sigma\partial_{\nu}\sigma+2V( \sigma)+\frac{\xi}{6}R\sigma^{2}\right], \tag{3}\] where \(\phi\) is the inflaton field and \(\lambda(\phi)\) characterise a direct coupling between the inflaton and spectator. In the above action, \(\xi\) and \(\alpha\) are constants that indicate the strength of non-minimal and derivative coupling of \(\sigma\) to gravity, respectively and \(V(\sigma)\) is the potential of the spectator. Moreover, we limit ourselves to work with the quadratic potential, i.e., \(V(\sigma)=\frac{1}{2}m^{2}\sigma^{2}\). 
Here, \(R\) and \(R_{\mu\nu}\) are the Ricci scalar and the Ricci tensor, which are straightforward to calculate from their respective definitions for the given spacetime metric. The homogeneous and isotropic background during inflation is described by the spatially flat Friedmann-Lemaitre-Robertson-Walker (FLRW) metric, which is given as

\[ds^{2}=-dt^{2}+a^{2}(t)\,d{\bf x}^{2}=a^{2}(\tau)\left(-d\tau^{2}+d{\bf x}^{2}\right)\, \tag{4}\]

where \(\tau\) is the conformal time, defined by \(d\tau=dt/a\), and \(a(\tau)\) is the scale factor. In the FLRW background, the Ricci scalar \(R\) and the non-zero components of \(R_{\mu\nu}\) are

\[R=6\left(\frac{a^{\prime\prime}}{a^{3}}\right),\ \ \ R_{00}=-3\left[\frac{a^{\prime\prime}}{a}-\left(\frac{a^{\prime}}{a}\right)^{2}\right],\ \ R_{ij}=\delta_{ij}\left[\frac{a^{\prime\prime}}{a}+\left(\frac{a^{\prime}}{a}\right)^{2}\right]. \tag{5}\]

Here, an overprime denotes a derivative with respect to \(\tau\). The scale factor \(a(\tau)\) is determined by the typical background dynamics associated with slow-roll inflation. As mentioned in the introduction, we assume that the scalar spectator field \(\sigma\) is light5 and does not significantly affect the background dynamics. Therefore, we assume that the classical background value of the spectator field, \(\sigma(t)\), is approximately zero. However, during inflation, quantum fluctuations can occur around this background value, denoted as \(\sigma({\bf x},\tau)\). Ideally, one would denote them as \(\delta\sigma\), but for notational convenience, we denote the fluctuations as \(\sigma\), since the background is zero. These fluctuations will evolve in the time-dependent background of inflation. Therefore, it is natural to expect a cross-correlation between these fluctuations and the inflationary perturbations. To explore these cross-correlations in our specific model, we can use the standard quantization procedure. This involves the mode expansion of \(\sigma({\bf x},\tau)\) in the following manner,

\[\sigma({\bf x},\tau)=\int\frac{d^{3}{\bf k}}{(2\pi)^{3}}\left[a_{\bf k}\sigma_{k}(\tau)+a_{-{\bf k}}^{\dagger}\sigma_{k}^{*}(\tau)\right]e^{i{\bf k}\cdot{\bf x}}\, \tag{6}\]

where \(a_{\bf k}\) and \(a_{\bf k}^{\dagger}\) are the annihilation and creation operators, respectively, and \(\sigma_{k}(\tau)\) represents the mode function associated with momentum \({\bf k}\) at time \(\tau\). The mode function \(\sigma_{k}(\tau)\) obeys the classical equation of motion, which can be obtained by varying the action (3) with respect to \(\sigma\). We define the two-point correlation function of \(\sigma\) in Fourier space as

\[\langle\sigma({\bf k},\tau)\sigma({\bf k}^{\prime},\tau)\rangle=(2\pi)^{3}\delta^{(3)}({\bf k}+{\bf k}^{\prime})P_{\sigma}(k,\tau) \tag{7}\]

where the power spectrum of \(\sigma\) is simply given by \(P_{\sigma}(k,\tau)=|\sigma_{k}(\tau)|^{2}\). To enhance the clarity of our analysis, let us first set \(\alpha=0\) in eq. (3) and vary the action, which leads to the equation of motion in Fourier space,

\[\sigma_{k}^{\prime\prime}+\left(2\frac{a^{\prime}}{a}+\frac{\lambda^{\prime}}{\lambda}\right)\sigma_{k}^{\prime}+\left(k^{2}+a^{2}m^{2}+\xi\frac{a^{\prime\prime}}{a}\right)\sigma_{k}=0. \tag{8}\]
\tag{8}\] One can canonically normalize the field \(\sigma\) by defining a variable \(\tilde{\sigma}_{k}=aS\sigma_{k}\) with \(S=\sqrt{\lambda}\) and recast the equation of motion in terms of \(\tilde{\sigma}\) as that of a harmonic oscillator with a time-dependent mass term, \[\tilde{\sigma}_{k}^{\prime\prime}+\left(k^{2}+a^{2}m^{2}-(1-\xi) \frac{a^{\prime\prime}}{a}-\frac{S^{\prime\prime}}{S}-2\frac{a^{\prime}S^{ \prime}}{aS}\right)\tilde{\sigma}_{k}=0. \tag{9}\] It can be noted from the above equation that by setting \(\xi=1\), \(m=0\) and for a constant \(\lambda\), the equation for \(\tilde{\sigma}\) becomes identical to that of a massless scalar field in Minkowski spacetime. This particular choice of parameters, known as the conformal coupling, causes \(\tilde{\sigma}\) to behave like a massless scalar field propagating in flat spacetime. The direct coupling \(\lambda(\phi)\) behaves as a dilatonic coupling, which typically appears in effective field theories or scalar-tensor theories. To proceed further, we have to specify the functional form of \(\lambda(\phi)\). We choose to work in the comoving gauge (\(\delta\phi=0\)), which allows us to express \(\lambda\) as a function of time only. Such a dilatonic coupling has also been studied in scenarios of inflationary magnetogenesis, as it breaks the conformal invariance of gauge fields, which is a necessary condition to excite them during inflation. With these motivations, we can parameterise this direct coupling as a power law in the scale factor6, i.e., \(\lambda\propto a^{2n}\). Then the effective mass term in eq. (9) becomes, Footnote 6: In the context of inflationary magnetogenesis, a coupling function of the form \(\lambda\propto a^{2n}\) with \(n>0\) gives rise to the so-called strong coupling problem and there have been several works to address and resolve this problem [86, 87, 92, 93]. In this work, we do not restrict ourselves to any particular choice of \(n\), and thus, the strong coupling issue may simply be circumvented by avoiding the regimes with \(n>0\), as was done, for instance, for gauge fields in Ref. [86]. Moreover, this problem would not arise in the first place if \(\sigma\) is a scalar field associated with the dark sector which is not necessarily coupled with the standard model. \[a^{2}m^{2}-(1-\xi)\frac{a^{\prime\prime}}{a}-\frac{S^{\prime\prime}}{S}-2 \frac{a^{\prime}S^{\prime}}{aS}=-\frac{1}{\tau^{2}}\left[2(1-\xi)+n(n+1)+2n- \left(\frac{m^{2}}{H^{2}}\right)\right]\, \tag{10}\] where we have used \(a(\tau)\simeq-1/(H\tau)\) during inflation. With this result, we find that the solution of the differential equation (9) can be written in terms of Hankel functions. Assuming the standard Bunch-Davies initial conditions for the Fourier modes, we obtain the following solution for \(\tilde{\sigma}\), \[\tilde{\sigma}(k,\tau)=\frac{\sqrt{\pi}}{2}\ e^{i(\nu+1/2)\pi/2} \sqrt{-\tau}H_{\nu}^{(1)}(-k\tau)\, \tag{11}\] where \(H_{\nu}^{(1)}(x)\) denotes the Hankel function of the first kind of order \(\nu=\sqrt{(n+3/2)^{2}-2\xi-(m^{2}/H^{2})}\). With this, the mode function solution is \[\sigma(k,\tau)=\frac{\sqrt{\pi}}{2}\ e^{i(\nu+1/2)\pi/2}\frac{ \sqrt{-\tau}}{a\sqrt{\lambda}}H_{\nu}^{(1)}(-k\tau). \tag{12}\] The time dependence of the mode function comes from the terms outside the Hankel function, which scale as \(\sqrt{-\tau}/a\sqrt{\lambda}\sim\tau^{n+3/2}\), whereas the Hankel function in the superhorizon limit (\(|k\tau|\ll 1\)) scales as \(\sim\tau^{-\nu}\).
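As a quick numerical cross-check of this solution (an illustrative sketch of ours, not part of the original derivation), one can verify that the mode function (11) solves eq. (9) with the effective mass term (10), and read off the superhorizon scaling exponent \(n+3/2-\nu\) of \(|\sigma_{k}|\); the parameter values below are arbitrary.

```python
import numpy as np
from scipy.special import hankel1

n, xi, m2, k = 1.0, 0.3, 0.2, 1.0        # m2 stands for m^2/H^2; arbitrary test values
nu = np.sqrt((n + 1.5)**2 - 2*xi - m2)

def sig_t(tau):
    # canonically normalized mode of eq. (11), up to an irrelevant constant phase
    return np.sqrt(np.pi)/2 * np.sqrt(-tau) * hankel1(nu, -k*tau)

tau, h = -2.0, 1e-4                       # residual of eq. (9) with the mass term (10)
d2 = (sig_t(tau + h) - 2*sig_t(tau) + sig_t(tau - h)) / h**2
mass2 = -(2*(1 - xi) + n*(n + 1) + 2*n - m2) / tau**2
print(abs(d2 + (k**2 + mass2)*sig_t(tau)))   # ~0 up to finite-difference error

# superhorizon amplitude |sigma_k| ~ (-tau)^(n + 3/2 - nu): frozen iff nu = n + 3/2
amp = lambda t: (-t)**(n + 1.0) * abs(np.sqrt(-t) * hankel1(nu, -k*t))  # 1/(a sqrt(lambda)) ~ (-t)^(n+1)
t1, t2 = -1e-3, -1e-4
print(np.log(amp(t2)/amp(t1)) / np.log(t2/t1), n + 1.5 - nu)            # exponents agree
```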
It is important to note that the other terms do not cancel the superhorizon scaling of the Hankel function in general. Thus the Fourier modes will evolve on superhorizon scales and will only freeze for particular choices of parameters, i.e., \(\xi=m=0\) or \(\xi=-m^{2}/2H^{2}\). Now, let us consider the action (3) with a non-zero \(\alpha\) and vary it to derive the equation of motion, which leads to, \[\sigma_{k}^{\prime\prime}+\left(2\frac{a^{\prime}}{a}+\frac{\lambda^{\prime}} {\lambda}\right)\sigma_{k}^{\prime}+\left(k^{2}+\frac{a^{2}m^{2}}{1+3\alpha H^ {2}}+\frac{\xi}{1+3\alpha H^{2}}\frac{a^{\prime\prime}}{a}\right)\sigma_{k}=0. \tag{13}\] To arrive at the above expression, we have used the scale factor \(a\simeq-1/(H\tau)\) in eq. (5) and obtained the components of the background Ricci tensor as \(R_{00}=-3a^{2}H^{2}\) and \(R_{ij}=3a^{2}H^{2}\delta_{ij}\). By comparing this equation with eq. (8), we observe that they are identical under the redefinition of the following parameters, \[m^{2}\rightarrow\tilde{m}^{2}=\frac{m^{2}}{1+3\alpha H^{2}}, \quad\text{and}\quad\xi\rightarrow\tilde{\xi}=\frac{\xi}{1+3\alpha H^{2}}. \tag{14}\] This indicates that we can obtain the solution of eq. (13) in the same manner as earlier. But in this case, the canonically normalized field will be \(\tilde{\sigma}_{k}=\sigma_{k}a\sqrt{(1+3\alpha H^{2})\lambda}\) and the complete solution for the mode function would be, \[\sigma(k,\tau)=\frac{1}{\sqrt{1+3\alpha H^{2}}}\frac{\sqrt{\pi}}{ 2}\ e^{i(\nu+1/2)\pi/2}\frac{\sqrt{-\tau}}{a\sqrt{\lambda}}H_{\nu}^{(1)}(-k \tau)\, \tag{15}\] with \(\nu=\sqrt{(n+3/2)^{2}-2\tilde{\xi}-(\tilde{m}^{2}/H^{2})}\). Also, in this case, the modes will evolve on superhorizon scales but will remain frozen for \(\tilde{\xi}=\tilde{m}=0\) or \(\tilde{\xi}=-\tilde{m}^{2}/2H^{2}\), as earlier, even when the derivative coupling is present and non-vanishing, i.e., \(\alpha\neq 0\).

## 3 Cross-correlation with curvature perturbation and the consistency relation

In this section, we shall discuss our calculations of the three-point cross-correlation of the comoving curvature perturbation with the fluctuations of the spectator field. To do so, we use the in-in formalism, which is a standard framework for studying equal-time quantum correlations in the early universe. In this formalism, the interaction Hamiltonian plays a crucial role in capturing the effects of interactions between different fields. For our case, we focus on the interaction Hamiltonian \(H_{\zeta\sigma\sigma}\), which describes the coupling between the curvature perturbation \(\zeta\) and the spectator field \(\sigma\). Thus, the cubic order interaction Hamiltonian can be constructed as follows, \[H_{\zeta\sigma\sigma}=-\frac{1}{2}\int d^{3}x\;\sqrt{-g}\;T^{\mu\nu}\delta g_{ \mu\nu}\, \tag{16}\] where \(T^{\mu\nu}\) represents the stress-energy tensor of the spectator \(\sigma\). For a systematic analysis of the dynamics of metric perturbations at the action level, we use the standard Arnowitt-Deser-Misner (ADM) parametrisation of the metric as, \[ds^{2}=-N^{2}dt^{2}+h_{ij}(dx^{i}+N^{i}dt)(dx^{j}+N^{j}dt)\, \tag{17}\] where \(N({\bf x},t)\) and \(N^{i}({\bf x},t)\) are called the lapse function and the shift vector, respectively. The dynamical degrees of freedom are contained in the spatial part of the metric \(h_{ij}\), whereas the lapse and shift are Lagrange multipliers which are determined by the constraint equations.
In this work, we mostly work in the comoving gauge, where \(\delta\phi=0\) and the spatial metric is parameterised as \(h_{ij}=a^{2}e^{2\zeta}[e^{\gamma}]_{ij}\). In this gauge, the first-order constraint equations give, \[N=1+\frac{\dot{\zeta}}{H},\ \ N_{i}=\partial_{i}\left(-\frac{\zeta}{H}+ \epsilon a^{2}\partial^{-2}\dot{\zeta}\right)\, \tag{18}\] where the overdot denotes the time derivative with respect to \(t\), \(\epsilon\) is the first slow-roll parameter, and \(\partial^{-2}\) denotes the inverse Laplacian operator. Then the metric perturbations at first order are, \[\delta g_{00}=-2\frac{\dot{\zeta}}{H},\;\delta g_{0i}=\partial_{i}\left(- \frac{\zeta}{H}+\epsilon a^{2}\partial^{-2}\dot{\zeta}\right),\;\mbox{and}\;\; \delta g_{ij}=2a^{2}\zeta\delta_{ij}. \tag{19}\] Using eq. (19) in eq. (16), and performing some straightforward integrations by parts, we obtain the following interaction Hamiltonian \[H_{\zeta\sigma\sigma}=-\int d^{3}x\;a^{3}\frac{\zeta}{H}\left(\nabla_{\mu}T^{ \mu 0}\right)+{\cal O}(\epsilon). \tag{20}\] This indicates that if the four-divergence of the energy-momentum tensor of the spectator field is zero, the cubic order interaction Hamiltonian is slow-roll suppressed. But, in our case, the spectator is directly coupled to the inflaton. As a result, there will be energy-momentum exchange between the inflaton fluctuations and the spectator field. So the divergence of the energy-momentum tensor of the spectator will not be zero, and it can be calculated as, \[\nabla_{\mu}T^{\mu\nu}=-\frac{1}{2}\nabla^{\nu}\lambda\left((g^{\rho\kappa}+ \alpha R^{\rho\kappa})\,\partial_{\rho}\sigma\partial_{\kappa}\sigma+2V( \sigma)+\frac{\xi}{6}R\sigma^{2}\right). \tag{21}\] This expression can be trivially obtained by invoking the diffeomorphism invariance of the action (3), and it is outlined in detail in appendix A. To proceed further, we first set \(\alpha=0\) to ensure clarity and introduce it later, as we did in the previous section. Using the above equation in eq. (20), we rewrite the leading-order interaction Hamiltonian in terms of conformal time as follows, \[H_{\zeta\sigma\sigma}=-\frac{1}{2}\int d^{3}xa^{2}\lambda^{\prime}(\tau)\tau \zeta\left(\sigma^{\prime 2}-(\partial\sigma)^{2}-\left(a^{2}m^{2}+\xi\frac{a^{ \prime\prime}}{a}\right)\sigma^{2}\right). \tag{22}\] At this level, we have dropped all the terms that are proportional to slow-roll parameters. As in (6), the curvature perturbation \(\zeta\) is also mode-expanded, and the corresponding mode function is obtained in the standard manner, with the solution, \[\zeta_{k}(\tau)=\frac{1}{\sqrt{2\epsilon}}\frac{H}{\sqrt{2k^{3}}} \left(1+ik\tau\right)e^{-ik\tau}.
\tag{23}\] Using this interaction Hamiltonian in the in-in master formula for a three-point function \(\mathcal{O}\), \[\left\langle\mathcal{O}(\tau)\right\rangle=-i\int^{\tau}d\tau^{ \prime}\left\langle\left[\mathcal{O}(\tau),H_{\zeta\sigma\sigma}(\tau^{\prime })\right]\right\rangle \tag{24}\] we calculate the three-point correlator \(\left\langle\zeta\sigma\sigma\right\rangle\) and obtain, \[\left\langle\zeta(\mathbf{k}_{1},\tau_{I})\sigma(\mathbf{k}_{2},\tau_{I})\sigma(\mathbf{k}_{3},\tau_{I})\right\rangle=(2\pi)^{3}\delta^{(3)}( \mathbf{k}_{1}+\mathbf{k}_{2}+\mathbf{k}_{3})\left[-\left(\mathcal{I}_{1}+ \mathbf{k}_{2}\cdot\mathbf{k}_{3}\,\mathcal{I}_{2}\right)+\left(\frac{m^{2}} {H^{2}}+2\xi\right)\mathcal{I}_{3}\right], \tag{25}\] with, \[\mathcal{I}_{1} = 2\,\mathrm{Im}\left[\zeta_{k_{1}}(\tau_{I})\sigma_{k_{2}}(\tau_ {I})\sigma_{k_{3}}(\tau_{I})\int d\tau\tau a^{2}\lambda^{\prime}(\tau)\zeta_{ k_{1}}^{*}(\tau){\sigma^{\prime}}_{k_{2}}^{*}(\tau){\sigma^{\prime}}_{k_{3}}^{*}( \tau)\right]\, \tag{26}\] \[\mathcal{I}_{2} = 2\,\mathrm{Im}\left[\zeta_{k_{1}}(\tau_{I})\sigma_{k_{2}}(\tau_ {I})\sigma_{k_{3}}(\tau_{I})\int d\tau\tau a^{2}\lambda^{\prime}(\tau)\zeta_{ k_{1}}^{*}(\tau)\sigma_{k_{2}}^{*}(\tau)\sigma_{k_{3}}^{*}(\tau)\right]\,\] (27) \[\mathcal{I}_{3} = 2\,\mathrm{Im}\left[\zeta_{k_{1}}(\tau_{I})\sigma_{k_{2}}(\tau_ {I})\sigma_{k_{3}}(\tau_{I})\int\frac{d\tau}{\tau}a^{2}\lambda^{\prime}(\tau) \zeta_{k_{1}}^{*}(\tau)\sigma_{k_{2}}^{*}(\tau)\sigma_{k_{3}}^{*}(\tau)\right]. \tag{28}\] The result in eq. (25) is our complete result for the correlator, and the integrals can be evaluated for the most general case. Moreover, to study the CR associated with \(\left\langle\zeta\sigma\sigma\right\rangle\), we have to evaluate the integrals (26), (27) and (28) using the explicit form of the coupling function, i.e., \(\lambda(\tau)\propto a^{2n}\propto\tau^{-2n}\), and the mode functions. However, in the squeezed limit, i.e., \(\mathbf{k}_{1}\to 0\) and \(\mathbf{k}_{2}\simeq-\mathbf{k}_{3}\), we can show that \[\mathcal{I}_{1}=-(2n)\;|\zeta_{k_{1}}(\tau_{I})|^{2}|\sigma_{k_{ 2}}(\tau_{I})|^{2}+k_{2}^{2}\mathcal{I}_{2}+\left(2\xi+\frac{m^{2}}{H^{2}} \right)\mathcal{I}_{3}. \tag{29}\] The detailed derivation of this equation is given in appendix B. We have arrived at it by using the equation of motion of the mode function and the normalized Wronskian. By using eq. (29) in the squeezed limit of eq. (25), we get the squeezed limit correlator as \[\lim_{\mathbf{k}_{1}\to 0}\left\langle\zeta(\mathbf{k}_{1},\tau_{I}) \sigma(\mathbf{k}_{2},\tau_{I})\sigma(\mathbf{k}_{3},\tau_{I})\right\rangle=2 n\;(2\pi)^{3}\delta^{(3)}(\mathbf{k}_{1}+\mathbf{k}_{2}+\mathbf{k}_{3})P_{ \zeta}(k_{1})P_{\sigma}(k_{2}). \tag{30}\] This relation is precisely the form of the CR for this correlator, i.e., the three-point cross-correlation in the squeezed limit is proportional to the product of two power spectra. The strength of the local non-linearity parameter associated with this correlator is simply \(2n\), which can be expressed in terms of the direct coupling as follows, \[\frac{d\ln\lambda}{d\ln a}=\frac{\dot{\lambda}}{H\lambda}=2n. \tag{31}\] It is remarkable that the non-linearity parameter is independent of the other parameters of our model, i.e., for any \(\xi\) and \(m\), we get the same CR as in eq. (30). Therefore, our CR holds true regardless of whether the spectator field is conformal or non-conformal, massive or massless.
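To make the role of the normalized Wronskian explicit, here is a small numerical sketch of ours (purely illustrative) checking that the mode function (12) obeys \(\mathrm{Im}[\sigma_{k}\sigma_{k}^{\prime *}]=1/(2a^{2}\lambda)\), the normalization used in deriving eq. (29); the parameter values are arbitrary and we set \(H=1\), \(\lambda=(-\tau)^{-2n}\).

```python
import numpy as np
from scipy.special import hankel1

n, xi, m2, k = 1.0, 0.3, 0.2, 1.0                 # H = 1; arbitrary test values
nu = np.sqrt((n + 1.5)**2 - 2*xi - m2)
a = lambda t: -1.0/t                               # a(tau) = -1/(H tau)
lam = lambda t: (-t)**(-2*n)                       # lambda ~ a^{2n}; constant factor cancels

def sigma(t):
    # mode function of eq. (12); the constant phase drops out of Im[sigma sigma'*]
    return np.sqrt(np.pi)/2 * np.sqrt(-t)/(a(t)*np.sqrt(lam(t))) * hankel1(nu, -k*t)

t, h = -1.7, 1e-5
dsigma = (sigma(t + h) - sigma(t - h)) / (2*h)
print(np.imag(sigma(t)*np.conj(dsigma)))           # equals...
print(1/(2*a(t)**2*lam(t)))                        # ...the canonical value 1/(2 a^2 lambda)
```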
The independence of the non-linearity parameter highlights the general applicability of the new CR, suggesting its validity across different scenarios and field properties. To verify the generality of this CR within our toy model, we can now consider the case with non-minimal derivative coupling (\(\alpha\neq 0\)). It is trivial to compute the interaction Hamiltonian with the non-minimal derivative coupling using (20) and (21). Using this interaction Hamiltonian in the in-in master formula (24), we obtain the correlator as, \[\langle\zeta({\bf k}_{1},\tau_{I})\sigma({\bf k}_{2},\tau_{I}) \sigma({\bf k}_{3},\tau_{I})\rangle=(2\pi)^{3}\delta^{(3)}({\bf k}_{1}+{\bf k }_{2}+{\bf k}_{3})\left[-(1+3\alpha H^{2})\left({\cal I}_{1}+{\bf k}_{2}\cdot{ \bf k}_{3}{\cal I}_{2}\right)+\left(\frac{m^{2}}{H^{2}}+2\xi\right){\cal I}_{3 }\right]. \tag{32}\] Here the integrals \({\cal I}_{1}\), \({\cal I}_{2}\), and \({\cal I}_{3}\) refer to the same integrals defined in equations (26), (27), and (28). However, it is important to note that in this case, the mode function given in eq. (15) should be used. Now let's compute the squeezed limit of the above correlator, for which we use the squeezed limits of \({\cal I}_{1}\), \({\cal I}_{2}\), and \({\cal I}_{3}\). As discussed in appendix B, even for \(\alpha\neq 0\), in the squeezed limit the integral \({\cal I}_{1}\) can be expressed in terms of \({\cal I}_{2}\) and \({\cal I}_{3}\), similar to eq. (69). Interestingly, even for this scenario with \(\alpha\neq 0\), we obtain precisely the same CR as in eq. (30). Consequently, we have also derived the squeezed limit of the cross-correlation of the curvature perturbation with the non-minimally interacting spectator field (3) at leading order in the slow-roll parameters. The resulting expression is given by, \[\lim_{{\bf k}_{1}\to 0}\,\langle\zeta({\bf k}_{1},\tau_{I}) \sigma({\bf k}_{2},\tau_{I})\sigma({\bf k}_{3},\tau_{I})\rangle=\frac{\dot{ \lambda}}{H\lambda}\;(2\pi)^{3}\delta^{(3)}({\bf k}_{1}+{\bf k}_{2}+{\bf k}_{3 })P_{\zeta}(k_{1})P_{\sigma}(k_{2}). \tag{33}\] This result bears a striking resemblance to the squeezed limit correlator of the curvature perturbation with gauge fields found in [74, 75], which is not a mere coincidence7. In both cases, a light degree of freedom is directly coupled to the inflaton, and this direct coupling is realized as a power law in the scale factor. Footnote 7: One may notice a sign difference between the non-linearity parameter in eq. (33) and ref. [75]. We recently found that there is actually a typo in [74], which also propagates to the results of [75]. From the above consistency relation, we observe that the corresponding local non-linearity parameter is independent of the parameters \(m\), \(\xi\) and \(\alpha\), as in the earlier case. This is an interesting and non-trivial result. The factor \(\dot{\lambda}/(H\lambda)\) can be understood as the new scale introduced by the direct coupling \(\lambda\), measured in units of the Hubble parameter. Since we are naturally working in the regime \(\dot{\lambda}/(H\lambda)\gg\epsilon,\eta\), where \(\epsilon\) and \(\eta\) are the slow-roll parameters, one can heuristically estimate the strength of the leading order interaction Hamiltonian as \(H_{\zeta\sigma\sigma}/H_{\sigma\sigma}\propto\dot{\lambda}/(H\lambda)\cdot P _{\zeta}^{1/2}\), similar to [74]. Consequently, we expect the local non-linearity parameter to be of the order of \(\dot{\lambda}/(H\lambda)\).
Moreover, a careful look at the cubic order action (60) provides the insight that when \(\zeta\) becomes superhorizon and frozen, the action is identical to the quadratic action of \(\sigma\) but with a new direct coupling \(-\dot{\lambda}\zeta/H\). It means that we can determine \(\left\langle\zeta_{L}\langle\sigma\sigma\rangle_{\zeta_{L}}\right\rangle\) if we know the two-point correlator of \(\sigma\) for an arbitrary coupling function \(\lambda\). This observation allows us to derive the above CR8 semi-classically and to understand why the local non-linearity parameter is purely determined by the coupling function. To do so, let's work in the flat gauge. One can easily see that the lapse and shift are proportional to slow-roll parameters, as shown in eq. (2.24) of [25]. Then, the third order action can be written as, Footnote 8: In the literature, CRs and soft theorems are often used interchangeably. It is important to note that while CRs represent model-independent statements about cosmological observables, soft theorems are a subset of CRs arising from the non-linear realisation of symmetries within cosmological correlators. \[S^{(3)}=-\frac{1}{2}\int d^{4}x\sqrt{-g}\;\partial_{\phi}\lambda \;\delta\phi\left[\left(g^{\mu\nu}+\alpha R^{\mu\nu}\right)\partial_{\mu} \sigma\partial_{\nu}\sigma+2V(\sigma)+\frac{\xi}{6}R\sigma^{2}\right]+\text{ slow-roll suppressed terms}. \tag{34}\] This action is consistent with the result we obtained when we appropriately translated eq. (20) from the comoving gauge to the flat gauge. It is evident that the influence of the inflaton fluctuations comes solely through the coupling \(\lambda(\phi)\) when we drop the slow-roll suppressed terms. Consequently, it becomes apparent that the long-wavelength inflaton fluctuations can be absorbed into the coupling function, resulting in the action being transformed back to a second-order form but with a modified coupling. This allows us to study the effects of the long wavelength perturbation \(\delta\phi_{L}\) on the short wavelength fluctuations of \(\sigma\) by defining the effective coupling as \(\lambda_{B}=\lambda(\phi_{0}+\delta\phi_{L})=\lambda_{0}+\partial_{\phi} \lambda\delta\phi_{L}\). Using this expansion, the two-point correlator of the spectator field with the modified coupling can be written as, \[\left\langle\sigma\sigma\right\rangle_{B}=\left\langle\sigma \sigma\right\rangle_{0}+\frac{\partial\left\langle\sigma\sigma\right\rangle_{ B}}{\partial\delta\phi_{L}}\bigg{|}_{\delta\phi_{L}=0}\delta\phi_{L}+\cdots \tag{35}\] It is now straightforward to find the squeezed correlator of the form \(\left\langle\delta\phi_{L}\left\langle\sigma\sigma\right\rangle_{B}\right\rangle\), in a manner analogous to the derivation of the Maldacena CR. For this, one has to evaluate \(\left\langle\sigma\sigma\right\rangle\) for a given form of \(\lambda(\phi)\); in our case, the coupling function takes the form \(\lambda(\phi)=e^{2\phi/M}\), so \(\lambda_{B}\) is just \(\lambda_{0}\) rescaled by the constant factor \(1+\delta\phi_{L}\partial_{\phi}\lambda/\lambda\). Observing that eq.
(9) is insensitive to a constant rescaling of \(\lambda\), we can write the correlator in coordinate space as \[\left\langle\sigma({\bf x}_{1},\tau)\sigma({\bf x}_{2},\tau) \right\rangle_{B}=\left\langle\frac{1}{a^{2}\lambda_{B}}\tilde{\sigma}({\bf x }_{1},\tau)\tilde{\sigma}({\bf x}_{2},\tau)\right\rangle\approx\left\langle \sigma({\bf x}_{1},\tau)\sigma({\bf x}_{2},\tau)\right\rangle_{0}-\delta\phi_{L }\frac{\partial_{\phi}\lambda}{\lambda}\left\langle\sigma({\bf x}_{1},\tau) \sigma({\bf x}_{2},\tau)\right\rangle_{0}. \tag{36}\] This two-point function can further be correlated with \(\delta\phi\), which, in Fourier space, leads to, \[\lim_{{\bf k}_{1}\to 0}\left\langle\delta\phi({\bf k}_{1},\tau_{I}) \sigma({\bf k}_{2},\tau_{I})\sigma({\bf k}_{3},\tau_{I})\right\rangle=-\frac{ \partial_{\phi}\lambda}{\lambda}\;(2\pi)^{3}\delta^{(3)}({\bf k}_{1}+{\bf k}_{ 2}+{\bf k}_{3})P_{\delta\phi}(k_{1})P_{\sigma}(k_{2}). \tag{37}\] Note that the above result is obtained in the flat gauge. We can translate it into the comoving gauge by using \(\sqrt{2\epsilon}\,\zeta=\delta\phi\) and also \(\sqrt{2\epsilon}\,\partial_{\phi}\lambda=-\dot{\lambda}/H\), to find \[\lim_{{\bf k}_{1}\to 0}\left\langle\zeta({\bf k}_{1},\tau_{I}) \sigma({\bf k}_{2},\tau_{I})\sigma({\bf k}_{3},\tau_{I})\right\rangle=\frac{ \dot{\lambda}}{\lambda H}\;(2\pi)^{3}\delta^{(3)}({\bf k}_{1}+{\bf k}_{2}+{\bf k }_{3})P_{\zeta}(k_{1})P_{\sigma}(k_{2})\, \tag{38}\] where \(\dot{\lambda}/H\lambda=2n\) for \(\lambda\propto a^{2n}\). Here, we would like to stress that in ref. [74], the CR is derived in the comoving gauge by using the conformal nature of the gauge field. In comparison, our result is independent of any such conformal property of the spectator field. This CR can be interpreted as a simple yet non-trivial consequence of the direct coupling. Before we close this section, let us make a few subtle remarks about the conditions under which the background wave method (the Maldacena approach) works. It is well known that the underlying working principle of the Maldacena CR is based on constructing an adiabatic mode inside the Hubble radius. In inflation, an adiabatic mode is a long-wavelength frozen mode that is indistinguishable from a coordinate transformation. Using this approach, the CR for our scenario can be written as, \[\lim_{{\bf k}_{1}\to 0}\frac{1}{P_{\zeta}(k_{1})}\left\langle \zeta_{{\bf k}_{1}}\sigma_{{\bf k}_{2}}\sigma_{{\bf k}_{3}}\right\rangle=-(2 \pi)^{3}\delta^{(3)}({\bf k}_{1}+{\bf k}_{2}+{\bf k}_{3})\frac{\partial\ln[k_ {2}^{3}P_{\sigma}(k_{2})]}{\partial\ln k_{2}}P_{\sigma}(k_{2}). \tag{39}\] In the superhorizon limit, we find that \(k^{3}P_{\sigma}\sim k^{3-2\nu}\), where \(\nu\) is given after eq. (15). Therefore, this CR is in agreement with (33) _only_ when \(2\xi+m^{2}/H^{2}=0\). In other words, the CR obtained using the background wave method aligns with the soft limit of the in-in result exclusively when the spectator \(\sigma\) becomes frozen in the superhorizon limit. However, it is noteworthy that the CR obtained using our semi-classical approach remains applicable within this setup without such limitations. For scenarios involving direct dilatonic coupling, our analysis shows that all the non-minimal gravitational interactions are suppressed by the slow-roll parameters, and they can be ignored in the slow-roll limit. Furthermore, due to the evolution of spectator modes on superhorizon scales (with exceptions for specific parameter choices), one cannot apply the Maldacena formalism to derive the CR.
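This condition is easy to see numerically. The following toy comparison (an illustrative sketch of ours; the helper names are hypothetical) evaluates the local coefficient \(2n\) from the soft limit of the in-in result (33) against the background-wave prediction \(2\nu-3\) implied by eq. (39) together with \(k^{3}P_{\sigma}\sim k^{3-2\nu}\).

```python
import numpy as np

def nu(n, xi, m2, aH2=0.0):
    # order of the Hankel function, eq. (15); m2 = m^2/H^2, aH2 = alpha*H^2
    return np.sqrt((n + 1.5)**2 - (2*xi + m2)/(1 + 3*aH2))

def fnl_in_in(n, xi, m2, aH2=0.0):
    return 2*n                          # soft limit of the full bispectrum, eq. (33)

def fnl_background_wave(n, xi, m2, aH2=0.0):
    return 2*nu(n, xi, m2, aH2) - 3     # eq. (39) with k^3 P_sigma ~ k^(3 - 2 nu)

for pars in [(1.0, 0.0, 0.0), (1.0, -0.1, 0.2), (1.0, 0.3, 0.2)]:
    print(pars, fnl_in_in(*pars), fnl_background_wave(*pars))
# the two agree only when 2*xi + m^2/H^2 = 0, i.e. when sigma freezes outside the horizon
```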
Therefore, one has to resort to our semi-classical formalism to obtain the CR for the cross-correlation of the curvature perturbation with spectator fields.

## 4 Tensor cross-correlation and the consistency relation

In this section, we shall compute the tensor cross-correlation with our non-minimal spectator and study its squeezed limit. In contrast to the scalar CRs, the tensor CRs have been argued to be more robust conditions, which are preserved in most situations and reflect the adiabatic nature of tensor modes during inflation. These CRs can only be violated in specific situations. It is observed that tensor CRs remain valid even when there are multiple scalar fields, as long as any anisotropies decay exponentially fast [94]. This is because, in an expanding universe with exponentially decaying anisotropies, the graviton mode becomes constant on superhorizon scales. Therefore, there exists an adiabatic tensor mode that is locally indistinguishable from a pure gauge mode, i.e., it can be absorbed by means of a suitable coordinate transformation, and this mode can be used to derive the conventional tensor CRs. Here, we explicitly compute the cross-correlation of the spectator fluctuations with the tensor perturbation using the in-in formalism within our toy model and study the robustness of the tensor CR. To perform our calculation, we have to mode-expand the tensor perturbations. Using the standard quantisation formalism, the mode expansion for tensor perturbations is defined as follows, \[\gamma_{ij}({\bf x},\tau)=\int\frac{d^{3}{\bf k}}{(2\pi)^{3}}\sum_{s=\pm 2 }\left[\gamma_{k}(\tau)\,e^{i{\bf k}\cdot{\bf x}}\,\epsilon^{s}_{ij}(\hat{{ \bf k}})\,b^{s}_{{\bf k}}+h.c.\right], \tag{40}\] where \(\epsilon^{s}_{ij}\) represents the polarization tensor corresponding to helicity \(s\). The normalization condition is given by \(\epsilon^{s}_{ij}{\epsilon^{*}_{ij}}^{s^{\prime}}=2\delta_{ss^{\prime}}\). The creation and annihilation operators satisfy the usual commutation relation \([b^{s}_{{\bf k}},{b^{s^{\prime}}_{{\bf k}^{\prime}}}^{\dagger}]=(2\pi)^{3} \delta^{(3)}({\bf k}-{\bf k}^{\prime})\delta_{ss^{\prime}}\). The amplitude of the tensor mode during inflation is obtained in the standard manner as, \[\gamma_{k}(\tau)=\frac{H}{\sqrt{k^{3}}}\left(1+ik\tau\right)e^{-ik\tau}. \tag{41}\] Note that we often suppress the helicity index \(s\), since we do not have any parity-violating term. To proceed further, it is necessary to obtain the Ricci tensor up to first order in the tensor perturbations (\(\gamma_{ij}\)), which is given by \[R_{ij}=\delta_{ij}\left[\frac{a^{\prime\prime}}{a}+\left(\frac{a^{\prime}}{a} \right)^{2}\right]+\gamma_{ij}\left[\frac{a^{\prime\prime}}{a}-\left(\frac{a^ {\prime}}{a}\right)^{2}\right]. \tag{42}\] During inflation, we can write \(a^{\prime\prime}/a\simeq 2a^{2}H^{2}\) and \(a^{\prime}/a=aH\). Then the cubic order Lagrangian for \(\langle\gamma\sigma\sigma\rangle\) can be trivially derived from eq. (3). Therefore, the corresponding interaction Hamiltonian can be written as, \[H_{\gamma\sigma\sigma}=-\frac{1}{2}\int d^{3}xa^{2}\lambda\left(1- \alpha H^{2}\right)\gamma_{ij}\partial_{i}\sigma\partial_{j}\sigma. \tag{43}\] Similar to the earlier section, here as well we assume the power law parameterisation for the direct coupling, i.e., \(\lambda\propto a^{2n}\propto\tau^{-2n}\).
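As a concrete instance of these conventions (our own illustrative sketch; the explicit helicity basis below is a standard choice, not taken from the paper), one can build \(\epsilon^{\pm 2}_{ij}\) for \({\bf k}\) along the \(z\)-axis and check the transverse-traceless conditions and the normalization \(\epsilon^{s}_{ij}{\epsilon^{*}_{ij}}^{s^{\prime}}=2\delta_{ss^{\prime}}\).

```python
import numpy as np

e = np.array([1.0, 1j, 0.0]) / np.sqrt(2)       # helicity-1 vector for k = (0, 0, k)
eps = {+2: np.sqrt(2)*np.outer(e, e),            # epsilon^{+2}_{ij}
       -2: np.sqrt(2)*np.outer(e.conj(), e.conj())}

k_hat = np.array([0.0, 0.0, 1.0])
for s, t in eps.items():
    print(s, np.allclose(t.T, t),                # symmetric
          np.isclose(np.trace(t), 0),            # traceless
          np.allclose(k_hat @ t, 0))             # transverse

for s in (+2, -2):
    for sp in (+2, -2):
        print(s, sp, np.sum(eps[s]*eps[sp].conj()))   # = 2 delta_{s s'}
```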
Using the interaction Hamiltonian (43) in the master in-in formula (24), we obtain the tensor cross-correlation as \[\langle\gamma({\bf k}_{1},\tau_{I})\sigma({\bf k}_{2},\tau_{I}) \sigma({\bf k}_{3},\tau_{I})\rangle = 2\,(2\pi)^{3}\delta^{(3)}({\bf k}_{1}+{\bf k}_{2}+{\bf k}_{3}) \epsilon_{ij}k_{2i}k_{3j}(1-\alpha H^{2}) \tag{44}\] \[\times {\rm Im}\left[\gamma_{k_{1}}(\tau_{I})\sigma_{k_{2}}(\tau_{I}) \sigma_{k_{3}}(\tau_{I})\int d\tau a^{2}\lambda(\tau)\gamma_{k_{1}}^{*}(\tau) \sigma_{k_{2}}^{*}(\tau)\sigma_{k_{3}}^{*}(\tau)\right]\.\] The integral involved in the above equation is analogous to the integral \({\cal I}_{2}\), and it can be evaluated as in appendix B, but using the mode function \(\sigma\) of eq. (15) for \(\alpha\neq 0\). To investigate the tensor CR, let us consider the squeezed limit \({\bf k}_{1}\to 0\) and \({\bf k}_{2}\simeq-{\bf k}_{3}\). Then, we can write the integral in (44), similarly to (70), as \[{\rm Im}\left[\gamma_{k_{1}}(\tau_{I})\sigma_{k_{2}}(\tau_{I}) \sigma_{k_{3}}(\tau_{I})\int d\tau a^{2}\lambda(\tau)\gamma_{k_{1}}^{*}(\tau) \sigma_{k_{2}}^{*}(\tau)\sigma_{k_{3}}^{*}(\tau)\right]=-|\gamma_{k_{1}}(\tau _{I})|^{2}|\sigma_{k_{2}}(\tau_{I})|^{2}\] \[\times\ {\rm Im}\left[e^{i(\nu+1/2)\pi}\int d\tau a^{2}\lambda(\tau) \left(\sigma_{k_{2}}^{*}(\tau)\right)^{2}\right]. \tag{45}\] Upon using the mode function (15), the right-hand side of the above equation takes the following form, \[\frac{\pi}{4(1+3\alpha H^{2})}|\gamma_{k_{1}}(\tau_{I})|^{2}| \sigma_{k_{2}}(\tau_{I})|^{2}{\rm Im}\left[\int d\tau\tau\left(H_{\nu}^{(2)}(- k_{2}\tau)\right)^{2}\right]\, \tag{46}\] where the factor \(1/(1+3\alpha H^{2})\) evidently appears from the normalisation of the mode function, as given in (15). This can be quickly evaluated using eq. (71), and we get, \[{\rm Im}\left[\gamma_{k_{1}}(\tau_{I})\sigma_{k_{2}}(\tau_{I}) \sigma_{k_{3}}(\tau_{I})\int d\tau a^{2}\lambda(\tau)\gamma_{k_{1}}^{*}(\tau) \sigma_{k_{2}}^{*}(\tau)\sigma_{k_{3}}^{*}(\tau)\right]=-\frac{1}{(1+3\alpha H ^{2})}\frac{\nu}{2k^{2}}|\gamma_{k_{1}}(\tau_{I})|^{2}|\sigma_{k_{2}}(\tau_{I} )|^{2} \tag{47}\] where \(\nu\) is the order of the Hankel function appearing in (15). With this, the final result for the correlator in the squeezed limit, i.e., \({\bf k}_{1}\to 0\) and \({\bf k}_{2}\simeq-{\bf k}_{3}\equiv{\bf k}\), is \[\lim_{{\bf k}_{1}\to 0}\,\langle\gamma({\bf k}_{1},\tau_{I}) \sigma({\bf k}_{2},\tau_{I})\sigma({\bf k}_{3},\tau_{I})\rangle=(2\pi)^{3} \delta^{(3)}({\bf k}_{1}+{\bf k}_{2}+{\bf k}_{3})\epsilon_{ij}\frac{k_{i}k_{j} }{k^{2}}\left(\frac{1-\alpha H^{2}}{1+3\alpha H^{2}}\right)\nu P_{\gamma}(k_{1 })P_{\sigma}(k). \tag{48}\] For those acquainted with the conventional background wave method, it is evident that such methods can only capture the result for the case without the derivative coupling. To illustrate this further, let's write eq. (48) for \(\alpha=0\), \[\lim_{{\bf k}_{1}\to 0}\,\langle\gamma({\bf k}_{1},\tau_{I}) \sigma({\bf k}_{2},\tau_{I})\sigma({\bf k}_{3},\tau_{I})\rangle=(2\pi)^{3} \delta^{(3)}({\bf k}_{1}+{\bf k}_{2}+{\bf k}_{3})\epsilon_{ij}\frac{k_{i}k_{j} }{k^{2}}\nu P_{\gamma}(k_{1})P_{\sigma}(k).
\tag{49}\] On the other hand, the semi-classical derivation using the background wave approach gives, \[\lim_{{\bf k}_{1}\to 0}\frac{1}{P_{\gamma}(k_{1})}\,\langle\gamma_{{\bf k}_{1}} \sigma_{{\bf k}_{2}}\sigma_{{\bf k}_{3}}\rangle=-(2\pi)^{3}\delta^{(3)}({\bf k }_{1}+{\bf k}_{2}+{\bf k}_{3})\epsilon_{ij}\frac{k_{i}k_{j}}{k^{2}}\frac{ \partial}{\partial\ln k^{2}}P_{\sigma}(k) \tag{50}\] It can be derived from eq. (15) that the power spectrum of the spectator field in the superhorizon limit scales as \(P_{\sigma}\sim k^{-2\nu}\), and the derivative term in the above equation gives \(\frac{\partial P_{\sigma}}{\partial\ln k^{2}}=-\nu P_{\sigma}\). This clearly indicates that the in-in and semi-classical results are in agreement with each other _only_ for \(\alpha=0\), and they disagree for \(\alpha\neq 0\). In the limit \(\alpha=0\), \(\nu=\sqrt{(n+3/2)^{2}-2\xi-(m^{2}/H^{2})}\), and thus we observe that, for the two approaches to be in agreement, we only require \(\alpha=0\), irrespective of \(\xi\) and \(m\). This shows the universality of the tensor CRs. However, our analysis shows a violation of the tensor CRs in the presence of a non-minimal derivative coupling. In such cases, it is anticipated that the violation of the tensor CR occurs due to the violation of adiabaticity caused by the presence of a non-minimal derivative coupling. This violation does not occur in the conventional sense of a tensor mode evolving on superhorizon scales, but rather in a manner where the superhorizon mode cannot be regarded as a pure gauge mode, because it can be distinguished in a local inertial frame. As a result, even if the superhorizon tensor mode appears frozen, it is not classified as an adiabatic mode, leading to an expected violation of the tensor CRs. From a different perspective, this violation of the tensor CR alongside a frozen tensor mode might be a distinctive signature of the presence of such non-minimal interactions with the tensor mode. This analysis can also be easily extended to other non-minimal derivative couplings. In general, a violation of the Maldacena CR might also indicate a violation of the EP, so CRs also provide an interesting way to test the EP on large cosmological scales [95].

## 5 Conclusions and discussions

In this paper, we have studied the cross-correlations of the inflationary scalar and tensor perturbations with the fluctuations of a non-minimally coupled spectator field with a dilatonic coupling, providing valuable insights into correlation functions beyond the minimal setup. Firstly, we observed that during slow-roll inflation, the leading order interaction of the spectator and the scalar metric fluctuations in the comoving gauge originates from the dilatonic coupling, highlighting its significance, and we found that the additional gravitational interaction from the non-minimal coupling is subject to slow-roll suppression. Notably, this fact became more apparent when considering the flat gauge. This observation led us to derive the CR for the scalar cross-correlation through a straightforward semi-classical approach. Importantly, this derivation represents a generalization of the CR for the cross-correlation of scalar metric fluctuations with gauge fields established in [74]. Our analysis demonstrated that the conformal nature of the spectator field is irrelevant to such a semi-classical derivation, and it can be easily established in the flat gauge.
In addition, these relations hold true in a generic manner, even in scenarios wherein the conventional semi-classical derivation, e.g., the Maldacena approach, fails. We emphasise that these CRs have enormous potential which could be explored in various contexts. For instance, if we identify the spectator field as a potential isocurvature mode, then in cases where the isocurvature mode is directly coupled with the inflaton, these CRs become particularly valuable. They can also capture the NG associated with the isocurvature fluctuations within the inflationary context. The nature of these relations and their connection with non-linearly realized symmetries pose interesting questions for further exploration. Soft theorems (CRs) often arise as a result of such non-linear realizations, prompting an enticing avenue of investigation into the underlying symmetry and its non-linear manifestation within the framework of these novel CRs. We defer the pursuit of these interesting directions to our future work. Further, we have also explored the cross-correlation of the tensor perturbation with the spectator field and the associated CRs. It is often observed that the tensor CRs are more robust relations and have remarkable universality compared to those for scalars. They remain valid even when there are multiple scalar fields, as long as any anisotropies decay exponentially fast [94]. Contrary to the usual lore, our analysis shows that the violation of the tensor CR does not necessarily imply the existence of a non-freezing tensor mode. But this is not surprising if we recall our working definition of an adiabatic mode: a superhorizon cosmological perturbation that is locally indistinguishable from a pure gauge mode, i.e., one that can be absorbed by means of a suitable coordinate transformation. In the presence of a non-minimal derivative coupling, one cannot treat the superhorizon mode as a pure gauge mode, because it can be distinguished in a local inertial frame. Thus, even if the superhorizon tensor mode is frozen, it is not an adiabatic mode, and we expect a violation of the tensor CRs, which can be considered a specific signature of the non-minimal derivative coupling. It is well known that light spectator fields usually induce isocurvature perturbations which might leave interesting imprints on cosmological observables [96]. In some specific scenarios, such as the curvaton, they are converted to adiabatic perturbations at a later stage. The power spectrum of isocurvature modes is well constrained on the large scales which are probed by CMB observations. However, they are not constrained on smaller scales. For instance, a scale-invariant or a very blue isocurvature perturbation spectrum may leave large effects on short-wavelength scales. Detection of such imprints will reveal the underlying high energy physics of the isocurvature sector. Moreover, the NG associated with the isocurvature perturbations [97], either in the form of a three-point correlation or a cross-correlation with curvature perturbations, will also provide deeper insights into their generation mechanism. It might be interesting to explore whether such isocurvature NG can help in the formation of primordial black holes on smaller scales. We leave these interesting directions for future work.

## Acknowledgments

We would especially like to thank Martin S. Sloth for suggesting the topic of this work, initial collaborations and numerous discussions. We also thank Chethan Krishnan for fruitful discussions.
RKJ wishes to acknowledge financial support from the new faculty seed start-up grant of the Indian Institute of Science, Bengaluru, India, the Science and Engineering Research Board, Department of Science and Technology, Government of India, through the Core Research Grant CRG/2018/002200, the MATRICS grant MTR/2022/000821 and the Infosys Foundation, Bengaluru, India, through the Infosys Young Investigator award. PJS acknowledges the Sarojini Damodaran Foundation for providing financial support for his visit to CP3-Origins, University of Southern Denmark.

## Appendix A Energy-momentum exchange relation for the spectator

In this appendix, we provide a detailed proof of eq. (21). The energy-momentum tensor \(T^{(\sigma)}_{\mu\nu}\) corresponding to the spectator field \(\sigma\) is obtained from the action (3) as, \[T^{(\sigma)}_{\mu\nu}=\lambda\left(\nabla_{\mu}\sigma\nabla_{\nu}\sigma- \frac{1}{2}g_{\mu\nu}\nabla_{\alpha}\sigma\nabla^{\alpha}\sigma\right)+ \alpha\Theta^{(1)}_{\mu\nu}+\frac{\xi}{6}\Theta^{(2)}_{\mu\nu}\, \tag{51}\] with, \[\Theta^{(1)}_{\mu\nu} = \frac{1}{2}g_{\mu\nu}\left(\nabla_{\alpha}\nabla_{\beta}-R_{\alpha \beta}\right)\lambda\nabla^{\alpha}\sigma\nabla^{\beta}\sigma+\frac{1}{2} \Box\left(\lambda\nabla_{\mu}\sigma\nabla_{\nu}\sigma\right)+2\lambda R_{\mu \alpha}\nabla^{\alpha}\sigma\nabla_{\nu}\sigma \tag{52}\] \[- \frac{1}{2}\nabla^{\alpha}\nabla_{\mu}\left(\lambda\nabla_{\nu} \sigma\nabla_{\alpha}\sigma\right)-\frac{1}{2}\nabla^{\alpha}\nabla_{\nu} \left(\lambda\nabla_{\mu}\sigma\nabla_{\alpha}\sigma\right)\,\] \[\Theta^{(2)}_{\mu\nu} = \left(G_{\mu\nu}+g_{\mu\nu}\Box-\nabla_{\mu}\nabla_{\nu}\right) \lambda\sigma^{2}. \tag{53}\] In this work, we only have to deal with the energy-momentum tensor of the spectator field. Therefore, we shall omit the superscript \((\sigma)\) and denote it by \(T_{\mu\nu}\) for convenience. One trivial way to prove eq. (21) is by explicitly working out the four-divergence of the above energy-momentum tensor using the equation of motion of \(\sigma\). However, there exists a more general way of proving it without using the detailed form of the spectator Lagrangian. We just have to use the fact that it is directly coupled to the inflaton. For this purpose, let's assume that the spectator action takes the form, \[S_{\sigma}=\int d^{4}x\sqrt{-g}\,\lambda(\phi){\cal L}_{\sigma}. \tag{54}\] Demanding the diffeomorphism invariance of this action gives us the desired result. To see that, let us consider an infinitesimal coordinate transformation from \(x^{\mu}\) to \(x^{\prime\mu}=x^{\mu}+\xi^{\mu}\). Under this transformation, one finds \(\delta g^{\mu\nu}=\nabla^{\mu}\xi^{\nu}+\nabla^{\nu}\xi^{\mu}\), and the corresponding change in the action \(S_{\sigma}\) can be written in the variational sense as, \[\delta S_{\sigma}=\left(\frac{\delta S_{\sigma}}{\delta\phi} \right)_{g,\sigma}\delta\phi+\left(\frac{\delta S_{\sigma}}{\delta\sigma} \right)_{g,\phi}\delta\sigma+\left(\frac{\delta S_{\sigma}}{\delta g^{\mu\nu} }\right)_{\phi,\sigma}\delta g^{\mu\nu}. \tag{55}\] Note that the second term vanishes when the equation of motion of \(\sigma\) is satisfied.
For the first term, we find \(\delta\phi=-\xi^{\nu}\partial_{\nu}\phi\), and thus \[\frac{\delta S_{\sigma}}{\delta\phi}=\int d^{4}x\sqrt{-g}\,\frac {d\lambda}{d\phi}\,{\cal L}_{\sigma}\delta\phi=-\int d^{4}x\sqrt{-g}\,{\cal L} _{\sigma}(\nabla_{\nu}\lambda)\xi^{\nu} \tag{56}\] and the third term can be seen as \[\frac{\delta S_{\sigma}}{\delta g^{\mu\nu}}\delta g^{\mu\nu}=- \frac{1}{2}\int d^{4}x\;\sqrt{-g}\;T_{\mu\nu}\delta g^{\mu\nu}=\int d^{4}x \sqrt{-g}\;(\nabla_{\mu}T^{\mu}_{\nu})\xi^{\nu}-\int d^{4}x\sqrt{-g}\;\nabla_ {\mu}(T^{\mu}_{\nu}\xi^{\nu}). \tag{57}\] The last term is a total divergence, and the natural boundary conditions on \(\xi^{\mu}\) set it to zero. Then, if we demand the diffeomorphism invariance of \(S_{\sigma}\), the left-hand side can be set to zero, \[\delta S_{\sigma}=\int d^{4}x\sqrt{-g}\bigg{(}\nabla_{\mu}T^{\mu }_{\nu}-(\nabla_{\nu}\lambda){\cal L}_{\sigma}\bigg{)}\xi^{\nu}=0 \tag{58}\] This proves, \[\nabla_{\mu}T^{\mu\nu}=(\nabla^{\nu}\lambda){\cal L}_{\sigma}. \tag{59}\] This result can be used to easily derive the cubic order action as follows \[\delta S_{\zeta\sigma\sigma}=\frac{1}{2}\int d^{4}x\;\sqrt{-g}\;T ^{\mu\nu}\delta g_{\mu\nu}=\int d^{4}x\sqrt{-g}\frac{\zeta}{H}\nabla_{\mu}T^{ \mu 0}+{\cal O}(\epsilon)\simeq-\int d^{4}x\sqrt{-g}\dot{\lambda}\frac{\zeta}{H}{ \cal L}_{\sigma}. \tag{60}\] We again stress that the above result applies only to an action written in the form of eq. (54) and does not depend on the explicit form of the Lagrangian.

## Appendix B Evaluation of integrals in the squeezed limit

In this appendix, we evaluate the integrals (26), (27) and (28) in the squeezed limit, i.e., \({\bf k}_{1}\to 0\). Thus, throughout this section, we can approximate \[\lim_{{\bf k}_{1}\to 0}\zeta_{k_{1}}(\tau)=\frac{H}{\sqrt{4\epsilon k^{3}}}\approx \zeta_{k_{1}}(\tau_{I})\, \tag{61}\] and we use eq. (12) as the mode function \(\sigma_{k}(\tau)\). However, all of the following analyses can also be trivially performed with eq. (15). Utilizing these in all the integrals and considering the fact that in the squeezed limit \(k_{3}\approx k_{2}=k\), we can express the integrals as follows: \[{\cal I}_{1} = -(4n)\;|\zeta_{k_{1}}(\tau_{I})|^{2}\,{\rm Im}\left[\sigma_{k}^{2 }(\tau_{I})\int^{\tau_{I}}d\tau a^{2}\lambda\left({\sigma^{\prime}}^{*}_{k}( \tau)\right)^{2}\right]\, \tag{62}\] \[{\cal I}_{2} = -(4n)\;|\zeta_{k_{1}}(\tau_{I})|^{2}\,{\rm Im}\left[\sigma_{k}^{2 }(\tau_{I})\int^{\tau_{I}}d\tau a^{2}\lambda\left(\sigma_{k}^{*}(\tau)\right) ^{2}\right]\,\] (63) \[{\cal I}_{3} = -(4n)\;|\zeta_{k_{1}}(\tau_{I})|^{2}\,{\rm Im}\left[\sigma_{k}^{2 }(\tau_{I})\int^{\tau_{I}}\frac{d\tau}{\tau^{2}}a^{2}\lambda\left(\sigma_{k}^ {*}(\tau)\right)^{2}\right]. \tag{64}\] Here we have used \(\lambda^{\prime}/\lambda=-2n/\tau\), and \(\tau_{I}\) denotes the very late time at which we evaluate the correlator. In addition, one can also rewrite eq. (8) as follows, \[\frac{d}{d\tau}\left(a^{2}\lambda\sigma^{\prime}_{k}\right)+\left(k^{2}+a^{2} m^{2}+\xi\frac{a^{\prime\prime}}{a}\right)a^{2}\lambda\sigma_{k}=0. \tag{65}\] This equation can now be used to rewrite the integral involved in eq.
(62) as, \[\int d\tau a^{2}\lambda\left({\sigma^{\prime}}^{*}_{k}(\tau) \right)^{2} = a^{2}\lambda\sigma_{k}^{*}{\sigma^{\prime}}^{*}_{k}-\int d\tau \sigma_{k}^{*}\frac{d}{d\tau}\left(a^{2}\lambda{\sigma^{\prime}}^{*}_{k}\right) \tag{66}\] \[= a^{2}\lambda\sigma_{k}^{*}{\sigma^{\prime}}^{*}_{k}+\int d\tau \;a^{2}\lambda\left(k^{2}+\frac{2\xi+(m^{2}/H^{2})}{\tau^{2}}\right)\left( \sigma_{k}^{*}(\tau)\right)^{2}\.\] Upon using this result in eq. (62), we get \[{\cal I}_{1}=-(4n)\;|\zeta_{k_{1}}(\tau_{I})|^{2}\,{\rm Im}\left[\sigma_{k}^{2 }(\tau_{I})\left[a^{2}\lambda\;\sigma_{k}^{*}(\tau){\sigma^{\prime}}^{*}_{k}( \tau)\right]\bigg{|}_{-\infty}^{\tau_{I}}\right]+k^{2}{\cal I}_{2}+\left(2\xi+ \frac{m^{2}}{H^{2}}\right){\cal I}_{3}. \tag{67}\] It can be easily shown that the lower limit does not contribute with the in-in time contour, and this leads to the following relation \[{\cal I}_{1}=-(4n)\;|\zeta_{k_{1}}(\tau_{I})|^{2}|\sigma_{k}(\tau_{I})|^{2}\, {\rm Im}\left[a^{2}(\tau_{I})\lambda(\tau_{I})\;{\sigma_{k}}(\tau_{I}){\sigma^ {\prime}}^{*}_{k}(\tau_{I})\right]+k^{2}{\cal I}_{2}+\left(2\xi+\frac{m^{2}}{H ^{2}}\right){\cal I}_{3}. \tag{68}\] The Wronskian corresponding to the equation of motion (8), together with the Bunch-Davies initial condition, determines \({\rm Im}\left[\sigma_{k}(\tau_{I}){\sigma^{\prime}}^{*}_{k}(\tau_{I})\right] =1/(2a_{I}^{2}\lambda_{I})\). Here the subscript '\(I\)' denotes the corresponding quantities evaluated at time \(\tau=\tau_{I}\). Then we obtain, \[{\cal I}_{1}=-(2n)\;|\zeta_{k_{1}}(\tau_{I})|^{2}|\sigma_{k}(\tau_{I})|^{2}+k^ {2}{\cal I}_{2}+\left(2\xi+\frac{m^{2}}{H^{2}}\right){\cal I}_{3}. \tag{69}\] This result is sufficient to evaluate the squeezed limit of the correlator given in eq. (25). Upon utilizing this result in the squeezed limit of (25), one can see that the integrals \(\mathcal{I}_{2}\) and \(\mathcal{I}_{3}\) nicely cancel out. So we do not need to evaluate these integrals explicitly to compute the squeezed limit of the correlator. However, let us demonstrate the evaluation of \(\mathcal{I}_{2}\) below, because the same procedure can be used to evaluate the integral involved in section 4. Such integrals can be evaluated using the same method as previously outlined in [75, 80]. Thus, let us express the integral \(\mathcal{I}_{2}\) in the squeezed limit as, \[\mathcal{I}_{2}=-2|\zeta_{k_{1}}(\tau_{I})|^{2}|\sigma_{k_{2}}( \tau_{I})|^{2}\operatorname{Im}\left[e^{i(\nu+1/2)\pi}\int d\tau\tau a^{2} \lambda^{\prime}(\tau)\left(\sigma_{k_{2}}^{*}(\tau)\right)^{2}\right]\, \tag{70}\] where we have used \(\sigma_{k}^{2}(\tau_{I})\approx-|\sigma_{k}(\tau_{I})|^{2}e^{i\pi(\nu+1/2)}\) in the above equation. Now, using the explicit form of the mode function and \[\operatorname{Im}\left[\int_{0}^{\infty}dxx\left(H_{\nu}^{(2)}( x)\right)^{2}\right]=\frac{2\nu}{\pi}\, \tag{71}\] we obtain \[\mathcal{I}_{2}=\left(\frac{2n\nu}{k_{2}^{2}}\right)|\zeta_{k_{1} }(\tau_{I})|^{2}|\sigma_{k_{2}}(\tau_{I})|^{2}. \tag{72}\]
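The master integral (71) can also be checked numerically. The sketch below (our own illustration, not from the paper) regulates the oscillatory tail with a damping factor \(e^{-\epsilon x}\) and compares the result with \(2\nu/\pi\); for \(\nu=1/2\) the damped integral can be done in closed form and equals \((4/\pi)/(\epsilon^{2}+4)\), which indeed tends to \(1/\pi\) as \(\epsilon\to 0\).

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import jv, yv

def damped(nu, eps=0.05, xmax=300.0):
    # Im[x (H_nu^(2)(x))^2] = -2 x J_nu(x) Y_nu(x); regulate with e^{-eps x}
    f = lambda x: -2*x*jv(nu, x)*yv(nu, x)*np.exp(-eps*x)
    val, _ = quad(f, 1e-8, xmax, limit=2000)
    return val

for nu in (0.5, 1.5, 2.5):
    print(nu, damped(nu), 2*nu/np.pi)   # values approach 2*nu/pi as eps -> 0
```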
2306.10357
Recalibrating $\mathbb{R}$-order trees and $\mbox{Homeo}_+(S^1)$-representations of link groups
In this paper we study the left-orderability of $3$-manifold groups using an enhancement, called recalibration, of Calegari and Dunfield's "flipping" construction, used for modifying $\mbox{Homeo}_+(S^1)$-representations of the fundamental groups of closed $3$-manifolds. The added flexibility accorded by recalibration allows us to produce $\mbox{Homeo}_+(S^1)$-representations of hyperbolic link exteriors so that a chosen element in the peripheral subgroup is sent to any given rational rotation. We apply these representations to show that the branched covers of families of links associated to arbitrary epimorphisms of the link group onto a finite cyclic group are left-orderable. This applies, for instance, to fibered hyperbolic strongly quasipositive links. Our result on the orderability of branched covers implies that the degeneracy locus of any pseudo-Anosov flow on an alternating knot complement must be meridional, which generalizes the known result that the fractional Dehn twist coefficient of any hyperbolic fibered alternating knot is zero. Applications of these representations to order-detection of slopes are also discussed in the paper.
Steven Boyer, Cameron McA. Gordon, Ying Hu
2023-06-17T14:08:21Z
http://arxiv.org/abs/2306.10357v2
# Recalibrating \(\mathbb{R}\)-order trees and \(\text{Homeo}_{+}(S^{1})\)-representations of link groups

###### Abstract.

In this paper we study the left-orderability of \(3\)-manifold groups using an enhancement, called recalibration, of Calegari and Dunfield's "flipping" construction, used for modifying \(\text{Homeo}_{+}(S^{1})\)-representations of the fundamental groups of closed \(3\)-manifolds. The added flexibility accorded by recalibration allows us to produce \(\text{Homeo}_{+}(S^{1})\)-representations of hyperbolic link exteriors so that a chosen element in the peripheral subgroup is sent to any given rational rotation. We apply these representations to show that the branched covers of families of links associated to epimorphisms of the link group onto a finite cyclic group are left-orderable. This applies, for instance, to fibered hyperbolic strongly quasipositive links. Our result on the orderability of branched covers implies that the degeneracy locus of any pseudo-Anosov flow on an alternating knot complement must be meridional, which generalizes the known result that the fractional Dehn twist coefficient of any hyperbolic fibered alternating knot is zero. Applications of these representations to order-detection of slopes are also discussed in the paper.

Steven Boyer was partially supported by NSERC grant RGPIN 9446-2008. 2010 Mathematics Subject Classification. Primary 57M12, 57M60, 57M99. Key words: \(\text{Homeo}_{+}(S^{1})\)-representations, \(\mathbb{R}\)-order trees, circular orders, left-orderable groups, essential laminations, pseudo-Anosov flows, cyclic branched covers, link groups.

We refer the reader to §2.1 and §2.2 for the definition of pseudo-Anosov flows, stable and unstable laminations, as well as other related concepts. A good example of a pseudo-Anosov flow to keep in mind is the suspension of a pseudo-Anosov homeomorphism \(\varphi\) of a surface of finite type. In this example, the stable and unstable laminations are the suspensions of the stable and unstable invariant laminations of \(\varphi\), and both are very full. In this article, we continue this theme of studying the left-orderability of \(3\)-manifold groups by utilising circle actions from pseudo-Anosov flows and very full laminations, but with two new ingredients: 1. We extend to an orbifold setting the constructions of Calegari-Dunfield and Fenley of \(\mathrm{Homeo}_{+}(S^{1})\)-representations associated to pseudo-Anosov flows and to very full laminations. (See Proposition 3.3 and Theorem 4.5.) 2. We reformulate and generalise the flipping operation described in [17, §3.3] as a purely combinatorial operation on cyclically-ordered \(\mathbb{R}\)-order trees that we call _recalibration_. This allows for much more flexibility in producing \(\mathrm{Homeo}_{+}(S^{1})\)-representations (see §4.4). Combining these ideas allows us to construct many nontrivial representations of link groups with prescribed behaviour on the peripheral subgroups. To describe this more precisely, we first introduce some notation. **Notation**.: Let \(L=K_{1}\cup\dots\cup K_{m}\) be a link in a closed, orientable \(3\)-manifold \(W\) with closed tubular neighbourhood \(N(L)\). The _complement_ of \(L\) in \(W\) is the open manifold \(C(L)=W\setminus L\) and the _exterior_ of \(L\) is the compact manifold \(X(L)=W\setminus\mathrm{int}(N(L))\). We often identify \(C(L)\) with the interior of \(X(L)\). We use \(T_{i}=\partial N(K_{i})\subset\partial X(L)\) to denote the \(i^{th}\) boundary component of \(X(L)\).
An essential oriented simple closed curve \(\alpha_{i}\) on \(T_{i}\) determines a slope on \(T_{i}\), by forgetting its orientation, and a class in \(\pi_{1}(T_{i})=H_{1}(T_{i})\). We also use \(\alpha_{i}\) to denote the corresponding class in \(\pi_{1}(X(L))\), which is well-defined up to conjugation. For each \(i\), we use \(\mu_{i}\) to denote an oriented meridional curve of \(N(K_{i})\). Links are unoriented unless otherwise stated or clear from the context. If \(L\) and \(W\) are both oriented, then the \(\mu_{i}\) are oriented positively with respect to the orientations on \(L\) and \(W\). In the theorem below, \(\delta_{i}(\Phi_{0})\) is the degeneracy locus on \(T_{i}\) of a pseudo-Anosov flow \(\Phi_{0}\) on \(C(L)\). For instance, if \(L\) is fibred with pseudo-Anosov monodromy and \(\Phi_{0}\) is the suspension flow of the pseudo-Anosov representative of the monodromy restricted to \(C(L)\), thought of as the interior of \(X(L)\), then \(\delta_{i}(\Phi_{0})\) is a collection of parallel simple closed curves on \(T_{i}\) consisting of half of the closed flow lines on \(T_{i}\). See §2.2 for more details. **Theorem 4.7**.: _Let \(L=K_{1}\cup\dots\cup K_{m}\) be a link in an orientable \(3\)-manifold \(W\) whose complement admits a pseudo-Anosov flow \(\Phi_{0}\). For each \(i\), fix an oriented essential simple closed curve \(\alpha_{i}\) on \(T_{i}\) and an integer \(n_{i}\geq 1\) so that \(n_{i}|\alpha_{i}\cdot\delta_{i}(\Phi_{0})|\geq 2\). Then for any integer \(a_{i}\) coprime with \(n_{i}\), there is a homomorphism \(\rho:\pi_{1}(X(L))\to\text{Homeo}_{+}(S^{1})\) with non-cyclic image such that \(\rho(\alpha_{i})\) is conjugate to rotation by \(2\pi a_{i}/n_{i}\) for each \(i\)._ The theorem shows that given any multislope \((\alpha_{1},\ldots,\alpha_{m})\) on \(\partial X(L)\) such that \(\delta_{i}(\Phi_{0})\) is not a multiple of \(\alpha_{i}\) and any \(m\)-tuple \((u_{1},u_{2},\ldots,u_{m})\) of non-trivial roots of unity, there is a homomorphism \(\rho:\pi_{1}(X(L))\to\mathrm{Homeo}_{+}(S^{1})\) with non-cyclic image such that for each \(i\), \(\rho(\alpha_{i})\) is conjugate to the rotation determined by \(u_{i}\). ### Applications to order-detection The notion of slope detection was first introduced by Boyer and Clay in [7] to prove the \(L\)-space conjecture for graph manifolds, and was further used in [12] by the authors of the present article to study the \(L\)-space conjecture for general toroidal manifolds (§6.1). One of the main results of the latter paper was to show that meridional slopes (i.e. slopes that are distance one from the longitudinal slope) on the boundaries of integer homology solid tori are order-detected, a result with many consequences. For instance, the following results from [12] and [13] are obtained as applications of this fact: 1. The fundamental groups of cyclic branched covers of prime satellite links are all left-orderable [12], which settles the left-orderability version of the Gordon-Lidman Conjecture [35]. 2. The fundamental groups of toroidal integer homology spheres are left-orderable [12], which is the left-orderable analog of a result of [24, 37] on \(L\)-spaces. 3. The fundamental group of any non-meridional Dehn filling on a composite knot is left-orderable [13], which is predicted by the \(L\)-space conjecture and the fact that \(L\)-space knots are prime [48].
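As a concrete aside on the rotations appearing in Theorem 4.7 (a small sketch of ours, not part of the paper), one can compute the rotation number of a lift \(F\) of a circle homeomorphism via \(\lim_{k\to\infty}(F^{k}(x)-x)/k\); for the rigid rotation by \(2\pi a/n\) this gives \(a/n\), and since rotation number is a conjugacy invariant, the same value holds for any \(\rho(\alpha_{i})\) conjugate to that rotation.

```python
def rotation_number(F, x0=0.0, iters=100000):
    # F is a lift of an orientation-preserving circle homeo: F(x + 1) = F(x) + 1
    x = x0
    for _ in range(iters):
        x = F(x)
    return (x - x0) / iters

a, n = 2, 5
R = lambda x: x + a / n                # lift of the rigid rotation by 2*pi*a/n
print(rotation_number(R))              # 0.4 = a/n, a conjugacy invariant

x = 0.3
for _ in range(n):                     # iterating n times returns to the start on S^1
    x = R(x)
print(x % 1.0)                         # ~0.3, so the rotation has order n
```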
The proof that meridional slopes are order-detected in [12, Theorem 1.3(2), Corollary 1.4] involves a detailed analysis of the dynamics of Thurston's universal circle action associated with taut foliations, and is long and technical. In §6.1, we give a very quick proof that meridians of certain hyperbolic knots are order-detected (see Theorem 6.2) using the representations constructed in Theorem 4.7. **Theorem 6.2**.: _Let \(K\) be a hyperbolic knot in an integer homology sphere \(W\), such that the complement of \(K\) admits a pseudo-Anosov flow whose degeneracy locus is meridional. Then the meridional slope of \(K\) is order-detected._ Theorem 6.2 demonstrates the potential of these representations in studying order-detected slopes. We plan to undertake a fuller investigation of this in a future paper. ### Applications to left-orderability of branched covers We consider a general class of cyclic branched covers defined as follows. **Definition 1.1**.: Let \(L=K_{1}\cup...\cup K_{m}\) be a link in an integer homology sphere \(W\) and \(\psi:\pi_{1}(X(L))\to\mathbb{Z}/n\) an epimorphism such that \(\psi(\mu_{i})\neq 0\) for each \(i\). We denote by \(\Sigma_{\psi}(L)\to W\) the \(n\)-fold cyclic cover of \(W\) branched over \(L\) associated to the \(n\)-fold cyclic cover \(X_{\psi}(L)\to X(L)\) determined by \(\psi\). Following standard notational conventions, if \(L\) is oriented we use \(\Sigma_{n}(L)\) to denote the _canonical cyclic branched cover_ of \(L\) associated with the epimorphism \(\pi_{1}(X(L))\to\mathbb{Z}/n\) that sends each correspondingly oriented \(\mu_{i}\) to \(1\ (\mathrm{mod}\ n)\). Motivated by a result of Ba and Clay [1, Theorem 2.6], our next theorem shows how a representation of the sort given by Theorem 4.7 can be used to deduce the left-orderability of the fundamental group of an associated cyclic branched cover \(\Sigma_{\psi}(L)\). This result greatly generalizes [41, Theorem 3.1] and [11, Theorem 6], which have been widely used for proving the left-orderability of the fundamental groups of cyclic branched covers of knots. **Theorem 5.1**.: _Let \(L=K_{1}\cup\cdots\cup K_{m}\) be a prime link in an integer homology \(3\)-sphere whose exterior is irreducible. Suppose that \(\rho:\pi_{1}(X(L))\to\text{Homeo}_{+}(S^{1})\) is a representation with non-cyclic image such that \(\rho(\mu_{i})\) is conjugate to rotation by \(2\pi a_{i}/n\) for some \(a_{i},n\in\mathbb{Z}\), where \(n\geq 2\). If the induced homomorphism \(\psi:\pi_{1}(X(L))\to\mathbb{Z}/n\) which sends \(\mu_{i}\) to \(a_{i}\ (\mathrm{mod}\ n)\) is an epimorphism, then \(\pi_{1}(\Sigma_{\psi}(L))\) is left-orderable._ Combining Theorem 4.7 with Theorem 5.1, we found, somewhat surprisingly, many examples of hyperbolic links for which the fundamental group of any cyclic branched cover is left-orderable (see Theorem 1.2 below), even though for a given covering index these branched covers can be very different as topological spaces. Since links whose exteriors admit pseudo-Anosov flows are prime (cf. Remark 3.2), Theorem 1.2 is a corollary of these results; the proof is given in §6.2. **Theorem 1.2**.: _Let \(L\) be a link in an integer homology sphere whose complement admits a pseudo-Anosov flow none of whose degeneracy loci are meridional. Then the fundamental group of any \(n\)-fold cyclic branched cover \(\Sigma_{\psi}(L)\) of \(L\), \(n\geq 2\), is left-orderable._ **Example 1.3**.: In §6.3, we give many examples of links which satisfy the hypothesis of Theorem 1.2. These include: 1.
1. Hyperbolic links that can be oriented to be fibered with nonzero fractional Dehn twist coefficient (Theorem 6.4). Most interestingly, links that can be oriented to be fibered and strongly quasipositive belong to this family (Corollary 6.5).
2. Links that can be oriented to be the closures of certain pseudo-Anosov braids (see Theorem 6.6, Corollary 6.8, Proposition 6.10, and Theorem 6.12).

The following corollary is a special case of the first of these two examples.

**Corollary 7.3**.: _If \(K\) is an \(L\)-space knot then \(\pi_{1}(\Sigma_{n}(K))\) is left-orderable for all \(n\geq 2\) if and only if \(K\) is not \(T(3,4)\), \(T(3,5)\), or \(T(2,2q+1)\) for some \(q\geq 1\)._

Using [4] and [26] we obtain the analogous statement with "\(\pi_{1}(\Sigma_{n}(K))\) is left-orderable" replaced by "\(\Sigma_{n}(K)\) is not an \(L\)-space", provided \(n\geq 3\). This leaves open the interesting question, due to Allison Moore, asking whether the double branched cover of a hyperbolic \(L\)-space knot can ever be an \(L\)-space.

We use \(L^{\mathfrak{o}}\) to denote \(L\) endowed with an orientation \(\mathfrak{o}\). Varying the values of \(\psi(\mu_{i})\) in Theorem 1.2 over all possible choices of \(\pm 1\ (\mathrm{mod}\ n)\) yields the following corollary.

**Corollary 1.4**.: _Let \(L=K_{1}\cup\dots\cup K_{m}\) be a link in an integer homology \(3\)-sphere \(W\) whose complement admits a pseudo-Anosov flow none of whose degeneracy loci are meridional. Then \(\pi_{1}(\Sigma_{n}(L^{\mathfrak{o}}))\) is left-orderable for all \(n\geq 2\) and all orientations \(\mathfrak{o}\) on \(L\)._

To put Corollary 1.4 in context, we remark that known results suggest that if the double branched cover of an oriented link in an integer homology \(3\)-sphere has a left-orderable fundamental group, then so does \(\Sigma_{n}(L)\) for all \(n\geq 2\). On the other hand, \(\Sigma_{2}(L)\) is independent of the orientation on \(L\), so we expect that the left-orderability of \(\pi_{1}(\Sigma_{2}(L))\) implies that of \(\pi_{1}(\Sigma_{n}(L^{\mathfrak{o}}))\) for any orientation \(\mathfrak{o}\) on \(L\) and \(n\geq 2\). To the best of our knowledge, Corollary 1.4 is the first general result confirming this type of behaviour for hyperbolic links. See §6.2 for a more detailed discussion.

We complement Corollary 1.4 by showing that there are pairs of oriented links \(L,L^{\prime}\), which are equivalent as unoriented links, such that \(\pi_{1}(\Sigma_{n}(L))\) is non-left-orderable for all \(n\geq 2\), while \(\pi_{1}(\Sigma_{n}(L^{\prime}))\) is left-orderable for all \(n\geq 3\). See §6.4.

### Applications to degeneracy loci

Lastly, we present a surprising application of our left-orderability results to degeneracy loci of pseudo-Anosov flows. Independent work of Gabai and Mosher ([51]; also see [49]) showed that pseudo-Anosov flows exist in the complement of any hyperbolic link in a closed, orientable \(3\)-manifold. On the other hand, the degeneracy loci of these flows are difficult to compute in general, although they are known in certain situations. For instance, [2, Corollary 1.7] combines with [38, Theorem 1.2] and [4, Corollary 7.3] to show that the degeneracy locus of the monodromy of a fibred hyperbolic alternating knot is always meridional.
We significantly extend this fact by showing:

**Corollary 6.18**.: _The degeneracy locus of any pseudo-Anosov flow on the complement of an alternating knot is meridional._

### Plan of the paper

Section 2 outlines background material on Fenley's asymptotic circle representations from pseudo-Anosov flows on closed \(3\)-manifolds, essential laminations and degeneracy loci, and rotation and translation numbers. In §3 we extend Fenley's asymptotic circle representation to the fundamental groups of closed orientable \(3\)-orbifolds with cyclic isotropy (Proposition 3.3). These representations are used to deal with a degenerate case in our arguments in §4. In §4 we define the recalibration operation on cyclically ordered \(\mathbb{R}\)-order trees, and use its flexibility to produce \(\mathrm{Homeo}_{+}(S^{1})\)-valued representations of link groups with prescribed behaviour on the peripheral subgroups (Theorem 4.7). In §5, we consider the Euler classes of these representations and prove Theorem 5.1. The applications discussed above are proved in §6. Finally, in §7, we discuss our results in the context of the L-space conjecture.

### Acknowledgements

The authors would like to thank John Baldwin, who contributed to initial discussions of the material in this paper, Sergio Fenley, for providing background information on his asymptotic circle, and Adam Clay, for telling us of his work with Idrissa Ba on co-cyclic left-orderable subgroups of circularly-ordered groups.

## 2. Preliminaries

In §2.1 and §2.2, we cover some basic concepts related to pseudo-Anosov flows. We refer the readers to [16, §6.6] for a more detailed account of the material. We explain the relationship between degeneracy loci and fractional Dehn twist coefficients in §2.3. In §2.4, we survey Fenley's results on the existence of a circle action given a pseudo-Anosov flow on a closed \(3\)-manifold. Finally, in §2.5, we briefly review the rotation and translation numbers of elements in \(\mathrm{Homeo}_{+}(S^{1})\) and \(\mathrm{Homeo}_{\mathbb{Z}}(\mathbb{R})\).

### Pseudo-Anosov flows

An _Anosov flow_ \(\Phi_{t}:M\times\mathbb{R}\to M\) on a \(3\)-manifold \(M\) is a flow which preserves a continuous splitting of the tangent bundle \(TM=E^{s}\oplus\frac{\partial}{\partial t}\oplus E^{u}\). Moreover, there are constants \(\mu_{0}\geq 1\) and \(\mu_{1}>0\) so that

\[\|d\Phi_{t}(v)\|\leq\mu_{0}e^{-\mu_{1}t}\|v\|,\qquad\|d\Phi_{-t}(w)\|\leq\mu_{0}e^{-\mu_{1}t}\|w\|,\]

for any \(v\in E^{s}\), \(w\in E^{u}\) and \(t\geq 0\). So an Anosov flow contracts vectors along \(E^{s}\) and expands them along \(E^{u}\). By definition, \(d\Phi_{t}\) is a hyperbolic map on \(E^{s}\oplus E^{u}\).

A flow \(\Phi_{t}\) on a \(3\)-manifold \(M\) is _pseudo-Anosov_ if it is Anosov away from a finite number of _pseudo-hyperbolic_ periodic orbits. That is, \(d\Phi_{t}\) restricted to the normal bundle of the flow has a _pseudo-hyperbolic_ singularity at each point along these singular orbits. The archetypical example of a pseudo-Anosov flow is the suspension of a pseudo-Anosov homeomorphism of a connected, orientable surface, in which the periodic orbits obtained from the singular points of the homeomorphism's invariant singular foliations are pseudo-hyperbolic.
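For concreteness, the Anosov estimates can be checked directly on the simplest example: the suspension of the hyperbolic toral automorphism \(A=\left(\begin{smallmatrix}2&1\\1&1\end{smallmatrix}\right)\). The numerical sketch below (illustrative only; it plays no role in the arguments of this paper) verifies the contraction and expansion inequalities with \(\mu_{0}=1\) and \(\mu_{1}=\log\lambda\), where \(\lambda>1\) is the leading eigenvalue of \(A\).

```python
import numpy as np

# The suspension of the hyperbolic toral automorphism A (the "cat map")
# is a standard Anosov flow; its time-n map acts on vectors transverse
# to the flow direction by A^n, contracting E^s and expanding E^u.
A = np.array([[2.0, 1.0], [1.0, 1.0]])
evals, evecs = np.linalg.eigh(A)      # ascending eigenvalues; det(A) = 1
v_s, v_u = evecs[:, 0], evecs[:, 1]   # stable / unstable eigendirections
lam = evals[1]                        # leading eigenvalue (3 + sqrt(5))/2
mu1 = np.log(lam)                     # rate mu_1 = log(lambda), mu_0 = 1

for n in range(1, 6):
    An = np.linalg.matrix_power(A, n)
    # ||dPhi_n(v)|| = e^{-mu1*n}||v|| on E^s, and e^{+mu1*n}||w|| on E^u
    assert np.isclose(np.linalg.norm(An @ v_s), np.exp(-mu1 * n))
    assert np.isclose(np.linalg.norm(An @ v_u), np.exp(mu1 * n))
print("Anosov estimates hold with mu_0 = 1, mu_1 = log(lambda) =", round(mu1, 4))
```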
In this article, we include Anosov flows when we refer to pseudo-Anosov flows, and will consider pseudo-Anosov flows on closed manifolds as well as link complements in a closed manifold, though in the latter case we restrict the behaviour of the flow in the ends of the link complement as follows. Let \(C(L)=W\setminus L\) be the complement of a link \(L\) in a closed, connected, orientable \(3\)-manifold \(W\). For a flow on \(C(L)\) to be pseudo-Anosov, in addition to the definition above, we also require the dynamics of each end of \(C(L)\) to be that of a neighbourhood of a pseudo-hyperbolic orbit in a pseudo-Anosov flow with the orbit removed. The archetypical example of such a flow is the suspension of a pseudo-Anosov homeomorphism of a cusped surface of finite type. To simplify notation, we will drop the subscript \(t\) from the notation of a flow from now on.

### Essential laminations and degeneracy loci

Let \(\Phi\) be a pseudo-Anosov flow on a \(3\)-manifold \(M\). By the stable manifold theorem [39], both \(E^{s}\oplus\frac{\partial}{\partial t}\) and \(E^{u}\oplus\frac{\partial}{\partial t}\) are integrable, which results in two singular foliations on \(M\) that are invariant under the flow. They are called the weak stable foliation and the weak unstable foliation respectively. A lamination on a \(3\)-manifold \(M\) is a foliation on a closed subset of \(M\) by surfaces. So given a pseudo-Anosov flow on \(M\), by blowing air into the singular leaves of its weak stable and unstable invariant foliations (i.e. replacing the leaves by small regular neighbourhoods whose interiors are removed), one obtains two invariant _essential_ laminations on \(M\), which are called the stable and unstable laminations of the pseudo-Anosov flow.

Next, we give a description of the topological structure of the invariant laminations of a pseudo-Anosov flow which is sufficient for our purposes. For more general background material on essential laminations, we refer the reader to [32]. Let \(\Lambda\) be a lamination on \(M\). A _complementary region_ of \(\Lambda\) is a component of the completion of \(M\setminus\Lambda\) under a path metric on \(M\). We call a lamination on a \(3\)-manifold \(M\) _very full_ ([16, Definition 6.42]) if each complementary region is homeomorphic to either an ideal polygon bundle over \(S^{1}\) or a once punctured ideal polygon bundle over \(S^{1}\). In the first case, we require the ideal polygon to have at least \(2\) ideal vertices. We follow the terminology in [51] and call a complementary region of the first type a _pared solid torus_ and of the second type _a pared torus shell_. By construction, the invariant laminations of a pseudo-Anosov flow on \(M\) are very full.

Let \(\Phi_{0}\) be a pseudo-Anosov flow on the complement \(C(L)\) of a link \(L=K_{1}\cup K_{2}\cup\cdots\cup K_{m}\) in \(W\) and \(\Lambda\) be the stable lamination of \(\Phi_{0}\). Then for each \(i\), there is a pared torus shell complementary region of \(\Lambda\), corresponding to the missing \(i^{th}\) component \(K_{i}\) of the link \(L\). Note that the pared torus shell is homeomorphic to \(N(K_{i})\setminus K_{i}\) with a collection of parallel simple closed curves on \(T_{i}=\partial N(K_{i})\) removed. The isotopy class of this collection of simple closed curves is called the _degeneracy locus_ of the pseudo-Anosov flow on \(T_{i}\) and is denoted by \(\delta_{T_{i}}(\Phi_{0})\) or, more simply, by \(\delta_{i}(\Phi_{0})\).
This notion was first introduced in [32] for the suspension flow of a pseudo-Anosov homeomorphism, where it was called the degenerate curve. These degeneracy loci are important in determining if an essential lamination remains essential after Dehn filling ([32, Theorem 5.3]). We say that the degeneracy locus is _meridional_ on the \(i^{th}\) component \(T_{i}\) if \(\delta_{i}(\Phi_{0})\) is a union of meridional curves of \(N(K_{i})\). The slope determined by \(\delta_{i}(\Phi_{0})\) on \(T_{i}\) is the slope of a connected component of \(\delta_{i}(\Phi_{0})\).

### Degeneracy loci and fractional Dehn twist coefficients

Let \(L=K_{1}\cup\cdots\cup K_{m}\) be a hyperbolic fibred link in an oriented \(3\)-manifold \(W\) with monodromy \(h\) and fibre \(F\) (so \((F,h)\) is an open book decomposition of \(W\)). Since \(L\) is hyperbolic, \(h\) is freely isotopic to a pseudo-Anosov homeomorphism \(\varphi\) of \(F\) [62]. The suspension flow \(\Phi\) of \(\varphi\) on the link exterior \(X(L)\) restricts to a pseudo-Anosov flow on the link complement \(int(X(L))\cong C(L)\) which we denote by \(\Phi_{0}\). In this case, the degeneracy loci of \(\Phi_{0}\) can be described more precisely, as we explain next.

It follows from the properties of pseudo-Anosov homeomorphisms that the flow \(\Phi\) has an even number of periodic orbits on each boundary component of \(X(L)\), half from the repelling periodic points of \(\varphi\) on \(\partial F\) and half from the attracting periodic points. Then on each boundary component, the degeneracy locus of \(\Phi_{0}\) is the union of half of the periodic orbits of \(\Phi\) on \(\partial X(L)\). For each \(i=1,\cdots,m\), let \(T_{i}\) denote the boundary component \(\partial N(K_{i})\) of \(X(L)\), \(\mu_{i}\) the meridional class given by \(h\), and \(\lambda_{i}\) the slope on \(T_{i}\) corresponding to \(F\cap T_{i}\), oriented positively with respect to \(\mu_{i}\) and the orientation of \(W\). Then the degeneracy locus on \(T_{i}\) can be expressed as

\[\delta_{i}(\Phi_{0})=c\mu_{i}+d\lambda_{i}\]

where \(c\) and \(d\) are not necessarily coprime. Note that \(c\) must be nonzero. The quotient

\[c_{T_{i}}(h)=\frac{d}{c}\in\mathbb{Q} \tag{2.3.1}\]

is called the _fractional Dehn twist coefficient_ of \(h\) along \(T_{i}\cap F\subset\partial F\). In the case that the boundary of \(F\) is connected, we will drop the subscript \(T_{i}\). Fractional Dehn twist coefficients were originally defined to measure the amount of twisting around \(F\cap T_{i}\subset\partial F\) required to isotope \(h\) to its Nielsen-Thurston representative \(\varphi\) (see [40, §3.2]), and are an important notion in studying the tightness of the contact structure supported by the open book \((F,h)\) [40, Theorem 1.1]. A pseudo-Anosov monodromy \(h\) is called _right-veering_ (resp. _left-veering_) if \(c_{T_{i}}(h)>0\) (resp. \(c_{T_{i}}(h)<0\)) for all \(i=1,\cdots,m\).

### Fenley's asymptotic circle

Here we describe Fenley's asymptotic circle associated to a pseudo-Anosov flow on a closed \(3\)-manifold ([27]). Given a pseudo-Anosov flow \(\Phi\) on a closed, connected, orientable \(3\)-manifold \(W\), let \(\widetilde{\Phi}\) be the pull-back of \(\Phi\) to the universal cover \(\widetilde{W}\) of \(W\).

**Theorem 2.1**.: ([28, Proposition 4.2]) _The orbit space \(\mathcal{O}\) of \(\widetilde{\Phi}\) is homeomorphic to \(\mathbb{R}^{2}\)._
_Moreover, the projection \(\pi:\widetilde{W}\to\mathcal{O}\) is a locally-trivial fibre bundle whose flow line fibres are homeomorphic to \(\mathbb{R}\)._

An immediate consequence of the theorem is that closed manifolds, as above, which admit pseudo-Anosov flows are irreducible with infinite fundamental groups, and therefore aspherical. The action of \(\pi_{1}(W)\) on \(\widetilde{W}\) descends to one on \(\mathcal{O}\). Since the flow lines in \(\widetilde{W}\) inherit a coherent \(\pi_{1}(W)\)-invariant orientation, the action of \(\pi_{1}(W)\) on \(\mathcal{O}\) is by orientation-preserving homeomorphisms, so we obtain a homomorphism

\[\psi:\pi_{1}(W)\to\mathrm{Homeo}_{+}(\mathcal{O})\]

Fenley has constructed an ideal boundary for \(\mathcal{O}\) over which this action extends.

**Theorem 2.2**.: ([27, Theorem A]) _There is a natural compactification \(\mathcal{D}=\mathcal{O}\cup\partial\mathcal{O}\) of \(\mathcal{O}\) where \(\mathcal{D}\) is homeomorphic to a disk with boundary circle \(\partial\mathcal{O}\). The action of \(\pi_{1}(W)\) on \(\mathcal{O}\) extends to one on \(\mathcal{D}\) by homeomorphisms._

It follows from Fenley's construction that the action of \(\pi_{1}(W)\) on the ideal boundary \(\partial\mathcal{O}\) of \(\mathcal{O}\) is faithful. That is, the associated homomorphism

\[\rho_{\Phi}:\pi_{1}(W)\to\mathrm{Homeo}_{+}(\partial\mathcal{O})\]

is injective. We think of \(\rho_{\Phi}\) as taking values in \(\mathrm{Homeo}_{+}(S^{1})\).

### Rotation and translation numbers

In this subsection, we briefly review the translation numbers of elements in \(\mathrm{Homeo}_{\mathbb{Z}}(\mathbb{R})\) and rotation numbers of elements in \(\mathrm{Homeo}_{+}(S^{1})\). For details, see [33, §5]. Though these notions are used implicitly in the proof of Theorem 4.5 in §4.4, the understanding of translation and rotation numbers only becomes essential in §6.1, where we discuss the applications of Theorem 4.5 to slope detection.

Recall that \(\mathrm{Homeo}_{\mathbb{Z}}(\mathbb{R})\) denotes the group of homeomorphisms of the real line which commute with translation by \(1\) and can be identified with the universal covering group of \(\mathrm{Homeo}_{+}(S^{1})\). Given an element \(h\in\mathrm{Homeo}_{\mathbb{Z}}(\mathbb{R})\), the _translation number_ of \(h\), denoted by \(\tau(h)\), is defined to be the limit

\[\lim_{n\to\infty}\frac{h^{n}(0)}{n}\]

In particular, if \(h\) is translation by \(r\in\mathbb{R}\), i.e., \(h(x)=x+r\) for \(x\in\mathbb{R}\), then it is easy to verify that \(\tau(h)=r\). The following lemma lists some basic properties of the translation number that we will use. See [33, §5] for proofs.

**Lemma 2.3**.: _Let \(\tau:\text{Homeo}_{\mathbb{Z}}(\mathbb{R})\to\mathbb{R}\) denote the translation number._

1. \(\tau\) _is a homomorphism when restricted to a_ \(\mathbb{Z}\oplus\mathbb{Z}\) _subgroup of_ \(\text{Homeo}_{\mathbb{Z}}(\mathbb{R})\)_. In particular,_ \(\tau(h^{n})=n\tau(h)\) _for any_ \(n\in\mathbb{Z}\)_._
2. \(h\in\text{Homeo}_{\mathbb{Z}}(\mathbb{R})\) _has a fixed point on_ \(\mathbb{R}\) _if and only if_ \(\tau(h)=0\)_._
3. _The translation number is invariant under conjugation in_ \(\text{Homeo}_{\mathbb{Z}}(\mathbb{R})\)_._

Let \(f\) be an element in \(\mathrm{Homeo}_{+}(S^{1})\) and \(\tilde{f}\) a lift of \(f\) in \(\mathrm{Homeo}_{\mathbb{Z}}(\mathbb{R})\). We define the _rotation number_ of \(f\) to be the image of \(\tau(\tilde{f})\) in \(\mathbb{R}/\mathbb{Z}\).
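The defining limit converges in practice and the properties in Lemma 2.3 are easy to observe numerically. The following sketch (illustrative only; the lift \(h(x)=x+0.3+0.1\sin(2\pi x)\) is an arbitrary choice of ours) estimates \(\tau(h)\), checks \(\tau(h^{2})=2\tau(h)\) as in Lemma 2.3(1), and checks that changing the lift shifts \(\tau\) by an integer.

```python
import math

def tau(h, n_iter=100_000):
    """Approximate the translation number lim h^n(0)/n."""
    x = 0.0
    for _ in range(n_iter):
        x = h(x)
    return x / n_iter

# A lift of a circle homeomorphism: h commutes with x -> x + 1,
# and h'(x) = 1 + 0.2*pi*cos(2*pi*x) > 0, so h is increasing.
h = lambda x: x + 0.3 + 0.1 * math.sin(2 * math.pi * x)
t = tau(h)

# Lemma 2.3(1): tau(h^n) = n * tau(h); tested here for h^2
h2 = lambda x: h(h(x))
assert abs(tau(h2) - 2 * t) < 1e-3

# Two lifts of the same circle homeomorphism differ by an integer
# translation, so the rotation number tau mod 1 is well defined.
h_shift = lambda x: h(x) + 1.0
assert abs(tau(h_shift) - (t + 1.0)) < 1e-3
print(f"translation number ~ {t:.4f}, rotation number ~ {t % 1.0:.4f}")
```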
Since any two lifts of \(f\) in \(\mathrm{Homeo}_{\mathbb{Z}}(\mathbb{R})\) differ by a translation by an integer, the rotation number of \(f\) is well-defined. If \(f\) is rotation by \(\frac{2\pi a}{n}\), then the rotation number of \(f\) is \(\frac{a}{n}\) (mod \(\mathbb{Z}\)).

## 3. Asymptotic circle representations of flows on \(3\)-orbifolds

Suppose that \(\Phi_{0}\) is a pseudo-Anosov flow on the complement of a link \(L=K_{1}\cup K_{2}\cup\cdots\cup K_{m}\) in a closed, connected, orientable \(3\)-manifold \(W\) and \(\delta_{i}(\Phi_{0})\) its degeneracy locus on \(T_{i}\) for \(i=1,\cdots,m\). Given slopes \(\alpha_{i}\) on \(T_{i}\), it is known that if \(|\delta_{i}(\Phi_{0})\cdot\alpha_{i}|\geq 2\) for each \(i\), then \(\Phi_{0}\) extends to a pseudo-Anosov flow on the Dehn filled manifold \(X(L)(\alpha_{1},\cdots,\alpha_{m})\) in such a way that the core of each filling solid torus is a periodic orbit [29]. Then by our discussion in §2.4, there is a \(\mathrm{Homeo}_{+}(S^{1})\)-representation of \(\pi_{1}(X(L)(\alpha_{1},\cdots,\alpha_{m}))\). In this section, we extend this result to orbifold fillings of \(X(L)\) and obtain a \(\mathrm{Homeo}_{+}(S^{1})\)-representation of the orbifold group (cf. Corollary 3.4). The existence results of Proposition 3.3 and Corollary 3.4 are used in the proof of our main result, Theorem 4.5, to deal with a special case when the leaf space of a certain lamination is degenerate.

### Well-adapted flows on orbifolds

We consider a closed, connected, oriented \(3\)-orbifold \(\mathcal{M}\) and an oriented link \(B=B_{1}\cup\cdots\cup B_{m}\) in the underlying \(3\)-manifold \(|\mathcal{M}|\) for which the singular set of \(\mathcal{M}\) is a union of components of \(B\). For each \(i\), let \(\beta_{i}\) be the positively oriented meridian of \(N(B_{i})\), which we also use to denote the associated class in \(\pi_{1}(\mathcal{M})\), and \(n_{i}\geq 1\) the order of the isotropy along \(B_{i}\). Alternatively, one can view such a \(3\)-orbifold as the result of orbifold filling on a link exterior \(X(L)\) along slopes \(\alpha_{i}\). From this viewpoint, \(B_{i}\) is the core of the filling solid torus attached to \(T_{i}\). We denote the resulting orbifold by \(X(L)(\alpha_{*};n_{*})\), where \(n_{i}\) is the order of the isotropy group over \(B_{i}\), \(n_{*}=(n_{1},\cdots,n_{m})\), and \(\beta_{i}=\alpha_{i}\) as a slope on \(\partial X(L)\).

We say that a flow \(\Phi\) on the underlying \(3\)-manifold \(|\mathcal{M}|\) is _well-adapted_ to the pair \((\mathcal{M},B)\) if the following three conditions are satisfied:

1. each \(B_{i}\) is an orbit of \(\Phi\) and the orientation on \(B_{i}\) agrees with that of the flow;
2. the restriction \(\Phi_{0}\) of \(\Phi\) to the complement \(C(B)=|\mathcal{M}|\setminus B\) is pseudo-Anosov;
3. \(n_{i}|\delta_{i}(\Phi_{0})\cdot\beta_{i}|\geq 2\) for each \(i\), where \(\delta_{i}(\Phi_{0})\) is the degeneracy locus of \(\Phi_{0}\) on \(\partial N(B_{i})\).

Let \(\Phi_{0}\) be a pseudo-Anosov flow on the complement of a link \(L\) in an oriented, closed, connected \(3\)-manifold \(W\), denoted by \(C(L)\) as before. We identify \(C(L)\) with the interior of \(X(L)\). Then there is a flow \(\Phi^{*}\) on \(X(L)\) whose restriction to the interior of \(X(L)\) is \(\Phi_{0}\). In particular, on a regular neighborhood of any component of \(\partial X(L)\), the flow \(\Phi^{*}\) is modeled by the suspension flow of a pseudo-Anosov homeomorphism of a surface with nonempty boundary.
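In practice, conditions such as (3) reduce to integer arithmetic: writing classes on a boundary torus in the basis \((\mu_{i},\lambda_{i})\), the pairing of \(\delta=c\mu+d\lambda\) with \(\alpha=p\mu+q\lambda\) is \(|\delta\cdot\alpha|=|cq-dp|\), and (2.3.1) is the ratio \(d/c\). Here is a minimal sketch with made-up coefficients (ours, purely illustrative):

```python
from fractions import Fraction

def pairing(delta, alpha):
    """|delta . alpha| for classes delta = (c, d), alpha = (p, q) in the
    (meridian, longitude) basis of H_1 of a boundary torus."""
    (c, d), (p, q) = delta, alpha
    return abs(c * q - d * p)

def fdtc(delta):
    """Fractional Dehn twist coefficient d/c when the degeneracy locus
    is delta = c*mu + d*lambda, as in (2.3.1)."""
    c, d = delta
    return Fraction(d, c)

delta = (3, 2)            # degeneracy locus 3*mu + 2*lambda (made up)
mu    = (1, 0)            # meridional slope

print(fdtc(delta))        # 2/3 > 0, i.e. right-veering on this component
print(pairing(delta, mu))  # |delta . mu| = 2
# well-adapted condition (3): n * |delta . beta| >= 2
for n, alpha in [(1, mu), (2, (0, 1)), (1, (1, 1))]:
    print(n, alpha, n * pairing(delta, alpha) >= 2)
```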
For each \(i\), let \(\alpha_{i}\) be a slope on \(T_{i}\) and \(n_{i}\) a positive integer satisfying \(n_{i}|\delta_{i}(\Phi_{0})\cdot\alpha_{i}|\geq 2\). Since \(|\delta_{i}(\Phi_{0})\cdot\alpha_{i}|\neq 0\), the linear foliation of \(T_{i}\) by simple closed curves of slope \(\alpha_{i}\) can be isotoped to be everywhere transverse to \(\Phi^{*}|_{T_{i}}\). Then following [29, §1], we define a quotient map \(\pi:X(L)\to X(L)(\alpha_{*})\) by collapsing every leaf of the linear foliation on \(\partial X(L)=\cup_{i}T_{i}\) to a point. Then \(\Phi^{*}\) induces a flow \(\Phi\) on the Dehn filled manifold \(X(L)(\alpha_{*})\) and the cores of the filling solid tori are closed orbits of the flow. If \(|\delta_{i}(\Phi_{0})\cdot\alpha_{i}|\geq 2\) for each \(i\), the induced flow \(\Phi\) on \(X(L)(\alpha_{*})\) is pseudo-Anosov [29], though not necessarily otherwise.

We consider the orbifold \(X(L)(\alpha_{*};n_{*})\), where \(\alpha_{*}=(\alpha_{1},\cdots,\alpha_{m})\) and \(n_{*}=(n_{1},\cdots,n_{m})\). As noted above, the singular set \(B=B_{1}\cup\cdots\cup B_{m}\) of \(X(L)(\alpha_{*};n_{*})\) is the union of the cores of the filling solid tori with orientation inherited from the flow \(\Phi\), and the underlying manifold of \(X(L)(\alpha_{*};n_{*})\) is \(X(L)(\alpha_{*})\). It follows that the flow \(\Phi\) constructed above is well-adapted to the pair \((X(L)(\alpha_{*};n_{*}),B)\).

### Homeo\({}_{+}(S^{1})\)-representations of orbifold groups

A closed \(3\)-manifold that admits a pseudo-Anosov flow must be irreducible. The following lemma proves that open manifolds which admit pseudo-Anosov flows satisfy analogous properties; this is needed in the proof of Proposition 3.3 to guarantee that the orbifold \(X(L)(\alpha_{*};n_{*})\) is finitely covered by a manifold. We defer the proof of the lemma to the end of this subsection.

**Lemma 3.1**.: _Suppose that \(\Phi_{0}\) is a pseudo-Anosov flow on the complement of a link \(L\) in a closed, connected, orientable \(3\)-manifold \(W\). Then,_

1. _The exterior_ \(X(L)\) _of_ \(L\) _in_ \(W\) _is irreducible, boundary-incompressible and aspherical._
2. _If_ \(A\) _is an essential annulus in_ \(X(L)\) _and_ \(\alpha\) _is the slope of a boundary component of_ \(A\) _on a torus_ \(T\subset\partial X(L)\)_, then_ \(|\delta_{T}(\Phi_{0})\cdot\alpha|=0\)_._

**Remark 3.2**.: Since the exterior of a composite link contains an essential annulus whose boundary components are meridians of some component of the link, part (2) of the lemma implies that if the complement of a link \(L\) admits a pseudo-Anosov flow with no meridional degeneracy loci then \(L\) is prime.

**Proposition 3.3**.: _Suppose that \(\mathcal{M}\), \(B\), \(\beta_{i}\), and \(n_{i}\) are as above and that \(\Phi\) is a flow on \(|\mathcal{M}|\) that is well-adapted to the pair \((\mathcal{M},B)\). Then there is a faithful representation_

\[\rho_{\Phi}:\pi_{1}(\mathcal{M})\to\mathrm{Homeo}_{+}(S^{1})\]

_Further, \(\rho_{\Phi}(\beta_{i})\) is conjugate to rotation by \(2\pi/n_{i}\) for each \(i=1,2,\ldots,m\)._

Proof.: By assumption, the restriction \(\Phi_{0}\) of \(\Phi\) to \(C(B)\) is pseudo-Anosov. By Lemma 3.1, \(\mathcal{M}\) contains no teardrops or spindles, so is finitely covered by a manifold [3]. Consider a commutative diagram of covering maps in which \(p\) and \(p^{\prime}\) are universal covers and \(p_{1}\) is a finite degree regular cover from a manifold \(W\) to \(\mathcal{M}\). Let \(\Phi^{\prime}\) be the lift of the flow \(\Phi\) to \(W\).
Set \((C^{\prime},X^{\prime},B^{\prime})=p_{1}^{-1}(C(B),X(B),B)\) and let \(\Phi^{\prime}_{0}\) be the restriction of \(\Phi^{\prime}\) to \(C^{\prime}\). Given a boundary component \(T\) of \(X^{\prime}\), suppose that \(p_{1}(T)=\partial N(B_{i})\) and let \(\upsilon\in H_{1}(T)\) be the class of the meridional slope of the component of \((p_{1})^{-1}(N(B_{i}))\) containing \(T\). Then \(\upsilon\) is mapped to \(n_{i}\beta_{i}\in H_{1}(\partial N(B_{i}))\) by the restriction of \(p_{1}\) to \(T\). The degeneracy locus \(\delta_{T}(\Phi_{0}^{\prime})\) of \(\Phi_{0}^{\prime}\) on \(T\) is the inverse image of \(\delta_{i}(\Phi_{0})\) and so \[|\delta_{T}(\Phi_{0}^{\prime})\cdot\upsilon|=n_{i}|\delta_{i}(\Phi_{0})\cdot \beta_{i}|\geq 2\] It follows that \(\Phi^{\prime}\) is a pseudo-Anosov flow on \(W\) and therefore its lift \(\widetilde{\Phi}\) to \(\widetilde{\mathcal{M}}\) has orbit space \(\mathcal{O}\cong\mathbb{R}^{2}\) (Theorem 2.1). Since \(\widetilde{\Phi}\) is also the lift of \(\Phi\) to \(\widetilde{\mathcal{M}}\), it is invariant under the action of \(\pi_{1}(\mathcal{M})\). Thus there is an induced action of \(\pi_{1}(\mathcal{M})\) on \(\mathcal{O}\) and as in the proof of Fenley's theorem (cf. Theorem 2.2), this action extends to an action on \(\mathcal{D}\), and therefore on \(\partial\mathcal{O}\). Let \[\rho_{\Phi}:\pi_{1}(\mathcal{M})\to\mathrm{Homeo}_{+}(\partial\mathcal{O})\] be the associated representation. The meridional class \(\beta_{i}\) acts on \(\widetilde{\mathcal{M}}\) as a rotation by \(2\pi/n_{i}\) about a component \(\widetilde{B}_{i}\) of the inverse image of \(B_{i}\) in \(\widetilde{\mathcal{M}}\) in the sense determined by the induced orientations on \(\widetilde{B}_{i}\) and \(\widetilde{\mathcal{M}}\). Hence as \(\widetilde{B}_{i}\) is a flow line of \(\widetilde{\Phi}\), \(\rho_{\Phi}(\beta_{i})\) is conjugate to a rotation by \(2\pi/n_{i}\) with respect to the orientation on \(S^{1}=\partial\mathcal{O}\) determined by the orientation on the flow lines of \(\widetilde{\Phi}\) and that on \(\widetilde{\mathcal{M}}\). In particular, \(\rho_{\Phi}\) is injective on each finite subgroup of \(\pi_{1}(\mathcal{M})\). The restriction of \(\rho_{\Phi}\) to \(\pi_{1}(W)\) is Fenley's asymptotic circle representation associated to \(\Phi^{\prime}\), so is injective. It follows that each element of \(\mathrm{kernel}(\rho_{\Phi})\) is of finite order and therefore contained in an isotropy subgroup of \(\pi_{1}(\mathcal{M})\), on which we have just seen that \(\rho_{\Phi}\) is injective. Thus the kernel of \(\rho_{\Phi}\) is trivial. **Corollary 3.4**.: _Let \(L=K_{1}\cup\cdots\cup K_{m}\) be a link in a closed, connected, oriented \(3\)-manifold \(W\) whose complement admits a pseudo-Anosov flow \(\Phi_{0}\). Fix essential simple closed curves \(\alpha_{i}\) on \(T_{i}\) and integers \(n_{i}\geq 1\) satisfying \(n_{i}|\delta_{i}(\Phi_{0})\cdot\alpha_{i}|\geq 2\). Then_ 1. _There is a faithful representation_ \(\rho:\pi_{1}(X(L)(\alpha_{*};n_{*}))\to\mathrm{Homeo}_{+}(S^{1})\)_._ 2. _If_ \(\alpha_{i}\) _is oriented positively with respect to the orientation on_ \(X(L)(\alpha_{*})\) _and that of the core of the_ \(\alpha_{i}\)_-filling solid torus induced by the flow, then_ \(\rho(\alpha_{i})\) _is conjugate to rotation by_ \(2\pi/n_{i}\) _for each_ \(i=1,2,\ldots,m\)_._ Proof.: Set \(\mathcal{M}=X(L)(\alpha_{*};n_{*})\). 
Let \(B=B_{1}\cup\cdots\cup B_{m}\) be the link in \(|\mathcal{M}|=X(L)(\alpha_{*})\) corresponding to the cores of the \(\alpha_{i}\)-surgery solid tori. Since \(|\delta_{i}(\Phi_{0})\cdot\alpha_{i}|\neq 0\) for each \(i\), \(\Phi_{0}\) extends to a flow \(\Phi\) on \(|\mathcal{M}|\) which is well-adapted to the pair \((\mathcal{M},B)\) once we orient \(B\) compatibly with \(\Phi\). The corollary then follows immediately from Proposition 3.3. We finish this subsection by proving Lemma 3.1. Proof of Lemma 3.1.: Write \(L=K_{1}\cup\cdots\cup K_{m}\) and choose slopes \(\alpha_{i}\) on \(T_{i}\) such that \(|\delta_{i}(\Phi_{0})\cdot\alpha_{i}|\geq 2\) for each \(i\). Then \(\Phi_{0}\) extends to a pseudo-Anosov flow \(\Phi\) on \(W^{\prime}=X(L)(\alpha_{1},\ldots,\alpha_{m})\) for which the cores of the filling solid tori are flow lines. Theorem 2.1 then implies that \(X(L)\) is covered by a manifold of the form \(P\times\mathbb{R}\), where \(P\) is a subsurface of \(\mathbb{R}^{2}\). Hence \(X(L)\) is irreducible and aspherical. If it were boundary-compressible it would be a solid torus and therefore \(W^{\prime}=X(L)(\alpha_{1})\) would not be aspherical, contrary to the fact that it admits a pseudo-Anosov flow. This proves (1). For (2), suppose that \(A\) is an essential annulus in \(X(L)\) and \(A_{0}\) is a boundary component of \(A\) of slope \(\alpha\) on a torus \(T\subset\partial X(L)\). Without loss of generality we can assume that \(T=\partial N(K_{1})\). Suppose \(|\delta_{1}(\Phi_{0})\cdot\alpha|\geq 1\). Let \(\mathcal{O}=X(L)(\alpha;2)\) be the orbifold obtained by an order \(2\) orbifold filling to the boundary component \(T\) along slope \(\alpha\). That is, \(\mathcal{O}\) has underlying space \(X(L)(\alpha)\) and singular set the core \(B_{1}\) of the \(\alpha\)-filling solid torus with isotropy \(\mathbb{Z}/2\). By construction \(\mathcal{O}\) contains no spindles (i.e. \(2\)-orbifolds of the form \(S^{2}(a,b)\) where \(1<a<b\)) since it has only \(\mathbb{Z}/2\) isotropy. Nor does it have any teardrops (i.e. \(2\)-orbifolds of the form \(S^{2}(a)\) where \(a>1\)) since \(X(L)\) is boundary-incompressible. Thus there is a finite regular cover \(p:M\to\mathcal{O}\), where \(M\) is a manifold ([3, Corollary 1.3]). Note that \(C(L)\) can be naturally identified with \(\operatorname{int}(|\mathcal{O}|)\setminus B_{1}\). We denote the preimages of \(C(L)\) and \(X(L)\) under \(p\) by \(\widetilde{C}(L)\) and \(\widetilde{X}(L)\) respectively. Let \(\widetilde{T}\subset\partial\widetilde{X}(L)\) be a boundary component that covers \(T\) and \(\tilde{\alpha}\) the slope on \(\widetilde{T}\) that covers \(\alpha\). Set \(M_{0}=\widetilde{X}(L)(\tilde{\alpha})\subseteq M\). Let \(\widetilde{\Phi}_{0}\) be the pullback flow of \(\Phi_{0}\) under \(p|_{\widetilde{C}(L)}:\widetilde{C}(L)\to C(L)\). Then since the isotropy group along \(B_{1}\) is \(\mathbb{Z}/2\), it follows that \[|\delta_{\widetilde{T}}(\widetilde{\Phi}_{0})\cdot\tilde{\alpha}|=2|\delta_{ 1}(\Phi_{0})\cdot\alpha|\geq 2\] Hence \(\widetilde{\Phi}_{0}\) can be extended to a pseudo-Anosov flow on the interior of \(M_{0}\). In particular \(M_{0}\) is irreducible, boundary-incompressible and aspherical by (1). Let \(\widetilde{A}\) be a component of the inverse image of \(A\) and \(\widetilde{A}_{0}\) the boundary component of \(\widetilde{A}\) lying over \(A_{0}\). We can assume that \(\widetilde{A}\) is chosen so that \(\widetilde{A}_{0}\subset\widetilde{T}\). 
Then \(\tilde{\alpha}\) is the slope of \(\widetilde{A}_{0}\). Since \(M_{0}\) is boundary-incompressible, \(\partial\widetilde{A}\) must be contained in \(\widetilde{T}\); otherwise, \(\widetilde{A}\) is contained in an essential disk properly embedded in \(M_{0}\). Then we also have \(\partial A\subset T\). Let \(\mathcal{S}\cong S^{2}(2,2)\) be the \(2\)-orbifold in \(\mathcal{O}\) obtained from \(A\) by attaching two meridional disks of the \(\alpha\)-filling solid torus. It follows that \(\mathcal{S}\) pulls back to a \(2\)-sphere \(\widetilde{S}\) in \(M\), which is the union of \(\widetilde{A}\) and two meridional disks of the component of the inverse image in \(M\) of the \(\alpha\)-filling torus whose boundary is \(\widetilde{T}\). The cover \(p\) restricts to a degree \(2\) universal cover \(\widetilde{S}\to\mathcal{S}\). The irreducibility of \(M\) implies that \(\widetilde{S}\) bounds a \(3\)-ball \(\widetilde{B}\) in that manifold.

Suppose that there is a deck transformation \(\gamma\) of the cover \(M\to\mathcal{O}\) such that \(\gamma(\widetilde{B})\cap\widetilde{B}\neq\emptyset\) but \(\gamma(\widetilde{B})\neq\widetilde{B}\). Then either \(\gamma^{\varepsilon}(\widetilde{B})\subset\operatorname{int}(\widetilde{B})\) for some \(\varepsilon\in\{\pm 1\}\) or \(M=\widetilde{B}\cup\gamma(\widetilde{B})\). The former is ruled out since \(\gamma\) is of finite order, while the latter is ruled out by the asphericity of \(M\). Thus \(\gamma(\widetilde{B})\cap\widetilde{B}\neq\emptyset\) implies that \(\gamma(\widetilde{B})=\widetilde{B}\). It follows that the stabiliser of \(\widetilde{B}\) coincides with that of \(\widetilde{S}\), and is therefore isomorphic to \(\mathbb{Z}/2\). Then we have a \(2\)-fold cover \(\widetilde{B}\to\mathcal{B}\subset\mathcal{O}\) of orbifolds, which can be thought of as a \(2\)-fold cover branched over the properly embedded arc \(|\mathcal{B}|\cap B_{1}\) in the underlying space \(|\mathcal{B}|\) of \(\mathcal{B}\). Since \(\widetilde{B}\) is a \(3\)-ball, \(|\mathcal{B}|\) is a simply-connected compact \(3\)-manifold with boundary a \(2\)-sphere, and therefore a \(3\)-ball. But then as \(\widetilde{B}\) is a \(3\)-ball, \(|\mathcal{B}|\cap B_{1}\) is unknotted in \(|\mathcal{B}|\) by the \(\mathbb{Z}/2\)-Smith Conjecture, and therefore \(A\) is boundary parallel in \(X(L)\), which contradicts our assumption that it is essential. This final contradiction shows that \(|\delta_{T}(\Phi_{0})\cdot\alpha|=0\).

## 4. Recalibrating \(\mathbb{R}\)-order trees and \(\text{Homeo}_{+}(S^{1})\)-representations

### \(\mathbb{R}\)-order trees

An \(\mathbb{R}\)-order tree is a set \(T\) together with a family \(\mathcal{S}(T)\) of totally ordered subsets whose elements are called _segments_. Each segment \(\sigma\) is order isomorphic to a closed interval in the reals whose least and greatest elements (its endpoints) are assumed to be distinct and are denoted \(\sigma^{-}\) and \(\sigma^{+}\) respectively. The _inverse_ of a segment \(\sigma\) is \(\sigma\) endowed with the opposite order and is denoted \(-\sigma\).
It is assumed that the following conditions hold:

* \(\sigma\in\mathcal{S}(T)\) if and only if \(-\sigma\in\mathcal{S}(T)\);
* a closed subinterval of a segment with more than one element is a segment;
* any two elements of \(T\) can be joined by a sequence \(\sigma_{1},\cdots,\sigma_{k}\in\mathcal{S}(T)\) with \(\sigma_{i}^{+}=\sigma_{i+1}^{-}\) for \(i=1,\cdots,k-1\);
* if \(\sigma_{0}\sigma_{1}\cdots\sigma_{k-1}\) is a cyclic word of segments with \(\sigma_{i}^{+}=\sigma_{i+1}^{-}\) for all \(i\) (mod \(k\)), then each \(\sigma_{j}\) can be written as a concatenation of subsegments yielding a cyclic word \(\rho_{0}\rho_{1}\cdots\rho_{n-1}\) which becomes the trivial word when adjacent inverse segments are canceled;
* if \(\sigma_{1}\) and \(\sigma_{2}\) are segments whose intersection is \(\sigma_{1}^{+}=\sigma_{2}^{-}\), then \(\sigma_{1}\cup\sigma_{2}\in\mathcal{S}(T)\);
* \(T\) is the union of countably many segments.

We will sometimes write \([\sigma^{-},\sigma^{+}]\) for \(\sigma\), \((\sigma^{-},\sigma^{+}]\) for \(\sigma\setminus\sigma^{-}\), etc. An \(\mathbb{R}\)-order tree is endowed with the weak topology with respect to its segments. Order trees were defined in [32] to model the structure of the leaf space of an essential lamination \(\Lambda\) on \(\mathbb{R}^{3}\). The definition above was taken from [31]. (See also [57].)

Let \(\Lambda\) be an essential lamination on a closed \(3\)-manifold \(M\). We may assume that \(\Lambda\) has no isolated leaves. Otherwise, one can perform the well-known "blowup" operation on isolated leaves [30, Operation 2.1.1]. If \(\Lambda\) is the stable or unstable lamination of a pseudo-Anosov flow, then there are no isolated leaves. Let \(\widetilde{\Lambda}\) denote the pullback of \(\Lambda\) to the universal cover \(\widetilde{M}\cong\mathbb{R}^{3}\). Then the leaf space of \(\widetilde{\Lambda}\), denoted by \(T(\widetilde{\Lambda})\), is defined to be the quotient \(\widetilde{M}/\!\!\sim\) where \(x\sim y\) if \(x\) and \(y\) are on the same leaf of \(\widetilde{\Lambda}\) or they are in the same complementary region of \(\widetilde{\Lambda}\). The leaf space \(T(\widetilde{\Lambda})\) is an \(\mathbb{R}\)-order tree [31]. The segments of \(T(\widetilde{\Lambda})\) are the images of transverse arcs to \(\widetilde{\Lambda}\) under the natural map \(v:\mathbb{R}^{3}\to T(\widetilde{\Lambda})\). One of the main results of [31] states that this association induces a one-to-one correspondence between homeomorphism classes of essential laminations on \(\mathbb{R}^{3}\) and isomorphism classes of \(\mathbb{R}\)-order trees ([31, Theorem 8.1]).

Analogous definitions and results hold for essential laminations on \(\mathbb{R}^{2}\), though here the lamination's embedding in the plane endows the tree with extra structure in the form of local circular orderings, which we will discuss in the next subsection. Theorem 4.2 of [31] states that every \(\mathbb{R}\)-order tree endowed with this extra structure is isomorphic to the leaf space of some essential lamination on \(\mathbb{R}^{2}\), while [31, Theorem 6.9] states that two essential laminations on \(\mathbb{R}^{2}\) with isomorphic \(\mathbb{R}\)-order trees are equivalent up to splitting along leaves.
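As a toy illustration of the axioms (ours, not drawn from [31]): the tripod obtained by gluing three intervals along an endpoint is an \(\mathbb{R}\)-order tree once we take \(\mathcal{S}(T)\) to be the monotone arcs. The sketch below models segments by their endpoints and exercises the closure-under-inversion and amalgamation axioms.

```python
from dataclasses import dataclass

def pt(branch: str, t: float) -> tuple:
    """A point of the tripod: branch 'a', 'b' or 'c' with t in [0, 1];
    all three copies of t = 0 are glued to a single vertex '*'."""
    return ('*', 0.0) if t == 0.0 else (branch, t)

@dataclass(frozen=True)
class Seg:
    start: tuple  # sigma^-
    end: tuple    # sigma^+

def inverse(s: Seg) -> Seg:
    # first axiom: S(T) is closed under reversing the total order
    return Seg(s.end, s.start)

def amalgamate(s1: Seg, s2: Seg) -> Seg:
    # fifth axiom: if s1 and s2 meet exactly in s1^+ = s2^-,
    # their union is again a segment
    assert s1.end == s2.start, "segments must share the endpoint"
    return Seg(s1.start, s2.end)

down_a = Seg(pt('a', 1.0), pt('a', 0.0))  # runs down branch a to the vertex
up_b   = Seg(pt('b', 0.0), pt('b', 1.0))  # runs up branch b from the vertex
print(amalgamate(down_a, up_b))           # a totally ordered arc from a to b
print(inverse(up_b))                      # -up_b, a segment by the first axiom
```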
### Cyclically ordered \(\mathbb{R}\)-order trees

A _circular order_ on a set \(E\) is a function \(c:E^{3}\to\{-1,0,1\}\) satisfying:

* if \(e_{1},e_{2},e_{3}\in E\), then \(c(e_{1},e_{2},e_{3})=0\) if and only if \(e_{i}=e_{j}\) for some \(i\neq j\);
* for all \(e_{1},e_{2},e_{3},e_{4}\in E\) we have \(c(e_{2},e_{3},e_{4})-c(e_{1},e_{3},e_{4})+c(e_{1},e_{2},e_{4})-c(e_{1},e_{2},e_{3})=0\).

An immediate consequence of these conditions is that if \(e_{1},e_{2},e_{3}\in E\) and \(\tau\in S_{3}\) is a permutation, then

\[c(e_{\tau(1)},e_{\tau(2)},e_{\tau(3)})=\operatorname{sign}(\tau)c(e_{1},e_{2},e_{3}) \tag{4.2.1}\]

Consider an \(\mathbb{R}\)-order tree \(T\) with \(\mathcal{S}(T)\) its family of ordered segments. Following [31, §2], a _cyclic ordering_ on \(T\) consists of two sets of local circular orders:

* a circular order on any finite set of segments \(\{\sigma_{i}\}\) such that \(\sigma_{i}^{-}=\sigma_{j}^{-}\) for all \(i,j\) and \((\sigma_{i}\setminus\sigma_{i}^{-})\cap(\sigma_{j}\setminus\sigma_{j}^{-})=\emptyset\) for all \(i\neq j\). Further, the circular order on a subset of \(\{\sigma_{i}\}\) or on a set of initial subsegments of \(\sigma_{i}\)'s is that induced from \(\{\sigma_{i}\}\). In this case we call the common point \(\sigma_{i}^{-}\) a _vertex_ of \(T\).
* a circular order on any finite set of segments \(\{\tau\}\cup\{\tau_{i}\}_{i=1}^{n}\), where \(n\geq 2\), \(\tau_{i}^{+}\neq\tau_{j}^{+}\) for \(i\neq j\) and \(\tau=\tau_{i}\setminus\tau_{i}^{+}=\tau_{j}\setminus\tau_{j}^{+}\) for all \(i,j\). Further, the circular order on a subset of \(\{\tau\}\cup\{\tau_{i}\}\) or on a set of terminal subsegments of \(\tau\) and \(\tau_{i}\)'s is that induced from \(\{\tau\}\cup\{\tau_{i}\}\). In this case the set \(\{\tau_{i}^{+}\}_{i=1}^{n}\) is contained in a cataclysm of \(T\).

(A _cataclysm_ is a subset of \(T\) with cardinality two or more of the form \(\overline{\tau}-(\tau\setminus\tau^{+})\), where \(\tau\) is a segment of \(T\) (cf. [17, §3.4]). For instance, in Figure 1, the set \(\{\tau_{1}^{+},\tau_{2}^{+},\tau_{3}^{+},\tau_{4}^{+}\}\) is contained in a cataclysm.)

Figure 1. Two types of local circular orders on an \(\mathbb{R}\)-order tree

Thus a cyclic order on \(T\) determines circular orders on the germs of segments incident to vertices and cataclysms. Following Roberts and Stein [57], we define a _cusp_ of \(T\) to be an equivalence class of pairs of segments \((\sigma,\tau)\) with \(\sigma\setminus\sigma^{+}=\tau\setminus\tau^{+}\) but \(\sigma^{+}\neq\tau^{+}\), where we say that \((\sigma_{1},\tau_{1})\) is equivalent to \((\sigma_{2},\tau_{2})\) if \(\sigma_{1}^{+}=\sigma_{2}^{+}\), \(\tau_{1}^{+}=\tau_{2}^{+}\) and there is an \(x\in(\sigma_{1}\setminus\sigma_{1}^{+})\cap(\sigma_{2}\setminus\sigma_{2}^{+})\) such that \(\sigma_{i}=[\sigma_{i}^{-},x]\cup[x,\sigma_{i}^{+}]\) and \(\tau_{i}=[\tau_{i}^{-},x]\cup[x,\tau_{i}^{+}]\) for both values of \(i\). A cusp is determined uniquely by the pair \(\{\sigma^{+},\tau^{+}\}\), which we use to denote it. The reader will verify that if \(\{p,q\}\) is a cusp, then \(p\) and \(q\) are contained in a cataclysm. A _stem_ of a cusp \(\{p,q\}\) is any of the intervals \([\sigma^{-},\sigma^{+})\), where \((\sigma,\tau)\) represents \(\{p,q\}\).
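Both conditions, and the consequence (4.2.1), can be verified mechanically when \(E\) is a finite set of points on an oriented circle; the following sketch (ours, purely illustrative) does so for the circular order determined by the counterclockwise orientation.

```python
import itertools, math

def c(e1, e2, e3):
    """Circular order on angles in [0, 2*pi): +1 if (e1, e2, e3) is
    positively (counterclockwise) ordered, -1 if not, 0 on repeats."""
    if len({e1, e2, e3}) < 3:
        return 0
    # rotate so e1 sits at 0, then compare positions of e2, e3
    a = (e2 - e1) % (2 * math.pi)
    b = (e3 - e1) % (2 * math.pi)
    return 1 if a < b else -1

pts = [0.3, 1.1, 2.5, 4.0, 5.9]
# condition (2): the cocycle identity, over every ordered 4-tuple
for e1, e2, e3, e4 in itertools.permutations(pts, 4):
    assert c(e2, e3, e4) - c(e1, e3, e4) + c(e1, e2, e4) - c(e1, e2, e3) == 0
# consequence (4.2.1): c changes by the sign of the permutation
signs = {(0,1,2): 1, (1,2,0): 1, (2,0,1): 1, (0,2,1): -1, (2,1,0): -1, (1,0,2): -1}
e = (0.3, 1.1, 2.5)
for perm, sgn in signs.items():
    assert c(*(e[i] for i in perm)) == sgn * c(*e)
print("circular order axioms verified on", len(pts), "points")
```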
The _geodesic spine_ between points \(x,y\) of \(T\) ([57, Definition 3.5, Theorem 3.6]) is the intersection of the images of all paths between them. Moreover, it is easy to use the amalgamation of segments to see that the geodesic spine between \(x\) and \(y\) can be expressed uniquely as a union \(\sigma_{1}\cup\sigma_{2}\cup\cdots\cup\sigma_{n}\) of segments where \(\sigma_{1}^{-}=x,\sigma_{n}^{+}=y\) and \(\{\sigma_{i}^{+},\sigma_{i+1}^{-}\}\) is a cusp for \(1\leq i\leq n-1\). See Figure 2. It is possible that \(\sigma_{1}\) and/or \(\sigma_{n}\) are the degenerate segments \(x\) and/or \(y\) when the latter belong to a cusp. More generally, a _geodesic spine_ is a subset of \(T\) which can be expressed as a nested union of geodesic spines between pairs of points.

Figure 2. The segments colored in blue can be part of a geodesic spine

For each triple of distinct points \((x,y,z)\) of an \(\mathbb{R}\)-order tree, Roberts and Stein defined a subset \(Y_{(x,y,z)}\) of \(T\) representing the unique place in \(T\) at which the geodesic spine from \(y\) to \(x\) diverges from that from \(y\) to \(z\) ([57, page 182]). It is either a point or a cusp, where the latter occurs if and only if both geodesic spines contain a stem of the cusp and \(x\) and \(z\) lie in different components of the complement of one (and hence all) of the cusp's stems in \(T\).

A _ray_ based at \(x\in T\) is a proper embedding \(r:([0,\infty),0)\to(T,x)\). By definition, a ray can only go through at most one point in each cataclysm; otherwise, the map \(r\) cannot be an embedding. One can see this in Figure 2, where any continuous path going through both blue segments must travel downward into the stem of the cusp and then backtrack upward. We consider two rays \(r_{1},r_{2}\) to be equivalent, written \(r_{1}\sim r_{2}\), if there are real numbers \(t_{1},t_{2}\geq 0\) such that \(r_{1}([t_{1},\infty))=r_{2}([t_{2},\infty))\), and define the set of ends of \(T\) to be

\[\mathcal{E}(T)=\{\text{rays in }T\}/\!\sim\]

Given \(e\in\mathcal{E}(T)\) and a point \(x\in T\), there is a (unique) geodesic spine in \(T\) between \(x\) and \(e\) obtained by concatenating a geodesic spine between \(x\) and some point \(y\) of \(T\) with a ray based at \(y\) in the class of \(e\). The following is stated in the proof of [17, Theorem 3.8]. For completeness we include a proof.

**Lemma 4.1**.: _The set of ends \(\mathcal{E}(T)\) of a cyclically ordered \(\mathbb{R}\)-order tree \(T\) admits a natural circular ordering._

Proof.: To define a circular ordering \(c\) on \(\mathcal{E}(T)\) associated to the cyclic ordering on \(T\), let \(e_{*}=(e_{1},e_{2},e_{3})\in\mathcal{E}(T)^{3}\) and set \(c(e_{*})=0\) if \(e_{i}=e_{j}\) for some \(i\neq j\). If the \(e_{i}\) are distinct, choose representative rays \(r_{i}\) for the \(e_{i}\) and increasing sequences \(\{x_{n}\}\) in \(r_{1}\), \(\{y_{n}\}\) in \(r_{2}\), and \(\{z_{n}\}\) in \(r_{3}\) which limit, respectively, to \(e_{1},e_{2},e_{3}\). Our hypotheses imply that for large \(n\) the points \(x_{n},y_{n},z_{n}\) are distinct and the subsets \(Y(x_{n},y_{n},z_{n})\), \(Y(y_{n},z_{n},x_{n})\), and \(Y(z_{n},x_{n},y_{n})\) stabilise. By taking subsequences we can assume that \(Y(x_{n},y_{n},z_{n}),Y(y_{n},z_{n},x_{n})\), and \(Y(z_{n},x_{n},y_{n})\) are independent of \(n\).
Theorem 3.10 of [57] implies that there are segments \(\sigma_{1},\sigma_{2},\sigma_{3}\) of \(T\) such that \((\sigma_{i}\setminus\sigma_{i}^{-})\cap(\sigma_{j}\setminus\sigma_{j}^{-})=\emptyset\) for \(i\neq j\) and one of the following situations arises:

* \(\sigma_{1}^{-}=\sigma_{2}^{-}=\sigma_{3}^{-}=Y(x_{n},y_{n},z_{n})=Y(z_{n},x_{n},y_{n})=Y(y_{n},z_{n},x_{n})\) is a vertex \(v(e_{*})\) of \(T\). Moreover, \(\sigma_{i}\setminus\sigma_{i}^{-}\) is contained in the component of \(T\setminus v(e_{*})\) containing each \(x_{n}\) when \(i=1\), \(y_{n}\) when \(i=2\), and \(z_{n}\) when \(i=3\). See Figure 3, where the components of \(T\setminus v(e_{*})\) containing \(\{x_{n}\}\), \(\{y_{n}\}\), \(\{z_{n}\}\) are illustrated. (Note that these points are not necessarily on \(\sigma_{i}\).)
* \(\sigma_{1}^{-}=Y(z_{n},x_{n},y_{n})\), \(\sigma_{2}^{-}=Y(x_{n},y_{n},z_{n})\), \(\sigma_{3}^{-}=Y(y_{n},z_{n},x_{n})\) are distinct points contained in a cataclysm of \(T\), also denoted \(v(e_{*})\). Moreover, \(\sigma_{i}\setminus\sigma_{i}^{-}\) is contained in the component of \(T\setminus v(e_{*})\) containing each \(x_{n}\) when \(i=1\), \(y_{n}\) when \(i=2\), and \(z_{n}\) when \(i=3\). (Figure 4)
* up to switching \(e_{3}\) and one of \(e_{1},e_{2}\): \(\sigma_{1}^{-}=Y(z_{n},x_{n},y_{n})\) and \(\sigma_{2}^{-}=Y(x_{n},y_{n},z_{n})\) are distinct points contained in a cataclysm \(v(e_{*})\) of \(T\), and \(\{\sigma_{1}^{-},\sigma_{2}^{-}\}=Y(y_{n},z_{n},x_{n})\).
Otherwise, there is a unique \(i\in\{1,2,3\}\) such that \((\sigma_{4}\setminus\sigma_{4}^{-})\cap(\sigma_{i}\setminus\sigma_{i}^{-})\neq\emptyset\). Sketching some possible configurations for the \(\{\sigma_{i}\}\) shows intuitively why the cocycle condition (4.2.3) holds in this case, though the formal verification is somewhat tedious. As such we depict the case that \(i=1\) and \(v(e_{*})\) is a vertex in Figure 6 to help orient the reader through the formal argument. Figure 5. For the general case, set \(\{j,k\}=\{1,2,3\}\setminus\{i\}\), in which case the reader will verify that \(v(e_{i},e_{j},e_{4})=v(e_{i},e_{k},e_{4})\). After possibly shrinking \(\sigma_{4}\) and \(\sigma_{i}\), we can suppose that \(\sigma_{4}=\sigma_{i}\). Let \(\sigma_{i}^{\prime},\sigma_{4}^{\prime}\), \(\sigma_{j}^{\prime}\), \(\sigma_{k}^{\prime}\) be initial segments of the geodesic spines based at \(v^{\prime}\) in the class of \(e_{i},e_{4}\), \(e_{j}\) and \(e_{k}\). Note that we can choose \(\sigma_{j}^{\prime}=\sigma_{k}^{\prime}\). Then, \[s=\left\{\begin{array}{ll}c_{v}(\sigma_{2},\sigma_{3},\sigma_{1})-c_{v^{ \prime}}(\sigma_{1}^{\prime},\sigma_{3}^{\prime},\sigma_{4}^{\prime})+c_{v^{ \prime}}(\sigma_{1}^{\prime},\sigma_{2}^{\prime},\sigma_{4}^{\prime})-c_{v}( \sigma_{1},\sigma_{2},\sigma_{3})&\text{ if }i=1\\ c_{v^{\prime}}(\sigma_{2}^{\prime},\sigma_{3}^{\prime},\sigma_{4}^{\prime})-c_{ v}(\sigma_{1},\sigma_{3},\sigma_{2})+c_{v^{\prime}}(\sigma_{1}^{\prime},\sigma_{2}^{ \prime},\sigma_{4}^{\prime})-c_{v}(\sigma_{1},\sigma_{2},\sigma_{3})&\text{ if }i=2\\ c_{v^{\prime}}(\sigma_{2}^{\prime},\sigma_{3}^{\prime},\sigma_{4}^{\prime})-c_{ v^{\prime}}(\sigma_{1}^{\prime},\sigma_{3}^{\prime},\sigma_{4}^{\prime})+c_{v}( \sigma_{1},\sigma_{2},\sigma_{3})-c_{v}(\sigma_{1},\sigma_{2},\sigma_{3})& \text{ if }i=3\end{array}\right.\] It is simple to see that each of these expressions is zero using (4.2.1) and the fact that we can take \(\sigma_{2}^{\prime}=\sigma_{3}^{\prime}\) for the first, \(\sigma_{1}^{\prime}=\sigma_{3}^{\prime}\) for the second, and \(\sigma_{1}^{\prime}=\sigma_{2}^{\prime}\) for the third. This completes the proof. ### Group actions on \(\mathbb{R}\)-order trees We say that a group \(G\) acts on an \(\mathbb{R}\)-order tree \(T\) if it acts on the underlying space preserving \(\mathcal{S}(T)\). We say that it acts on a cyclically ordered \(\mathbb{R}\)-order tree if it acts on the \(\mathbb{R}\)-order tree preserving the local circular orders. Since any such action satisfies \[Y_{(g\cdot x,g\cdot y,g\cdot z)}=g(Y_{(x,y,z)})\] for triples of distinct points \(x,y,z\in T\), we deduce the following lemma. **Lemma 4.2**.: _If a group \(G\) acts on a cyclically ordered \(\mathbb{R}\)-order tree, then \(G\) acts on the set of ends of \(T\) by order-preserving automorphisms. _ A _circular order_ on a group \(G\) is a circular order on the set \(G\) which satisfies \[c(g\cdot g_{1},g\cdot g_{2},g\cdot g_{3})=c(g_{1},g_{2},g_{3})\] for all \(g,g_{1},g_{2},g_{3}\in G\). An automorphism of a circularly ordered set \((E,c)\) is a bijection of \(E\) which preserves \(c\). The group of such automorphisms will be denoted by \(\operatorname{Aut}(E,c)\). **Lemma 4.3**.: _If \((E,c)\) is a circularly ordered set, then the group \(\operatorname{Aut}(E,c)\) is circularly ordered._ Figure 6. Here, \(i=1\) and we use \(v\) to denote \(v(e_{*})\), \(v^{\prime}\) to denote \(v(e_{i},e_{k},e_{4})=v(e_{i},e_{j},e_{4})\) and suppose that \(v\) and \(v^{\prime}\) are vertices. 
Proof.: A proof in the case that \(E\) is the circle with its natural circular order is contained in [15, Theorem 2.2.14]. The general case is similar, though we sketch the argument for use in the proof of Theorem 4.5. Set \(G=\operatorname{Aut}(E,c)\), fix \(e\in E\) and let \(G_{e}\leq G\) be the stabiliser of \(e\). We obtain a \(G\)-invariant circular order \(c_{e}\) on the set of cosets \(G/G_{e}\) via the \(G\)-invariant embedding \(gG_{e}\mapsto g\cdot e\):

\[c_{e}(g(g_{1}G_{e}),g(g_{2}G_{e}),g(g_{3}G_{e}))=c_{e}(g_{1}G_{e},g_{2}G_{e},g_{3}G_{e})\text{ for all }g,g_{1},g_{2},g_{3}\in G\]

Hence \(G\) is circularly ordered if \(G_{e}=\{1\}\). If \(G_{e}\neq\{1\}\), observe that \(E\setminus\{e\}\) admits a \(G_{e}\)-invariant total order defined by

\[e_{1}<e_{2}\Leftrightarrow c(e,e_{1},e_{2})=1\]

Since \(G_{e}\) acts faithfully on \(E\setminus\{e\}\) preserving this total order, it admits a left-order \(<_{e}\) ([19]). A circular order \(c^{\prime}\) on \(G\) can then be obtained from the sequence \(1\to G_{e}\to G\to G/G_{e}\) by piecing together the left-order \(<_{e}\) on \(G_{e}\) and the circular order \(c_{e}\) on \(G/G_{e}\): If \(g_{1},g_{2},g_{3}\in G\) set

* \(c^{\prime}(g_{1},g_{2},g_{3})=0\) if \(g_{i}=g_{j}\) for some \(i\neq j\);
* \(c^{\prime}(g_{1},g_{2},g_{3})=c_{e}(g_{1}G_{e},g_{2}G_{e},g_{3}G_{e})\) if \(g_{1}G_{e},g_{2}G_{e},g_{3}G_{e}\) are distinct cosets.

In the case when \(g_{1},g_{2},g_{3}\) are distinct, but \(g_{1}G_{e},g_{2}G_{e},g_{3}G_{e}\) are not, set:

* \(c^{\prime}(g_{1},g_{2},g_{3})=(-1)^{(j-i)+1}\text{sign}_{<_{e}}(g_{i}^{-1}g_{j})\) if \(g_{i}G_{e}=g_{j}G_{e}\neq g_{k}G_{e}\) where \(i<j\);
* \(c^{\prime}(g_{1},g_{2},g_{3})=\text{sign}(\tau)\) if \(g_{1}G_{e}=g_{2}G_{e}=g_{3}G_{e}\) and \(\tau\in S_{3}\) is the unique permutation for which \(h_{\tau(1)}<_{e}h_{\tau(2)}<_{e}h_{\tau(3)}\) where \(h_{i}=g^{-1}g_{i}\) for some (and hence any) \(g\in g_{1}G_{e}\).

It is routine to verify that \(c^{\prime}\) is a circular order on \(G\).

A basic example is \(E=S^{1}\) and \(c\) the circular order determined by the usual orientation on \(S^{1}\). In this case \(\operatorname{Aut}(E,c)=\operatorname{Homeo}_{+}(S^{1})\), and we assume below that \(\operatorname{Homeo}_{+}(S^{1})\) is endowed with a circular order of the form described in the proof of Lemma 4.3.
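In the simplest instance of the proof of Lemma 4.3 the stabiliser \(G_{e}\) is trivial and \(c^{\prime}\) is just the pullback of \(c\) under the orbit map \(g\mapsto g\cdot e\). The sketch below (ours, purely illustrative) checks left-invariance of this pullback for the free action of \(\mathbb{Z}/5\) on \(S^{1}\) by rotations.

```python
import itertools, math

def c(e1, e2, e3):
    # circular order on distinct angles, as in the earlier sketch
    if len({e1, e2, e3}) < 3:
        return 0
    a, b = (e2 - e1) % (2 * math.pi), (e3 - e1) % (2 * math.pi)
    return 1 if a < b else -1

# G = Z/5 acting freely on the circle by rotations; the stabiliser of
# the basepoint e is trivial, so the proof of Lemma 4.3 gives the
# circular order c'(g1, g2, g3) = c(g1.e, g2.e, g3.e) on G itself.
n, e = 5, 0.25
act = lambda g, x: (x + 2 * math.pi * g / n) % (2 * math.pi)  # g in Z/n
cp  = lambda g1, g2, g3: c(act(g1, e), act(g2, e), act(g3, e))

# left-invariance: c'(g*g1, g*g2, g*g3) = c'(g1, g2, g3)
for g in range(n):
    for g1, g2, g3 in itertools.permutations(range(n), 3):
        assert cp((g + g1) % n, (g + g2) % n, (g + g3) % n) == cp(g1, g2, g3)
print("left-invariant circular order on Z/5 via its orbit map")
```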
### Stir frying representations via recalibration

We saw in the previous subsection that if a group \(G\) acts on a cyclically ordered \(\mathbb{R}\)-order tree \(T\), there is an associated action of \(G\) on the set of ends of \(T\) which preserves the induced circular order. We will see in this subsection that this often leads to a circular order on \(G\) via Lemma 4.3 and hence, if \(G\) is countable, a dynamic realisation \(\rho:G\to\operatorname{Homeo}_{+}(S^{1})\). Altering the \(G\)-invariant cyclic ordering on \(T\) by a process we call _recalibration_ yields a fundamentally new representation \(G\to\operatorname{Homeo}_{+}(S^{1})\). Here, we make this idea precise by applying it to the natural actions of the fundamental groups on the ends of the leaf spaces of stable laminations of pseudo-Anosov flows on universal covers.

As in §3, we assume that \(\mathcal{M}\) is a closed, connected, oriented \(3\)-orbifold finitely covered by a manifold whose singular set is contained in an oriented link \(B=B_{1}\cup\dots\cup B_{m}\subset|\mathcal{M}|\). Let \(n_{i}\geq 1\) be the order of the isotropy of \(\mathcal{M}\) along \(B_{i}\). We denote a positively oriented meridional curve of \(N(B_{i})\) by \(\beta_{i}\). Given a flow \(\Phi\) on \(|\mathcal{M}|\) which is well-adapted to the pair \((\mathcal{M},B)\) (§3.1), we constructed a commutative diagram of covering maps in the proof of Proposition 3.3, in which \(p\) and \(p^{\prime}\) are universal covers and \(p_{1}\) is a finite degree regular cover from a manifold \(W\) to \(\mathcal{M}\). Let \(\Phi^{\prime}\) be the lift of the flow \(\Phi\) to \(W\) and \(\widetilde{\Phi}\) its lift to \(\widetilde{\mathcal{M}}\cong\mathbb{R}^{3}\). Both \(\Phi^{\prime}\) and \(\widetilde{\Phi}\) are pseudo-Anosov. The orbit space \(\mathcal{O}\) of \(\widetilde{\Phi}\) is homeomorphic to \(\mathbb{R}^{2}\) (Theorem 2.1) and inherits an orientation induced from those on \(\mathcal{M}\) and the flow.

Let \(\widetilde{\Lambda}_{s}\) on \(\widetilde{\mathcal{M}}\cong\mathbb{R}^{3}\) be the pullback of the stable lamination \(\Lambda^{\prime}_{s}\) of \(\Phi^{\prime}\) (§2.2). It follows from our constructions that the image of \(\widetilde{\Lambda}_{s}\) under the projection map \(\pi:\widetilde{\mathcal{M}}\to\mathcal{O}\) is an essential lamination \(\bar{\Lambda}_{s}\) on \(\mathcal{O}\) by lines. Then \(\pi_{1}(\mathcal{M})\) acts on the leaf space \(T(\bar{\Lambda}_{s})\) and hence on its space of ends \(\mathcal{E}(T(\bar{\Lambda}_{s}))\). Let

\[\varphi:\pi_{1}(\mathcal{M})\to\operatorname{Aut}(\mathcal{E}(T(\bar{\Lambda}_{s})))\]

be the associated homomorphism. There is a \(\pi_{1}(\mathcal{M})\)-invariant cyclic ordering on \(T(\bar{\Lambda}_{s})\) induced by the inclusion \(\bar{\Lambda}_{s}\subset\mathcal{O}\), so if \(c_{\Phi}\) is the associated circular ordering on \(\mathcal{E}(T(\bar{\Lambda}_{s}))\), the image of \(\varphi\) is contained in the circularly ordered group \(\operatorname{Aut}(\mathcal{E}(T(\bar{\Lambda}_{s})),c_{\Phi})\). In particular, \(\operatorname{image}(\varphi)\) is a circularly ordered group.

Let \(\Phi_{0}\) be the restriction of \(\Phi\) to \(|\mathcal{M}|\setminus B\). If for \(i=1,2,\ldots,m\), we set \(\Delta_{i}=|\beta_{i}\cdot\delta_{i}(\Phi_{0})|\), then each component \(B_{i}\) of \(B\) lifts to a countable union of flow lines in \(\widetilde{\mathcal{M}}\) which corresponds to a \(\pi_{1}(\mathcal{M})\)-invariant subset \(V_{i}\) of \(T(\bar{\Lambda}_{s})\), each point of which has valency

\[d_{i}=n_{i}\Delta_{i}\]

For each \(x\in V_{i}\), let \(\sigma_{1}^{x},\sigma_{2}^{x},\ldots,\sigma_{d_{i}}^{x}\) be segments incident to \(x\) such that the half-open intervals \(\sigma_{j}^{x}\setminus\{x\}\) lie in different components of \(T(\bar{\Lambda}_{s})\setminus\{x\}\). Assume, moreover, that they are indexed (mod \(d_{i}\)) with respect to the local circular order at \(x\) determined by the inclusion \(\bar{\Lambda}_{s}\subset\mathcal{O}\). Then if \(\beta_{i}^{x}\in\pi_{1}(\mathcal{M})\) is the conjugate of \(\beta_{i}\) which leaves \(x\) invariant, the proof of Proposition 3.3 shows that

\[\beta_{i}^{x}\cdot\sigma_{j}^{x}=\sigma_{j+\Delta_{i}}^{x} \tag{4.4.1}\]

Further, since each \(\gamma\in\pi_{1}(\mathcal{M})\) acts as an orientation-preserving homeomorphism of \(\mathcal{O}\), if \(x\in V_{i}\) there is a \(k(x,\gamma)\in\mathbb{Z}\) such that

\[\gamma\cdot\sigma_{j}^{x}=\sigma_{j+k(x,\gamma)}^{\gamma\cdot x} \tag{4.4.2}\]

The following lemma is needed for the proof of Theorem 4.5 below.
**Lemma 4.4**.: _The kernel of \(\varphi:\pi_{1}(\mathcal{M})\to\operatorname{Aut}(\mathcal{E}(T(\bar{\Lambda}_{s})))\) is torsion-free. Moreover, its image is infinite and non-abelian if some \(n_{i}\geq 3\)._

Proof.: A non-trivial element \(\gamma\) of finite order in \(\pi_{1}(\mathcal{M})\) has fixed points in \(\widetilde{\mathcal{M}}\cong\mathbb{R}^{3}\), so is a power of a conjugate of some \(\beta_{j}\) where \(n_{j}\geq 2\). Then \(\gamma\) fixes a point \(x\) of \(T(\bar{\Lambda}_{s})\) and by (4.4.1) it acts non-trivially on \(\mathcal{E}(T(\bar{\Lambda}_{s}))\). Hence \(\gamma\not\in\text{kernel}(\varphi)\). Circularly ordered finite groups are cyclic, so to complete the proof we need only show that the image of \(\varphi\) is non-abelian. By construction, \(V_{i}\) is spread uniformly across \(T(\bar{\Lambda}_{s})\). In particular there are distinct points \(x_{0}\) and \(x_{0}^{\prime}=\gamma\cdot x_{0}\), where \(\gamma\in\pi_{1}(\mathcal{M})\). Then \(\beta_{i}^{x_{0}}\) stabilises \(x_{0}\) while \(\beta_{i}^{x_{0}^{\prime}}=\gamma\beta_{i}^{x_{0}}\gamma^{-1}\) stabilises \(x_{0}^{\prime}\). If \(e\) is an end of \(T(\bar{\Lambda}_{s})\) determined by a ray based at \(x_{0}\) and passing through \(x_{0}^{\prime}\), the fact that \(n_{i}\geq 3\) implies that \((\beta_{i}^{x_{0}}\beta_{i}^{x_{0}^{\prime}})(e)\neq(\beta_{i}^{x_{0}^{\prime}}\beta_{i}^{x_{0}})(e)\). Hence \(\varphi(\beta_{i}^{x_{0}}\beta_{i}^{x_{0}^{\prime}})\neq\varphi(\beta_{i}^{x_{0}^{\prime}}\beta_{i}^{x_{0}})\), so the image of \(\varphi\) is non-abelian.

**Theorem 4.5**.: _Suppose that \(\mathcal{M}\) is a closed, connected, oriented \(3\)-orbifold and \(B=B_{1}\cup\cdots\cup B_{m}\) is an oriented link in the underlying \(3\)-manifold \(|\mathcal{M}|\) which contains the singular set of \(\mathcal{M}\). Denote by \(n_{i}\geq 1\) the order of the isotropy of \(\mathcal{M}\) along \(B_{i}\) and suppose that \(\Phi\) is a flow on \(|\mathcal{M}|\) that is well-adapted to \((\mathcal{M},B)\). Then given integers \(a_{i}\) coprime with \(n_{i}\), there is an induced action of \(\pi_{1}(\mathcal{M})\) on a cyclically ordered \(\mathbb{R}\)-order tree and an associated faithful representation_

\[\rho:\pi_{1}(\mathcal{M})\to\mathrm{Homeo}_{+}(S^{1})\]

_where \(\rho(\beta_{i})\) is conjugate to rotation by \(2\pi a_{i}/n_{i}\) for each \(i\)._

We note that the theorem allows for \(n_{i}\) to be \(1\) and that if \(n_{i}=1\) for all \(i\), then \(\mathcal{M}\) is a manifold. The following lemma will be used in the proof of the theorem.

**Lemma 4.6**.: _Let \(a_{i}\) and \(n_{i}\) be as in the theorem and let \(\Phi_{0}\) be the pseudo-Anosov flow obtained by restricting \(\Phi\) to \(C(B)\). If \(\Delta_{i}=|\delta_{i}(\Phi_{0})\cdot\beta_{i}|\geq 1\) then there exists \(a_{i}^{\prime}\equiv a_{i}\ (\mathrm{mod}\ n_{i})\) such that \(\gcd(a_{i}^{\prime},\Delta_{i})=1\)._

Proof.: Factor \(\Delta_{i}\) as \(c_{i}e_{i}\) where \(\gcd(e_{i},n_{i})=1\) and each prime which divides \(c_{i}\) also divides \(n_{i}\). Next choose an integer \(k\) so that \(kn_{i}\equiv 1-a_{i}\ (\mathrm{mod}\ e_{i})\), so we have \(a_{i}^{\prime}=a_{i}+kn_{i}\equiv 1\ (\mathrm{mod}\ e_{i})\). Then \(a_{i}^{\prime}\equiv a_{i}\ (\mathrm{mod}\ n_{i})\) and is coprime with \(e_{i}\) and \(n_{i}\) and, a fortiori, with \(c_{i}\). Thus it is coprime with \(\Delta_{i}=c_{i}e_{i}\).
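Before turning to the proof of Theorem 4.5, we note that the proof of Lemma 4.6 is constructive. The following minimal sketch (ours; the helper name `adjust` is hypothetical) implements it and checks it on the data of Figure 7 below (\(a_{i}=2\), \(n_{i}=3\), \(\Delta_{i}=2\)).

```python
from math import gcd

def adjust(a_i, n_i, Delta_i):
    """Lemma 4.6: return a' with a' = a_i (mod n_i) and gcd(a', Delta_i) = 1."""
    assert gcd(a_i, n_i) == 1 and Delta_i >= 1
    # e_i: largest divisor of Delta_i coprime with n_i (so Delta_i = c_i * e_i)
    e_i = Delta_i
    while gcd(e_i, n_i) > 1:
        e_i //= gcd(e_i, n_i)
    # choose k with k * n_i = 1 - a_i (mod e_i); n_i is invertible (mod e_i)
    k = (pow(n_i, -1, e_i) * (1 - a_i)) % e_i if e_i > 1 else 0
    a_prime = a_i + k * n_i
    assert a_prime % n_i == a_i % n_i and gcd(a_prime, Delta_i) == 1
    return a_prime

print(adjust(2, 3, 2))   # the data of Figure 7: returns 5, as in its caption
```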
Proof of Theorem 4.5.: Let \(\Phi_{0}\) denote the restriction of \(\Phi\) to \(C(B)\), which, by hypothesis, is pseudo-Anosov. By Lemma 3.1, \(\mathcal{M}\) contains no teardrops or spindles, so is finitely covered by a manifold [3]. If \(n_{i}=1\) or \(2\) for all \(i=1,\cdots,m\), we take \(\rho\) to be the asymptotic circle action of Proposition 3.3. We remark that we cannot always use the action of \(\pi_{1}(\mathcal{M})\) on \(\mathcal{E}(T(\bar{\Lambda}_{s}))\) in this case as it is possible that \(T(\bar{\Lambda}_{s})\) is a line and hence \(\mathcal{E}(T(\bar{\Lambda}_{s}))\) only contains two points.

Next we assume that some \(n_{i}\geq 3\). Lemma 4.4 then shows that the kernel of \(\varphi:\pi_{1}(\mathcal{M})\to\operatorname{Aut}(\mathcal{E}(T(\bar{\Lambda}_{s})))\) is torsion-free and its image is infinite and non-abelian. Our strategy involves altering the circular orderings on \(T(\bar{\Lambda}_{s})\) equivariantly over each \(V_{i}\) with \(n_{i}\geq 3\). (When \(n_{i}\leq 2\), \(a_{i}\) is uniquely determined (mod \(n_{i}\)) and no such alterations are needed.) The reader will find an illustrative example of the idea of the argument in Figure 7 below.

Recall that \(V_{i}\) is the \(\pi_{1}(\mathcal{M})\)-invariant subset of \(T(\bar{\Lambda}_{s})\) corresponding to \(B_{i}\) and that each point of \(V_{i}\) has valency \(d_{i}=n_{i}\Delta_{i}\). Without loss of generality we can suppose that \(\gcd(a_{i},d_{i})=1\), by Lemma 4.6. For each \(i\) with \(n_{i}\geq 3\), fix an integer \(b_{i}\) such that \(a_{i}b_{i}\equiv 1\) (mod \(d_{i}\)) and replace the circular ordering at \(x\in V_{i}\) by the one determined by the listing

\[\sigma_{1}^{x},\sigma_{b_{i}+1}^{x},\sigma_{2b_{i}+1}^{x},\sigma_{3b_{i}+1}^{x},\ldots,\sigma_{(d_{i}-1)b_{i}+1}^{x} \tag{4.4.3}\]

where the indices take values in \(\{1,\cdots,d_{i}\}\) (mod \(d_{i}\)). We claim that this new circular ordering on the segments incident to the points of \(V_{i}\) is invariant under the action of \(\pi_{1}(\mathcal{M})\). To see this, first use (4.4.1) and (4.4.2) to reduce the verification to showing that the new local order is preserved under the action \(t:\{\sigma_{j}^{x}\}\to\{\sigma_{j}^{x}\}\) with \(t(\sigma_{j}^{x})=\sigma_{j+1}^{x}\). Next note that since \(a_{i}b_{i}\equiv 1\) (mod \(d_{i}\)), for any \(k\in\{0,1,\cdots,d_{i}-1\}\), we have

\[\sigma_{(kb_{i}+1)+1}^{x}=\sigma_{(kb_{i}+1)+a_{i}b_{i}}^{x}=\sigma_{(k+a_{i})b_{i}+1}^{x} \tag{4.4.4}\]

and therefore the sequence

\[t(\sigma_{1}^{x}),t(\sigma_{b_{i}+1}^{x}),t(\sigma_{2b_{i}+1}^{x}),\ldots,t(\sigma_{(d_{i}-1)b_{i}+1}^{x})\]

is identical to the sequence in (4.4.3) up to a cyclic permutation. We _recalibrate_ \(T(\bar{\Lambda}_{s})\) by changing the local order at \(V_{i}\) as in (4.4.3) to obtain a new \(\pi_{1}(\mathcal{M})\)-invariant cyclic ordering on \(T(\bar{\Lambda}_{s})\) and therefore a new \(\pi_{1}(\mathcal{M})\)-invariant circular order \(c\) on \(\mathcal{E}(T(\bar{\Lambda}_{s}))\). As such, the image of \(\varphi:\pi_{1}(\mathcal{M})\to\operatorname{Aut}(\mathcal{E}(T(\bar{\Lambda}_{s})))\) lies in \(\operatorname{Aut}(\mathcal{E}(T(\bar{\Lambda}_{s})),c)\).
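The invariance claim just verified is easy to test by machine: for every \(d_{i}\) and every \(b_{i}\) invertible (mod \(d_{i}\)), applying \(t\) to the listing (4.4.3) should rotate it by \(a_{i}=b_{i}^{-1}\) (mod \(d_{i}\)) positions, by (4.4.4). A minimal sketch (ours):

```python
from math import gcd

def listing(d, b):
    """The recalibrated listing (4.4.3): indices 1, b+1, 2b+1, ..., (d-1)b+1,
    reduced (mod d) to values in {1, ..., d}."""
    return [((k * b) % d) + 1 for k in range(d)]

for d in range(2, 16):
    for b in range(1, d):
        if gcd(b, d) != 1:
            continue
        order = listing(d, b)
        shifted = [(j % d) + 1 for j in order]   # apply t: sigma_j -> sigma_{j+1}
        a = pow(b, -1, d)                        # a = b^{-1} (mod d)
        assert shifted == order[a:] + order[:a]  # t is rotation by a positions
```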
Since \(\operatorname{kernel}(\varphi)\) is a torsion-free infinite index subgroup of \(\pi_{1}(\mathcal{M})\), it is the fundamental group of an irreducible, orientable, non-compact \(3\)-manifold. Hence it is either trivial or left-orderable (cf. proof of [10, Theorem 1.1]). Then the exact sequence

\[1\to\operatorname{kernel}(\varphi)\to\pi_{1}(\mathcal{M})\stackrel{\varphi}{\longrightarrow}\operatorname{Aut}(\mathcal{E}(T(\bar{\Lambda}_{s})),c)\]

determines a circular order \(c_{0}\) on \(\pi_{1}(\mathcal{M})\) ([15, Lemma 2.2.12]) whose dynamic realisation is an order-preserving injection \(\rho_{0}:\pi_{1}(\mathcal{M})\to\operatorname{Homeo}_{+}(S^{1})\) ([15, Lemma 2.2.10]). We show that \(\rho_{0}(\beta_{i}^{x})\) is conjugate to a rotation by \(2\pi a_{i}/n_{i}\). To do this, we must unpack the construction of the circular order \(c_{0}\) on \(\pi_{1}(\mathcal{M})\) and its dynamic realisation \(\rho_{0}\).

The set of ends \(\mathcal{E}(T(\bar{\Lambda}_{s}))\) can be decomposed as the disjoint union \(\bigsqcup_{j=1}^{d_{i}}\mathcal{E}_{j}^{x}\), where \(\mathcal{E}_{j}^{x}\) is the set of ends determined by infinite geodesic rays based at \(x\) with initial segment \(\sigma_{j}^{x}\). Then (4.4.1) shows that \(\beta_{i}^{x}\cdot\mathcal{E}_{j}^{x}=\mathcal{E}_{j+\Delta_{i}}^{x}\). Choose \(e_{j}\in\mathcal{E}_{j}^{x}\) for \(j=1,2,\ldots,d_{i}\) so that the set \(\{e_{j}\}\) is preserved under the action of \(\beta_{i}^{x}\). By construction, the circular order on the set \(\{e_{j}\}\) corresponds to the circular order on the set of segments \(\{\sigma_{j}^{x}\}\), which is given by (4.4.3). See Figure 7.

Figure 7. In this figure we simplify the notation by dropping the superscript \(x\) from \(\sigma_{j}^{x}\). We suppose that \(n_{i}=3\), \(\Delta_{i}=|\delta_{i}(\Phi_{0})\cdot\beta_{i}|=2\), so \(d_{i}=2\times 3=6\), and \(a_{i}=2\). Theorem 4.5 claims that we can find a representation \(\rho\) so that \(\rho(\beta_{i})\) is conjugate to rotation by \(4\pi/3\). Since \((a_{i},d_{i})\neq 1\), to make the construction work, we must replace \(a_{i}\) by \(5\), say, which equals \(2\) (mod \(n_{i}\)), and is coprime with \(d_{i}=6\). Choose \(b_{i}=-1\). Now we can produce the new local order as described in (4.4.3), which is shown in the figure to the right. Note that since \(\Delta_{i}=2\), \(\beta_{i}\) sends \(\sigma_{j}^{x}\) to \(\sigma_{j+2}^{x}\), so its action under the natural order has rotation number \(1/3\) (mod \(\mathbb{Z}\)), while its rotation number under the new order is \(2/3\) (mod \(\mathbb{Z}\)).

Given any \(k\in\{0,\ldots,d_{i}-1\}\), by (4.4.4), we have

\[\beta_{i}^{x}\cdot e_{1+kb_{i}}=e_{1+kb_{i}+\Delta_{i}}=e_{1+(k+a_{i}\Delta_{i})b_{i}} \tag{4.4.5}\]

If we use \(e_{1}\) to define the circular order on \(\operatorname{Aut}(\mathcal{E}(T(\bar{\Lambda}_{s})),c)\) (cf. the proof of Lemma 4.3), then the circular order on the cyclic group \(\langle\beta_{i}^{x}\rangle\) is determined by the induced circular order on the set \(\langle\beta_{i}^{x}\rangle\cdot e_{1}\subset\mathcal{E}(T(\bar{\Lambda}_{s}))\). Choose an order-preserving embedding \(\iota:\pi_{1}(\mathcal{M})\to S^{1}\) such that the action of \(\pi_{1}(\mathcal{M})\) on \(\iota(\pi_{1}(\mathcal{M}))\) given by \(\gamma^{\prime}\cdot\iota(\gamma)=\iota(\gamma^{\prime}\gamma)\) for any \(\gamma,\gamma^{\prime}\in\pi_{1}(\mathcal{M})\), extends to an action of \(\pi_{1}(\mathcal{M})\) on \(S^{1}\) by orientation-preserving homeomorphisms. Assuming that \(\iota\) is chosen with enough care (cf. [15, Lemma 2.2.10]), we obtain the dynamic realisation of \(c_{0}\), which is a faithful representation \(\rho_{0}:\pi_{1}(\mathcal{M})\to\text{Homeo}_{+}(S^{1})\). Then from (4.4.5) and the construction of the dynamic realisation, it follows that the rotation number of \(\rho_{0}(\beta_{i}^{x})\) in \(\text{Homeo}_{+}(S^{1})\) is \(\frac{a_{i}\Delta_{i}}{d_{i}}=\frac{a_{i}}{n_{i}}\). Since \(\beta_{i}^{x}\) has finite order, it must be conjugate to a rotation by \(\frac{2\pi a_{i}}{n_{i}}\), which was to be proved.
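The rotation numbers asserted in the caption of Figure 7 can be recomputed from the listings directly. The following sketch (ours; helper name hypothetical) finds rotation number \(1/3\) for \(j\mapsto j+\Delta_{i}\) with respect to the natural order and \(2/3=a_{i}/n_{i}\) with respect to the recalibrated order.

```python
from fractions import Fraction

def rotation_number(order, delta):
    """Rotation number of sigma_j -> sigma_{j+delta} with respect to the
    cyclic listing `order` of the indices {1, ..., d}."""
    d = len(order)
    pos = {j: p for p, j in enumerate(order)}
    shifts = {(pos[((j - 1 + delta) % d) + 1] - pos[j]) % d for j in order}
    assert len(shifts) == 1            # the map is a rotation of the listing
    return Fraction(shifts.pop(), d)

natural = [1, 2, 3, 4, 5, 6]                        # d_i = 6
recal = [((k * (-1)) % 6) + 1 for k in range(6)]    # (4.4.3) with b_i = -1: 1,6,5,4,3,2
print(rotation_number(natural, 2))   # 1/3, the natural order in Figure 7
print(rotation_number(recal, 2))     # 2/3 = a_i/n_i with a_i = 2, n_i = 3
```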
**Theorem 4.7**.: _Let \(L=K_{1}\cup\cdots\cup K_{m}\) be a link in an orientable \(3\)-manifold \(W\) whose complement admits a pseudo-Anosov flow \(\Phi_{0}\). For each \(i\), fix an oriented essential simple closed curve \(\alpha_{i}\) on \(T_{i}\) and an integer \(n_{i}\geq 1\) so that \(n_{i}|\alpha_{i}\cdot\delta_{i}(\Phi_{0})|\geq 2\). Then for any integer \(a_{i}\) coprime with \(n_{i}\), there is a homomorphism \(\rho:\pi_{1}(X(L))\to\text{Homeo}_{+}(S^{1})\) with non-cyclic image such that \(\rho(\alpha_{i})\) is conjugate to rotation by \(2\pi a_{i}/n_{i}\) for each \(i\)._

Proof.: Set \(\mathcal{M}=X(L)(\alpha_{*};n_{*})\). Let \(B=B_{1}\cup\cdots\cup B_{m}\) be the link in \(|\mathcal{M}|=X(L)(\alpha_{*})\) corresponding to the cores of the \(\alpha_{i}\)-surgery solid tori. Theorem 4.7 now follows from Theorem 4.5 as Corollary 3.4 followed from Proposition 3.3.

## 5. Euler class of representations and left-orderable cyclic branched covers

The goal of this section is to prove the following theorem. Our method is to show that the Euler classes, defined below, of the representations constructed in the previous sections vanish on certain co-cyclic subgroups.

**Theorem 5.1**.: _Let \(L=K_{1}\cup\cdots\cup K_{m}\) be a prime link in an integer homology \(3\)-sphere whose exterior is irreducible. Suppose that \(\rho:\pi_{1}(X(L))\to\text{Homeo}_{+}(S^{1})\) is a representation with non-cyclic image such that \(\rho(\mu_{i})\) is conjugate to rotation by \(2\pi a_{i}/n\) for some \(a_{i},n\in\mathbb{Z}\), where \(n\geq 2\). If the induced homomorphism \(\psi:\pi_{1}(X(L))\to\mathbb{Z}/n\) which sends \(\mu_{i}\) to \(a_{i}\pmod{n}\) is an epimorphism, then \(\pi_{1}(\Sigma_{\psi}(L))\) is left-orderable._

**Remark 5.2**.: We allow the possibility that \(\rho(\mu_{i})=\text{id}_{S^{1}}\) in Theorem 5.1 (i.e. \(a_{i}\equiv 0\pmod{n}\)), which leads to a slightly more general type of cyclic branched cover \(\Sigma_{\psi}(L)\) than considered elsewhere in the paper. Similarly we allow \(n_{i}\) to be \(1\) in Lemma 5.3 below.

### Euler classes

The set of central extensions of a group \(G\) by \(\mathbb{Z}\) is naturally identified with \(H^{2}(G)\) in such a way that the direct product \(G\times\mathbb{Z}\) corresponds to \(0\). More precisely, each inhomogeneous \(2\)-cocycle \(\xi\) on \(G\) normalised to take the value \(0\) on \((1,1)\) determines a central extension

\[1\to\mathbb{Z}\to\widetilde{G}_{\xi}\stackrel{\varphi}{\longrightarrow}G\to 1\]

where \(\widetilde{G}_{\xi}=G\times\mathbb{Z}\) as a set, \(\varphi\) is the projection, and multiplication in \(\widetilde{G}_{\xi}\) is defined by \((g,a)\cdot(h,b)=(gh,a+b+\xi(g,h))\). Altering the cocycle by a coboundary yields an equivalent extension. Conversely, given a central extension \(1\to\mathbb{Z}\to\widetilde{G}\stackrel{\varphi}{\longrightarrow}G\to 1\) and transversal \(s:(G,1)\to(\widetilde{G},1)\) to \(\varphi\), the function \(\xi:G^{2}\to\text{kernel}(\varphi)=\mathbb{Z},(g,h)\mapsto s(gh)^{-1}s(g)s(h)\) is a normalised \(2\)-cocycle whose class in \(H^{2}(G)\) is independent of the choice of \(s\).
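This correspondence is concrete enough to compute with. The following minimal sketch (ours) takes \(G=\mathbb{Z}/n\) with the transversal \(s(a)=a\), \(0\leq a<n\), whose cocycle is the "carry", and checks that the twisted product is associative and assembles the extension \(0\to\mathbb{Z}\to\mathbb{Z}\to\mathbb{Z}/n\to 0\), whose class generates \(H^{2}(\mathbb{Z}/n)\cong\mathbb{Z}/n\).

```python
def make_extension(n):
    """Central extension of G = Z/n by Z from the transversal s(a) = a, 0 <= a < n.
    The normalised cocycle is the 'carry': xi(g, h) = floor((g + h)/n)."""
    xi = lambda g, h: (g + h) // n
    mul = lambda x, y: ((x[0] + y[0]) % n, x[1] + y[1] + xi(x[0], y[0]))
    return xi, mul

n = 5
xi, mul = make_extension(n)
elts = [(g, a) for g in range(n) for a in (-1, 0, 2)]
# associativity of the twisted product (equivalent to the cocycle identity)
assert all(mul(mul(x, y), z) == mul(x, mul(y, z))
           for x in elts for y in elts for z in elts)
# this extension is 0 -> Z -> Z -> Z/n -> 0: (g, a) -> g + n*a is a homomorphism
iso = lambda x: x[0] + n * x[1]
assert all(iso(mul(x, y)) == iso(x) + iso(y) for x in elts for y in elts)
```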
The class \(e\in H^{2}(G)\) of an extension \(1\to\mathbb{Z}\to\widetilde{G}\stackrel{\varphi}{\longrightarrow}G\to 1\) is called its _Euler class_.

There is a universal covering homomorphism \(\varphi:\operatorname{Homeo}_{\mathbb{Z}}(\mathbb{R})\to\operatorname{Homeo}_{+}(S^{1})\), where \(\operatorname{Homeo}_{\mathbb{Z}}(\mathbb{R})\) is the group of homeomorphisms of the real line which commute with translation by \(1\). The kernel of \(\varphi\) is the group of integer translations and is central in \(\operatorname{Homeo}_{\mathbb{Z}}(\mathbb{R})\), so there is a central extension

\[1\to\mathbb{Z}\to\operatorname{Homeo}_{\mathbb{Z}}(\mathbb{R})\stackrel{\varphi}{\longrightarrow}\operatorname{Homeo}_{+}(S^{1})\to 1\]

Given a representation \(\rho:G\to\operatorname{Homeo}_{+}(S^{1})\), define \(\widetilde{G}_{\rho}\) to be the subgroup \(\{(g,f)\mid\rho(g)=\varphi(f)\}\) of the direct product \(G\times\operatorname{Homeo}_{\mathbb{Z}}(\mathbb{R})\) and note that the projections \(\varphi_{\rho}:\widetilde{G}_{\rho}\to G\) and \(\tilde{\rho}:\widetilde{G}_{\rho}\to\operatorname{Homeo}_{\mathbb{Z}}(\mathbb{R})\) give rise to a commutative diagram of central extensions. The _Euler class_ \(e(\rho)\in H^{2}(G)\) of \(\rho\) is defined to be the Euler class of the extension \(1\to\mathbb{Z}\to\widetilde{G}_{\rho}\stackrel{\varphi_{\rho}}{\longrightarrow}G\to 1\) and is therefore the obstruction to \(\varphi_{\rho}\) admitting a splitting homomorphism. Equivalently, it is the obstruction to lifting \(\rho\) to a representation \(G\to\operatorname{Homeo}_{\mathbb{Z}}(\mathbb{R})\).

### Proof of Theorem 5.1

We begin with a lemma.

**Lemma 5.3**.: _Suppose that \(L\) is an oriented link in an oriented integer homology \(3\)-sphere \(W\) with components \(K_{1},K_{2},\ldots,K_{m}\) and \(\psi:\pi_{1}(X(L))\to\mathbb{Z}/n\) is an epimorphism with associated \(n\)-fold cyclic branched cover \((\Sigma_{\psi}(L),\tilde{L})\to(W,L)\). If \(n_{i}\) is the order of \(\psi(\mu_{i})\) in \(\mathbb{Z}/n\) and \(\Sigma_{\psi}(L)\) is an irreducible rational homology \(3\)-sphere with infinite fundamental group, then \(H^{2}(\pi_{1}(X(L)(\mu_{*};n_{*})))\cong\oplus_{i=1}^{m}\mathbb{Z}/n_{i}\), where \(\mu_{*}=(\mu_{1},\mu_{2},\ldots,\mu_{m})\) and \(n_{*}=(n_{1},n_{2},\ldots,n_{m})\)._

Proof.: Set \(G=\pi_{1}(X(L)(\mu_{*};n_{*}))\). Our hypotheses imply that \(\psi\) factors through a homomorphism \(\bar{\psi}:G\to\mathbb{Z}/n\) where, by construction, the cover of \(X(L)(\mu_{*};n_{*})\) corresponding to the kernel of \(\bar{\psi}\) is \(\Sigma_{\psi}(L)\). Let \(Z\) be a \(K(G,1)\) so that \(H_{*}(G)\cong H_{*}(Z)\) and \(H^{*}(G)\cong H^{*}(Z)\). Since \(W\) is an integer homology \(3\)-sphere, it is easy to verify that \(H_{1}(Z)\cong\oplus_{i=1}^{m}\mathbb{Z}/n_{i}\). Let \(\widetilde{Z}\to Z\) be the \(n\)-fold cyclic cover where \(\pi_{1}(\widetilde{Z})=\pi_{1}(\Sigma_{\psi}(L))=\operatorname{kernel}(\bar{\psi})\), so that \(\widetilde{Z}=K(\pi_{1}(\Sigma_{\psi}(L)),1)\). Since \(\Sigma_{\psi}(L)\) is irreducible with infinite fundamental group, it is aspherical. Hence \(\Sigma_{\psi}(L)\simeq\widetilde{Z}\) and therefore as \(\Sigma_{\psi}(L)\) is a rational homology \(3\)-sphere, \(H_{2}(\widetilde{Z})=H_{2}(\Sigma_{\psi}(L))=0\). A transfer argument then shows that \(H_{2}(Z;\mathbb{Q})=0\).
Now apply universal coefficients to \(Z\) to deduce that

\[H^{2}(G)\cong H^{2}(Z)\cong\operatorname{Ext}(H_{1}(Z),\mathbb{Z})\oplus\operatorname{Hom}(H_{2}(Z),\mathbb{Z})=\operatorname{Ext}(\oplus_{i=1}^{m}\mathbb{Z}/n_{i},\mathbb{Z})\cong\oplus_{i=1}^{m}\mathbb{Z}/n_{i}\]

Proof of Theorem 5.1.: Let \(n_{i}\) be the order of \(\psi(\mu_{i})\) in \(\mathbb{Z}/n\), \(n_{*}=(n_{1},n_{2},\ldots,n_{m})\), \(\mu_{*}=(\mu_{1},\mu_{2},\ldots,\mu_{m})\) and \(G=\pi_{1}(X(L)(\mu_{*};n_{*}))\). Our hypotheses imply that \(\psi\) and \(\rho\) factor through homomorphisms \(\bar{\psi}:G\to\mathbb{Z}/n\) and \(\bar{\rho}:G\to\mathrm{Homeo}_{+}(S^{1})\) where, by construction, the cover of \(X(L)(\mu_{*};n_{*})\) corresponding to \(\bar{\psi}\) is \(\Sigma_{\psi}(L)\). Hence \(\pi_{1}(\Sigma_{\psi}(L))=\ker(\bar{\psi})\). Our hypotheses also imply that \(\Sigma_{\psi}(L)\) is irreducible (cf. [12, Proposition 10.2]), so it will have a left-orderable fundamental group if it has a positive first Betti number ([10, Theorem 1.1]). Assume, then, that \(\Sigma_{\psi}(L)\) is a rational homology \(3\)-sphere. Since the only finite subgroups of \(\mathrm{Homeo}_{+}(S^{1})\) are conjugate into \(SO(2)\), hence cyclic, our hypotheses imply that the image of \(\bar{\rho}\) is infinite, and then the restriction of \(\bar{\rho}\) to \(\pi_{1}(\Sigma_{\psi}(L))\) also has infinite image. In particular, \(\Sigma_{\psi}(L)\) has an infinite fundamental group, and therefore \(H^{2}(G)\cong\oplus_{i=1}^{m}\mathbb{Z}/n_{i}\) by Lemma 5.3.

Let \(Y\) be a CW-complex obtained by attaching \(2\)-cells \(D_{1},D_{2},\ldots,D_{m}\) to \(X(L)\) with attaching maps \(\partial D_{i}\to X(L)\) which wrap \(n_{i}\) times around a positively oriented meridian of \(K_{i}\), so that \(\pi_{1}(Y)=G\). Next construct an Eilenberg-MacLane space \(Z=K(G,1)\) by attaching cells of dimension \(3\) or more to \(Y\). We can identify \(H^{2}(G)\) with \(H^{2}(Z)\). In what follows we use \(C_{*}(Z)\) and \(C^{*}(Z)\) to denote the cellular chain and cochain complexes of \(Z\) over \(\mathbb{Z}\). A cohomology class represented by a cocycle \(\zeta\) will be denoted by \([\zeta]\).

For each \(1\leq i\leq m\), let \(f_{i}\in\mathrm{Homeo}_{\mathbb{Z}}(\mathbb{R})\) be a lift of \(\bar{\rho}(\mu_{i})\) which is conjugate to a translation by \(a_{i}/n\). In §2 of [50], Milnor constructed a cellular \(2\)-cocycle \(\omega\) of \(Z\) representing \(e(\bar{\rho})\) whose value on the \(2\)-cell \(D_{i}\) is the negative of the translation number of \(f_{i}^{n_{i}}=\mathrm{sh}(n_{i}a_{i}/n)\) ([50, Lemma 2]). That is, \(\omega(D_{i})=-n_{i}a_{i}/n\), where we note that as \(n_{i}\) is the order of \(\psi(\mu_{i})\equiv a_{i}\) in \(\mathbb{Z}/n\), \(n_{i}a_{i}/n\in\mathbb{Z}\). Since \(H^{2}(Z)\cong H^{2}(G)\cong\oplus_{i=1}^{m}\mathbb{Z}/n_{i}\) where each \(n_{i}\) divides \(n\), \(n\omega\) represents zero in \(H^{2}(Z)\). Hence there is a \(1\)-cochain \(\eta\in C^{1}(Z)\) for which \(\delta\eta=n\omega\) and so reducing \(\eta\ (\mathrm{mod}\ n)\) we obtain a \((\mathrm{mod}\ n)\) \(1\)-cocycle \(\bar{\eta}\) representing an element \([\bar{\eta}]\in H^{1}(Z;\mathbb{Z}/n)=\mathrm{Hom}(H_{1}(Z),\mathbb{Z}/n)\). Let \(f\) be the composition

\[\pi_{1}(Z)\to H_{1}(Z)\stackrel{[\bar{\eta}]}{\longrightarrow}\mathbb{Z}/n\]

and note that if \(p:\widetilde{Z}\to Z\) is the covering corresponding to \(\mathrm{kernel}(f)\), then \(\widetilde{Z}=K(\mathrm{kernel}(f),1)\). We claim that \(\widetilde{Z}\) is homotopy equivalent to \(\Sigma_{\psi}(L)\).
Equivalently, \(\pi_{1}(\widetilde{Z})\cong\pi_{1}(\Sigma_{\psi}(L))\). To see this, note that the boundary of the \(2\)-chain \(D_{i}\) is the \(1\)-cycle \(n_{i}\mu_{i}\) in \(C_{1}(Z)\), so

\[\eta(\mu_{i})=(1/n_{i})\eta(\partial(D_{i}))=(1/n_{i})(\delta\eta)(D_{i})=(n/n_{i})\omega(D_{i})=(n/n_{i})(-n_{i}a_{i}/n)=-a_{i}\]

Then thinking of \(\mu_{i}\) as an element of \(\pi_{1}(Z)=G\) we have

\[f(\mu_{i})=\bar{\eta}(\mu_{i})\equiv-a_{i}\equiv-\bar{\psi}(\mu_{i})\ (\mathrm{mod}\ n)\]

It follows that \(f=-\bar{\psi}\) and therefore \(\pi_{1}(\widetilde{Z})=\mathrm{kernel}(f)=\mathrm{kernel}(\bar{\psi})=\pi_{1}(\Sigma_{\psi}(L))\).

To complete the proof, let \(\hat{\rho}=\bar{\rho}|_{\pi_{1}(\Sigma_{\psi}(L))}\) and observe that

\[e(\hat{\rho})=p^{*}(e(\bar{\rho}))=p^{*}([\omega])\in H^{2}(\widetilde{Z})\]

If we can show that \(p^{*}([\omega])=0\), then \(\hat{\rho}\) lifts to a (non-trivial) representation \(\tilde{\rho}:\pi_{1}(\Sigma_{\psi}(L))\to\text{Homeo}_{\mathbb{Z}}(\mathbb{R})\leq\text{Homeo}_{+}(\mathbb{R})\), so as \(\Sigma_{\psi}(L)\) is orientable and irreducible, \(\pi_{1}(\Sigma_{\psi}(L))\) is left-orderable ([10, Theorem 1.1]) and we are done. To prove that \(p^{*}([\omega])=0\), note that the identity \(\delta\eta=n\omega\) implies that the Bockstein homomorphism \(H^{1}(Z;\mathbb{Z}/n)\stackrel{\beta}{\longrightarrow}H^{2}(Z)\) of the coefficient sequence \(0\to\mathbb{Z}\stackrel{n}{\longrightarrow}\mathbb{Z}\to\mathbb{Z}/n\to 0\) sends \([\bar{\eta}]\) to \([\omega]\). Then

\[p^{*}([\omega])=p^{*}(\beta([\bar{\eta}]))=\beta(p^{*}([\bar{\eta}]))\]

by the naturality of \(\beta\). On the other hand, \(p^{*}([\bar{\eta}])\in H^{1}(\widetilde{Z};\mathbb{Z}/n)\) corresponds to \([\bar{\eta}]\circ p_{*}\in\text{Hom}(H_{1}(\widetilde{Z}),\mathbb{Z}/n)\), so is zero by the definition of \(\widetilde{Z}\). Thus \(p^{*}([\omega])=0\).

## 6. Applications and Examples

### Order detects meridional slopes

There are three types of slope detection: order-detection, foliation-detection and non-\(L\)-space detection. For simplicity, we consider slope detection for knot manifolds, i.e. compact, connected, orientable, irreducible \(3\)-manifolds with torus boundary that are not homeomorphic to \(S^{1}\times D^{2}\). It was shown in [8, Theorem 1.3] that given two knot manifolds \(M_{1}\) and \(M_{2}\) and a homeomorphism \(f:\partial M_{1}\to\partial M_{2}\), if \(f\) maps an _order-detected_ slope on \(\partial M_{1}\) to an order-detected slope on \(\partial M_{2}\), then the fundamental group of \(W=M_{1}\cup_{f}M_{2}\) is left-orderable. Similar results hold for foliation-detection and non-\(L\)-space detection, which are proved in [12, Theorem 5.2] and [37, Theorem 1.14] respectively. We refer the reader to [8, §7.2] (see also [12, §6]) for the formal definition of order-detection. We remark that order-detection is called LO-detection in [12]. The following proposition gives a sufficient representation-theoretic condition for a slope to be order-detected.

**Proposition 6.1** (Proposition 6.9 in [12]).: _Suppose that \(M\) is a knot manifold and \(\alpha\) is an oriented essential simple closed curve on \(\partial M\).
If \(\rho:\pi_{1}(M)\to\text{Homeo}_{+}(\mathbb{R})\) is a homomorphism such that \(\rho(\alpha)\) has a fixed point but \(\rho(\pi_{1}(\partial M))\) has no global fixed point, then the slope given by \(\alpha\) is order-detected._

**Theorem 6.2**.: _Let \(K\) be a hyperbolic knot in an integer homology sphere \(W\), such that the complement of \(K\) admits a pseudo-Anosov flow whose degeneracy locus is meridional. Then the knot meridian is order-detected._

Proof.: Let \(\mu,\lambda\) be simple closed meridional and longitudinal curves of \(K\) which are arbitrarily oriented. By assumption, there is a pseudo-Anosov flow \(\Phi_{0}\) on \(C(K)\) which, up to flipping the orientation on \(\mu\), satisfies \(\delta(\Phi_{0})=c\mu\) for some \(c\geq 1\). Let \(\alpha\) be an oriented simple closed curve representing the homology class \(\mu+\lambda\) in \(H_{1}(\partial X(K))\). For each \(n\geq 2\) we consider the orbifold \(\mathcal{M}=X(K)(\alpha;n)\) and let \(B\) be the core of the filling solid torus. Since \(n|\delta(\Phi_{0})\cdot\alpha|=nc\geq 2\), the flow \(\Phi_{0}\) extends to a flow \(\Phi\) on the underlying manifold \(|\mathcal{M}|\) which is well-adapted to the pair \((\mathcal{M},B)\), at least after we orient \(B\) with the induced orientation from the flow \(\Phi\). Then by Theorem 4.5, there exists a representation \(\rho:\pi_{1}(\mathcal{M})\to\mathrm{Homeo}_{+}(S^{1})\) such that \(\rho(\alpha)\) is conjugate to a rotation by \(\frac{2\pi}{n}\).

For the rest of the proof we use the notation established in the proof of Theorem 4.5 (§4.4). Recall that \(\Phi\) lifts to a pseudo-Anosov flow on the universal cover \(\widetilde{\mathcal{M}}\). Let \(\widetilde{\Lambda}_{s}\) be its stable lamination, which projects to an essential lamination \(\bar{\Lambda}_{s}\) on the orbit space \(\mathcal{O}\) of the lifted flow on the universal cover \(\widetilde{\mathcal{M}}\), and let \(T(\bar{\Lambda}_{s})\) denote the leaf space of \(\bar{\Lambda}_{s}\), which is a cyclically ordered \(\mathbb{R}\)-order tree. In the case that \(n>2\), the representation \(\rho\) comes from the action of \(\pi_{1}(\mathcal{M})\) on \(\mathcal{E}(T(\bar{\Lambda}_{s}))\), the set of ends of \(T(\bar{\Lambda}_{s})\). The knot \(K\) lifts to a countable union of flow lines in the universal cover \(\widetilde{\mathcal{M}}\), which corresponds to a \(\pi_{1}(\mathcal{M})\)-invariant subset \(V\) of \(T(\bar{\Lambda}_{s})\), each point of which has valency \(nc\). For each \(x\in V\), we let \(\sigma_{1}^{x},\cdots,\sigma_{nc}^{x}\) be the distinct segments of \(T(\bar{\Lambda}_{s})\) incident to \(x\), which are indexed (mod \(nc\)) with respect to the local circular order at \(x\). Let \(\mu^{x}\) be the conjugate of \(\mu\) in \(\pi_{1}(\mathcal{M})\) which fixes \(x\). The rotation number of \(\rho(\mu^{x})\) is determined by how \(\mu^{x}\) acts on \(\sigma_{i}^{x}\), which is dynamically identical with the action of \(\mu^{x}\) on the cusps of the complementary region of \(\widetilde{\Lambda}_{s}\) that is fixed by it. Since \(\mu\) is parallel to a component of the degeneracy locus, it follows that \(\mu^{x}\) fixes each \(\sigma_{i}^{x}\) and therefore the rotation number of \(\rho(\mu^{x})\) is zero. By Lemma 2.3, the rotation number of \(\rho(\mu)\) is also zero. Let \(\rho^{\prime}\) be the composition \(\pi_{1}(X(K))\to\pi_{1}(\mathcal{M})\to\mathrm{Homeo}_{+}(S^{1})\).
Since \(H^{2}(X(K))=0\) and \(H_{1}(X(K))\) is generated by the class represented by \(\mu\), there exists a lift of \(\rho^{\prime}\), denoted by \(\tilde{\rho}^{\prime}:\pi_{1}(X(K))\to\mathrm{Homeo}_{\mathbb{Z}}(\mathbb{R})\), such that the translation number of \(\tilde{\rho}^{\prime}(\mu)\) is \(0\). On the other hand, the translation number of \(\tilde{\rho}^{\prime}(\alpha)\) equals \(\frac{1}{n}+k\) for some \(k\in\mathbb{Z}\) and hence is nonzero. It follows that \(\tilde{\rho}^{\prime}(\alpha)\) acts without fixed points on \(\mathbb{R}\). By Lemma 2.3 and Proposition 6.1, \(\mu\) is order-detected.

### Orderability of branched covers and link orientations

We begin with a proof of Theorem 1.2.

Proof of Theorem 1.2.: Let \(\psi(\mu_{i})=a_{i}\,(\mathrm{mod}\,n)\) in \(\mathbb{Z}/n\), \(n_{i}=n/\gcd(a_{i},n)\) and \(a_{i}^{\prime}=a_{i}/\gcd(a_{i},n)\). Then \(a_{i}^{\prime}\) and \(n_{i}\) are coprime, and \(n_{i}\) is the order of \(a_{i}\) in \(\mathbb{Z}/n\) for each \(i\). By assumption, \(n_{i}\geq 2\), so as the degeneracy loci \(\delta_{i}(\Phi_{0})\) are non-meridional, we have \(n_{i}|\mu_{i}\cdot\delta_{i}(\Phi_{0})|\geq 2\). Then by Theorem 4.7, there exists a homomorphism \(\rho:\pi_{1}(X(L))\to\mathrm{Homeo}_{+}(S^{1})\) with non-cyclic image such that \(\rho(\mu_{i})\) is conjugate to rotation by \(2\pi a_{i}^{\prime}/n_{i}=2\pi a_{i}/n\) for each \(i\). By Remark 3.2 and Lemma 3.1, the link \(L\) is prime and its exterior is irreducible. Hence by Theorem 5.1, the branched cover associated to the epimorphism \(\pi_{1}(X(L))\to\mathbb{Z}/n\) which sends \(\mu_{i}\) to \(a_{i}\) (mod \(n\)) has left-orderable fundamental group. This completes the proof.

**Remark 6.3**.: The conclusion of Theorem 1.2 also holds for any epimorphism \(\psi:\pi_{1}(X(L))\to\mathbb{Z}/n\) as long as \(n_{i}|\mu_{i}\cdot\delta_{i}(\Phi_{0})|\geq 2\), where \(n_{i}\) is the order of \(\psi(\mu_{i})\) in \(\mathbb{Z}/n\). That is, if for some \(i\), \(|\mu_{i}\cdot\delta_{i}(\Phi_{0})|\geq 2\), then the conclusion holds even if \(n_{i}=1\), i.e. \(\psi(\mu_{i})=0\) in \(\mathbb{Z}/n\).

Recall Corollary 1.4, which we restate here for the reader's convenience.

**Corollary 1.4**.: _Let \(L=K_{1}\cup\cdots\cup K_{m}\) be a link in an integer homology \(3\)-sphere \(W\) whose complement admits a pseudo-Anosov flow none of whose degeneracy loci are meridional. Then \(\pi_{1}(\Sigma_{n}(L^{\mathfrak{o}}))\) is left-orderable for all \(n\geq 2\) and all orientations \(\mathfrak{o}\) on \(L\)._

This corollary partially confirms a rather surprising consequence of known results on the left-orderability of the fundamental groups of the \(n\)-fold cyclic branched covers \(\Sigma_{n}(L)\) of prime oriented links \(L\) in \(S^{3}\). Specifically, these results are consistent with the possibility that every such link satisfies (exactly) one of the following.
* \(\pi_{1}(\Sigma_{n}(L))\) is left-orderable for all \(n\geq 2\);
* \(\pi_{1}(\Sigma_{n}(L))\) is non-left-orderable for all \(n\geq 2\);
* \(\pi_{1}(\Sigma_{n}(L))\) is non-left-orderable for \(2\leq n\leq N\) and left-orderable for \(n>N\), for some integer \(N\) with \(2\leq N\leq 5\).

A consequence of these statements would be

\[\text{if $\pi_{1}(\Sigma_{2}(L))$ is left-orderable then $\pi_{1}(\Sigma_{n}(L))$ is left-orderable for all $n\geq 2$} \tag{6.2.1}\]

This seems puzzling at first sight inasmuch as \(\Sigma_{2}(L)\) is independent of the orientation on \(L\) while this is not true for \(\Sigma_{n}(L)\), \(n\geq 3\). Thus (6.2.1) predicts that if \(\pi_{1}(\Sigma_{2}(L))\) is left-orderable then so is \(\pi_{1}(\Sigma_{n}(L^{\mathfrak{o}}))\) for all orientations \(\mathfrak{o}\) on \(L\). In this direction, it follows from [10, Theorem 1.1] that if \(\pi_{1}(\Sigma_{2}(L))\) is left-orderable, then \(\pi_{1}(\Sigma_{2n}(L^{\mathfrak{o}}))\) is left-orderable for all \(n\geq 1\) and all orientations \(\mathfrak{o}\) on \(L\), though ostensibly it says nothing about odd order cyclic branched covers. However, Corollary 1.4 gives conditions under which (6.2.1) does indeed hold.

### Examples of links with non-meridional pseudo-Anosov flows

Next, we give some examples of links whose complements admit pseudo-Anosov flows with non-meridional degeneracy loci to which, therefore, Theorem 1.2 applies.

**Theorem 6.4**.: _Let \(L=K_{1}\cup\cdots\cup K_{m}\) be a hyperbolic link in an integer homology \(3\)-sphere \(W\) which can be oriented to be a fibred link whose monodromy has a non-zero fractional Dehn twist coefficient on each boundary component of the fibre. Then the fundamental group of any \(n\)-fold cyclic branched cover of \(L\), \(n\geq 2\), is left-orderable. In particular, \(\pi_{1}(\Sigma_{n}(L^{\mathfrak{o}}))\) is left-orderable for all \(n\geq 2\) and all orientations \(\mathfrak{o}\) on \(L\)._

Proof.: Since \(L\) is hyperbolic, its monodromy is freely isotopic to a pseudo-Anosov homeomorphism [62]. Hence, the suspension flow of the pseudo-Anosov homeomorphism gives rise to a pseudo-Anosov flow on \(C(L)\). Moreover, because the fractional Dehn twist coefficient of the monodromy on each boundary component of the fibre is nonzero, the degeneracy loci of the suspension flow are non-meridional. The theorem now follows from Theorem 1.2.

The condition on the fractional Dehn twist coefficients in Theorem 6.4 is necessary. For example, let \(L_{k}\) be the \(2\)-bridge link corresponding to the continued fraction \([2,2,...,2]\) of length \(k\), where \(k\geq 2\). (The number of components of \(L_{k}\) is \(1\) if \(k\) is even and \(2\) if \(k\) is odd; \(L_{2}\) is the figure eight knot.) Then \(L_{k}\) is hyperbolic and fibred, but \(\Sigma_{n}(L_{k})\) has non-left-orderable fundamental group for all \(n\geq 2\) ([52], [42], [11]; see §6.4).

The family of fibred strongly quasipositive links arises naturally as the set of bindings of open books which carry the tight contact structure on the \(3\)-sphere ([38]). Topologically, Giroux's stabilization theorem characterizes the family as the set of fibred links whose fibre surface can be transformed into a plumbing of positive Hopf bands by a finite sequence of such plumbings [34, 59]. For this family, Theorem 6.4 implies:

**Corollary 6.5**.: _Suppose that \(L\) is a hyperbolic link in \(S^{3}\) which can be oriented to be fibred and strongly quasipositive.
Then the fundamental group of any \(n\)-fold cyclic branched cover of \(L\), \(n\geq 2\), is left-orderable. In particular, \(\pi_{1}(\Sigma_{n}(L^{\mathfrak{o}}))\) is left-orderable for all \(n\geq 2\) and all orientations \(\mathfrak{o}\) on \(L\)._

Proof.: Let \(\mathfrak{o}\) be an orientation on \(L\) for which \(L^{\mathfrak{o}}\) is fibred and strongly quasipositive. Since \(L^{\mathfrak{o}}\) is strongly quasipositive, the open book associated to its fibring carries the standard tight contact structure ([38]). It is then shown in [40, Theorem 1.1 and Proposition 3.1] that the fractional Dehn twist coefficients of its monodromy on the boundary components of its fibre are either all positive or all negative. Hence, the conclusion holds by Theorem 6.4.

Next we show how pseudo-Anosov flows with non-meridional degeneracy loci often exist on the complements of pseudo-Anosov closed braids. Let \(D_{w}\) denote the \(w\)-punctured \(2\)-disk. There is a natural identification of \(\operatorname{MCG}(D_{w})\) with the \(w\)-strand braid group \(B_{w}\) (see e.g. [25, Chapter 9]) and as such we can associate a fractional Dehn twist coefficient \(c(b)\in\mathbb{Q}\) to each \(b\in B_{w}\).

**Theorem 6.6**.: _Let \(b\in B_{w}\) be a pseudo-Anosov braid and \(L=\hat{b}\). Suppose that \(c(b)\) is neither \(0\) nor the reciprocal of a non-zero integer. Then the fundamental group of any \(n\)-fold cyclic branched cover of \(L\), \(n\geq 2\), is left-orderable. In particular, \(\pi_{1}(\Sigma_{n}(L^{\mathfrak{o}}))\) is left-orderable for all \(n\geq 2\) and all orientations \(\mathfrak{o}\) on \(L\)._

Proof.: By assumption \(b\), thought of as a mapping class, is freely isotopic to a pseudo-Anosov homeomorphism \(\beta\) of \(D_{w}\). The interior of the mapping torus of \(\beta\) is the complement of an \((m+1)\)-component link in \(S^{3}\) consisting of \(L\) and the braid axis \(A\), where \(m\) is the number of components of \(L\), while the restriction of the suspension flow \(\Phi_{0}\) of \(\beta\) to the complement of \(L\sqcup A\) is pseudo-Anosov. Express the degeneracy locus \(\delta_{A}(\Phi_{0})\) of \(\Phi_{0}\) on the boundary of a tubular neighbourhood of \(A\) homologically as

\[\delta_{A}(\Phi_{0})=p\mu^{\prime}+q\lambda^{\prime}\]

where \(\mu^{\prime}\) and \(\lambda^{\prime}\) are meridional and longitudinal classes of \(A\). Our hypotheses on \(c(b)\) imply that \(|q|>1\) and therefore \(\mu^{\prime}\) intersects \(\delta_{A}(\Phi_{0})\) at least twice. Hence there is a pseudo-Anosov flow \(\bar{\Phi}_{0}\) on the complement of \(L\) obtained by extending \(\Phi_{0}\) over a tubular neighbourhood of \(A\). It is clear that the degeneracy loci of \(\bar{\Phi}_{0}\) are non-meridional, so the desired conclusion follows from Theorem 1.2.

**Remark 6.7**.: Theorem 6.6 extends with the same proof to the closures of pseudo-Anosov braids in open book decompositions of integer homology \(3\)-spheres.

It was shown in [9, Theorem 1.9] that given a pseudo-Anosov braid \(b\) on an odd number of strands, if \(|c(b)|\geq 2\), then all even order cyclic branched covers of \(\hat{b}\) have left-orderable fundamental groups and admit co-oriented taut foliations. The following corollary substantially improves the left-orderability part of this result.

**Corollary 6.8**.: _Let \(L=\hat{b}\) be a hyperbolic link in \(S^{3}\), where the fractional Dehn twist coefficient of \(b\in B_{w}\) satisfies \(|c(b)|>1\). Then the fundamental group of any \(n\)-fold cyclic branched cover of \(L\), \(n\geq 2\), is left-orderable.
In particular, \(\pi_{1}(\Sigma_{n}(L^{\mathfrak{o}}))\) is left-orderable for all \(n\geq 2\) and all orientations \(\mathfrak{o}\) on \(L\)._

Proof.: Since \(L\) is hyperbolic and \(|c(b)|>1\), [43, Theorem 8.4] implies that \(b\) is a pseudo-Anosov braid. The corollary now follows from Theorem 6.6 since the condition \(|c(b)|>1\) implies that \(c(b)\) is neither zero nor the reciprocal of a non-zero integer.

**Remark 6.9**.: In §6.4 we give infinitely many examples of braids \(b\) with \(c(b)=1\) such that \(L=\hat{b}\) is a \(2\)-component hyperbolic link with \(\pi_{1}(\Sigma_{n}(L))\) left-orderable for all \(n\geq 3\), while if \(L^{\mathfrak{o}}\) is the oriented link obtained by reversing the orientation of one of the components of \(L\) then \(\pi_{1}(\Sigma_{n}(L^{\mathfrak{o}}))\) is non-left-orderable for all \(n\geq 2\). This shows that Corollary 6.8 can fail quite dramatically if the condition \(|c(b)|>1\) is relaxed.

Since adding a positive full twist to a braid \(b\) will increase its fractional Dehn twist coefficient by \(1\), it is easy to obtain examples to which Corollary 6.8 can be applied. More precisely, recall that the centre of \(B_{w}\) is generated by the braid \(C_{w}\) corresponding to a positive full Dehn twist along \(\partial D_{w}\). Then,

**Proposition 6.10**.: _Assume that \(b\) is a pseudo-Anosov braid in \(B_{w}\) and for \(k\in\mathbb{Z}\), let \(L_{k}\) be the closure of the braid \(C_{w}^{k}b\). Suppose_

1. \(k\neq-c(b),-c(b)\pm 1\) _when_ \(c(b)\in\mathbb{Z};\)
2. \(k\neq-\lfloor c(b)\rfloor,-\lfloor c(b)\rfloor-1\) _when_ \(c(b)\notin\mathbb{Z}\)_._

_Then the fundamental group of any \(n\)-fold cyclic branched cover of \(L_{k}\), \(n\geq 2\), is left-orderable. In particular, \(\pi_{1}(\Sigma_{n}(L_{k}^{\mathfrak{o}}))\) is left-orderable for all \(n\geq 2\) and all orientations \(\mathfrak{o}\) on \(L_{k}\)._

Proof.: Since \(C_{w}\) corresponds to a positive full Dehn twist along \(\partial D_{w}\), \(C_{w}^{k}b\) is freely isotopic to \(b\) and is therefore also pseudo-Anosov. Hence \(\widehat{C_{w}^{k}b}\) is hyperbolic when \(|c(C_{w}^{k}b)|>1\) ([43, Theorem 8.4]). We have \(c(C_{w}^{k}b)=k+c(b)\) by the definition of the fractional Dehn twist coefficient of a braid (cf. [45, Proposition 2.7]), and the reader will verify that the condition \(|c(C_{w}^{k}b)|>1\) corresponds to the conditions in the statements of (1) and (2) of the proposition. An application of Corollary 6.8 completes the proof.

Recall that a braid \(b\) in \(B_{w}\) is _quasipositive_ if it is represented by a braid word of the form

\[b=\prod_{i=1}^{s}w_{i}\sigma_{k_{i}}w_{i}^{-1}\]

where \(\sigma_{1},\cdots,\sigma_{w-1}\) are the standard generators of \(B_{w}\) ([58]).

**Lemma 6.11**.: _Let \(b\) be a pseudo-Anosov quasipositive braid in \(B_{w}\). Then \(c(b)>0\)._

Proof.: Let \(S\) be the double cover of \(D_{w}\) branched over \(w\) points contained in \(\mathrm{int}(D_{w})\) and \(\varphi\in\operatorname{MCG}(S)\) be the lift of \(b\). Then \(\varphi\) is pseudo-Anosov and \(c(\varphi)=\frac{c(b)}{2}\) (cf. [9, Lemma 10.1, Lemma 10.2]). It is known that each \(\sigma_{i}\) lifts to a positive Dehn twist along a simple closed curve in \(S\) [25, §9.4], and the same is true of any conjugate of \(\sigma_{i}\). Then the quasipositivity of \(b\) implies that \(\varphi\) is a product of positive Dehn twists. Thus \(\varphi\) is right-veering and hence \(c(b)=2c(\varphi)>0\) ([40, Proposition 3.1]).
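The verification left to the reader in the proof of Proposition 6.10 amounts to the statement that \(k\) is excluded exactly when \(|k+c(b)|\leq 1\). This is quick to check numerically; a minimal sketch (ours; the helper name `excluded_k` is hypothetical):

```python
from fractions import Fraction
from math import floor

def excluded_k(c):
    """The integers k ruled out in Proposition 6.10, i.e. those with |k + c| <= 1."""
    if c.denominator == 1:                     # condition (1): c(b) integral
        return {-int(c) - 1, -int(c), -int(c) + 1}
    return {-floor(c), -floor(c) - 1}          # condition (2): c(b) not integral

for c in (Fraction(1), Fraction(1, 2), Fraction(-7, 3), Fraction(5, 2)):
    brute = {k for k in range(-20, 21) if abs(k + c) <= 1}
    assert brute == excluded_k(c)
```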
Theorem 6.12 below follows immediately from Proposition 6.10 and Lemma 6.11.

**Theorem 6.12**.: _Let \(b\) be a pseudo-Anosov quasipositive braid in \(B_{w}\) and \(L_{k}\) the braid closure of \(C_{w}^{k}b\), \(k\in\mathbb{Z}\). Then for any \(k\geq 1\), the fundamental group of any \(n\)-fold cyclic branched cover of \(L_{k}\), \(n\geq 2\), is left-orderable. In particular, given any \(k\geq 1\), \(\pi_{1}(\Sigma_{n}(L_{k}^{\mathfrak{o}}))\) is left-orderable for all \(n\geq 2\) and all orientations \(\mathfrak{o}\) on \(L_{k}\)._

### Dependence of the left-orderability of the fundamental groups of cyclic branched covers on link orientation

Here we show that in contrast to expectation (6.2.1), if \(\pi_{1}(\Sigma_{2}(L))\) is not left-orderable then in general the left-orderability of \(\pi_{1}(\Sigma_{n}(L))\) for \(n>2\) depends on the orientation on \(L\). Given non-zero integers \(a_{1},a_{2},...,a_{r}\), \(r\geq 1\), let \(L=L(2a_{1},2a_{2},...,2a_{r})\) be the 2-bridge knot or link corresponding to the rational number with continued fraction \([2a_{1},2a_{2},...,2a_{r}]\). This is a knot if \(r\) is even and a 2-component link if \(r\) is odd. (We use the "+" convention for continued fractions, so for example \(L(2,2)\) is the figure eight knot, corresponding to the rational number \(5/2\).) When \(L\) has two components we give it the _canonical_ orientation, illustrated in Figure 8. We then use \(L^{\mathfrak{o}}\) to denote the link obtained by reversing the orientation of one of the components of \(L\).

If all the \(a_{i}\) are positive then \(\Sigma_{n}(L)\cong\Sigma_{2}(L_{n})\) for some alternating link \(L_{n}\) [52], [42], and hence \(\pi_{1}(\Sigma_{n}(L))\) is non-left-orderable for all \(n\geq 2\) by [11]. In particular, for the \((2,2k)\)-torus links \(L(2k)\), \(k\geq 1\), \(\pi_{1}(\Sigma_{n}(L(2k)))\) is non-left-orderable for all \(n\geq 2\). (This was first proved by Dabkowski, Przytycki and Togha in [21] by a direct algebraic argument.) On the other hand, with the other orientation, \(\pi_{1}(\Sigma_{n}(L(2k)^{\mathfrak{o}}))\) is left-orderable for all \(n\geq 3\) if \(k\geq 3\), and for all \(n\geq 4\) if \(k=2\) [14]. (We remark that the orientation on \(L(2k)^{\mathfrak{o}}\) is the one coming from its realization as the closure of the \(2\)-braid \(\sigma_{1}^{2k}\).) This shows that in general, if \(\pi_{1}(\Sigma_{2}(L))\) is not left-orderable then the left-orderability of \(\pi_{1}(\Sigma_{n}(L))\) for \(n>2\) depends on the orientation on \(L\).

Note that torus links are Seifert links, i.e. their exteriors are Seifert fibered. The following theorem gives hyperbolic examples.

**Theorem 6.13**.: _Let \(L=L(2k_{1},2l_{1},...,2k_{r},2l_{r},2k_{r+1})\), where \(r\), \(k_{i}\), and \(l_{i}\) are positive. Then,_

1. \(\pi_{1}(\Sigma_{n}(L))\) _is non-left-orderable for all_ \(n\geq 2\)_._
2. _If_ \(k_{i}=k\geq 3\) _for all_ \(i\)_, then_ \(\pi_{1}(\Sigma_{n}(L^{\mathfrak{o}}))\) _is left-orderable for all_ \(n\geq 3\)_._
3. _If_ \(k_{i}=2\) _for all_ \(i\)_, then_ \(\pi_{1}(\Sigma_{n}(L^{\mathfrak{o}}))\) _is left-orderable for all_ \(n\geq 4\)_._
4. _If_ \(k_{i}=l_{i}=1\) _for all_ \(i\) _and_ \(r\) _is odd, then_ \(\pi_{1}(\Sigma_{n}(L^{\mathfrak{o}}))\) _is left-orderable for all_ \(n\geq 7\)_._

Note that for the links in part (4) of the theorem \(L\) is fibred.

Proof.: Part (1) is discussed above. For parts (2), (3), and (4) we use a result of Ohtsuki, Riley and Sakuma [54], who constructed epimorphisms between \(2\)-bridge link groups which preserve oriented meridional classes with respect to the canonical orientations, and hence also the non-canonical orientations.
Any such epimorphism induces epimorphisms between the fundamental groups of the associated cyclic branched covers, so if the target is left-orderable, so is the domain.

Figure 8. The canonical orientation of \(2\)-bridge links.

For (2) and (3) we start with \(L(2k)\), \(k\geq 2\). It follows from [54, Proposition 5.1] that for \(L\) of the form \(L(2k,2l_{1},2k,2l_{2},...,2l_{r},2k)\) there is an epimorphism \(\pi_{1}(S^{3}\setminus L)\to\pi_{1}(S^{3}\setminus L(2k))\) as above. Then (2) and (3) follow from the remarks in the paragraph immediately preceding Theorem 6.13.

For (4) we start with \(L(2,2,2)\). Lemma 6.14 below shows that \(\pi_{1}(\Sigma_{n}(L(2,2,2)^{\mathfrak{o}}))\) is left-orderable for all \(n\geq 7\). It follows from [54, Proposition 5.1] that if

\[L=L(2,2,2,2l_{1},2,2,2,2l_{2},...,2,2,2,2l_{r},2,2,2)\]

then \(\pi_{1}(\Sigma_{n}(L^{\mathfrak{o}}))\) is left-orderable for \(n\geq 7\). In particular, taking all \(l_{i}=1\) we get \(L(2,2,2,...,2)\) of any length congruent to \(3\) (mod \(4\)).

**Lemma 6.14**.: \(\pi_{1}(\Sigma_{n}(L(2,2,2)^{\mathfrak{o}}))\) _is left-orderable for all \(n\geq 7\)._

Proof.: We write down a presentation of the link group using the link diagram in Figure 9:

\[\pi_{1}(X(L))=\langle x,y:wx=xw\rangle,\]

where \(x\) and \(y\) are meridional generators in the standard Wirtinger presentation shown in Figure 9 and \(w=yxy^{-1}x^{-1}yxyx^{-1}y^{-1}xy\). Define \(\rho_{\theta}:\pi_{1}(X(L))\to SL(2,\mathbb{C})\) by setting

\[\rho_{\theta}(x)=\begin{pmatrix}m&1\\ 0&m^{-1}\end{pmatrix},\quad\rho_{\theta}(y)=\begin{pmatrix}m&0\\ s&m^{-1}\end{pmatrix}\]

where \(m=e^{i\theta}\) and \(s=3-4\cos^{2}(\theta)\), \(\theta\neq 0\). One can easily verify that

\[\rho_{\theta}(w)=\begin{pmatrix}-\cos(3\theta)+i\sin(3\theta)&1+2\cos(2\theta)\\ 0&-\cos(3\theta)-i\sin(3\theta)\end{pmatrix}\]

and hence \(\rho_{\theta}(wx)=\rho_{\theta}(xw)\). So \(\rho_{\theta}\) defines an \(SL(2,\mathbb{C})\)-representation of \(\pi_{1}(X(L))\) for each \(\theta\neq 0\). Also note that \(\rho_{\theta}(y^{-1}xy)=\rho_{\theta}(xyx^{-1})\).

Figure 9. The \(2\)-bridge link \(L(2,2,2)^{\mathfrak{o}}\) with the non-fibered orientation.

By [47, p. 786] (also see [41, Theorem 4.3]), when \(s(\theta)<0\) or \(s(\theta)>4\sin^{2}(\theta)\), the representation \(\rho_{\theta}\) is conjugate to an \(SL(2,\mathbb{R})\)-representation, denoted by \(\rho^{\prime}_{\theta}\), for which \(\rho^{\prime}_{\theta}(x)\) is conjugate to the rotation by \(\theta\). Hence, letting \(\theta=\frac{\pi}{n}\), for each \(n>6\), we have an \(SL(2,\mathbb{R})\)-representation \(\rho^{\prime}_{\theta}\), where \(\rho^{\prime}_{\theta}(x)\) is conjugate to rotation by \(\frac{\pi}{n}\). Since \(\rho_{\theta}(y^{-1}xy)=\rho_{\theta}(xyx^{-1})\), we have \(\rho^{\prime}_{\theta}(y^{-1}xy)=\rho^{\prime}_{\theta}(xyx^{-1})\), and therefore, \(\rho^{\prime}_{\theta}(y)\) is also conjugate to a rotation by \(\frac{\pi}{n}\). By projecting the representation to a \(PSL(2,\mathbb{R})\)-representation, for each \(n>6\) we have a \(PSL(2,\mathbb{R})\)-representation \(\rho\) of \(\pi_{1}(X(L))\) such that \(\rho(x)\) and \(\rho(y)\) are both conjugate to the rotation by \(\frac{2\pi}{n}\). It follows from Theorem 5.1 that \(\pi_{1}(\Sigma_{n}(L(2,2,2)^{\mathfrak{o}}))\) is left-orderable for all \(n>6\).

In Section 7 we prove an analog of Theorem 6.13 where the property "has left-orderable fundamental group" is replaced by "is not an \(L\)-space", consistent with the \(L\)-space Conjecture.
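The matrix identities in the proof of Lemma 6.14 that are left to the reader are quick to verify numerically. The following sketch (an illustration we add here, not part of the original argument) checks the displayed formula for \(\rho_{\theta}(w)\), the defining relation \(wx=xw\), and the identity \(\rho_{\theta}(y^{-1}xy)=\rho_{\theta}(xyx^{-1})\) at a sample parameter.

```python
import numpy as np

def rho(theta):
    """The matrices of Lemma 6.14: m = e^{i*theta}, s = 3 - 4*cos^2(theta)."""
    m = np.exp(1j * theta)
    s = 3 - 4 * np.cos(theta) ** 2
    x = np.array([[m, 1], [0, 1 / m]])
    y = np.array([[m, 0], [s, 1 / m]])
    return x, y

theta = 0.37  # any theta != 0 works for the identities below
x, y = rho(theta)
xi, yi = np.linalg.inv(x), np.linalg.inv(y)
# w = y x y^{-1} x^{-1} y x y x^{-1} y^{-1} x y
w = y @ x @ yi @ xi @ y @ x @ y @ xi @ yi @ x @ y
claimed = np.array([[-np.cos(3*theta) + 1j*np.sin(3*theta), 1 + 2*np.cos(2*theta)],
                    [0, -np.cos(3*theta) - 1j*np.sin(3*theta)]])
assert np.allclose(w, claimed)              # the displayed formula for rho_theta(w)
assert np.allclose(w @ x, x @ w)            # the defining relation wx = xw
assert np.allclose(yi @ x @ y, x @ y @ xi)  # rho_theta(y^{-1}xy) = rho_theta(xyx^{-1})
```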
We now exhibit hyperbolic \(2\)-bridge links showing that Corollary 6.8 is best possible. We first note the following, which will also be used in Section 7.

**Lemma 6.15**.: \(L^{\mathfrak{o}}\) _is strongly quasipositive._

Proof.: It is clear that \(L^{\mathfrak{o}}\) is special alternating (i.e. one of the chessboard surfaces in an alternating diagram is orientable; cf. Figure 9), therefore positive, and hence strongly quasipositive by [60].

In the case \(r=1\) we identify an explicit strongly quasipositive braid whose closure is \(L^{\mathfrak{o}}\). Write \(L=L(2k,2l,2m)\), \(k,l,m\geq 1\). Then it is straightforward to verify that \(L^{\mathfrak{o}}=\hat{b}\) where \(b\) is the strongly quasipositive index \((l+2)\) braid

\[\sigma_{1}^{2k}(\sigma_{2}...\sigma_{l+1})(\sigma_{1}^{-1}...\sigma_{l}^{-1})\sigma_{l+1}^{2m}(\sigma_{l}...\sigma_{1})=\sigma_{1}^{2k}(\sigma_{2}...\sigma_{l+1})a_{1,l+2}^{2m}\]

shown in Figure 10, which illustrates the case \(l=5\). Here \(a_{1,l+2}=(\sigma_{1}^{-1}\sigma_{2}^{-1}...\sigma_{l}^{-1})\sigma_{l+1}(\sigma_{l}...\sigma_{2}\sigma_{1})\).

Figure 10. A closed braid presentation of \(L(2k,2l,2m)^{\mathfrak{o}}\) with \(l=5\).

The following shows that the condition \(|c(b)|>1\) in Corollary 6.8 cannot be relaxed.

**Theorem 6.16**.: _Let \(L\) be the closure of the 3-braid \(b=\sigma_{1}^{2k}\sigma_{2}a_{13}^{2k}\), where \(k\geq 3\), and let \(L^{\mathfrak{o}}\) be obtained by reversing the orientation of one of the components of \(L\). Then \(c(b)=1\), \(\pi_{1}(\Sigma_{n}(L))\) is left-orderable for all \(n\geq 3\), and \(\pi_{1}(\Sigma_{n}(L^{\mathfrak{o}}))\) is non-left-orderable for all \(n\geq 2\)._

Proof.: Taking \(l=1\) in the \((l+2)\)-braid \(b\) defined above we get the 3-braid \(b=\sigma_{1}^{2k}\sigma_{2}a_{13}^{2m}\). More generally, let \(b(p,q,r)\) be the strongly quasipositive 3-braid \(\sigma_{1}^{p}\sigma_{2}^{q}a_{13}^{r}\), \(p,q,r\geq 1\). These braids are considered in [5, Corollary 3.7], which lists the strongly quasipositive 3-braids whose closures are prime, non-split, non-trivial, definite links. Among these are the braids \(b(p,q,r)\), whose closures are not fibred. (The closures of the others are precisely the ADE links, i.e. the fibered strongly quasipositive links whose fiber is a plumbing of positive Hopf bands according to the tree associated to a Dynkin diagram of type A, D, or E. See e.g. [14].) Taking account of the different braid word conventions used in [5] and in the present paper, [5, Lemma 3.2(3)] shows that \(b(p,q,r)\) is conjugate to \(C_{3}\sigma_{1}^{p-1}\sigma_{2}^{-1}\sigma_{1}^{q-1}\sigma_{2}^{-1}\sigma_{1}^{r-1}\sigma_{2}^{-1}\). In [9, Proof of Theorem 1.11] it is shown that the fractional Dehn twist coefficient \(c(b(p,q,r))=1\). In particular, taking \(p=2k,q=1,r=2m\), the braid \(b=\sigma_{1}^{2k}\sigma_{2}a_{13}^{2m}\) has \(c(b)=1\). By Lemma 6.15, \(L=\hat{b}\) is the \(2\)-bridge link \(L(2k,2,2m)\) with its non-canonical orientation, so the reoriented link \(L^{\mathfrak{o}}\) is \(L(2k,2,2m)\) with its canonical orientation; hence \(\pi_{1}(\Sigma_{n}(L^{\mathfrak{o}}))\) is non-left-orderable for all \(n\geq 2\) by Theorem 6.13(1). On the other hand, taking \(k=m\geq 3\), \(\pi_{1}(\Sigma_{n}(L))\) is left-orderable for all \(n\geq 3\) by Theorem 6.13(2).
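The equality of the two braid words in the proof of Lemma 6.15 already holds in the free group on \(\sigma_{1},\ldots,\sigma_{l+1}\), since \((\sigma_{1}^{-1}...\sigma_{l}^{-1})\sigma_{l+1}^{2m}(\sigma_{l}...\sigma_{1})\) is a conjugate of \(\sigma_{l+1}^{2m}\) and hence equals \(a_{1,l+2}^{2m}\). A minimal sketch checking this by free reduction (letters encoded as signed integers; the encoding is ours):

```python
def inv(w):                 # inverse word: reverse and negate each letter
    return [-x for x in reversed(w)]

def freely_reduce(w):       # cancel adjacent x, x^{-1} pairs
    out = []
    for x in w:
        if out and out[-1] == -x:
            out.pop()
        else:
            out.append(x)
    return out

l, m = 5, 2                               # the case drawn in Figure 10
g = [-i for i in range(1, l + 1)]         # sigma_1^{-1} ... sigma_l^{-1}
a = g + [l + 1] + inv(g)                  # a_{1, l+2}
lhs = g + [l + 1] * (2 * m) + inv(g)      # (sigma_1^{-1}...sigma_l^{-1}) sigma_{l+1}^{2m} (sigma_l...sigma_1)
assert freely_reduce(lhs) == freely_reduce(a * (2 * m))
```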
### Application to degeneracy loci

As mentioned in the introduction, Gabai and Mosher have independently shown that pseudo-Anosov flows exist on the complement of any hyperbolic link in a closed, orientable 3-manifold. More precisely, they show that given a finite depth taut foliation \(\mathcal{F}\) on a compact, connected, orientable, hyperbolic 3-manifold \(M\) with non-empty boundary consisting of tori, there is a pseudo-Anosov flow on the interior of \(M\) which is almost transverse to \(\mathcal{F}\) (cf. [51, Theorem C(3)]). Unfortunately, no proof has been published, though Landry and Tsang have recently produced the first of several planned articles which will provide a demonstration. See [49]. However, the degeneracy loci of these flows are difficult to determine in general.

Corollary 1.4 implies that if \(L\) is a link in an integer homology 3-sphere which admits an orientation \(\mathfrak{o}\) for which \(\Sigma_{n}(L^{\mathfrak{o}})\) has a non-left-orderable fundamental group for some \(n\geq 2\), then the degeneracy loci of a pseudo-Anosov flow on the link's complement cannot all be non-meridional. Specializing to knots we have:

**Corollary 6.17**.: _Let \(K\) be a knot in an integer homology \(3\)-sphere. If \(\Sigma_{n}(K)\) has a non-left-orderable fundamental group for some \(n\geq 2\), then the degeneracy locus of any pseudo-Anosov flow on the complement of \(K\) is meridional._

For instance, the 2-fold cyclic branched covers of alternating knots have non-left-orderable fundamental groups ([11, Theorem 4]), so we obtain:

**Corollary 6.18**.: _The degeneracy locus of any pseudo-Anosov flow on the complement of an alternating knot is meridional._

## 7. Connections with the \(L\)-space Conjecture

One of the motivations for studying the left-orderability of \(3\)-manifold groups is the \(L\)-space Conjecture [11], [44], which asserts that for a prime, closed, orientable \(3\)-manifold \(M\) the following are equivalent:

1. \(M\) is not an \(L\)-space in the sense of Heegaard Floer homology;
2. \(\pi_{1}(M)\) is left-orderable;
3. \(M\) admits a co-orientable taut foliation.

It is known that (3) implies (1) ([55], [46], [6]); all other implications are open in general. In this context it is interesting to compare Corollary 6.5, which we restate below, with [4, Theorem 1.1].

**Corollary 6.5**.: _Suppose that \(L\) is a hyperbolic link in \(S^{3}\) which can be oriented to be fibred and strongly quasipositive. Then the fundamental group of any \(n\)-fold cyclic branched cover \(\Sigma_{\psi}(L)\) of \(L\), \(n\geq 2\), is left-orderable. In particular, \(\pi_{1}(\Sigma_{n}(L^{\mathfrak{o}}))\) is left-orderable for all \(n\geq 2\) and all orientations \(\mathfrak{o}\) on \(L\)._

The following is part of [4, Theorem 1.1].

**Theorem 7.1**.: _If \(L\) is fibred and strongly quasipositive then \(\Sigma_{n}(L)\) is not an \(L\)-space for all \(n\geq 6\)._

In this generality, Theorem 7.1 is best possible: for the trefoil \(T(2,3)\), \(\Sigma_{n}(T(2,3))\) has finite fundamental group, and therefore is an \(L\)-space, for \(2\leq n\leq 5\). However, an obvious problem in light of Corollary 6.5 is to show that if \(L\) is assumed to be hyperbolic then the conclusion of Theorem 7.1 holds for all \(n\geq 2\). Another obvious difference between Corollary 6.5 and Theorem 7.1 is the independence in Corollary 6.5 of the particular \(n\)-fold cyclic branched cover \(\Sigma_{\psi}(L)\). A challenge is to show that this holds with "\(\pi_{1}(\Sigma_{\psi}(L))\) is left-orderable" replaced by "\(\Sigma_{\psi}(L)\) is not an \(L\)-space".

We next discuss the three properties in the \(L\)-space Conjecture for the cyclic branched covers of \(L\)-space knots.
These are prime [48], fibred [53], and strongly quasipositive [38]. First, the situation is completely understood for torus knots [35] (see also [14]): \(\Sigma_{n}(K)\) has left-orderable fundamental group, is not an \(L\)-space, and admits a co-orientable taut foliation if and only if \(\pi_{1}(\Sigma_{n}(K))\) is infinite. In particular, \(\Sigma_{n}(K)\) has all three properties for all \(n\geq 2\) if and only if \(K\) is not \(T(3,4),T(3,5)\), or \(T(2,2q+1)\) for some \(q\geq 1\). Second, if \(K\) is any satellite knot, then for all \(n\geq 2\), \(\Sigma_{n}(K)\) has left-orderable fundamental group, is not an \(L\)-space, and, if the companion is fibred, admits a co-orientable taut foliation [12]. In particular this applies to \(L\)-space knots. This leaves the hyperbolic case, where we have the following.

**Theorem 7.2**.: _Let \(K\) be a hyperbolic \(L\)-space knot. Then,_

1. \(\pi_{1}(\Sigma_{n}(K))\) _is left-orderable for all_ \(n\geq 2\)_._
2. \(\Sigma_{n}(K)\) _is not an_ \(L\)_-space for all_ \(n\geq 3\)_._
3. \(\Sigma_{n}(K)\) _admits a co-orientable taut foliation for all_ \(n\geq 4g(K)-2\)_._

Proof.: Part (1) follows from Corollary 6.5. Part (2) follows from [4, Corollary 1.4] and [26]. The former says that the conclusion holds for \(n\geq 4\), and for \(n=3\) unless \(g(K)=2\), while the latter says that the only \(L\)-space knot with genus \(2\) is \(T(2,5)\). Part (3) follows from [9].

Combining part (1) of Theorem 7.2 with the results for torus knots and satellite knots discussed above gives the following.

**Corollary 7.3**.: _If \(K\) is an \(L\)-space knot then \(\pi_{1}(\Sigma_{n}(K))\) is left-orderable for all \(n\geq 2\) if and only if \(K\) is not \(T(3,4),T(3,5)\), or \(T(2,2q+1)\) for some \(q\geq 1\)._

Similarly, using part (2) of Theorem 7.2, we get the analogous statement with "\(\pi_{1}(\Sigma_{n}(K))\) is left-orderable" replaced by "\(\Sigma_{n}(K)\) is not an \(L\)-space", provided \(n\geq 3\). This leaves open the interesting question, due to Allison Moore, asking whether the double branched cover of a hyperbolic \(L\)-space knot can ever be an \(L\)-space.

The discussion at the beginning of §6.2 of the \(n\)-fold cyclic branched covers \(\Sigma_{n}(L)\) of prime oriented links \(L\) in \(S^{3}\) applies with the property "has left-orderable fundamental group" replaced by "is not an \(L\)-space". The following theorem is a version of Theorem 6.13 for the latter property, describing how the analog of (6.2.1) can fail if \(\Sigma_{2}(L)\) is an \(L\)-space.

**Theorem 7.4**.: _Let \(L\) and \(L^{\mathfrak{o}}\) be the \(2\)-bridge links in Theorem 6.13. Then,_

1. \(\Sigma_{n}(L)\) _is an_ \(L\)_-space for all_ \(n\geq 2\)_._
2. _If some_ \(k_{i}\geq 3\)_, then_ \(\Sigma_{n}(L^{\mathfrak{o}})\) _is not an_ \(L\)_-space for all_ \(n\geq 3\)_._
3. _If some_ \(k_{i}=2\)_, then_ \(\Sigma_{n}(L^{\mathfrak{o}})\) _is not an_ \(L\)_-space for all_ \(n\geq 4\)_._
4. _If_ \(k_{i}=1\) _for all_ \(i\)_, then_ \(\Sigma_{n}(L^{\mathfrak{o}})\) _is not an_ \(L\)_-space for all_ \(n\geq 2\pi/\arccos(l/(l+1))\)_, where_ \(l=\min\{l_{i}\mid 1\leq i\leq r\}\)_. Hence if some_ \(l_{i}=1\) _then_ \(\Sigma_{n}(L^{\mathfrak{o}})\) _is not an_ \(L\)_-space for all_ \(n\geq 6\)_._

Note that (4) shows that for the fibered links \(L=L(2,2,...,2)\), \(\Sigma_{n}(L^{\mathfrak{o}})\) is not an \(L\)-space for all \(n\geq 6\); cf. part (4) of Theorem 6.13.

Proof.: As noted in the proof of Theorem 6.13, \(\Sigma_{n}(L)\cong\Sigma_{2}(L_{n})\) where \(L_{n}\) is alternating.
It follows that \(\Sigma_{n}(L)\) is an \(L\)-space [56] for all \(n\geq 2\). This proves (1).

To prove (2), (3), and (4), let \(k=\max\{k_{i}\mid 1\leq i\leq r+1\}\). Let \(F\) be the Seifert surface for \(L^{\varrho}\) obtained from Seifert's algorithm applied to the link diagram shown in Figure 8 but with the non-canonical orientation (cf. Figure 9), and let \(\mathcal{S}_{F}\) be the associated Seifert form. For \(\xi\in S^{1}\), let \(\mathcal{S}_{F}(\xi)\) be the Hermitian form \((1-\xi)\mathcal{S}_{F}+(1-\bar{\xi})\mathcal{S}_{F}^{T}\). Choose \(i\) such that \(k_{i}=k\). Then \(F\) has a subsurface \(F(k)\) as shown in Figure 11, with associated Seifert form \(\mathcal{S}_{F(k)}\) and Hermitian form \(\mathcal{S}_{F(k)}(\xi)\). Note that the boundary of \(F(k)\) is the torus link \(T(2,2k)\), with the fibred orientation.

Suppose that \(\Sigma_{n}(L^{\varrho})\) is an \(L\)-space for some \(n\geq 3\). Then by [4, Theorem 1.1], \(\mathcal{S}_{F}(\xi)\) is definite for \(\xi\in\bar{I}_{-}(\xi_{n})\), where \(\xi_{n}=\exp(2\pi i/n)\). Since the inclusion of \(F(k)\) into \(F\) induces an injection of \(H_{1}(F(k))\) into \(H_{1}(F)\) as a direct summand, the same definiteness holds for \(\mathcal{S}_{F(k)}(\xi)\). Therefore all the roots of \(\Delta_{T(2,2k)}(t)=(t^{2k}-1)/(t+1)\) lie in \(I_{+}(\xi_{n})\). In particular \(\exp(2\pi i(k-1)/2k)\in I_{+}(\xi_{n})\). Hence \((k-1)/2k<1/n\), giving \(k<n/(n-2)\). If \(k\geq 3\), this implies \(n<3\), and if \(k=2\), we get \(n<4\). This proves (2) and (3).

Figure 11. Subsurface \(F(k)\) of the Seifert surface \(F\) of \(L^{\varrho}\). There are \(2k\) crossings in this diagram.

Figure 12. Subsurface \(F^{\prime}(l)\) of the Seifert surface \(F\) of \(L^{\varrho}\).

If \(k=1\) then \(T(2,2k)\) is the Hopf link, with Alexander polynomial \((t-1)\), so instead, we choose \(i\) such that \(l_{i}=l\), and note that \(F\) contains the subsurface \(F^{\prime}(l)\) shown in Figure 12. As before, the Hermitian form \(\mathcal{S}_{F^{\prime}(l)}(\xi)\) is definite for \(\xi\in\bar{I}_{-}(\xi_{n})\). The link \(L^{\prime}(l)=\partial F^{\prime}(l)\) is the 2-bridge link \(L(2,2l,2)\) with the non-canonical orientation. Computing a Seifert matrix of the genus 1 surface \(F^{\prime}(l)\) gives \(\Delta_{L^{\prime}(l)}(t)=(l+1)(t-1)(1-(2l/(l+1))t+t^{2})\). The quadratic term has roots \(\alpha\) and \(\bar{\alpha}\) where \(\alpha=\exp(i\theta)\) and \(2\cos(\theta)=\alpha+\bar{\alpha}=2l/(l+1)\). Hence \(\theta=\arccos(l/(l+1))\). Since \(\alpha\in I_{+}(\xi_{n})\) we must have \(\arccos(l/(l+1))<2\pi/n\), and therefore \(n<2\pi/\arccos(l/(l+1))\). This proves (4).
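As a quick numerical sanity check on the threshold appearing in part (4) of Theorem 7.4, the following sketch evaluates \(2\pi/\arccos(l/(l+1))\) for small values of \(l\); for \(l=1\) it returns exactly \(6\), in agreement with the final assertion of the theorem.

```python
import math

# Evaluate the L-space obstruction threshold of Theorem 7.4(4),
# n >= 2*pi / arccos(l/(l+1)), for small values of l = min l_i.
for l in range(1, 6):
    threshold = 2 * math.pi / math.acos(l / (l + 1))
    print(f"l = {l}: Sigma_n(L^rho) is not an L-space for all n >= {threshold:.3f}")
# l = 1 gives arccos(1/2) = pi/3, hence a threshold of exactly 6, matching the
# statement that some l_i = 1 forces non-L-spaces for all n >= 6.
```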
2305.09356
A Dynamically Similar Lab-Scale District Heating Network via Dimensional Analysis
Strict user demands and large variability in external disturbances, along with limited richness in the data collected on the daily operating conditions of district heating networks, make the design and testing of novel energy-reducing control algorithms for district heating networks challenging. This paper presents the development of a dynamically similar lab-scale district heating network that can be used as a test bench for such control algorithms. This test bench is developed using the Buckingham pi theorem to match the lab-scale components to the full-scale. By retaining the relative thermodynamics and fluid dynamics of a full-scale network in the lab-scale system, the experimental setup allows for repeatability of the experiments being performed and flexibility in the testing conditions. Moreover, the down-scaling of the experiment is leveraged to accelerate testing, allowing for the recreation of operating periods of weeks and months in hours and days. A PID controller is implemented on the lab-scale test bench to validate its response against literature data. Results show 63% efficiency during heating operations compared to 70% efficiency for a similar full-scale system, with comparable pressure losses across the system.
Audrey Blizard, Stephanie Stockar
2023-05-16T11:28:19Z
http://arxiv.org/abs/2305.09356v1
# A Dynamically Similar Lab-Scale District Heating Network via Dimensional Analysis

###### Abstract

Strict user demands and large variability in external disturbances, along with limited richness in the data collected on the daily operating conditions of district heating networks, make the design and testing of novel energy-reducing control algorithms for district heating networks challenging. This paper presents the development of a dynamically similar lab-scale district heating network that can be used as a test bench for such control algorithms. This test bench is developed using the Buckingham \(\pi\) theorem to match the lab-scale components to the full-scale. By retaining the relative thermodynamics and fluid dynamics of a full-scale network in the lab-scale system, the experimental setup allows for repeatability of the experiments being performed and flexibility in the testing conditions. Moreover, the down-scaling of the experiment is leveraged to accelerate testing, allowing for the recreation of operating periods of weeks and months in hours and days. A PID controller is implemented on the lab-scale test bench to validate its response against literature data. Results show 63% efficiency during heating operations compared to 70% efficiency for a similar full-scale system, with comparable pressure losses across the system.

keywords: district heating network, thermal fluid systems, dynamic similitude, optimization, control verification

+ Footnote †: journal: Energy Conversion and Management

## 1 Introduction

Reducing energy consumption is one of the main tools available to combat climate change and lessen humans' impact on the environment. A large source of energy demand is the heating and cooling of buildings. For instance, in 2015 these processes accounted for 35% of the energy consumed by buildings in the United States [1]. Increasing the process efficiency will decrease the carbon footprint of commercial and residential buildings and reduce their operating costs. In this context, district heating networks (DHNs) are a promising method to efficiently deliver heat to buildings in an urban environment. Rather than implementing individual heating systems in each building, DHNs centralize the process, relying on economies of scale to increase efficiency and reduce carbon emissions. Additionally, because the heat is generated at a centralized plant, DHNs also simplify the process of integrating a variety of advanced technologies, including combined heating and power and bio-fuels, along with the use and storage of energy generated by intermittently-available renewable energy sources, such as solar [2].

While DHNs offer inherent performance benefits over traditional heating methods, the system performance can be greatly improved through advanced control strategies. Existing controllers rely on limited information, mainly the supply and return temperatures and the network pressure differential, to set the supply temperature and initial flow rate [3]. While simple to implement, these controllers are ineffective in meeting the heat demands of users, especially in large and highly interconnected DHNs, due to the lack of granular data on the network's current operating conditions and limited knowledge of the network's future demands [3]. Conventional control strategies also lack the flexibility needed to take full advantage of inconsistent renewable energy sources [4].
Optimized control algorithms enable existing DHNs to be operated more efficiently, reducing energy consumption and providing a more comfortable user experience at a lower operational cost. Preliminary studies of model-based control algorithms for DHNs, which are able to account for predicted demands and disturbances, have been shown to reduce the energy consumption by up to 34% [5]. However, in most cases, the validation of novel control algorithms for DHNs has been focused on tests performed in simulation environments [6; 7; 8]. Without validation of the control strategies on the physical system, it is difficult to guarantee that similar performance improvements observed in simulations will be obtained in real-world DHNs. There are some examples of physical test benches for DHNs [9; 10], but none are sufficient for the testing of novel control algorithms due to their narrow focus on individual components of the DHN. Additionally, there are examples of control algorithms being tested on real-world DHNs [11; 12]. However, these tests required significant modifications to the communication infrastructure to enable the implementation of this controller on an operational DHN. Furthermore, when performing tests on full-scale DHNs, there is unpredictability in the external disturbances, such as ambient temperature, solar irradiation, and occupancy times. The performance characteristics of DHNs are highly seasonal, and a validation test of new controllers across the variety of operating conditions experienced by DHNs can take weeks or even months. For example, one of the preliminary tests discussed above was conducted over an entire winter season [11]. A dynamically similar lab-scale test bench will eliminate challenges associated with the validation of novel control strategies, allowing for repeatability in the experiments being performed, flexibility in the conditions being explored, and a reduction in the time needed to perform tests, while ensuring the applicability of the control algorithm to real-world DHNs.

Existing literature supports the idea of using dynamically scaled experimental setups in the design of novel controllers, with the Buckingham \(\pi\) Theorem being the most common method of obtaining dynamically similar setups. For example, this technique has been shown to be effective in matching airborne wind energy system dynamics [13] and vehicle dynamics during high speed maneuvers [14]. The Buckingham \(\pi\) Theorem is a formalization of the procedure of dimensional analysis, which works by identifying the relevant relationships within a system and systematically scaling them while retaining their relative proportions [15]. This technique is commonly used to create smaller and economical experimental test benches in a variety of fields, and its use in the dynamic scaling of thermo-fluid systems is well known. For example, the Buckingham \(\pi\) Theorem has been applied in the analysis of the heat equation [16] and, more specifically, has been used in the design of the thermal characteristics of spacecraft [17]. The Buckingham \(\pi\) theorem has been used in DHNs to analyze individual components of the network, but has never been used to simultaneously scale all components of the network. For example, a reduced-parameter model of a building's temperature dynamics was obtained through the correlation of different heating and cooling modes into related nondimensional groups [18; 19].
Similarly, applications to the characterization of the heat transfer rate in heat exchangers were developed based on their geometry [20; 21]. A model of a DHN pipe network was created using the Buckingham \(\pi\) Theorem with the goal of reducing the number of network characteristics needed to accurately predict the heat losses [22]. Finally, this approach has been used to perform a detailed analysis of the pressure losses of a liquid flow in a single length of pipe [23]. While the Buckingham \(\pi\) Theorem is an established method for scaling the static responses of systems and analyzing individual components of DHNs, this technique has never been used in scaling the dynamic response of an entire district heating network with the intended application of model validation and control testing. Using the Buckingham \(\pi\) Theorem, this paper will present the design of a dynamically similar lab-scale test bench that will allow for rapid testing of new control algorithms on physical hardware, and the nondimensional framework developed in this paper provides a method to compare the results obtained on the scaled network to those of a full-sized DHN. As DHNs operate in both the thermodynamic and fluid domains, the characterization of both has to be considered in the scaling procedure.

The remainder of this paper is organized as follows. First, Section 2 provides a description of the components of the lab-scale DHN, along with the equations used to model the network's dynamics. Section 3 describes the application of the Buckingham \(\pi\) theorem to establish the nondimensionalization parameters used to characterize the system. Then, Section 4 presents ranges for the size of components in a full-scale DHN found in literature and describes the desired parameters for the equivalent lab-scale components using the established \(\pi\) groups. In Section 5, the lab-scale system is validated against simulation results from an equivalent full-scale system. Finally, conclusions are presented in Section 6.

## 2 District Heating Networks

### Experimental Setup

The lab-scale setup considered in this paper is representative of a two-user DHN; however, the sizing methodology presented is applicable to any network size. The two buildings in this network are represented by fluid-filled acrylic boxes. These thermal masses contain a submerged winding of copper pipe acting as the heat exchangers in a full-scale network to transfer heat from the distribution network into the buildings. Submerged impellers are used to distribute the heat throughout the thermal masses and ensure the fluid is well-mixed. The distribution network, made from 1/2" PEX pipes, circulates water from the heating plant to the thermal masses. This network consists of a main supply line that splits into two loops, one for each user. Each loop has a characterized control valve that divides the water between the user and a bypass. The user branch sends water to the heat exchanger, while the bypass branch diverts water from the heat exchanger. This division allows the operator to control the mass flow rate in the heat exchanger, adjusting the heat supplied to the thermal mass to control its temperature to a desired value, \(T_{set}\). The flows from each heat exchanger and bypass are mixed at return nodes 1 and 2 and the user loops rejoin at the main return node. From here, the water is sent back to the centralized plant through the main return line to be reheated.
The network is connected to a residential 30-gal water heater and a fixed-speed 3/4 HP pump, which reproduce the connection with the heating plant in a full-scale network. The detailed discussion of the experimental setup and its construction is presented in Krieger et al. [24]. A diagram of the described system is shown in Fig. 1, while a picture of the physical setup is shown in Fig. 2. The component sizing in the original design was performed based on a static similarity analysis. However, an exact and systematic method to ensure dynamic similarity is needed to guarantee the performance of the lab-scale system is representative of a full-scale DHN.

### Dynamic System Model of a DHN

The derivation of a physics-based model of a DHN, applicable to both the full-scale and lab-scale networks, is presented in this section. The derivation of these fundamental equations allows for the identification of the variables relevant to the system's behavior. Additionally, these equations will be used to simplify the dimensional analysis performed in Section 3, as these equations describe the relationship between the variables and parameters in the system. The system is divided into three major components: the water supply, the distribution pipes, and the buildings, all of which exhibit both temperature and fluid dynamics.

The lab-scale system is supplied by a water heater and pump that deliver water with a set supply temperature \(T_{s}\) and initial mass flow rate \(\dot{m}_{I}\) to the network. These values serve as the inlet values for the first pipe in the distribution network. The distribution network is then used to circulate this heated water to the buildings. Each pipe in the distribution network must be modeled in both the thermal and fluid domains. The bulk temperature (\(T_{p}\)), which is the relevant temperature in each pipe segment, is modeled using the conservation of energy equation

\[\frac{dT_{p}}{dt}=\frac{\dot{m}}{\rho V}(T_{pin}-T_{p})-\frac{hA_{s}}{\rho c_{p}V}(T_{p}-T_{a}) \tag{1}\]

where \(\dot{m}\) is the mass flow rate of water through the segment, \(T_{a}\) is the ambient temperature, \(\rho\) is the fluid density, \(hA_{s}\) is the conductive heat transfer coefficient of the pipe segment, \(c_{p}\) is the specific heat of water at constant pressure, assumed constant, and \(V\) is the volume of each pipe segment. Finally, \(T_{pin}\) is the inlet temperature obtained from the conservation of energy equation of the previous pipe. Note that for the supply line, \(T_{pin}=T_{s}\). The pipe's volume is given by

\[V=\frac{\pi}{4}D^{2}l \tag{2}\]

where \(D\) is the pipe's internal diameter and \(l\) is the length of the pipe segment. In the fluid domain, the pressure drop (\(\Delta P\)) across each pipe segment is calculated using

\[\Delta P=k_{tot}\left(\frac{\dot{m}}{A_{c}}\right)^{2} \tag{3}\]

where \(A_{c}\) is the pipe's cross sectional area and \(k_{tot}\) is a pressure loss coefficient representative of both the distributed and concentrated pressure losses [25]. In practice, DHNs are self-balancing, meaning the mass flow (\(\dot{m}\)) is divided between the branches to equalize the pressure losses throughout the network, resulting in a relative mass flow rate that is inversely proportional to \(\sqrt{k_{tot}}\). Hence the mass flow rate in each segment can be determined offline by solving the set of algebraic equations characterizing the pressure losses in each segment.
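To make the segment model concrete, the following sketch integrates the pipe energy balance of Eq. (1) with a simple forward-Euler step and computes the self-balancing split of the supply flow between two parallel loops implied by Eq. (3). All numerical values here (loss coefficients, segment length, temperatures) are illustrative assumptions rather than measured test-bench parameters.

```python
import math

# Minimal sketch, assuming illustrative parameter values.
rho, c_p = 994.0, 4180.0        # water density [kg/m^3], specific heat [J/(kg K)]
D, l = 0.012, 5.0               # pipe internal diameter and segment length [m]
V = math.pi / 4 * D**2 * l      # segment volume, Eq. (2)
m_dot_I, hAs = 0.0862, 0.5      # supply mass flow [kg/s], loss coefficient [W/K]
T_s, T_a = 36.0, 22.0           # supply and ambient temperatures [C]

# Self-balancing flow split: equal loop pressure drops, k1*m1^2 = k2*m2^2
# (cross-sectional areas cancel for a shared diameter), with m1 + m2 = m_dot_I,
# give m1/m2 = sqrt(k2/k1) -- flow inversely proportional to sqrt(k).
k1, k2 = 4.0e6, 9.0e6           # assumed lumped loss coefficients
r = math.sqrt(k2 / k1)
m1 = m_dot_I * r / (1 + r)
m2 = m_dot_I - m1

# Forward-Euler integration of Eq. (1) for the first segment of loop 1,
# starting from the ambient temperature.
T_p, dt = T_a, 0.05             # initial bulk temperature [C], time step [s]
for _ in range(int(120 / dt)):  # two minutes of simulated time
    dTdt = (m1 / (rho * V)) * (T_s - T_p) - (hAs / (rho * c_p * V)) * (T_p - T_a)
    T_p += dt * dTdt

print(f"flow split: m1 = {m1:.4f} kg/s, m2 = {m2:.4f} kg/s")
print(f"loop-1 segment temperature after 120 s: {T_p:.2f} C")
```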
The total pressure loss for each user loop \(\Delta P_{Li},\;i=1,2\) can be calculated as a function of mass flow rate by summing the \(\Delta P\) values for each pipe segment. The pressure balance in the network can be enforced using

\[\Delta P_{L1}\left(\dot{m}_{1}\right)=\Delta P_{L2}\left(\dot{m}_{2}\right) \tag{4}\]

where \(\dot{m}_{i}\) is the mass flow in each user loop. Conservation of mass ensures that

\[\dot{m}_{1}+\dot{m}_{2}=\dot{m}_{I} \tag{5}\]

Combining Eq. (4) and Eq. (5), the flow rate split between the branches can be calculated.

The temperature dynamics of the thermal mass is modeled using the conservation of energy equation, following the same principle as the pipe segments in Eq. (1):

\[\left(\rho c_{p}V\right)_{ThM}\frac{dT_{ThM}}{dt}=\dot{Q}_{in}-\dot{Q}_{out} \tag{6}\]

where \(\left(\rho c_{p}V\right)_{ThM}\) is the heat capacity of the thermal mass, \(\dot{Q}_{out}\) is the heat lost by the thermal mass to the environment, and \(\dot{Q}_{in}\) is the heat provided to the thermal mass by the network through the heat exchanger.

Figure 1: Layout of the two-user lab-scale DHN with components labeled.

Figure 2: Photograph of experimental setup with relevant components labeled.

The rate of heat transferred into the thermal mass is given by

\[\dot{Q}_{in}=\left(hA_{s}\right)_{HX}\left(T_{HX}-T_{ThM}\right) \tag{7}\]

where \(T_{HX}\) is the bulk temperature of the water in the heat exchanger, and \((hA_{s})_{HX}\) is the convective heat transfer coefficient of the heat exchanger. Similarly, the heat transferred from the thermal mass to the environment is

\[\dot{Q}_{out}=\left(hA_{s}\right)_{ThM}\left(T_{ThM}-T_{a}\right) \tag{8}\]

where \((hA_{s})_{ThM}\) is the convective heat transfer coefficient of the thermal mass. Equations (1) to (8) provide a model of the network that will serve as the foundation to perform the scaling of the system. Having this model simplifies the process of creating a similar lab-scale system by providing an understanding of the underlying relationship between the relevant variables.

## 3 Nondimensionalization of System Equations

To ensure applicability of tests performed on the lab-scale system, the transient and steady state responses of the full-scale system must be scaled to the lab-scale. The multi-domain nature of DHNs requires the lab-scale model to match both the heat transfer characteristics and the fluid dynamics of the original system. This section presents the extension of the Buckingham \(\pi\) Theorem for the uniform scaling of all components of a full-scale DHN. The procedure established by the Buckingham \(\pi\) Theorem is followed to identify the relevant nondimensionalization parameters for the system and use them to create dimensionless \(\pi\) groups that describe the behavior of the system. Finally, from these \(\pi\) groups, the nondimensional form of the system equations is found.

The first step of the Buckingham \(\pi\) Theorem is to identify the variables that are relevant to the system. From the equations presented in Section 2, the 20 relevant variables associated with the system are

\[f\begin{pmatrix}t,&T_{s},&T_{a},&T_{p},&T_{HX},&T_{ThM},&\dot{m}_{I},&\dot{m},&\rho,\\ &c_{p},&D,&L,&hAs,&k_{tot},&A_{c},&\rho_{ThM},\\ &c_{p_{ThM}},&V_{ThM},&hAs_{ThM},&hAs_{HX}\end{pmatrix}=0 \tag{9}\]

However, due to design constraints, the available sizes of some components, and lab space restrictions, there are a limited number of modifiable design parameters.
Due to the aforementioned constraints, matching all 20 variables for every component of the system is impossible. Instead, using the system model, it is possible to combine some of the variables into groups. Because the equations governing the system's dynamic responses and losses have been established, the relationships between some of the variables are known, and these variables can be grouped together during the nondimensionalization process. This will reduce the number of dimensionless quantities to be matched, while still providing a representative description of the system. Leveraging this information, the variables listed in Eq. (9) are reduced to the following

\[f\begin{pmatrix}t,&T_{s},&T_{p},&T_{HX},&T_{ThM},&\dot{m}_{I},\\ D,&\rho,&\frac{\dot{m}}{\rho V},&\frac{hA_{s}}{\rho c_{p}V}\left(T_{p}-T_{a}\right),\\ \Delta P,&\left(\rho c_{p}V\right)_{ThM},&\dot{Q}_{in},&\dot{Q}_{out}\end{pmatrix}=0 \tag{10}\]

The next step in the Buckingham \(\pi\) Theorem is to identify the fundamental units of the variables in the system. Dimensional formulas can be used to show the fundamental units associated with each physical quantity and to determine what terms are needed to generate each nondimensional group. The dimensional formulas for each variable group in the reduced list are

\[t=\left[t\right] \tag{11a}\]
\[T_{s},T_{p},T_{HX},T_{ThM}=\left[T\right] \tag{11b}\]
\[\dot{m}_{I}=\left[Mt^{-1}\right] \tag{11c}\]
\[D=\left[L\right] \tag{11d}\]
\[\rho=\left[ML^{-3}\right] \tag{11e}\]
\[\frac{\dot{m}}{\rho V}=\left[t^{-1}\right] \tag{11f}\]
\[\frac{hA_{s}}{\rho c_{p}V}(T_{p}-T_{a})=\left[Tt^{-1}\right] \tag{11g}\]
\[\Delta P=\left[ML^{-1}t^{-2}\right] \tag{11h}\]
\[\left(\rho c_{p}V\right)_{ThM}=\left[ML^{2}t^{-2}T^{-1}\right] \tag{11i}\]
\[\dot{Q}_{in},\dot{Q}_{out}=\left[ML^{2}t^{-3}\right] \tag{11j}\]

From these dimensional formulas, it can be seen that four fundamental units appear in the variables used to describe the relevant system dynamics. These units are length (\(L\)), time (\(t\)), mass (\(M\)), and temperature (\(T\)). Therefore, four fundamental quantities are needed to create the independent, nondimensional variable groups. The quantities selected are fluid density, initial mass flow rate, supply temperature, and internal pipe diameter, and are summarized in Table 1. These values are selected because first, they best characterize the scale of operation for each component in the system, and second, they are the ones subject to the most challenging constraints (dimensions, specifications) in the lab-scale design.

From the above steps, it can be seen there are fourteen relevant variable groups and four fundamental units. Therefore, according to the Buckingham \(\pi\) Theorem, a total of ten independent \(\pi\) groups can be found. The process of finding the \(\pi\) group for a single variable is

1. Identify the fundamental units of the variable using its dimensional formula, given in Eq. (11).
2. Note the exponent for each of the fundamental units.
3. Eliminate the units of the variable using the appropriate power of the nondimensionalization parameters (Table 1) in order of \(T_{s}\), \(\dot{m}_{I}\), \(\rho\), and \(D\).
4. Write the final nondimensional group from the original variable and nondimensionalization parameters raised to the appropriate powers.

This procedure is followed to nondimensionalize the variables listed in Eq. (10). The resulting \(\pi\) groups are listed below.
Nondimensional time is found using

\[t^{*}=t\cdot\frac{\dot{m}_{I}}{\rho D^{3}} \tag{12}\]

The nondimensional temperatures of each pipe segment in the network can be found by dividing the current temperature by the supply temperature.

\[T_{p}^{*}=\frac{T_{p}}{T_{s}},\quad T_{HX}^{*}=\frac{T_{HX}}{T_{s}} \tag{13}\]

Because the temperature of the fluid in the thermal mass can be modulated by the bypass valve, it is independent of the network supply temperature \(T_{s}\), and the desired setpoint, \(T_{set}\), can be decided arbitrarily, provided that \(T_{set}<T_{s}\). Therefore, the current thermal mass temperature must be normalized by subtracting the setpoint before dividing by the supply temperature

\[T_{ThM}^{*}=\frac{T_{ThM}-T_{set}}{T_{s}} \tag{14}\]

There are three \(\pi\) groups needed to characterize the pipes, denoted as \(\pi_{1}-\pi_{3}\). The first two groups, \(\pi_{1}\) and \(\pi_{2}\), come from the conservation of energy equation for the pipe, Eq. (1), where \(\pi_{1}\) is the nondimensional coefficient for the heat supplied into the pipe

\[\pi_{1}=\frac{\dot{m}}{\rho V}\cdot\frac{D^{3}\rho}{\dot{m}_{I}} \tag{15}\]

and \(\pi_{2}\) comes from the term used to describe the heat lost to the environment by the pipe.

\[\pi_{2}=\frac{hA_{s}}{\rho c_{p}V}(T_{p}-T_{a})\cdot\frac{D^{3}\rho}{\dot{m}_{I}T_{s}} \tag{16}\]

The final group for the pipes, \(\pi_{3}\), comes from the fluid dynamics model of the pipe, Eq. (3), and is used to ensure the pressure losses are consistent between the network scales.

\[\pi_{3}=\Delta P\cdot\frac{\rho D^{4}}{\dot{m}^{2}} \tag{17}\]

The dynamics of the building components, given by Eq. (6), can be described using \(\pi_{4}-\pi_{6}\), where \(\pi_{4}\) is the nondimensional heat capacity of the building

\[\pi_{4}=(\rho c_{p}V)_{ThM}\cdot\frac{\rho T_{s}D}{\dot{m}^{2}} \tag{18}\]

The second group for the building dynamics, \(\pi_{5}\), describes the energy transferred into the thermal mass from the heat exchanger

\[\pi_{5}=\dot{Q}_{in}\frac{\rho^{2}D^{4}}{\dot{m}^{3}} \tag{19}\]

and the last group \(\pi_{6}\) describes the heat lost by the thermal mass to the environment

\[\pi_{6}=\dot{Q}_{out}\frac{\rho^{2}D^{4}}{\dot{m}^{3}} \tag{20}\]

Using the \(\pi\) groups, the system equations are then written in their dimensionless forms. Having the nondimensional form of these equations allows for the modeling of the network behavior in the nondimensional space, which makes comparison between the full-scale and lab-scale systems possible. The temperature dynamics equation in the pipes in Equation (1) is reformulated as

\[\frac{dT_{p}^{*}}{dt^{*}}=\pi_{1}\left(T_{p_{in}}^{*}-T_{p}^{*}\right)-\pi_{2} \tag{21}\]

while the temperature dynamics equation for the thermal masses, Eq. (6), is rewritten as

\[\pi_{4}\frac{dT_{ThM}^{*}}{dt^{*}}=\pi_{5}-\pi_{6} \tag{22}\]

## 4 Selection of the Lab-Scale System Parameters

The physical parameters of the lab-scale system are selected to match the nondimensional large-scale values for all the previously established \(\pi\) groups. A literature review was conducted to find representative dimensional values for the sizes of each component for a variety of real-world DHN configurations. This information is then used to calculate the values of the full-scale \(\pi\) groups. The results of this literature review are summarized in Table 2. Then, the unmodifiable components of the lab-scale system were considered. Specifically, the lab-scale network is supplied by a fixed-speed rotary vane pump and a 30 gallon residential water heater.
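The dimensional bookkeeping behind the \(\pi\) groups derived in Section 3 can be checked mechanically. The sketch below represents each quantity by its exponent vector over the fundamental units \((M,L,t,T)\), transcribed from the dimensional formulas in Eq. (11), and verifies that a group such as \(\pi_{3}\) in Eq. (17) is dimensionless; it is a checking aid only, not part of the test-bench software.

```python
# Each quantity is an exponent vector over the fundamental units (M, L, t, T),
# transcribed from the dimensional formulas in Eq. (11).
dims = {
    "dP":    (1, -1, -2, 0),   # pressure loss, [M L^-1 t^-2], Eq. (11h)
    "rho":   (1, -3,  0, 0),   # density, [M L^-3], Eq. (11e)
    "D":     (0,  1,  0, 0),   # pipe diameter, [L], Eq. (11d)
    "m_dot": (1,  0, -1, 0),   # mass flow rate, [M t^-1], Eq. (11c)
}

def combine(*terms):
    """Sum the exponent vectors of (quantity, power) pairs."""
    total = [0, 0, 0, 0]
    for name, power in terms:
        total = [a + power * b for a, b in zip(total, dims[name])]
    return tuple(total)

# pi_3 = dP * rho * D^4 / m_dot^2, cf. Eq. (17)
pi3 = combine(("dP", 1), ("rho", 1), ("D", 4), ("m_dot", -2))
assert pi3 == (0, 0, 0, 0)   # all exponents cancel: the group is dimensionless
print("pi_3 exponents:", pi3)
```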
The pump and the water heater constrain two of the nondimensionalization parameters: the size of the pump determines \(\dot{m}_{I}\), and the recovery rate of the water heater (the amount of water that can be heated during a one-hour period) dictates \(T_{s}\). Additionally, the network uses 1/2" PEX pipes for the supply network, fixing the value for \(D\). By modifying the original experimental setup, the rest of the components in the lab-scale DHN are resized to ensure agreement between the full-scale and lab-scale \(\pi\) groups. The ranges of the lab-scale values that result from matching the \(\pi\) groups are presented in Table 2.

\begin{table} \begin{tabular}{l c c} \hline **Parameter** & **Symbol** & **Unit** \\ \hline Density of operating fluid & \(\rho\) & \(ML^{-3}\) \\ Initial mass flow rate & \(\dot{m}_{I}\) & \(Mt^{-1}\) \\ Supply temperature & \(T_{s}\) & \(T\) \\ Pipe internal diameter & \(D\) & \(L\) \\ \hline \end{tabular} \end{table}
Table 1: Nondimensionalization parameters selected to create the \(\pi\) groups.

The lengths of the pipes are chosen to ensure agreement between the \(\pi_{1}\) values. The pipes have been encased in two layers of R13 fiberglass insulation to reduce the heat lost by the pipes to the environment, matching the \(\pi_{2}\) values. Additionally, the predicted pressure drops in the lab-scale network are calculated based on the friction, length, and geometry changes in the network and were found to be in the desired range to match the values of \(\pi_{3}\) for the lengths selected.

The thermal mass characteristics are also matched between the full-scale and lab-scale. To decrease the volume required to match the heat capacity of the thermal mass and increase the rate of heat transfer from the heat exchanger, the lab-scale thermal masses representing the buildings are filled with water, rather than air, as would be the case in a full-scale system. The volume of the thermal masses is then chosen to match \(\pi_{4}\). This results in the chosen volumes of \(7000~{}cm^{3}\) and \(10400~{}cm^{3}\), respectively. The desired steady-state temperature in the thermal mass is set to match the rate of heat extracted from the network (\(\pi_{5}\)) during steady state operation. The desired building temperature is of particular relevance because this variable is used to modulate the bypass valves supplying the heat exchangers to meet the desired setpoint when each building is occupied.

Finally, because the lab-scale DHN is indoors, the difference between the set point temperature (\(T_{ThM}\)) and the ambient temperature (\(T_{a}\)) is less than for an equivalent outdoor system. Hence, the required rate of heat transfer out of the buildings cannot be met through natural convection alone. Additionally, the ambient temperature can vary widely throughout a day, and the ability to recreate these fluctuations is key to performing realistic tests. To address these issues, Peltier junctions are added to a wall of the thermal masses. An image of one of the thermal masses with the embedded Peltier junction can be seen in Fig. 3. These Peltier junctions remove heat from the thermal mass when a voltage is applied via the thermoelectric cooling effect. The addition of the Peltier junction to the thermal mass allows for the control of the rate of heat removed from each thermal mass, allowing for more flexibility in the tests that can be performed. The rate of heat lost by the thermal mass (Eq.
(8)) must be modified to include the effects of the Peltier junction

\[\dot{Q}_{out}=\left(hA_{s}\right)_{ThM}\left(T_{ThM}-T_{a}\right)+\dot{Q}_{pelt} \tag{23}\]

where \(\dot{Q}_{pelt}\) is the rate of heat removed by the Peltier junction, which is set directly by modulating the power supplied to the Peltier junctions. For example, to replicate an ambient temperature of -5 C in the full-scale, the Peltier junctions must operate at approximately 40 W. The rate of heat loss caused by the Peltier junctions can be converted to a simulated ambient temperature (\(T_{a_{sim}}\)) as

\[T_{a_{sim}}=-\frac{hA_{s_{act}}}{hA_{s_{sim}}}\left(T_{ThM}-T_{a}\right)-\frac{\dot{Q}_{pelt}}{hA_{s_{sim}}}+T_{ThM} \tag{24}\]

where \(hA_{s_{act}}\) is the thermal mass's heat transfer coefficient and \(hA_{s_{sim}}\) is the simulated heat transfer coefficient. The thermal masses' heat transfer coefficients \(hA_{s_{sim}}\) and the ambient temperature, both components of \(\pi_{5}\), are modulated by changing the voltage supplied to the Peltier junction to match the full-scale values.

\begin{table} \begin{tabular}{l c c c c c c} \hline \hline \multicolumn{7}{l}{**Nondimensionalization Parameters**} \\ \hline & & \multicolumn{2}{c}{Full-Scale} & \multicolumn{2}{c}{Lab-Scale} & \\ Parameter & Symbol & Value & Unit & Value & Unit & Source \\ \hline Density of network fluid & \(\rho\) & 971 & \(\left[kg/m^{3}\right]\) & 994 & \(\left[kg/m^{3}\right]\) & [25] \\ Initial mass flow rate & \(\dot{m}_{I}\) & 20 & \(\left[kg/s\right]\) & 0.0862 & \(\left[kg/s\right]\) & [26; 27] \\ Supply temperature & \(T_{s}\) & 80 & \(\left[C\right]\) & 36 & \(\left[C\right]\) & [26; 28] \\ Pipe internal diameter & \(D\) & 0.1 & \(\left[m\right]\) & 12 & \(\left[mm\right]\) & [28; 27] \\ \hline \multicolumn{7}{l}{**Pipe Characteristics**} \\ \hline Parameter & Symbol & Value & Unit & Value & Unit & Source \\ \hline Length of segment & \(l\) & \(20-100\) & \(\left[m\right]\) & \(2.5-11\) & \(\left[m\right]\) & [26; 29] \\ Pressure loss across segment & \(\Delta P\) & 0.01-0.2 & \(\left[MPa\right]\) & 1-10 & \(\left[kPa\right]\) & [29; 30] \\ Conductive heat transfer coefficient & \(hA_{s}\) & 5-90 & \(\left[W/K\right]\) & 0.23-1.0 & \(\left[W/K\right]\) & [31] \\ \hline \multicolumn{7}{l}{**Thermal Mass Characteristics**} \\ \hline Parameter & Symbol & Value & Unit & Value & Unit & Source \\ \hline Heat capacity & \(\left(\rho c_{p}V\right)_{ThM}\) & \(0.15-7\) & \(\left[GJ/K\right]\) & \(30-45\) & \(\left[kJ/K\right]\) & [32; 18; 7] \\ Pressure loss across heat exchanger & \(\Delta P_{HX}\) & 0.03 & \(\left[MPa\right]\) & 2-3 & \(\left[kPa\right]\) & [29] \\ Convective heat transfer coefficient of heat exchanger & \(\left(hA_{s}\right)_{HX}\) & 5-12 & \(\left[kW/K\right]\) & 14.5-16 & \(\left[W/K\right]\) & [33; 34] \\ Rate of heat lost by building & \(\dot{Q}_{out}\) & 70-230 & \(\left[kW\right]\) & 0-79 & \(\left[W\right]\) & [32; 27] \\ Building set point temperature & \(T_{Set}\) & 20 & \(\left[C\right]\) & 26 & \(\left[C\right]\) & [33] \\ \hline \end{tabular} \end{table}
Table 2: Average network component values for full-scale and lab-scale systems.

## 5 Validation of the Resulting Lab-Scale DHN

There is a limited amount of data available in literature on daily operating conditions of real-world DHNs at the individual user level. More often, models are developed for design and energy characterization purposes.
For this reason, in this paper, simulation results are used to validate the similarity between the lab-scale and full-scale systems.

### Data Acquisition System

The experimental setup is outfitted with temperature, pressure and mass flow sensors to ensure all relevant dynamics are captured during operation. The system has two separate but synchronized data acquisition systems (DAQs). The first DAQ consists of three 8-channel USB data acquisition modules and is used for collecting temperature data from the 17 thermistors located throughout the network. The second system is a National Instruments PXIe1073 system, with a PXIe4353 and a PXIe6363 installed, which is used for temperature, pressure and flow rate data. The PXIe4353 provides thermocouple input channels, which are used to measure the cold-side temperature of the Peltier junctions. The PXIe6363 has 16 analog differential inputs, nine of which are used to collect data from the pressure transducers located throughout the network. Additionally, four analog input channels are used to collect data from the mass flow sensors. The PXIe6363 also has 4 analog outputs, which supply voltage to the two characterized control valves to modulate their positions during the experiment. The locations of the sensors in the network are shown in Fig. 4. NI LabVIEW is used to interface with both DAQs to synchronize and record the data, set the control valve positions, and display results in real time.

### Description of Experiment

Through the \(\pi\)-group framework, any set of testing conditions can be recreated in the lab. In this paper, the response of the lab-scale system is evaluated against the simulation results presented in the paper by Saletti et al. [27]. This test is representative of a typical day during winter operation and shows a diverse response range for the two thermal masses. In the full-scale experiment, a PID control algorithm was designed to act as a baseline for comparison to a novel control algorithm. This PID controller was implemented in a Model-in-the-Loop simulation to control two school buildings to a desired set temperature during their hours of operation. In the full-scale simulation, the maximum available supply mass flow rate for the two buildings was 20 kg/s, while the maximum supply temperature was 80 C, consistent with the ranges presented in Section 4. A continuous 48 hour subset of the week-long simulation was selected to be emulated in the lab-scale experiment. In the full-scale experiment, thermal mass 1, associated with the smaller building (a school), started at its heated temperature and was immediately cut off from the heat supply and allowed to cool for 12 hours, before being heated again for an additional 12 hours. The occupancy times of thermal mass 2, representative of the larger building (a sports hall), were offset from those of thermal mass 1. The larger thermal mass remained at its heated temperature for two hours and 45 minutes before being set to cool for 11 hours. After its cooling period, thermal mass 2 was reheated for the remaining 10 hours and 15 minutes. These occupancy cycles were repeated twice during the 48 hour period. For the lab-scale experiment, the timings of the cycles were scaled using Eq. (12), resulting in the entire 48 hour period being recreated in just under 18 hours, a 63% reduction in the time needed to run the experiment.

Figure 3: Photograph of thermal mass with embedded Peltier junction.

Figure 4: Diagram of the two-user lab-scale DHN with sensor locations labeled.
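The accelerated-time claim can be checked directly from Eq. (12): matching \(t^{*}\) between scales gives \(t_{Lab}=t_{Full}\,(\dot{m}_{I}/\rho D^{3})_{Full}/(\dot{m}_{I}/\rho D^{3})_{Lab}\). A minimal sketch using the rounded representative values from Table 2:

```python
# Time-scale ratio implied by Eq. (12), with parameter values from Table 2.
rate_full = 20.0 / (971.0 * 0.1**3)      # (m_dot_I / (rho * D^3)) full-scale [1/s]
rate_lab = 0.0862 / (994.0 * 0.012**3)   # same quantity, lab-scale [1/s]

t_full_hours = 48.0
t_lab_hours = t_full_hours * rate_full / rate_lab
print(f"48 h of full-scale operation ~ {t_lab_hours:.1f} h in the lab")
# With these rounded table values this gives roughly 20 h, consistent with the
# just-under-18-hours (63% reduction) reported for the actual experiment.
```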
To recreate the thermal masses' temperature profiles, two PID controllers are implemented in the lab-scale network, where the difference between the current thermal masses' temperatures and their desired setpoints is used to set the position of the bypass valves to drive the thermal masses to the desired temperature. During hours of operation, the setpoint was 28 C, while during cooling periods, the PID control setpoint was 0 C. The ambient temperature profile used in the original simulation is shown in Fig. 5. This ambient temperature was recreated in the lab-scale experiment using the Peltier junction. The power setpoint for each Peltier junction was controlled using

\[\dot{Q}_{pelt}=\left(hA_{s}\right)_{ThM}\left(\left(T_{a}-T_{set}\right)_{Lab}-k_{T}\left(T_{a}-T_{set}\right)_{Full}\right) \tag{25}\]

where \(k_{T}\) is the ratio of nondimensionalization parameters for the full-scale and lab-scale temperatures:

\[k_{T}=\frac{T_{s_{Lab}}}{T_{s_{Full}}} \tag{26}\]

Each Peltier junction has a built-in PID controller that is used to track a desired power setpoint.

### Results

The temperature responses of the thermal masses are shown in Figs. 5 and 6. Specifically, the nondimensional temperatures of the thermal masses, along with the nondimensional simulated ambient temperatures for both the full-scale and lab-scale systems, are shown in Figs. 5a and 5b for thermal mass 1 and thermal mass 2, respectively. The result of the direct comparison shows a consistent dynamic response between the full-scale and the lab-scale system. Additionally, it can be seen that the simulated ambient temperature is able to effectively follow the desired full-scale temperature, with the RMS error being 0.22% for both thermal masses.

Figure 6 shows the median nondimensional full-scale and lab-scale thermal mass temperatures, along with the limits of the 25th and 75th percentiles, for both thermal masses. The mean and standard deviations of the temperatures are listed in Table 3, along with the ratio of the mean lab-scale temperatures to the mean full-scale temperatures. Consistently, the lab-scale results have a larger spread in the nondimensional thermal mass temperature, which is more evident in the smaller thermal mass (\(ThM1\)). While the general trends between the full-scale and lab-scale thermal mass temperatures are similar, there is a large variation between the extreme values observed. This deviation can be attributed to the difference in values of \(\pi_{4}\) between the particular buildings being represented. Specifically, \(\pi_{4}\) describes the total heat capacity of the thermal mass, which is mainly affected by the building's volume. In the full-scale simulation, the nondimensional heat capacities were approximately 8.6 times larger for both thermal mass 1 and thermal mass 2. These values are consistent with the ratio of the mean temperature values between the two scales. Moreover, the differences in the heat transfer coefficients of different-sized buildings provide a further explanation of the variations observed. There was a large range of potential building volumes presented in the literature; in the lab-scale setup, the thermal masses are sized more similarly to residential buildings, while the full-scale experiment used commercial/industrial sized buildings. The effects of various building sizes can be seen in Gambarotta et al. [35], where the temperature dynamics of 12 different buildings are simulated in one network.
The building temperatures in the lab-scale system will exhibit more rapid dynamic responses as compared to larger buildings, but will still be valid as a proving ground for new modeling and control techniques.

The response of the pipes in the distribution network is shown in Figs. 7 to 9. Figure 7 presents the nondimensional pressure losses in the pipes. For reference, the dimensional scale for pressure losses ranges from 0 to 25 kPa, and the pump provides a 75 kPa increase in pressure; both are in the acceptable range for a similar full-scale system. The pressure losses across the heat exchangers are 3.6 and 4.5 kPa for thermal masses 1 and 2, respectively. Figure 8 shows the mass flow split between the network branches. This data is divided into four time intervals: the average mass flow rate, the mass flow rate during the cooling of both thermal masses, the mass flow rate during the heating of both thermal masses, and the mass flow rate during the steady state operation, where the PID controller is working to maintain the current temperature in the thermal masses. The measured values correlate well with relative pressure losses in the network. This data is used in the calculation of \(\pi_{1}\) and in the calculation of the heat losses in the network. The values in these plots are validated against the data provided in Ancona et al. [26].

The supply temperature, return temperature, and select pipe temperatures throughout the network are presented in Fig. 9 for the portion of the experiment where both thermal masses are being re-heated for the first time and then maintained at the set temperature. This information was used to validate the temperature dynamics in the pipes.

\begin{table} \begin{tabular}{l c c c c} \hline \hline & **Scale** & **Mean** & **STD** & **Ratio** \\ \hline \multirow{2}{*}{\(T_{ThM1}\)} & Lab & \(-9.42\times 10^{-3}\) & \(10.5\times 10^{-3}\) & \multirow{2}{*}{7.32} \\ & Full & \(-1.29\times 10^{-3}\) & \(2.15\times 10^{-3}\) & \\ \hline \multirow{2}{*}{\(T_{ThM2}\)} & Lab & \(-6.81\times 10^{-3}\) & \(8.17\times 10^{-3}\) & \multirow{2}{*}{2.11} \\ & Full & \(-3.22\times 10^{-3}\) & \(4.07\times 10^{-3}\) & \\ \hline \hline \end{tabular} \end{table}
Table 3: Mean and standard deviation of nondimensional thermal mass temperatures.

The difference between the supply and return temperature was used to calculate the energy losses throughout the network. Additionally, there is a time delay between changes in the supply temperature and those changes being seen in the return temperature. This delay is caused by the time taken for the operating fluid to circulate through the network. As the fluid dynamics of the system are much faster than the temperature dynamics, this delay can be modeled as an algebraic offset. The magnitude of the algebraic offset seen in the lab-scale data was validated against the data collected from a CFD-based analysis of a DHN's distribution network presented in Zhao et al. [36]. This paper used the peak-valley method to quantify the delay. In nondimensional time, the average delay in temperature peaks and valleys for users a similar nondimensional distance away from the supply is approximately 4,500, or about 90 seconds in the lab-scale system, consistent with the delays observed in the lab-scale data. This time delay can have a large impact on the energy consumption of a DHN and can consequently greatly affect the efficiency of a predictive controller. Therefore, accurately capturing this delay in the lab-scale system is critical for control design.
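The quoted conversion can be verified by inverting Eq. (12), \(t=t^{*}\rho D^{3}/\dot{m}_{I}\). A one-line check with the lab-scale values from Table 2:

```python
# Converting the nondimensional transport delay to lab time, t = t* rho D^3 / m_dot_I.
t_star = 4500.0
rho, D, m_dot_I = 994.0, 0.012, 0.0862   # lab-scale values from Table 2
t_seconds = t_star * rho * D**3 / m_dot_I
print(f"nondimensional delay {t_star:.0f} ~ {t_seconds:.0f} s in the lab")  # ~90 s
```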
The ratio of heat being lost to the environment compared to the heat being extracted by the heat exchangers is presented in Fig. 10. The rate of heat lost can be found using the change in enthalpy of the circulating water, calculated according to

\[\dot{Q}_{tot}=\dot{m}_{I}c_{p}\left(T_{s}-T_{r}\right) \tag{27}\]

\[\dot{Q}_{ThM_{i}}=\dot{m}_{HX_{i}}c_{p}\left(T_{HX_{i},in}-T_{HX_{i},out}\right) \tag{28}\]

\[\dot{Q}_{tot}=\dot{Q}_{amb}+\dot{Q}_{ThM_{1}}+\dot{Q}_{ThM_{2}} \tag{29}\]

where \(T_{s}\) is collected by TM1, \(T_{r}\) by TM2, \(T_{HX_{i},in}\) by TM10 and TM5, and \(T_{HX_{i},out}\) by TM12 and TM17, respectively. This efficiency measure is divided into the same time intervals as the mass flow rates (overall, cooling, heating, and steady state). The reported average efficiency of a full-scale district heating network is around 70% [37], while the lab-scale system is only around 28% efficient overall. There are many explanations for the discrepancy between these values. The load diversity in a DHN with many users is much higher than for just the two users being represented here, leading to a higher percentage of the network's operation being in the heating phase, where the lab-scale DHN performs comparably to a full-scale DHN. Additionally, the size of the thermal masses can also impact this metric, as smaller residential buildings require less heat than larger industrial buildings, thereby decreasing the time spent in the heating phase of operation. Furthermore, in this experiment, no effort was made to reduce the mass flow rate during the times when the buildings were not being heated. A more optimal control algorithm would have reduced the mass flow rate during the cooling periods, reducing the energy wasted and increasing efficiency. Finally, the full-scale efficiency is calculated for an entire year of operation, and heat losses vary drastically depending on the season and ambient temperature.

Figure 5: Comparison between the lab-scale and full-scale systems.

Figure 6: Median nondimensional thermal mass temperature with upper and lower quartiles for full-scale and lab-scale thermal masses.

The breakdown of the total heat lost to the environment by each of the components of the system is presented in Fig. 11. Most of the heat is lost by the pipes, followed by the two thermal masses. The heat lost by the thermal masses includes the heat lost through natural convection and the heat lost due to the cooling of the Peltier junctions. The heat lost by the heater is the smallest source of energy losses in the network. The relative energy loss between the components is fairly consistent between the different phases of operation.

## 6 Conclusion

This paper describes the design and validation of a dynamically similar lab-scale DHN. The equations used to model the system are presented, along with the dimensionless groups that describe the relevant dynamics of the system. Then, representative values for all components of a full-scale DHN based on current literature are provided, along with the corresponding desired lab-scale values. Finally, the scaled DHN is validated by recreating a simulated two-day period of operation and comparing the results to the desired values. The data is presented in the nondimensional form to allow direct comparison to the full-scale data. Future work will use the data collected from the lab-scale DHN to validate modeling techniques developed for use in the design of novel control algorithms.
2305.12038
Stabilized finite element methods for the time-spectral convection-diffusion equation
Discretizing a solution in the Fourier domain rather than the time domain presents a significant advantage in solving transport problems that vary smoothly and periodically in time, such as cardiorespiratory flows. The finite element solution of the resulting time-spectral formulation is investigated here for the convection-diffusion equations. In addition to the baseline Galerkin's method, we consider stabilized approaches inspired by the streamline upwind Petrov/Galerkin (SUPG), Galerkin/least square (GLS), and variational multiscale (VMS) methods. We also introduce a new augmented SUPG (ASU) method that, by design, produces a nodally exact solution in one dimension for piecewise linear interpolation functions. Comparing these five methods using 1D, 2D, and 3D canonical test cases shows while the ASU is most accurate overall, it exhibits stability issues in extremely oscillatory flows with a high Womersley number in 3D. The GLS method, which is identical to the VMS for this problem, presents an attractive alternative due to its excellent stability and reasonable accuracy.
Mahdi Esmaily, Dongjie Jia
2023-05-19T23:48:45Z
http://arxiv.org/abs/2305.12038v4
# Stabilized finite element methods for the time-spectral convection-diffusion equation ###### Abstract Discretizing a solution in the Fourier domain rather than the time domain presents a significant advantage in solving transport problems that vary smoothly and periodically in time, such as cardiorespiratory flows. The finite element solution of the resulting time-spectral formulation is investigated here for the convection-diffusion equations. In addition to the baseline Galerkin's method, we consider stabilized approaches inspired by the streamline upwind Petrov/Galerkin (SUPG), least square (LSQ), and variational multiscale (VMS) methods. We also introduce a new augmented SUPG (ASU) method that, by design, produces a nodally exact solution in one dimension for piecewise linear interpolation functions. Comparing these five methods using 1D, 2D, and 3D canonical test cases shows while the ASU is most accurate overall, it exhibits convergence issues in extremely oscillatory flows with a high Womersley number in 3D. The VMS method presents an attractive alternative due to its excellent convergence characteristics and reasonable accuracy. ## 1 Introduction In the past few decades, there has been an upward trend in the use of cardiorespiratory simulations for surgical design [1, 2], diagnosis [3], and patient-specific modeling [4]. Among various numerical methods, the finite element method has been an attractive choice for this purpose owing to its versatility in dealing with complex physics and complex geometries [5, 6, 7]. The prohibitive computational cost of these computations, on the other hand, has limited the use of this technology in the academic setting, especially when the solutions of inverse problems, such as optimization [8, 9, 10], parameter identification [11, 12, 13], and uncertainty quantification [14, 15, 16] is concerned. The cost of a computational fluid dynamics simulation scales with the dimension of the discrete problem, which is the product of the degrees of freedom for spatial discretization (i.e., number of grid points) and the number of time steps for time integration. The latter, in a time-dependent simulation, may well exceed the thousands as the time step size should be small enough to ensure accuracy and the time integration period should be long enough to ensure independence of the solution from the arbitrary initial condition [17]. Given this large number, the cost of a simulation can be dramatically reduced if one were to replace the time integration with a more cost-effective alternative. For this purpose, we propose discretizing the fluid problem in the frequency rather than time domain [18, 19, 20]. This choice is motivated by the fact that the transport variables in the cardiorespiratory system often vary smoothly and periodically in time. As a result, they can be well-approximated with only a handful of Fourier modes [21, 22]. This number, in contrast to thousands of time steps, presents a huge opportunity for reducing the dimensionality of the discrete problem and as a consequence, the cost of a typical cardiorespiratory simulation. To take advantage of these attractive properties, we need an efficient simulation scheme that robustly handles the complexities of cardiorespiratory flow simulations. In response, this study aims to construct a numerical method that is applicable to a broad range of flow conditions. 
Categorically, the grand challenge of solving incompressible Navier-Stokes equations in the frequency domain relies on the solution of three sub-problems. The first is to deal with the incompressibility constraint, the second is to ensure a stable solution under strongly convective regimes, and the third, which we leave for future studies, is to efficiently handle the mode coupling associated with the nonlinear convective acceleration term. The first sub-problem above, which can be replicated exclusively in the unsteady Stokes equation, has been the subject of our earlier studies [23, 22]. Our earlier feasibility study [23] showcased the possibility of reducing the cost of simulation by several orders of magnitude if one were to solve it in the frequency domain. The later study [22] generalized that method to avoid the use of complex arithmetic and allow for the use of similar interpolation functions for pressure and velocity, thus circumventing the inf-sup condition [24, 25, 26]. The focus of the present study is to deal with the strongly convective flows, which is the second sub-problem enumerated above. More concretely, the primary aim of this study is to identify a finite element method that produces a converged and accurate solution for a wide range of flow conditions. In doing so, we decouple this sub-problem from the third mode-coupling sub-problem above by strictly focusing on the linear unsteady convection-diffusion equation that is driven by a given steady flow. Additionally, we restrict our discussion to linear interpolation functions provided their widespread use in the discretization of complex cardiorespiratory geometries. In the context of conventional time formulation, it is well-known that Galerkin's method (GAL) produces nonphysical oscillations in strongly convective regimes [27]. The literature dedicated to dealing with this issue is too vast to recount here [28, 29, 30, 31]. Thus, we forgo discussion of methods such as discontinuity capturing [32, 33, 34, 35] and focus on a select few popular stabilization methods that are investigated in details in the following sections. A widely adopted technique to counteract instabilities associated with convection-dominant flows is to use an upwinding scheme by appropriately adjusting the test function weights in the upstream and downstream of the tested node [36, 37]. This strategy, known as the streamline upwind Petrov/Galerkin (SUPG) method, successfully generates stable and accurate results for a conventional time formulation by adding a direction-dependent diffusion to the underlying Galerkin's formulation. In the present study, we will investigate to what extent the GAL method issues carry over to the time-spectral form of the convection-diffusion equation and whether a SUPG-based method is successful at eliminating those issues. Besides the SUPG, other methods have been proposed to stabilize the solution of convection dominant flows. Among those, we consider a least-squares-based method (LSQ) that is built by adding a symmetric residual penalty term to the discrete form [38, 39, 40]. We also investigate a scheme based on the variational multiscale (VMS) method, which is constructed by modeling the unresolved scales in the discrete solution via its residual [41, 42, 43]. In addition to the three stabilized methods above, which are inspired by their conventional time formulations, we will introduce a method that is tailor-designed for the time-spectral form of the convection-diffusion equation. 
This method, which can be viewed as an augmented SUPG method (ASU), is designed to accomplish what the SUPG is designed to accomplish, but for the time-spectral rather than the steady-state version of the convection-diffusion equation. More specifically, we design the ASU to produce a nodally exact solution for a one-dimensional model problem for the time-spectral convection-diffusion equation, regardless of whether the solution is steady or unsteady.

The present article is organized as follows. In Section 2, we introduce a one-dimensional model problem to derive the five methods mentioned above. Later in Section 3, we discuss the extension of these methods to multiple dimensions. We draw conclusions in Section 4.

## 2 A 1D model problem

In order to rigorously develop an accurate technique for solving the convection-diffusion problem in multiple dimensions, we first turn to a simple one-dimensional problem. This 1D problem has historical significance for its roots in the development of the SUPG method. We also use this model problem to construct the ASU approach for the time-spectral convection-diffusion equations.

### Problem statement

Consider the unsteady convection of a neutral tracer \(\hat{\phi}(x,t)\) in a one-dimensional domain that is governed by \[\begin{split}\hat{\phi}_{,t}+a\hat{\phi}_{,x}&=\kappa\hat{\phi}_{,xx},\\ \hat{\phi}(0,t)&=0,\\ \hat{\phi}(L,t)&=\cos(\omega t),\end{split} \tag{1}\] where \(L\) is the domain size, \(\kappa\in\mathbb{R}^{+}\) is the diffusivity, \(\omega\in\mathbb{R}\) is the oscillation frequency of the boundary condition, and \(a\in\mathbb{R}\) is the convective velocity that is uniform in the entire domain. The initial condition is not specified in Eq. (1) because we are solely interested in its particular solution (i.e., \(\hat{\phi}(x,t)\) as \(t\rightarrow\infty\)) that is independent of the initial transient behavior of \(\hat{\phi}\) when \(\kappa>0\). Even though the boundary condition specified in Eq. (1) is expressed in the form of a uni-modal excitation, one can simply generalize what we discuss below to any arbitrary (but well-behaved) time-varying boundary condition \(f(t)\) given that \(\hat{\phi}\) is linear in terms of \(f\) and \(f\) can be expressed as the summation of trigonometric functions through Fourier transformation. Since we are interested in the time-spectral formulation of Eq. (1), we instead attempt to solve an equivalent problem that is \[\begin{split} i\omega\phi+a\phi_{,x}&=\kappa\phi_{,xx},\\ \phi(0)&=0,\\ \phi(L)&=1,\end{split} \tag{2}\] where \(i^{2}=-1\) and \[\hat{\phi}(x,t)=\text{Real}\left(\phi(x)e^{i\omega t}\right)=\phi_{r}\cos(\omega t)-\phi_{i}\sin(\omega t). \tag{3}\] In Eq. (3), \(\phi_{r}=\phi_{r}(x)\) and \(\phi_{i}=\phi_{i}(x)\) denote the real and imaginary components of \(\phi\). These two functions determine the overall amplitude \(|\phi|=\sqrt{\phi_{r}^{2}+\phi_{i}^{2}}\) of the solution and its phase shift \(\theta=\tan^{-1}(\phi_{i}/\phi_{r})\) relative to the boundary condition. Since \(\phi_{r}\) and \(\phi_{i}\) capture the overall behavior of the solution \(\hat{\phi}(x,t)\), we rely on these two functions to evaluate the various methods below.

### Exact solution

Since Eq.
(2) is a constant coefficient second-order ordinary differential equation, its solution can be obtained through elementary means and is \[\phi(x)=\frac{\exp\left(r_{1}\frac{x}{L}\right)-\exp\left(r_{2}\frac{x}{L}\right)}{\exp(r_{1})-\exp(r_{2})}, \tag{4}\] where \[r_{1,2}=P\pm\sqrt{P^{2}+iW^{2}}, \tag{5}\] are the roots of the characteristic polynomial and are expressed in terms of \[P=\frac{aL}{2\kappa}, \tag{6}\] \[W=L\sqrt{\frac{\omega}{\kappa}}, \tag{7}\] which are the Peclet and Womersley numbers, respectively. The Peclet number, which weighs convection relative to diffusion, is analogous to the Reynolds number and is typically much larger than one for engineering applications or blood flow in large blood vessels. The Womersley number, on the other hand, measures the importance of unsteady effects against diffusion and can vary from tens in major vessels to values much smaller than one elsewhere in the cardiovascular system [44, 45].

### Baseline Galerkin's method (GAL)

Galerkin's approximate solution is obtained from the weak form of Eq. (2) by multiplying it by a test function \(w^{h}(x)\) and integrating the diffusion term by parts while noting that \(w^{h}\) diminishes at the boundaries. The resulting problem statement is to find \(\phi^{h}(x)\) that satisfies the boundary conditions such that for any \(w^{h}(x)\), which diminishes at the boundaries, we have \[(w^{h},i\omega\phi^{h})+(w^{h},a\phi^{h}_{,x})+(w^{h}_{,x},\kappa\phi^{h}_{,x})=0, \tag{8}\] where \((f,g)=\int_{0}^{L}fg\mathrm{d}x\) denotes the inner product of functions \(f\) and \(g\). As detailed in Appendix A, Eq. (8) has an explicit solution for piecewise linear interpolation functions of uniform size \(h\). Taking \(N\) to be the number of elements (so that the nodal position \(x_{A}=hA\) for \(A=0,1,\cdots,N\)), the solution at node \(A\) is \[\phi^{h}(x_{A})=\frac{\rho_{1}^{A}-\rho_{2}^{A}}{\rho_{1}^{N}-\rho_{2}^{N}}, \tag{9}\] where \[\rho_{1,2}=\frac{1+2i\beta\pm\sqrt{\alpha^{2}-3\beta^{2}+6i\beta}}{1-\alpha-i\beta}, \tag{10}\] with \[\alpha=\frac{ah}{2\kappa}, \tag{11}\] \[\beta=\frac{\omega h^{2}}{6\kappa}. \tag{12}\] The two variables \(\alpha\) and \(\beta\) appearing in Eqs. (11) and (12) are the Peclet number and square of the Womersley number, respectively, that are defined based on the element size \(h\) rather than the domain size \(L\). Differently put, \(\alpha\) represents the relative magnitude of the convective term in comparison to the diffusive term at the element length scale. Similarly, \(\beta\) represents the relative magnitude of the acceleration term in comparison to the diffusive term at the element length scale. In steady-state flows, where \(\omega=0\) and thus \(\beta=0\), \(\rho_{1,2}=1,(1+\alpha)/(1-\alpha)\). This result, which has been established in the past [37], explains the nonphysical oscillatory nature of Galerkin's solution in strongly convective flows in which \(|\alpha|>1\). In these cases, \(\rho_{2}<0\) produces alternating signs for \(\phi^{h}(x_{A})\) as \(A\) switches between odd and even numbers. In the next section, we will show this behavior persists in unsteady flows when \(\beta>0\).

### The issue

As stated earlier, it is well established that the spatiotemporal Galerkin's method fails in strongly convective regimes by generating nonphysical oscillations in the solution. To what extent this issue persists in unsteady regimes is what we investigate below. To answer the above question, we considered the 1D model problem stated in Eq. (2).
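Before describing the setup of this numerical experiment, we note that both reference curves are available in closed form; the sketch below (our own illustration, assuming numpy and hypothetical parameter values) evaluates the exact solution of Eq. (4) and the nodal Galerkin solution of Eq. (9) so the two can be compared directly.

```python
import numpy as np

def exact_solution(x, L, P, W):
    """Exact solution of Eq. (2) via Eqs. (4)-(7)."""
    root = np.sqrt(P**2 + 1j * W**2)
    r1, r2 = P + root, P - root
    return (np.exp(r1 * x / L) - np.exp(r2 * x / L)) / (np.exp(r1) - np.exp(r2))

def galerkin_nodal(N, a, kappa, omega, L):
    """Nodal Galerkin solution from Eqs. (9)-(12) on N uniform linear elements."""
    h = L / N
    alpha = a * h / (2 * kappa)        # element Peclet number, Eq. (11)
    beta = omega * h**2 / (6 * kappa)  # element Womersley number squared, Eq. (12)
    s = np.sqrt(alpha**2 - 3 * beta**2 + 6j * beta)
    rho1 = (1 + 2j * beta + s) / (1 - alpha - 1j * beta)
    rho2 = (1 + 2j * beta - s) / (1 - alpha - 1j * beta)
    A = np.arange(N + 1)
    return (rho1**A - rho2**A) / (rho1**N - rho2**N)

# Hypothetical parameters with a right-to-left flow (a < 0), as considered below.
L, kappa, a, omega, N = 1.0, 1.0, -20.0, 6.0, 10
x = np.arange(N + 1) * L / N
phi = exact_solution(x, L, P=a * L / (2 * kappa), W=L * np.sqrt(omega / kappa))
phi_h = galerkin_nodal(N, a, kappa, omega, L)
print(np.max(np.abs(phi_h - phi)))  # nodal error of Galerkin's method
```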
We prescribe a flow that is from right to left (\(a<0\)) so that the solution's temporal oscillations (that are physical and caused by the unsteady boundary conditions) propagate into the computational domain. This problem setup allows us to better contrast the various methods. Although results are not presented here for \(a>0\), the error for those cases resembles what is presented below for \(a<0\). The real and imaginary parts of the solution obtained from Galerkin's method are compared against the exact solution from Eq. (4) in Figure 1. The results are obtained for a wide range of element Peclet and Womersley numbers to demonstrate three key observations. Firstly, in strongly convective flows (large \(\alpha\)), the error is dominated by nonphysical oscillations (Figure 1-(c,d)). These oscillations, which are similar to those of the steady problem, are generated due to the presence of sharp changes near the Dirichlet boundary. Secondly, at relatively small \(\alpha\) and \(\beta\), which correspond to cases where the mesh is sufficiently small, Galerkin's method provides a very good approximation of the solution (Figure 1-(a)). Lastly, in highly oscillatory but weakly convective flows, Galerkin's solution overshoots the exact solution near the oscillatory boundary (Figure 1-(b)). In the next section, we will discuss how various stabilization techniques overcome these issues.

Figure 1: The real (black) and imaginary (red) part of the solution to the 1D time-spectral convection-diffusion problem. The exact (dots) and Galerkin's (solid line) solutions are graphed for \(|\alpha|=0.1\) (top row), \(|\alpha|=10\) (bottom row), \(\beta=0.1\) (left column), and \(\beta=1\) (right column).

### Streamline upwind Petrov/Galerkin method

The SUPG method is constructed by modifying the way that the steady state version of Eq. (2) is tested [37]. Instead of testing the equation with \(w\) as shown in Eq. (8), the SUPG adds an upwinding contribution and tests it with \(w+\tau aw_{,x}\). The added term \(\tau aw_{,x}\), which is only active along the streamwise direction in multidimensional flows, increases the overall test function weight upstream of the tested point. With this adjustment, the SUPG method modifies Eq. (8) to \[\underbrace{(w^{h},i\omega\phi^{h})+(w^{h},a\phi^{h}_{,x})+(w^{h}_{,x},\kappa\phi^{h}_{,x})}_{\text{Baseline Galerkin}}+\underbrace{(\tau aw^{h}_{,x},i\omega\phi^{h}+a\phi^{h}_{,x})}_{\text{The SUPG terms}}=0. \tag{13}\] The parameter \(\tau\) in Eq. (13) is formulated such that the resulting approximate solution is nodally exact for steady-state flows with \(\omega=0\). Note that the contribution of the diffusion term to the SUPG terms is dropped in Eq. (13) since we are limiting our discussion to linear interpolation functions (hence \(\phi^{h}_{,xx}=0\)) and the integrals for the new SUPG terms are performed within each element. On a side note, one may wish to only retain the quasi-steady SUPG term in Eq. (13) and drop the \((\tau aw^{h}_{,x},i\omega\phi^{h})\) term. As one would expect, our numerical experiments show that this alternative formulation produces less accurate results across the board. Hence, we solely consider the form shown in Eq. (13) for the remainder of the present study. The two added SUPG terms effectively modify the convective velocity to \(\tilde{a}=a-i\omega\tau a\) and the diffusion coefficient to \(\tilde{\kappa}=\kappa+\tau a^{2}\). That effectively modifies \(\alpha=ah/(2\kappa)\) to \(\tilde{\alpha}=\tilde{a}h/(2\tilde{\kappa})\).
Therefore, the new numerical solution must be computed by substituting \(\alpha\) with \(\tilde{\alpha}\) in Eq. (10). Recalling that \(\beta=0\) in steady regimes, that results in \(\tilde{\rho}_{1,2}=1,(1+\tilde{\alpha})/(1-\tilde{\alpha})\). Thus, the SUPG solution computed based on \(\tilde{\rho}_{1,2}\) will be nodally exact when [36] \[\tilde{\rho}_{1,2}^{A}=\exp(r_{1,2}\frac{x_{A}}{L}), \tag{14}\] where \(A\) is an exponent on the left-hand side. From the two conditions imposed by Eq. (14), one is already satisfied given that \(r_{1}=0\) (since \(W=0\)) and \(\tilde{\rho}_{1}=1\). It is from the second condition that one can obtain a relationship for \(\tau\). In its exact form, that relationship is \[\tau=\frac{h}{2a}\left(\coth\alpha-\frac{1}{\alpha}\right). \tag{15}\] To simplify computation and the extension of Eq. (15) to multiple dimensions, \(\tau\) is often approximated as \(\tau=\frac{h}{2a}(1+9\alpha^{-2})^{-\frac{1}{2}}\). The resulting \(\tau\), which asymptotically behaves the same as the exact \(\tau\), is often written as [46, 39, 43] \[\tau=(\tau_{\text{conv}}^{-2}+\tau_{\text{diff}}^{-2})^{-\frac{1}{2}}, \tag{16}\] \[\tau_{\text{conv}}^{-1}=\frac{2a}{h}, \tag{17}\] \[\tau_{\text{diff}}^{-1}=\frac{12\kappa}{h^{2}}. \tag{18}\] Although the above method is designed for the steady regime, one may directly apply it to the time-spectral form of the convection-diffusion equation. That entails computing \(\tau\) from Eq. (16) and plugging it into Eq. (13) to compute the numerical solution \(\phi^{h}\). Later in Section 2.9, we will discuss how this method behaves if it were to be used in unsteady regimes where \(\omega\neq 0\).

### Variational multiscale method

The VMS method is constructed [41, 47, 42] by modeling the scales in \(\phi\) that are not resolved by \(\phi^{h}\), namely \(\phi^{\prime}\), via the residual of the original PDE, i.e., \(r(\phi^{h})\). More specifically, to build this method we set \[\phi=\phi^{h}+\phi^{\prime}, \tag{19}\] with \[\phi^{\prime}=-\tau r(\phi^{h}), \tag{20}\] in which \[r(\phi^{h})=i\omega\phi^{h}+a\phi^{h}_{,x}-\kappa\phi^{h}_{,xx}. \tag{21}\] With these definitions, the VMS problem statement becomes similar to that of Galerkin's method in Eq. (8) when \(\phi^{h}\) is replaced with \(\phi\) from Eq. (19). The result is \[\underbrace{(w^{h},i\omega\phi^{h})+(w^{h},a\phi^{h}_{,x})+(w^{h}_{,x},\kappa\phi^{h}_{,x})}_{\text{Baseline Galerkin}}+\underbrace{(\tau aw^{h}_{,x},i\omega\phi^{h}+a\phi^{h}_{,x})}_{\text{The SUPG terms}}+\underbrace{(i\omega\tau w^{h},i\omega\phi^{h}+a\phi^{h}_{,x})}_{\text{The new VMS terms}}=0, \tag{22}\] where the new VMS terms, which are absent in the VMS time formulation, appear here due to the acceleration term taking the form of a source term in the frequency formulation of the convection-diffusion problem. In deriving Eq. (22), we neglected \((w^{h},\kappa\phi^{\prime}_{,xx})\) and also \(-\kappa\phi^{h}_{,xx}\) when computing \(\phi^{\prime}\) from Eqs. (20) and (21). In the time formulation of the VMS method, these terms are dropped by neglecting the small scales in the acceleration term and using the fact that the shape functions are linear. One may retain these terms in the frequency formulation by integrating them by parts. Doing so will introduce an additional \(-(2i\omega\tau w^{h}_{,x},\kappa\phi^{h}_{,x})\) term in Eq. (22), thus modifying the effective viscosity by \(-2i\omega\tau\kappa\).
The choice to omit this term is a holistic one, which arises from comparing the performance of the two methods and selecting the one that is more accurate across the entire parameter space. The results of the numerical experiments for that alternative method, although not presented here, show its inferior performance in comparison to what is considered here in Eq. (22) across all studied cases.

### Least-squares method

The LSQ method is constructed by adding to the baseline Galerkin's method a symmetric penalty term that is proportional to the residual of the original PDE [38, 39, 40]. That penalty term is also scaled by \(\tau\) to recover the steady SUPG term, thus producing \[\left(w^{h},r(\phi^{h})\right)+\sum_{e}\left(r(w^{h}),\tau r(\phi^{h})\right)_{\Omega_{e}}=0, \tag{23}\] where the integrals under summation are performed in the element interiors \(\Omega_{e}\) given that \(\phi^{h}_{,xx}\) is not defined on the element boundaries. Using Eq. (21), we can simplify Eq. (23) for linear shape functions and express it in a more explicit form, which is \[\underbrace{(w^{h},i\omega\phi^{h})+(w^{h},a\phi^{h}_{,x})+(w^{h}_{,x},\kappa\phi^{h}_{,x})}_{\text{Baseline Galerkin}}+\underbrace{(\tau aw^{h}_{,x},a\phi^{h}_{,x})}_{\text{The steady SUPG term}}+\underbrace{(i\omega\tau w^{h},i\omega\phi^{h})+(2i\omega\tau w^{h}_{,x},\kappa\phi^{h}_{,x})}_{\text{The new LSQ terms}}=0. \tag{24}\] In contrast to the VMS method, where we neglected the diffusion term when calculating the residual, we retain that term for the LSQ, producing the last term in Eq. (24). Our numerical experiments show this term has a small, but positive, effect on the accuracy of the results.

### A new augmented SUPG method

Our overall approach to obtaining a new stabilization method, which we call augmented SUPG or ASU, for the time-spectral convection-diffusion equation is similar to that of the SUPG. The key difference is that, in this case, we enforce both conditions in Eq. (14) while taking into account unsteady flows where \(\beta\neq 0\). Enforcing a nodally exact solution for general unsteady flows entails computing two distinct \(\hat{\rho}_{1,2}\) in terms of modified \(\hat{\alpha}\) and \(\hat{\beta}\) and making sure they satisfy a relationship similar to that of Eq. (14). The element Peclet and Womersley number, on the other hand, can be modified by properly adjusting the oscillation frequency \(\omega\), convective velocity \(a\), and/or diffusivity \(\kappa\). Of course, out of these three parameters, only two can be independently adjusted given that the solution to the discrete form is unchanged when all three are scaled by a common prefactor. Although the final result is independent of which two parameters we select, adjusting \(\omega\) and \(\kappa\) slightly simplifies the overall derivation process. Thus, the ASU method is derived by seeking \(\hat{\omega}\) and \(\hat{\kappa}\) or equivalently \[\hat{\alpha}=\frac{ah}{2\hat{\kappa}}, \tag{25}\] \[\hat{\beta}=\frac{\hat{\omega}h^{2}}{6\hat{\kappa}}, \tag{26}\] such that the discrete solution to the time-spectral convection-diffusion problem becomes nodally exact for piecewise linear shape functions. The process of computing \(\hat{\alpha}\) and \(\hat{\beta}\) so that the resulting \(\hat{\rho}_{1,2}(\hat{\alpha},\hat{\beta})\) satisfy the corresponding condition imposed by Eq. (14) is rather lengthy and thus moved to Appendix B.
The result of that process is \[\hat{\alpha}=\frac{3\sinh\alpha}{\cosh\gamma+2\cosh\alpha}, \tag{27}\] \[i\hat{\beta}=\frac{\cosh\gamma-\cosh\alpha}{\cosh\gamma+2\cosh\alpha}, \tag{28}\] where \[\gamma=\sqrt{\alpha^{2}+6i\beta}. \tag{29}\] One can readily verify that Eq. (27) is a more general form of the corresponding relationship for the SUPG method, as setting \(\beta=0\) produces the well-known relationship \(\hat{\alpha}=\tanh\alpha\) while \(\hat{\beta}=\beta\) is unchanged at zero. Knowing the desired forms of \(\hat{\alpha}\) and \(\hat{\beta}\), our next task is to design \(\hat{\omega}\) and \(\hat{\kappa}\) so that those desired forms are achieved. Since \(\hat{\kappa}=\kappa\alpha/\hat{\alpha}\), from Eq. (27) we can write \[\hat{\kappa}=\kappa\alpha\coth\alpha+\kappa\alpha\left(\frac{\cosh\gamma-\cosh\alpha}{3\sinh\alpha}\right). \tag{30}\] The first term on the right-hand side is identical to that of the traditional SUPG method and can be written as \(\kappa+\tau a^{2}\) based on Eq. (15). To simplify the second term, note that from Eqs. (27) and (28) \[\frac{\cosh\gamma-\cosh\alpha}{3\sinh\alpha}=\frac{i\hat{\beta}}{\hat{\alpha}}. \tag{31}\] Relating this result to \(\hat{\omega}\) and \(\tau_{\rm diff}\) via Eqs. (11), (18), (25), and (26) produces \[\hat{\kappa}=\underbrace{\kappa}_{\text{Baseline Galerkin}}+\underbrace{a^{2}\tau}_{\text{Conventional SUPG}}+\underbrace{\kappa_{\text{ASU}}}_{\text{The new ASU term}} \tag{32}\] where \[\kappa_{\text{ASU}}=2i\hat{\omega}\tau_{\rm diff}\kappa. \tag{33}\] This strikingly simple expression suggests that although the SUPG method remains mostly intact for the time-spectral form of the convection-diffusion equation, its diffusion must be augmented by \(\kappa_{\text{ASU}}\). Note that \(\kappa_{\text{ASU}}\) is imaginary to the leading order as it is pre-multiplied by \(i\) and, as we will see later, \(\hat{\omega}\) is real up to the first order with respect to \(\beta\). That implies \(\kappa_{\text{ASU}}\) primarily acts to diffuse the scalar field between its real and imaginary components (i.e., the in-phase and out-of-phase solutions). Similar to \(\hat{\kappa}\), one can compute \(\hat{\omega}\) by observing \(\hat{\omega}/\omega=(\hat{\beta}\alpha)/(\beta\hat{\alpha})\). That yields \[\hat{\omega}=\left(\frac{\alpha}{i\beta}\right)\left(\frac{\cosh\gamma-\cosh\alpha}{3\sinh\alpha}\right)\omega. \tag{34}\] This seemingly complex relationship can be significantly simplified if \(\beta\lessapprox 1\). As detailed in Appendix C, using asymptotic expansions at the limits \(\beta/\alpha^{2}\ll 1\) and \(\beta/\alpha^{2}\gg 1\) produces another strikingly simple relationship for \(\hat{\omega}\), that is \[\hat{\omega}\approx\omega\exp(i\omega\tau), \tag{35}\] in which \(\tau\) is the approximate SUPG time scale from Eq. (16) that is already calculated when implementing any of the stabilized methods discussed above. The leading order term in the relationship for \(\hat{\omega}\) (Eq. (35)) with regard to \(\beta\) (viz., a measure of flow unsteadiness relative to diffusion at the element scale) is \(\omega\). In other words, the exponential prefactor in Eq. (35) goes to one as \(\omega\to 0\) or \(\beta\to 0\). This observation is, of course, expected by design as \(\omega\) must not be altered at the steady state limit. Thus, to highlight minute differences between the exact and approximate \(\hat{\omega}\) from Eqs.
(34) and (35), respectively, we subtracted this leading order term from the two before comparing them in Figure 2. Although shown only up to \(\beta=1\), the approximate \(\hat{\omega}\) closely agrees with its exact form for \(\beta\lessapprox 5\). For \(\beta\) values higher than that, Eq. (34) exhibits a highly nonlinear behavior that is not captured by Eq. (35). However, finding an alternative approximate form for \(\hat{\omega}\) that captures this extreme regime is of little interest as it signifies situations where the mesh resolution is extremely poor in comparison to the level of detail present in the solution. As is evident in Figure 2, \(\hat{\omega}\), past its leading order term, follows a behavior similar to that of \(\tau\). This well-known behavior is characterized by the diffusive and convective limits at which \(\tau\approx\tau_{\rm diff}\) and \(\tau\approx\tau_{\rm conv}\), respectively. Similarly, in the diffusive limit at which \(\alpha\ll 1\), \(\hat{\omega}\) becomes independent of \(\alpha\). In this limit, \(\hat{\omega}/\omega\), past its constant leading order, becomes proportional to \(i\beta\) (Eq. (97)). As we will see in the next section, it is at this limit that the ASU provides a more accurate estimate than the conventional SUPG method. The two methods, however, both converge in the convective limit at which \(\alpha\gg 1\) since the difference between \(\omega\) and \(\hat{\omega}\) diminishes at a rate proportional to \(\alpha^{-1}\) (Eq. (94)). Having a relationship for \(\hat{\omega}\) and \(\hat{\kappa}\), the ASU method is obtained by modifying the baseline Galerkin's method to \[\underbrace{(w^{h},i\hat{\omega}\phi^{h})+(w^{h},a\phi^{h}_{,x})+(w^{h}_{,x},\kappa\phi^{h}_{,x})}_{\text{Galerkin with a modified $\omega$}}+\underbrace{(\tau aw^{h}_{,x},a\phi^{h}_{,x})}_{\text{The steady SUPG term}}+\underbrace{(w^{h}_{,x},\kappa_{{}_{\text{ASU}}}\phi^{h}_{,x})}_{\text{The new ASU term}}=0, \tag{36}\] in which \(\hat{\omega}\) and \(\kappa_{{}_{\text{ASU}}}\) are computed from Eqs. (35) and (33), respectively. Considering Eq. (36), the simplest way to implement the ASU method is to

1. start from Galerkin's method but use \(\hat{\omega}\) instead of \(\omega\) throughout the computations,
2. add the quasi-steady SUPG term, and
3. augment the physical diffusivity with the dominantly-imaginary \(\kappa_{{}_{\text{ASU}}}\).

Figure 2: The real (black) and imaginary (red) components of \(1-\hat{\omega}/\omega\) using the exact (dots) and approximate (solid line) expressions as a function of \(\alpha\) for \(\beta=0.01\), \(0.1\), and \(1.0\).

In the next section, we will see how the five methods discussed above behave in the 1D unsteady convection-diffusion setting. Later in Section 3, we will discuss how they can be generalized to multiple dimensions and behave in more complex settings.

### Comparison of various methods in 1D

All the methods discussed above can be brought into a form similar to that of Galerkin's method in Eq. (8) but with a potentially modified \(\omega\), \(a\), and \(\kappa\). Once they are in this standard form, we can identify how each alters the baseline frequency, velocity, and diffusivity parameters as they appear in Galerkin's discrete form. The result of this process is condensed in Table 1. The following are the key observations:

1. The quasi-steady SUPG term appears in all stabilized methods, thus increasing the physical diffusivity by \(a^{2}\tau\).
The inclusion of this term is crucial in obtaining stable results in highly convective regimes. This observation is in accordance with what has been traditionally observed in the time formulation of the convection-diffusion equation [46].
2. All stabilization methods will be similar in strongly convective regimes that are not highly oscillatory (\(\alpha\gg 1\) and \(\beta\lessapprox 1\)). In such regimes, the overall behavior of these methods is determined by the \(a^{2}\tau\) term in \(\hat{\kappa}\) rather than \(i\omega\tau\).
3. Although the LSQ and ASU are derived from entirely independent procedures, they are very similar. Setting aside \(\hat{a}\), which is identical between the two methods, they both produce the same \(\hat{\omega}\) up to the second order leading term for small \(\beta\) (or equivalently small \(\omega\tau\)). The effective diffusivity \(\hat{\kappa}\) also takes a very similar form and is identical between the two up to the second order leading term for small \(\beta\). That is so since at small \(\alpha\), \(\tau\approx\tau_{\rm diff}\) and at large \(\alpha\), \(a^{2}\tau\) and \(\kappa\) become the leading order terms in \(\hat{\kappa}\). The results presented in the following sections will demonstrate the similarity of the two approaches.
4. All methods agree on how \(\omega\) must be adjusted up to the second order leading term with respect to \(\beta\) (or \(\omega\tau\)). If we divide the SUPG and VMS discrete forms by \(1-i\omega\tau\) and \(1-2i\omega\tau\), respectively, so that all methods leave the convective velocity unchanged at \(\hat{a}=a\), we can conclude that \(\hat{\omega}/\omega=1+i\omega\tau+O\Big{(}(\omega\tau)^{2}\Big{)}\) for all stabilization methods. By the same token, the VMS, LSQ, and ASU agree on the form of the effective diffusivity \(\hat{\kappa}\) at small \(\alpha\) when \(a^{2}\tau\) is negligible (as stated earlier, at large \(\alpha\) all stabilization methods coincide). This observation explains the lower accuracy of the SUPG method in comparison to the other three, as shown below.

To put the above observations into a more concrete perspective, we have simulated the 1D model problem described earlier in Section 2.1 using all five methods. The results are shown in Figure 3 for three isolated cases at which \(\beta=3\) and \(\alpha=0.5\), \(5\), and \(50\). To evaluate the accuracy of these methods more comprehensively, we have repeated these computations at a higher resolution of \(\alpha\) and calculated the \(l^{2}\)-norm of the error for each simulation. The results are reported in Figure 4 for \(\beta=0.01\), \(0.1\), and \(1\). The general observations from these numerical results agree with what we discussed above. Namely:

1. All stabilization methods (i.e., SUPG, VMS, LSQ, and ASU) will be similar at sufficiently large \(\alpha\), producing a reasonably accurate result. Galerkin's method, as we saw earlier in Section 2.4, suffers from nonphysical oscillations in such regimes.
2. For sufficiently small \(\beta\), the VMS, LSQ, and ASU methods behave similarly. Additionally, the GAL and SUPG behave similarly if both \(\alpha\) and \(\beta\) are small. In such regimes, the first group generates more accurate results than the second group.
3. Overall, the ASU is the most accurate method as it is tailor-designed for this problem. The error associated with this method is purely due to the approximations associated with Eqs. (16) and (35). Differently put, the error shown in Figure 4 for the ASU method reduces to zero if \(\tau\) and \(\hat{\omega}\) are calculated exactly from Eqs. (15) and (34). Our numerical experiments show that the majority of the error is due to the approximate \(\tau\) (Eq. (16)). It is only at relatively high \(\beta\) that the approximation used in calculating \(\hat{\omega}\) (Eq. (35)) translates to an error in \(\phi^{h}\).
4. The second most accurate method, after the ASU, varies depending on the regime under consideration. Roughly speaking, the VMS and LSQ produce accurate results except for large \(\beta\) and small \(\alpha\). They are followed by the SUPG, which remains reasonably accurate for the bulk of the parameter space. The same can be said for the GAL if we exclude high \(\alpha\).

\begin{table} \begin{tabular}{c|c c c c c} & GAL & SUPG & VMS & LSQ & ASU \\ \hline \hline \(\hat{\omega}\) & \(\omega\) & \(\omega\) & \((1-i\omega\tau)\omega\) & \((1+i\omega\tau)\omega\) & \(\exp(i\omega\tau)\omega\) \\ \(\hat{a}\) & \(a\) & \((1-i\omega\tau)a\) & \((1-2i\omega\tau)a\) & \(a\) & \(a\) \\ \(\hat{\kappa}\) & \(\kappa\) & \(\kappa+a^{2}\tau\) & \(\kappa+a^{2}\tau\) & \((1+2i\omega\tau)\kappa+a^{2}\tau\) & \((1+2i\hat{\omega}\tau_{\rm diff})\kappa+a^{2}\tau\) \\ \hline \hline \end{tabular} \end{table}

Table 1: The effective oscillation frequency \(\hat{\omega}\), convective velocity \(\hat{a}\), and diffusivity \(\hat{\kappa}\) for a given method if they were to be formulated in the form of Eq. (8).

Figure 3: The real (black) and imaginary (red) components of the solution for the 1D model problem using various methods (solid line) in comparison to the exact solution (dots). \(\beta=3\) for all cases and \(\alpha\) is \(0.5\), \(5\), and \(50\) for the left, middle, and right columns, respectively.

Figure 4: The numerical solution error for the 1D model problem as a function of \(\alpha\) for (a) \(\beta=1\), (b) \(\beta=0.1\), and (c) \(\beta=0.01\) using the Galerkin's (dash-double-dotted), SUPG (dash-dotted), VMS (dotted), LSQ (dashed), and ASU (solid) approach.

## 3 Generalization to multiple dimensions

In what follows, we briefly state the discrete form of the methods considered in the previous section and discuss their practical implementation in a finite element code that uses purely real arithmetic. We do so since the majority of existing codes use purely real variables, and they are linked against linear solvers that are purposely built for handling real arithmetic. For modeling the transport of a non-reactive solute in multiple dimensions, we consider the general form of Eq. (2) that is \[\begin{split} i\omega\phi+\mathbf{a}\cdot\nabla\phi&=\nabla\cdot(\kappa\nabla\phi)+q\ \ \text{in}\ \ \Omega,\\ \phi&=g\ \ \text{on}\ \ \Gamma,\end{split}\] where \(q\) is a given source term and \(g\) is the prescribed boundary data. The stabilization parameter \(\tau\) is computed using a relationship similar to its 1D counterpart from Eq.
(16) that is [30, 39, 43] \[\tau=(\tau_{\text{conv}}^{-2}+\tau_{\text{diff}}^{-2})^{-\frac{1}{2}}, \tag{45}\] \[\tau_{\text{conv}}^{-2}=\mathbf{a}^{T}\mathbf{G}\mathbf{a}, \tag{46}\] \[\tau_{\text{diff}}^{-2}=9\kappa^{2}\mathbf{G}:\mathbf{G}, \tag{47}\] where \[\mathbf{G}=\left(\frac{\partial\mathbf{x}}{\partial\mathbf{\xi}}\right)\left(\frac{\partial\mathbf{x}}{\partial\mathbf{\xi}}\right)^{T}, \tag{48}\] is the metric tensor computed from the mapping between the physical \(\mathbf{x}\) and parent element \(\mathbf{\xi}\) coordinates. The coefficient \(9\) appearing in Eq. (47), which may be optimized for a given element type, is set to that constant here to reduce the number of variables that could influence the relative accuracy of various methods.

The generalization of the VMS method from Eq. (22) to multiple dimensions is also straightforward and amounts to finding \(\phi_{r}^{h}\) and \(\phi_{i}^{h}\) such that for any \(w_{r}^{h}\) and \(w_{i}^{h}\) we have \[B_{\text{V}}(w_{r}^{h},w_{i}^{h};\phi_{r}^{h},\phi_{i}^{h})=F(w_{r}^{h},w_{i}^{h}), \tag{49}\] where \[B_{\text{V}}(w_{r}^{h},w_{i}^{h};\phi_{r}^{h},\phi_{i}^{h})=B_{\text{G}}-(\tau\mathbf{a}\cdot\nabla w_{r}^{h},\omega\phi_{i}^{h}-\mathbf{a}\cdot\nabla\phi_{r}^{h})+(\tau\omega w_{r}^{h},\omega\phi_{r}^{h}+\mathbf{a}\cdot\nabla\phi_{i}^{h})+(\tau\mathbf{a}\cdot\nabla w_{i}^{h},\omega\phi_{r}^{h}+\mathbf{a}\cdot\nabla\phi_{i}^{h})+(\tau\omega w_{i}^{h},\omega\phi_{i}^{h}-\mathbf{a}\cdot\nabla\phi_{r}^{h}). \tag{50}\] As stated earlier, including the contribution of the diffusion term in the unresolved scale will negatively impact the overall accuracy of the VMS method. Similarly, integrating \(-(\tau\mathbf{a}\cdot\nabla w_{r}^{h},\omega\phi_{i}^{h})\) by parts (and its complementary term) to combine it with \((\tau\omega w_{r}^{h},\mathbf{a}\cdot\nabla\phi_{i}^{h})\) slightly reduces the accuracy of this scheme. Hence, we do not employ these variants in what follows.

The process is similar for the LSQ and can be stated as finding \(\phi_{r}^{h}\) and \(\phi_{i}^{h}\) such that for any \(w_{r}^{h}\) and \(w_{i}^{h}\) we have \[B_{\text{L}}(w_{r}^{h},w_{i}^{h};\phi_{r}^{h},\phi_{i}^{h})=F(w_{r}^{h},w_{i}^{h}), \tag{51}\] where \[B_{\text{L}}(w_{r}^{h},w_{i}^{h};\phi_{r}^{h},\phi_{i}^{h})=B_{\text{G}}-(\tau\omega w_{r}^{h},\omega\phi_{r}^{h})+(\tau\mathbf{a}\cdot\nabla w_{r}^{h},\mathbf{a}\cdot\nabla\phi_{r}^{h})-(2\tau\omega\nabla w_{r}^{h},\kappa\nabla\phi_{i}^{h})-(\tau\omega w_{i}^{h},\omega\phi_{i}^{h})+(\tau\mathbf{a}\cdot\nabla w_{i}^{h},\mathbf{a}\cdot\nabla\phi_{i}^{h})+(2\tau\omega\nabla w_{i}^{h},\kappa\nabla\phi_{r}^{h}). \tag{52}\]

Finally, the generalization of the ASU from Eq. (36) is stated as finding \(\phi_{r}^{h}\) and \(\phi_{i}^{h}\) such that for any \(w_{r}^{h}\) and \(w_{i}^{h}\) we have \[B_{\text{A}}(w_{r}^{h},w_{i}^{h};\phi_{r}^{h},\phi_{i}^{h})=F(w_{r}^{h},w_{i}^{h}), \tag{53}\] where \[B_{\text{A}}(w_{r}^{h},w_{i}^{h};\phi_{r}^{h},\phi_{i}^{h})=\hat{B}_{\text{G}}+(\tau\mathbf{a}\cdot\nabla w_{r}^{h},\mathbf{a}\cdot\nabla\phi_{r}^{h})+(\nabla w_{r}^{h},\kappa_{{}_{\text{ASU}}}\nabla\phi_{r}^{h})+(\tau\mathbf{a}\cdot\nabla w_{i}^{h},\mathbf{a}\cdot\nabla\phi_{i}^{h})+(\nabla w_{i}^{h},\kappa_{{}_{\text{ASU}}}\nabla\phi_{i}^{h}). \tag{54}\] In this equation, \(\hat{B}_{\text{G}}\) is identical to \(B_{\text{G}}\) except that \(\omega\) is replaced by \(\hat{\omega}\) from Eq. (35), that is, \(\hat{\omega}=\exp(i\omega\tau)\omega\). Also, \(\kappa_{{}_{\text{ASU}}}\) is computed from Eq.
(33) with \(\tau_{\text{diff}}\) from Eq. (47), yielding \[\kappa_{{}_{\text{ASU}}}=\frac{2i}{3}\left(\mathbf{G}:\mathbf{G}\right)^{-\frac{1}{2}}\hat{\omega}. \tag{55}\] Since \(\hat{\omega}\) and \(\kappa_{{}_{\text{ASU}}}\) are complex-valued variables, Eq. (54) must be further simplified to obtain a purely real expression. Denoting the real and imaginary components of \(\hat{\omega}\) by \(\hat{\omega}_{r}\) and \(\hat{\omega}_{i}\), respectively, and those of \(\kappa_{{}_{\text{ASU}}}\) by \(\kappa_{r_{\text{ASU}}}\) and \(\kappa_{i_{\text{ASU}}}\), respectively, the final expression for the ASU method becomes \[\begin{split} B_{\text{A}}(w_{r}^{h},w_{i}^{h};\phi_{r}^{h},\phi_{i}^{h})=&-(w_{r}^{h},\hat{\omega}_{r}\phi_{i}^{h})-(w_{r}^{h},\hat{\omega}_{i}\phi_{r}^{h})+(w_{r}^{h},\mathbf{a}\cdot\nabla\phi_{r}^{h})+(\nabla w_{r}^{h},\kappa\nabla\phi_{r}^{h})\\ &+(w_{i}^{h},\hat{\omega}_{r}\phi_{r}^{h})-(w_{i}^{h},\hat{\omega}_{i}\phi_{i}^{h})+(w_{i}^{h},\mathbf{a}\cdot\nabla\phi_{i}^{h})+(\nabla w_{i}^{h},\kappa\nabla\phi_{i}^{h})\\ &+(\tau\mathbf{a}\cdot\nabla w_{r}^{h},\mathbf{a}\cdot\nabla\phi_{r}^{h})+(\nabla w_{r}^{h},\kappa_{r_{\text{ASU}}}\nabla\phi_{r}^{h})-(\nabla w_{r}^{h},\kappa_{i_{\text{ASU}}}\nabla\phi_{i}^{h})\\ &+(\tau\mathbf{a}\cdot\nabla w_{i}^{h},\mathbf{a}\cdot\nabla\phi_{i}^{h})+(\nabla w_{i}^{h},\kappa_{r_{\text{ASU}}}\nabla\phi_{i}^{h})+(\nabla w_{i}^{h},\kappa_{i_{\text{ASU}}}\nabla\phi_{r}^{h}).\end{split} \tag{56}\]

### Convergence properties

Before we test these methods in more realistic 2D and 3D settings, it will be instructive to investigate their convergence behavior. Since there is no time stepping in solving time-spectral equations, we use the term convergence in this context to examine whether the linear system produced by these methods is non-singular or prone to ill-conditioning. Such a property has implications for the ability of the underlying iterative linear solver to produce a converged solution (hence our use of that terminology). To ensure the convergence of the linear solver, one may study the positive definiteness of the underlying linear system. For the baseline Galerkin's method, the linear system will be positive definite if \[E_{\mathrm{G}}=\mathbf{c}^{\mathrm{T}}\mathbf{K}_{\mathrm{G}}\mathbf{c}, \tag{57}\] is always greater than or equal to zero for any \(\mathbf{c}\in\mathbb{R}^{N}\) and it is zero if and only if \(\mathbf{c}=0\). In Eq. (57), \(\mathbf{K}_{\mathrm{G}}\) denotes the linear system obtained from Galerkin's method. It is rather straightforward to show that \[E_{\mathrm{G}}=B_{\mathrm{G}}(w_{r}^{h},w_{i}^{h};w_{r}^{h},w_{i}^{h}), \tag{58}\] given that \(\mathbf{K}_{\mathrm{G}}\) is extracted from Eq. (41) and \(w_{r}^{h}\) and \(w_{i}^{h}\) are also arbitrary test functions, enabling us to evaluate Eq. (57) for an arbitrary \(\mathbf{c}\). Therefore, from Eqs. (41) and (58) we have \[\begin{split} E_{\mathrm{G}}&=-(w_{r}^{h},\omega w_{i}^{h})+(w_{r}^{h},\mathbf{a}\cdot\nabla w_{r}^{h})+(\nabla w_{r}^{h},\kappa\nabla w_{r}^{h})+(w_{i}^{h},\omega w_{r}^{h})+(w_{i}^{h},\mathbf{a}\cdot\nabla w_{i}^{h})+(\nabla w_{i}^{h},\kappa\nabla w_{i}^{h})\\ &=\int_{\Omega}\kappa\|\nabla w^{h}\|^{2}\mathrm{d}\Omega\geq 0,\end{split} \tag{59}\] where \(w^{h}=[w_{r}^{h}\ w_{i}^{h}]^{\mathrm{T}}\). Since \(E_{\mathrm{G}}=0\) only when \(w^{h}=0\), the linear system produced by the baseline Galerkin's method will be positive definite. Thus, we expect Galerkin's method to converge regardless of the regime under consideration.
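This conclusion is easy to check numerically. The sketch below (our own verification code, not from the original study) assembles the 1D real-arithmetic Galerkin matrix for linear elements and confirms that its symmetric part, which controls the sign of \(E_{\mathrm{G}}\), is positive definite once the Dirichlet rows and columns are removed.

```python
import numpy as np

def galerkin_block_matrix(N, a, kappa, omega, L=1.0):
    """Real-arithmetic 1D Galerkin matrix acting on [phi_r; phi_i] (linear elements)."""
    h = L / N
    M = np.zeros((N + 1, N + 1)); C = np.zeros_like(M); D = np.zeros_like(M)
    Me = h / 6.0 * np.array([[2.0, 1.0], [1.0, 2.0]])      # (w, phi)
    Ce = a / 2.0 * np.array([[-1.0, 1.0], [-1.0, 1.0]])    # (w, a phi_x)
    De = kappa / h * np.array([[1.0, -1.0], [-1.0, 1.0]])  # (w_x, kappa phi_x)
    for e in range(N):
        idx = np.ix_([e, e + 1], [e, e + 1])
        M[idx] += Me; C[idx] += Ce; D[idx] += De
    # Dirichlet conditions at both ends: keep interior rows/columns only.
    i = slice(1, N)
    Kd, Mw = C[i, i] + D[i, i], omega * M[i, i]
    return np.block([[Kd, -Mw], [Mw, Kd]])

K = galerkin_block_matrix(N=20, a=-50.0, kappa=1.0, omega=30.0)
eig_min = np.linalg.eigvalsh(0.5 * (K + K.T)).min()  # symmetric part of the matrix
print(eig_min > 0)  # True: positive definite, consistent with Eq. (59)
```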
Our numerical experiment results corroborate this conclusion. The same procedure can be applied to the remaining stabilized methods. For the SUPG method, it follows from Eq. (44) that \[\begin{split} E_{\mathrm{S}}&=E_{\mathrm{G}}+(\tau\mathbf{a}\cdot\nabla w_{r}^{h},\mathbf{a}\cdot\nabla w_{r}^{h}-\omega w_{i}^{h})+(\tau\mathbf{a}\cdot\nabla w_{i}^{h},\mathbf{a}\cdot\nabla w_{i}^{h}+\omega w_{r}^{h})\\ &=\int_{\Omega}\Big[\kappa\|\nabla w^{h}\|^{2}+\tau\|\mathbf{a}\cdot\nabla w^{h}\|^{2}+2\tau\omega w_{r}^{h}\mathbf{a}\cdot\nabla w_{i}^{h}\Big]\mathrm{d}\Omega.\end{split} \tag{60}\] While the first and second terms under the integral in Eq. (60) are always positive for a nonzero \(w^{h}\), the third term could be positive or negative. \(E_{\mathrm{S}}\) can, in fact, become negative if \(\omega>O(a/h)\) or \(\omega>O(\kappa^{2}/(ah^{3}))\) in strongly convective or diffusive regimes, respectively, thus introducing negative eigenvalues in the tangent matrix. With some eigenvalues being positive and some being negative, an eigenvalue may land near zero (given the significant variability in the mesh size, for instance), thus preventing the convergence of the linear solver.

Starting from Eq. (50) and applying the same analysis to the VMS method yields \[\begin{split} E_{\mathrm{V}}&=E_{\mathrm{G}}+(\tau\omega w_{r}^{h},\omega w_{r}^{h})+(\tau\mathbf{a}\cdot\nabla w_{r}^{h},\mathbf{a}\cdot\nabla w_{r}^{h})-(2\tau\mathbf{a}\cdot\nabla w_{r}^{h},\omega w_{i}^{h})\\ &\qquad+(\tau\omega w_{i}^{h},\omega w_{i}^{h})+(\tau\mathbf{a}\cdot\nabla w_{i}^{h},\mathbf{a}\cdot\nabla w_{i}^{h})+(2\tau\mathbf{a}\cdot\nabla w_{i}^{h},\omega w_{r}^{h})\\ &=E_{\mathrm{G}}+\int_{\Omega}\tau\Big[\|\omega w_{i}^{h}\|^{2}+\|\mathbf{a}\cdot\nabla w_{r}^{h}\|^{2}-2\omega w_{i}^{h}\mathbf{a}\cdot\nabla w_{r}^{h}\Big]\mathrm{d}\Omega+\int_{\Omega}\tau\Big[\|\omega w_{r}^{h}\|^{2}+\|\mathbf{a}\cdot\nabla w_{i}^{h}\|^{2}+2\omega w_{r}^{h}\mathbf{a}\cdot\nabla w_{i}^{h}\Big]\mathrm{d}\Omega\\ &=\int_{\Omega}\Big[\kappa\|\nabla w^{h}\|^{2}+\tau\|\omega w_{i}^{h}-\mathbf{a}\cdot\nabla w_{r}^{h}\|^{2}+\tau\|\omega w_{r}^{h}+\mathbf{a}\cdot\nabla w_{i}^{h}\|^{2}\Big]\mathrm{d}\Omega\geq 0.\end{split} \tag{61}\] Therefore, the VMS method, similar to Galerkin's method, produces a positive definite tangent matrix and is expected to behave well in terms of the convergence of the linear solver. Repeating the same process for the LSQ method by using Eq. (52) produces \[\begin{split} E_{\rm L}=E_{\rm G}&-(\tau\omega w^{h}_{r},\omega w^{h}_{r})+(\tau\mathbf{a}\cdot\nabla w^{h}_{r},\mathbf{a}\cdot\nabla w^{h}_{r})-(2\tau\omega\nabla w^{h}_{r},\kappa\nabla w^{h}_{i})\\ &-(\tau\omega w^{h}_{i},\omega w^{h}_{i})+(\tau\mathbf{a}\cdot\nabla w^{h}_{i},\mathbf{a}\cdot\nabla w^{h}_{i})+(2\tau\omega\nabla w^{h}_{i},\kappa\nabla w^{h}_{r})\\ =&\int_{\Omega}\Big[\kappa\|\nabla w^{h}\|^{2}-\tau\omega^{2}\|w^{h}\|^{2}+\tau\|\mathbf{a}\cdot\nabla w^{h}\|^{2}\Big]{\rm d}\Omega,\end{split} \tag{62}\] which, owing to the second term under the integral, is not necessarily positive definite. Similar to the SUPG method, the second term becomes dominant if \(\omega>O(a/h)\) in strongly convective regimes. In the strongly diffusive limit, the requirement is \(\omega>O(\kappa/h^{2})\), which is less stringent than that of the SUPG. Thus, we expect the LSQ to exhibit more convergence issues in comparison to the SUPG. Lastly, we use Eq. (56) to evaluate the ASU.
The result is \[\begin{split} E_{\rm A}&=-(w^{h}_{r},\hat{\omega}_{r}w^{h}_{i})+(w^{h}_{r},\hat{\omega}_{i}w^{h}_{r})+(w^{h}_{r},\mathbf{a}\cdot\nabla w^{h}_{r})+(\nabla w^{h}_{r},\kappa\nabla w^{h}_{r})\\ &\quad+(w^{h}_{i},\hat{\omega}_{r}w^{h}_{r})+(w^{h}_{i},\hat{\omega}_{i}w^{h}_{i})+(w^{h}_{i},\mathbf{a}\cdot\nabla w^{h}_{i})+(\nabla w^{h}_{i},\kappa\nabla w^{h}_{i})\\ &\quad+(\tau\mathbf{a}\cdot\nabla w^{h}_{r},\mathbf{a}\cdot\nabla w^{h}_{r})+(\nabla w^{h}_{r},\kappa_{r_{\rm ASU}}\nabla w^{h}_{r})-(\nabla w^{h}_{r},\kappa_{i_{\rm ASU}}\nabla w^{h}_{i})\\ &\quad+(\tau\mathbf{a}\cdot\nabla w^{h}_{i},\mathbf{a}\cdot\nabla w^{h}_{i})+(\nabla w^{h}_{i},\kappa_{r_{\rm ASU}}\nabla w^{h}_{i})+(\nabla w^{h}_{i},\kappa_{i_{\rm ASU}}\nabla w^{h}_{r})\\ &=\int_{\Omega}\Big[\hat{\omega}_{i}\|w^{h}\|^{2}+\kappa\|\nabla w^{h}\|^{2}+\tau\|\mathbf{a}\cdot\nabla w^{h}\|^{2}+\kappa_{r_{\rm ASU}}\|\nabla w^{h}\|^{2}\Big]{\rm d}\Omega.\end{split} \tag{63}\] At the outset, the ASU may appear to produce a positive definite tangent matrix as well. Nonetheless, that is not necessarily the case as \(\hat{\omega}_{i}\) and \(\kappa_{r_{\rm ASU}}\) may become negative. While the former only occurs at \(\omega\tau>\pi\), the latter occurs even at small values of \(\omega\). However, for the system to potentially produce negative eigenvalues in those scenarios, we must have \(\kappa+\kappa_{r_{\rm ASU}}<0\), which implies \(\tau\omega^{2}>O(\kappa/h^{2})\). Thus, in either case, one expects the ASU to produce a positive definite tangent matrix at small \(\omega\). We can, in fact, show that the ASU will be similar to the LSQ in that it is limited to \(\omega<O(a/h)\) and \(\omega<O(\kappa/h^{2})\) in strongly convective and diffusive regimes, respectively, to remain convergent. To lessen those requirements for the convergence of the ASU, one can limit the angle \(\omega\tau\) that appears in the definition of \(\hat{\omega}\) in Eq. (35). One way to achieve that is to set an upper bound on \(\tau\), namely \(\tau_{\rm max}\), and compute \(\hat{\omega}\) using the following expression rather than Eq. (35): \[\hat{\omega}=\omega\exp(i\omega\min(\tau,\tau_{\rm max})). \tag{64}\] Our numerical experiments show that \(\tau_{\rm max}\) is set by the diffusive rather than the convective limit. Thus, considering the first two terms under the integral in Eq. (63), we must have \(|\hat{\omega}_{i}|<O(\kappa/h^{2})\) for the ASU to converge. Since the first order approximation of \(\hat{\omega}_{i}\) is \(\omega^{2}\tau\), the above condition translates to \(\tau<O(\kappa/(h^{2}\omega^{2}))\), which provides us with an upper limit on \(\tau\), namely \(\tau_{\rm max}\). Using Eq. (18) to express \(\kappa/h^{2}\) in terms of \(\tau_{\rm diff}\) produces \[\tau_{\rm max}^{-1}=\pi\omega^{2}\tau_{\rm diff}, \tag{65}\] where \(\tau_{\rm diff}\) is computed from Eq. (47) in a multi-dimensional setting. The pre-factor \(\pi\) incorporated in Eq. (65) is purely empirical and selected to be the smallest value that ensures convergence of the ASU for a wide range of \(\alpha\) and \(\beta\). It is important to note that this value is based on numerical calculations using tetrahedral elements and may differ if one were to use a different interpolation function. In summary, for the methods under consideration, only the baseline Galerkin's and VMS are formally guaranteed to produce a positive definite tangent matrix.
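In an implementation, the cap above amounts to only a few lines. The sketch below (our own illustration; the helper names are hypothetical) computes the limited \(\hat{\omega}\) of Eq. (64) with \(\tau_{\rm max}\) from Eq. (65), along with the corresponding \(\kappa_{{}_{\rm ASU}}\) of Eq. (55).

```python
import numpy as np

def omega_hat_limited(omega, tau, tau_diff):
    """ASU effective frequency, Eq. (64), with the cap of Eq. (65).
    tau is the stabilization parameter of Eq. (45) and tau_diff its
    diffusive component from Eq. (47); both are element-level quantities."""
    tau_max = 1.0 / (np.pi * omega**2 * tau_diff)  # Eq. (65)
    return omega * np.exp(1j * omega * min(tau, tau_max))  # Eq. (64)

def kappa_asu(omega_hat, GG):
    """Augmented (dominantly imaginary) diffusivity of Eq. (55);
    GG stands for G:G computed from the metric tensor of Eq. (48)."""
    return (2j / 3.0) * omega_hat / np.sqrt(GG)
```

The real and imaginary parts of the returned values enter the discrete form of Eq. (56) as \(\hat{\omega}_{r}\), \(\hat{\omega}_{i}\), \(\kappa_{r_{\rm ASU}}\), and \(\kappa_{i_{\rm ASU}}\).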
The remaining methods, namely the SUPG, LSQ, and ASU, could fail to converge in strongly oscillatory regimes with a large Womersley number. Whether limiting the value of \(\hat{\omega}_{i}\) in an ad hoc manner ensures the convergence of the ASU is what we investigate below through a 2D and a 3D test case. These cases also permit us to evaluate the accuracy of these methods in more realistic settings.

### A 2D test case

In Section 2.8, we discussed how one should modify the oscillation frequency \(\omega\), convective velocity \(a\), and diffusivity \(\kappa\) to obtain a nodally exact solution in a 1D setting. In 2D and 3D settings, these modifications may not be ideal as they do not account for the direction-dependency of these parameters. Considering the conventional SUPG method, for instance, it is constructed to increase diffusion in the streamwise direction, leaving the diffusion in the crosswind direction unchanged. This selective modification of \(\kappa\) is crucial as an increase in \(\kappa\) in the crosswind direction will overly dampen the solution. To amplify these effects, we investigate the behavior of various methods in a setting where a boundary layer is present in directions both tangent and normal to the background convection \(\mathbf{a}\). This 2D problem, which is shown schematically in Figure 5, is governed in the frequency domain by \[\begin{split} i\omega\phi+a\phi_{,x}&=\kappa(\phi_{,xx}+\phi_{,yy}),\\ \phi(x,0)&=\phi(L,y)=0,\\ \phi(x,L)&=\phi(0,y)=1.\end{split} \tag{66}\] Since Dirichlet boundary conditions are imposed on all four boundaries and the flow is from left to right, this case produces a flow-facing boundary layer on the right and two flow-tangent boundary layers on the top and bottom. The boundary layer on the right is similar in nature to what we considered in the 1D problem in Section 2.9. The other two, however, test the behavior of various methods when there is a sharp gradient in \(\phi\) in the crosswind direction, which is a situation that was not tested by that earlier 1D case. The selection of this 2D case was motivated by the existence of a closed-form solution that permits us to evaluate the accuracy of all methods. As detailed in Appendix D, the closed-form solution is obtained in the form of a series using the method of separation of variables and is \[\begin{split}\phi(x,y)&=\frac{\sinh\left(\sqrt{i}W\frac{y}{L}\right)}{\sinh(\sqrt{i}W)}+\sum_{n=1}^{\infty}\left[A_{n}\exp\left(r_{-n}\frac{x}{L}\right)+B_{n}\exp\left(r_{+n}\frac{x}{L}\right)\right]\sin\left(\frac{n\pi y}{L}\right),\\ A_{n}&=\frac{(a_{n}+b_{n})\exp(r_{+n})-b_{n}}{\exp(r_{+n})-\exp(r_{-n})},\ \ B_{n}=\frac{b_{n}-(a_{n}+b_{n})\exp(r_{-n})}{\exp(r_{+n})-\exp(r_{-n})},\\ a_{n}&=\frac{2(1-\cos(n\pi))}{n\pi},\ \ b_{n}=\frac{2n\pi\cos(n\pi)}{iW^{2}+(n\pi)^{2}},\\ r_{\pm n}&=P\pm\sqrt{P^{2}+iW^{2}+(n\pi)^{2}},\end{split} \tag{67}\] where the Peclet \(P\) and Womersley \(W\) number definitions are identical to those of the 1D problem in Eqs. (6) and (7), respectively. In practice, we perform the summations in Eq. (67) for 200 terms, which is sufficiently large for the truncation error to be negligible in comparison to the numerical error associated with the methods tested below. In Figure 6, we have compared the exact solution from Eq. (67) against the numerical solutions discussed earlier in Section 3. These calculations are performed on a \(10\times 10\) bilinear grid using \(W=10^{3/2}\approx 31.6\) and \(P=100/(8\pi)\approx 40\).
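For reference, the series in Eq. (67) is straightforward to evaluate; the sketch below (our own utility, assuming numpy) sums its first 200 terms, the same truncation used for the reference solutions in this section.

```python
import numpy as np

def exact_2d(x, y, L, P, W, n_terms=200):
    """Closed-form solution of the 2D problem, Eq. (67), truncated at n_terms."""
    sW = np.sqrt(1j) * W
    phi = np.sinh(sW * y / L) / np.sinh(sW)
    for n in range(1, n_terms + 1):
        npi = n * np.pi
        a_n = 2.0 * (1.0 - np.cos(npi)) / npi
        b_n = 2.0 * npi * np.cos(npi) / (1j * W**2 + npi**2)
        root = np.sqrt(P**2 + 1j * W**2 + npi**2)
        rm, rp = P - root, P + root
        den = np.exp(rp) - np.exp(rm)
        A_n = ((a_n + b_n) * np.exp(rp) - b_n) / den
        B_n = (b_n - (a_n + b_n) * np.exp(rm)) / den
        phi += (A_n * np.exp(rm * x / L) + B_n * np.exp(rp * x / L)) * np.sin(npi * y / L)
    return phi

# Evaluate at the domain center for the regime shown in Figure 6.
print(exact_2d(0.5, 0.5, L=1.0, P=100 / (8 * np.pi), W=10**1.5))
```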
For this specific regime, Galerkin's method performs relatively well as the quickly varying boundary condition (high Womersley number) causes \(\phi\) to diffuse before reaching the right boundary, thus avoiding the formation of a boundary layer that could otherwise create non-physical oscillations. For the remaining stabilized methods, we observe a trend similar to that of the 1D model problem, with the ASU and SUPG being the most and least accurate methods, respectively, and the VMS and LSQ being somewhere in the middle.

Figure 5: The 2D convection-diffusion problem under consideration in a square-shaped domain of size \(L\times L\).

Figure 6: The real part of the exact solution from Eq. (67) compared against numerical solutions from Eqs. (40), (43), (49), (51), and (53) for the 2D problem shown in Figure 5 at \(W=10^{3/2}\) and \(P=100/(8\pi)\). The left column is the deviation of the numerical solutions from the exact solution.

To generalize the above observation, we have repeated these computations using a wider range of conditions. The results are presented in terms of the \(l^{2}\)-norm error in Figure 7. These results are in agreement with what we discussed earlier, namely the ASU, LSQ, and VMS methods' similar behavior at small \(\beta\) or \(W\) and their higher accuracy in comparison to the SUPG at small \(\alpha\) or \(P\) (Figure 7-(c)). Also, at large \(\alpha\) or \(P\), the solutions from all stabilized methods coincide, while that of the GAL diverges. As we have seen with the 1D case, the ASU is the most accurate method for this 2D case as well.

The observation that the various methods perform similarly in this 2D case and the earlier 1D case, despite the presence of flow-tangent boundary layers, was somewhat expected. Considering the multidimensional discrete forms of the various methods in Section 3, the added stabilization terms are either tested by \(\tau\mathbf{a}\cdot\nabla w^{h}\) or \(\tau\omega w^{h}\). These two sets of terms become important when either \(P\) or \(W\) is large. In the case of a large \(P\), where stabilization is required due to the strong convection, \(\tau\mathbf{a}\cdot\nabla w^{h}\) is only active in the streamwise direction, thus correctly avoiding the unnecessary introduction of crosswind terms in the discrete form. In the case of a large \(W\), where stabilization is required due to the fast varying boundary conditions, \(\tau\omega w^{h}\) acts the same in all directions, thus correctly not distinguishing between the flow-facing and flow-tangent boundary layers. Note this argument also applies to the ASU as \(\hat{\omega}/\omega\) and \(\kappa_{\mbox{\tiny ASU}}/\kappa\) are dependent on \(\tau\omega\) and \(\hat{\omega}\tau_{\mbox{\tiny diff}}\), respectively, which properly act the same in all directions.

In terms of convergence, none of the methods exhibit any issue for the investigated range of conditions. For our iterative linear solver, we used the generalized minimal residual method (GMRES) with a tolerance of \(10^{-4}\) [48] that is implemented in our in-house linear solver [49, 50, 51]. The number of GMRES iterations, when averaged over all simulated \(P\) above, was larger for Galerkin's method. This larger number is caused by the larger number of iterations at higher \(P\), where Galerkin's method diverges (Table 2). Among the stabilized methods, the average number of iterations increases with \(W\) except for the VMS method, which produces a positive definite matrix.
This observation is expected as the SUPG, LSQ, and ASU methods can produce a tangent matrix with a wide spectrum of eigenvalues as \(\omega\), and correspondingly \(W\), increases.

\begin{table} \begin{tabular}{c|c c c c c} \(W\) & GAL & SUPG & VMS & LSQ & ASU \\ \hline \hline 10 & 54 & 20 & 20 & 21 & 21 \\ 31.6 & 66 & 27 & 22 & 31 & 27 \\ 100 & 81 & 64 & 21 & 50 & 64 \\ \hline ave. & 67 & 37 & 21 & 34 & 37 \\ \hline \hline \end{tabular} \end{table}

Table 2: The average number of GMRES iterations for the 2D cases shown in Figure 7.

Figure 7: Error in the numerical solution of the 2D model problem (Figure 5) on a \(10\times 10\) grid as a function of the Peclet number \(P\) for Womersley \(W=100\) (a), \(W=10^{3/2}\) (b), and \(W=10\) (c).

### A 3D test case

In the previous case, the edges of the bilinear elements were perfectly aligned with the coordinate directions and that of the flow. This alignment permits one to directly select a direction-dependent element size \(h\) to compute a direction-dependent \(\tau\). Even though such an exercise will produce more accurate results in that idealized setting, it will be hard to generalize that performance to practical cases where there is no clear definition of \(h\) for a given direction. Here, we relied on a scalar definition of \(\tau\) that estimated \(h\) from the metric tensor (Eq. (48)). Nevertheless, to stress-test the methods in a general setting where there is no clear alignment between the physical and parent element coordinate systems, we consider the case shown in Figure 8.

Figure 8: The schematic of the 3D model problem defined in a cylindrical domain with a uniform velocity prescribed in the axial direction. A cut of the tetrahedral mesh that is used for all simulations is shown.

This case involves the unsteady convection-diffusion of a scalar field in a 3D cylindrical domain. The imposed boundary conditions and prescribed velocity field are selected to allow for the extraction of a closed-form solution for \(\phi\). Namely, the velocity is uniform and set to \(a\) along the cylinder's axial direction in the entire domain. Dirichlet boundary conditions are imposed at the two ends, with one end being zero and the other end being one in the frequency domain. A Neumann boundary condition is imposed on the remaining cylindrical shell face, thus effectively reducing this 3D problem to the 1D problem described in Section 2.1. Despite the fact that the exact solution for this 3D problem, which is only a function of position along the axial direction, is identical to that of the 1D example (namely, Eq. (4)), the numerical solution behaves very differently. The differentiating factor for the 3D case is that the domain is discretized using tetrahedral elements and the convective velocity does not necessarily align with a parent element coordinate direction. As a result, the overall behavior of a given numerical method relies on the multidimensional definition of the stabilization parameter (in particular \(\tau\)) and whether it correctly captures the effective element size in the streamwise and crosswind directions. The specifics of this numerical experiment are as follows. The mesh is composed of 2350 nodes and 8604 tetrahedral elements, a section of which is shown in Figure 8. The length-to-diameter ratio of the cylinder is 5. The linear system produced by the various numerical methods is solved using the GMRES iterative method [48].
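Since the solver works in purely real arithmetic, one way to organize the solve is to cast the complex discrete system in the block form implied by Section 3 before handing it to GMRES. A minimal sketch using scipy is given below (our own illustration, not the in-house solver; note that the GMRES tolerance argument is named `rtol` in recent scipy versions and `tol` in older ones).

```python
import numpy as np
from scipy.sparse.linalg import gmres

def solve_block_real(Kc, bc, tol=1e-4):
    """Solve the complex system Kc x = bc in purely real block form,
    [[Kr, -Ki], [Ki, Kr]] [xr; xi] = [br; bi], using GMRES."""
    Kr, Ki = Kc.real, Kc.imag
    K = np.block([[Kr, -Ki], [Ki, Kr]])
    b = np.concatenate([bc.real, bc.imag])
    x, info = gmres(K, b, rtol=tol)  # use tol=... on older scipy releases
    assert info == 0, "GMRES did not converge"
    n = Kc.shape[0]
    return x[:n] + 1j * x[n:]
```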
The tolerance on the linear solver is set to \(10^{-4}\), which is verified to be sufficiently small so that the reported errors are independent of that tolerance. Similar to the 2D test case above, these computations are performed using our in-house finite element solver that has been extensively used for cardiovascular simulations in the past [52, 53, 54, 55]. The results of these calculations at three Peclet numbers of \(P=10\), \(100\), and \(1000\) and a Womersley number of \(W=100\) are shown in Figure 9. Consistent with what we have observed earlier, Galerkin's method generates non-physical oscillations at high \(P\). All stabilized methods produce reasonably accurate results at small and large \(P\), where either the modifications to the baseline Galerkin's method diminish or those modifications are dominated by the SUPG term, respectively. At the intermediate \(P\), however, the SUPG method exhibits an under-damped behavior by generating oscillations at a higher amplitude than the exact solution. On the other hand, the ASU and, to a lesser extent, the VMS exhibit an over-damped behavior. The LSQ and GAL produce relatively accurate predictions for this case.

These computations are repeated for a wider range of \(P\) and \(W\), and the solution error is reported in Figure 10 to allow for a more quantitative comparison of the accuracy of all methods. The relative behavior of these methods at \(W=10\) and \(W=31.6\) is similar to what we observed in the 1D and 2D settings. The ASU outperforms other techniques, the GAL diverges at large \(P\), and the SUPG is less accurate at smaller \(P\). For the highly oscillatory \(W=100\) condition, a new picture emerges. The ASU accuracy is degraded at small \(P\) and, to a larger extent, at intermediate \(P\). As we saw earlier in Figure 9, the GAL and LSQ produce the most accurate results in those regimes. The loss of accuracy of the ASU, in this case, is caused by limiting the \(\omega\tau\) angle in the definition of \(\hat{\omega}\) in Eq. (64). Owing to the large value of \(W=100\), the ASU tangent matrix will become ill-conditioned without that treatment for reasons discussed in Section 3.1. With this treatment, the ASU converges for all cases considered in Figure 10.

Figure 9: The behavior of the real component of the exact and numerical solutions for the 3D problem shown in Figure 8 at Womersley number \(W=100\) and three values of Peclet number \(P=10\) (left column), \(P=100\) (middle column), and \(P=1000\) (right column).

Figure 10: Error in the solution of various methods for the 3D problem shown schematically in Figure 8 as a function of the Peclet number \(P\) at Womersley number of \(W=100\) (a; top), \(W=31.6\) (b; middle), and \(W=10\) (c; bottom).

The convergence behavior of all methods is studied in Table 3 in terms of its influence on the average number of GMRES iterations. Since the reported figures are averaged over all simulations with different values of \(P\), they reflect outlier simulations that require many iterations to converge. As we saw earlier with the 2D case, the GAL method has a relatively high iteration number at all \(W\) owing to the cases with high \(P\) where it generates non-physical oscillations. The SUPG method converges relatively quickly in all cases, which is somewhat unexpected given that it does not formally produce a positive definite matrix. The behavior of the other three stabilized methods is in accordance with what we discussed in Section 3.1 regarding the properties of their tangent matrices.
The VMS method, with a positive definite tangent matrix, exhibits excellent convergence. While the LSQ and ASU methods also exhibit the same behavior at \(W=10\) and \(W=31.6\), they struggle at \(W=100\), confirming our earlier analysis of these methods. In the extremely oscillatory regimes, the LSQ fails to converge for \(P<50\), causing the linear solver to reach the set maximum number of iterations of 1000. Although the ASU converges for all cases before reaching that set limit, it requires a larger number of iterations on average. ## 4 Conclusions We introduced and compared five methods for the solution of the time-spectral convection-diffusion equation. These methods are evaluated based on accuracy and convergence properties. The baseline Galerkin's method (Eq. (41)) for the time-spectral equation behaves very similarly to its conventional time formulation counterpart, producing non-physical oscillations in strongly convective regimes. Including the streamline upwind Petrov/Galerkin (SUPG) stabilization term in the formulation removes those non-physical oscillations. Nevertheless, the resulting method (Eq. (44)) tends to produce a solution that overshoots physical oscillations at high Womersley numbers. The second stabilized method investigated was the least-square method (LSQ). Although the LSQ method (Eq. (52)) produces reasonably accurate results across the board, it has poor convergence characteristics, particularly in multi-dimensional settings. We also explored the variational multiscale (VMS) method (Eq. (50)), which demonstrates accuracy comparable to the LSQ method. Moreover, it produces a formally positive-definite tangent matrix, leading to excellent convergence properties as verified by our numerical experiments. The last method introduced was the augmented SUPG (ASU), which we tailor-designed to produce a nodally exact solution for the time-spectral convection-diffusion equation in 1D. This method (Eq. (56)) achieves the highest accuracy across all tested cases, except for the 3D case at the highest Womersley number, where a modification (Eq. (64)) is necessary for convergence. Considering all factors, the VMS method is an attractive option for achieving a balance between accuracy, convergence behavior, and implementation simplicity. The ASU, on the other hand, is an optimal choice for regimes with low to moderate element Womersley number (\(\beta\leq O(1)\)) if accuracy takes precedence. ## References * [1] Mahdi Esmaily, Tain-Yen Hsia, and Alison Marsden. The assisted bidirectional Glenn: a novel surgical approach for first stage single ventricle heart palliation. _The Journal of Thoracic and Cardiovascular Surgery_, 149(3):699-705, 2015. * [2] Alison L Marsden, Jeffrey A Feinstein, and Charles A Taylor. A computational framework for derivative-free optimization of cardiovascular geometries. _Computer Methods in Applied Mechanics and Engineering_, 197(21-24):1890-1905, 2008. \begin{table} \begin{tabular}{c|c c c c c} \(W\) & GAL & SUPG & VMS & LSQ & ASU \\ \hline \hline 10 & 160 & 102 & 102 & 105 & 105 \\ 31.6 & 129 & 71 & 68 & 80 & 81 \\ 100 & 128 & 69 & 44 & 610 & 292 \\ \hline ave. & 139 & 81 & 71 & 265 & 159 \\ \hline \hline \end{tabular} \end{table} Table 3: The average number of GMRES iterations for the 3D cases shown in Figure 10. * [3] Roel S Driessen, Ibrahim Danad, Wijnand J Stuijfzand, Pieter G Raijmakers, Stefan P Schumacher, Pepijn A van Diemen, Jonathon A Leipsic, Juhani Knuuti, S Richard Underwood, Peter M van de Ven, et al. 
Comparison of coronary computed tomography angiography, fractional flow reserve, and perfusion imaging for ischemia diagnosis. _Journal of the American College of Cardiology_, 73(2):161-173, 2019. * [4] Charles A Taylor and CA Figueroa. Patient-specific modeling of cardiovascular mechanics. _Annual review of biomedical engineering_, 11:109-134, 2009. * [5] Charles A Taylor, Thomas JR Hughes, and Christopher K Zarins. Finite element modeling of blood flow in arteries. _Computer methods in applied mechanics and engineering_, 158(1-2):155-196, 1998. * [6] I.E. Vignon-Clementel, A.L. Marsden, and J.A. Feinstein. A primer on computational simulation in congenital heart disease for the clinician. _Progress in Pediatric Cardiology_, 30:3-13, 2010. * [7] Y. Bazilevs, M.C. Hsu, D. Benson, S. Sankaran, and A.L. Marsden. Computational fluid-structure interaction: methods and application to a total cavopulmonary connection. _Computational Mechanics_, 45:77-89, 2009. * [8] Weiguang Yang, Jeffrey A Feinstein, and Alison L Marsden. Constrained optimization of an idealized Y-shaped baffle for the Fontan surgery at rest and exercise. _Computer Methods in Applied Mechanics and Engineering_, 199(33-36):2135-2149, 2010. * [9] Mahdi Esmaily, Francesco Migliavacca, Irene Vignon-Clementel, Tain-Yen Hsia, and Alison Marsden. Optimization of shunt placement for the Norwood surgery using multi-domain modeling. _Journal of Biomechanical Engineering_, 134(5):051002, 2012. * [10] Aekaansh Verma, Mahdi Esmaily, Jessica Shang, Richard Figliola, Jeffrey A Feinstein, Tain-Yen Hsia, and Alison L Marsden. Optimization of the assisted bidirectional Glenn procedure for first stage single ventricle repair. _World Journal for Pediatric and Congenital Heart Surgery_, 9(2):157-170, 2018. * [11] Justin S Tran, Daniele E Schiavazzi, Abhay B Ramachandra, Andrew M Kahn, and Alison L Marsden. Automated tuning for parameter identification and uncertainty quantification in multi-scale coronary simulations. _Computers & fluids_, 142:128-138, 2017. * [12] Mehran Mirramezani and Shawn C Shadden. A distributed lumped parameter model of blood flow. _Annals of Biomedical Engineering_, 48(12):2870-2886, 2020. * [13] Martin R Pfaller, Jonathan Pham, Aekaansh Verma, Nathan M Wilson, David W Parker, Weiguang Yang, and Alison L Marsden. Automated generation of 0D and 1D reduced-order models of patient-specific blood flow. _arXiv preprint arXiv:2111.04878_, 2021. * [14] Carl V Phillips. Quantifying and reporting uncertainty from systematic errors. _Epidemiology_, 14(4):459-466, 2003. * [15] Sethuraman Sankaran and Alison L Marsden. A stochastic collocation method for uncertainty quantification and propagation in cardiovascular simulations. _Journal of Biomechanical Engineering_, 133(3):031001, 2011. * [16] DE Schiavazzi, A Doostan, G Iaccarino, and AL Marsden. A generalized multi-resolution expansion for uncertainty propagation with application to cardiovascular modeling. _Computer methods in applied mechanics and engineering_, 314:196-221, 2017. * [17] Gregory Arbia, Chiara Corsini, Mahdi Esmaily, Alison L Marsden, Francesco Migliavacca, Giancarlo Pennati, Tain-Yen Hsia, Irene E Vignon-Clementel, Modeling of Congenital Hearts Alliance (MOCHA) Investigators, et al. Numerical blood flow simulation in surgical corrections: what do we need for an accurate analysis? _Journal of Surgical Research_, 186(1):44-55, 2014. * [18] Daniel Hupp, Peter Arbenz, and Dominik Obrist. A parallel Navier-Stokes solver using spectral discretisation in time. 
_International journal of computational fluid dynamics_, 30(7-10):489-494, 2016. * [19] Kenneth C Hall, Jeffrey P Thomas, and William S Clark. Computation of unsteady nonlinear flows in cascades using a harmonic balance technique. _AIAA Journal_, 40(5):879-886, 2002. * [20] Peter Arbenz, Daniel Hupp, and Dominik Obrist. Comparison of parallel time-periodic Navier-Stokes solvers. In _International Conference on Parallel Processing and Applied Mathematics_, pages 57-67. Springer, 2017. * [21] Chenwei Meng and Mahdi Esmaily. A complex-valued Stokes solver for simulation of time-periodic creeping flows. In _APS Division of Fluid Dynamics Meeting Abstracts_, pages W07-015, 2020. * [22] Mahdi Esmaily. A stabilized formulation for the solution of the incompressible unsteady Stokes equations in the frequency domain. _arXiv preprint arXiv:2202.04125_, 2022. * [23] Chenwei Meng, Anirban Bhattacharjee, and Mahdi Esmaily. A scalable spectral Stokes solver for simulation of time-periodic flows in complex geometries. _Journal of Computational Physics_, 445:110601, 2021. * [24] Olga Alexandrovna Ladyzhenskaya. _The mathematical theory of viscous incompressible flow_, volume 2. Gordon and Breach New York, 1969. * [25] Ivo Babuska. Error-bounds for finite element method. _Numerische Mathematik_, 16(4):322-333, 1971. * [26] Franco Brezzi. On the existence, uniqueness and approximation of saddle-point problems arising from Lagrangian multipliers. _ESAIM: Mathematical Modelling and Numerical Analysis-Modelisation Mathematique et Analyse Numerique_, 8(R2):129-151, 1974. * [27] Claes Johnson. _Numerical solution of partial differential equations by the finite element method_. Courier Corporation, 2012. * [28] Thomas JR Hughes. Recent progress in the development and understanding of SUPG methods with special reference to the compressible Euler and Navier-Stokes equations. _International journal for numerical methods in fluids_, 7(11):1261-1275, 1987. * [29] Leopoldo P Franca and Sergio L Frey. Stabilized finite element methods: II. The incompressible Navier-Stokes equations. _Computer Methods in Applied Mechanics and Engineering_, 99(2-3):209-233, 1992. * [30] Yuri Bazilevs, Kenji Takizawa, and Tayfun E Tezduyar. _Computational fluid-structure interaction: methods and applications_. John Wiley & Sons, 2013. * [31] Thomas JR Hughes, Guglielmo Scovazzi, and Leopoldo P Franca. Multiscale and stabilized methods. _Encyclopedia of computational mechanics second edition_, pages 1-64, 2018. * [32] T.J.R. Hughes, M. Mallet, and M. Akira. A new finite element formulation for computational fluid dynamics: II. beyond SUPG. _Computer Methods in Applied Mechanics and Engineering_, 54(3):341-355, 1986. * [33] Thomas JR Hughes and Michel Mallet. A new finite element formulation for computational fluid dynamics: IV. A discontinuity-capturing operator for multidimensional advective-diffusive systems. _Computer methods in applied mechanics and engineering_, 58(3):329-336, 1986. * [34] Tayfun E Tezduyar and YJ Park. Discontinuity-capturing finite element formulations for nonlinear convection-diffusion-reaction equations. _Computer methods in applied mechanics and engineering_, 59(3):307-325, 1986. * [35] Kenji Takizawa, Tayfun E Tezduyar, and Yuto Otoguro. Stabilization and discontinuity-capturing parameters for space-time flow computations with finite element and isogeometric discretizations. _Computational Mechanics_, 62:1169-1186, 2018. * [36] Thomas JR Hughes. A multidimensional upwind scheme with no crosswind diffusion. 
_Finite element methods for convection dominated flows, AMD 34_, 1979. * [37] Alexander N Brooks and Thomas JR Hughes. Streamline upwind/Petrov-Galerkin formulations for convection dominated flows with particular emphasis on the incompressible Navier-Stokes equations. _Computer methods in applied mechanics and engineering_, 32(1-3):199-259, 1982. * [38] Thomas JR Hughes, Leopoldo P Franca, and Gregory M Hulbert. A new finite element formulation for computational fluid dynamics: VIII. the Galerkin/least-squares method for advective-diffusive equations. _Computer methods in applied mechanics and engineering_, 73(2):173-189, 1989. * [39] Farzin Shakib, Thomas JR Hughes, and Zdenek Johan. A new finite element formulation for computational fluid dynamics: X. the compressible Euler and Navier-Stokes equations. _Computer Methods in Applied Mechanics and Engineering_, 89(1-3):141-219, 1991. * [40] Farzin Shakib. _Finite element analysis of the compressible Euler and Navier-Stokes equations_. Stanford University, 1989. * [41] Thomas JR Hughes. Multiscale phenomena: Green's functions, the Dirichlet-to-Neumann formulation, subgrid scale models, bubbles and the origins of stabilized methods. _Computer methods in applied mechanics and engineering_, 127(1-4):387-401, 1995. * [42] Thomas JR Hughes and Giancarlo Sangalli. Variational multiscale analysis: the fine-scale Green's function, projection, optimization, localization, and stabilized methods. _SIAM Journal on Numerical Analysis_, 45(2):539-557, 2007. * [43] Y Bazilevs, VM Calo, JA Cottrell, TJR Hughes, A Reali, and G Scovazzi. Variational multiscale residual-based turbulence modeling for large eddy simulation of incompressible flows. _Computer Methods in Applied Mechanics and Engineering_, 197(1-4):173-201, 2007. * [44] John R Womersley. Method for the calculation of velocity, rate of flow and viscous drag in arteries when the pressure gradient is known. _The Journal of Physiology_, 127(3):553, 1955. * [45] Akiva Feintuch, Permyos Ruengsakulrach, Amy Lin, Ji Zhang, Yu-Qing Zhou, Jonathon Bishop, Lorinda Davidson, David Courtman, F Stuart Foster, David A Steinman, et al. Hemodynamics in the mouse aortic arch as assessed by MRI, ultrasound, and numerical modeling. _American Journal of Physiology-Heart and Circulatory Physiology_, 292(2):H884-H892, 2007. * [46] Thomas JR Hughes and Michel Mallet. A new finite element formulation for computational fluid dynamics: III. the generalized streamline operator for multidimensional advective-diffusive systems. _Computer methods in applied mechanics and engineering_, 58(3):305-328, 1986. * [47] Thomas JR Hughes, Gonzalo R Feijoo, Luca Mazzei, and Jean-Baptiste Quincy. The variational multiscale method--a paradigm for computational mechanics. _Computer Methods in Applied Mechanics and Engineering_, 166(1-2):3-24, 1998. * [48] Youcef Saad and Martin H Schultz. GMRES: A generalized minimal residual algorithm for solving nonsymmetric linear systems. _SIAM Journal on scientific and statistical computing_, 7(3):856-869, 1986. * [49] Mahdi Esmaily, Yuri Bazilevs, and Alison Marsden. Impact of data distribution on the parallel performance of iterative linear solvers with emphasis on CFD of incompressible flows. _Computational Mechanics_, 55(1):93-103, 2015. * [50] Mahdi Esmaily, Yuri Bazilevs, and Alison Marsden. A bi-partitioned iterative algorithm for solving linear systems arising from incompressible flow problems. _Computer Methods in Applied Mechanics and Engineering_, 286(1):40-62, 2015. 
* [51] Mahdi Esmaily, Yuri Bazilevs, and Alison L. Marsden. A new preconditioning technique for implicitly coupled multidomain simulations with applications to hemodynamics. _Computational Mechanics_, 52(5):1141-1152, 2013. * [52] Pengfei Lu, Ping Wang, Bingruo Wu, Yidong Wang, Yang Liu, Wei Cheng, Xuhui Feng, Xinchun Yuan, Miriam M Atteya, Haleigh Ferro, et al. A SOX17-PDGFB signaling axis regulates aortic root development. _Nature Communications_, 13(1):1-17, 2022. * [53] Dongjie Jia, Matthew Peroni, Tigran Khalapyan, and Mahdi Esmaily. An efficient assisted bidirectional Glenn design with lowered superior vena cava pressure for stage-one single ventricle patients. _Journal of Biomechanical Engineering_, 143(7):071008, 2021. * [54] Dongjie Jia and Mahdi Esmaily. Characterization of the ejector pump performance for the assisted bidirectional Glenn procedure. _Fluids_, 7(1):31, 2022. * [55] Mahdi Esmaily, Bari Murtuza, T.Y. Hsia, and Alison Marsden. Simulations reveal adverse hemodynamics in patients with multiple systemic to pulmonary shunts. _Journal of Biomechanical Engineering_, 137(3):031001-031001-12, 2015. ## Appendix A Galerkin's solution to the 1D problem Galerkin's solution \(\phi^{h}(x)\) is expressed in terms of piecewise linear shape functions, where the shape function for a given node \(A\) is \[N_{A}(x)=\begin{cases}\frac{x-x_{A-1}}{x_{A}-x_{A-1}},&x_{A-1}<x<x_{A}\\ \frac{x_{A+1}-x}{x_{A+1}-x_{A}},&x_{A}<x<x_{A+1}\\ 0,&\text{otherwise}.\end{cases} \tag{68}\] Using these shape functions, the solution and test functions are expressed as \[\phi^{h}(x)=\sum_{A=0}^{N}U_{A}N_{A}(x), \tag{69}\] and \[w^{h}(x)=\sum_{A=1}^{N-1}c_{A}N_{A}(x), \tag{70}\] respectively, where \(U_{A}\) is the solution at node \(A\) and \(c_{A}\) is an arbitrary constant associated with node \(A\). Note that the end nodes \(A=0\) and \(N\) are excluded from the summations in Eq. (70) as Dirichlet boundary conditions are imposed on the two ends of the domain. Also, since \(\phi^{h}(0)=0\) and \(\phi^{h}(L)=1\), we require \(U_{0}=0\) and \(U_{N}=1\) in what follows. Using Eqs. (69) and (70) to plug in for \(\phi^{h}\) and \(w^{h}\) in Eq. (8) produces \[\sum_{A=1}^{N-1}c_{A}\left\{\sum_{B=0}^{N}\left[\left(N_{A},i\omega N_{B}\right)+\left(N_{A},a\frac{\partial N_{B}}{\partial x}\right)+\left(\frac{\partial N_{A}}{\partial x},\kappa\frac{\partial N_{B}}{\partial x}\right)\right]U_{B}\right\}=0. \tag{71}\] Since Eq. (71) must hold for any \(c_{A}\), we can conclude \[\sum_{B=0}^{N}\left[\left(N_{A},i\omega N_{B}\right)+\left(N_{A},a\frac{\partial N_{B}}{\partial x}\right)+\left(\frac{\partial N_{A}}{\partial x},\kappa\frac{\partial N_{B}}{\partial x}\right)\right]U_{B}=0,\ \ \ A=1,\cdots,N-1. \tag{72}\] The three inner products in Eq. (72) can be calculated explicitly using the fact that we are interested in a uniform grid of spacing \(h\) and \(x_{A+1}-x_{A}=x_{A}-x_{A-1}=h\) in Eq. (68). From Eq. (68) it is straightforward to show \[(N_{A},N_{B})=\begin{cases}\frac{1}{6}h,&B=A\pm 1\\ \frac{2}{3}h,&B=A\\ 0,&\text{otherwise}.\end{cases} \tag{73}\] Similarly, \[\left(N_{A},\frac{\partial N_{B}}{\partial x}\right)=\begin{cases}\pm\frac{1}{2},&B=A\pm 1\\ 0,&\text{otherwise},\end{cases} \tag{74}\] and \[\left(\frac{\partial N_{A}}{\partial x},\frac{\partial N_{B}}{\partial x}\right)=\begin{cases}-\frac{1}{h},&B=A\pm 1\\ \frac{2}{h},&B=A\\ 0,&\text{otherwise}.\end{cases} \tag{75}\] Plugging in for the inner products that appear in Eq. (72) from Eqs. 
(73), (74), and (75) yields \[\left(\frac{i\omega h}{6}+\frac{a}{2}-\frac{\kappa}{h}\right)U_{A+1}+\left(\frac{2i\omega h}{3}+\frac{2\kappa}{h}\right)U_{A}+\left(\frac{i\omega h}{6}-\frac{a}{2}-\frac{\kappa}{h}\right)U_{A-1}=0. \tag{76}\] Multiplying Eq. (76) by \(h/\kappa\) will non-dimensionalize the coefficients, allowing us to express them in terms of \(\alpha\) and \(\beta\) from Eqs. (11) and (12), respectively. The result is \[(i\beta+\alpha-1)U_{A+1}+(4i\beta+2)U_{A}+(i\beta-\alpha-1)U_{A-1}=0. \tag{77}\] Provided that the exact solution has an exponential form (Eq. (4)), it is reasonable to assume that the numerical solution also takes an exponential form. The exponent, however, can differ from that of the exact solution. That difference can be accommodated by selecting an exponent base that is a free parameter, which, following tradition, we call \(\rho\). That permits us to take \[U_{A}=c\rho^{A}, \tag{78}\] in which the arbitrary constant \(c\) is included to allow for matching the solution to the given boundary conditions. For the solution guess from Eq. (78) to satisfy Eq. (77), \(\rho\) must be a root of the quadratic polynomial \[(i\beta+\alpha-1)\rho^{2}+(4i\beta+2)\rho+i\beta-\alpha-1=0. \tag{79}\] The two roots of Eq. (79) are those that were shown in Eq. (10). The general solution can be expressed in terms of the two discrete solutions associated with the two roots \(\rho_{1,2}\), namely \[U_{A}=c_{1}\rho_{1}^{A}+c_{2}\rho_{2}^{A}, \tag{80}\] where \(c_{1}\) and \(c_{2}\) are arbitrary constants selected so that the two boundary conditions are satisfied. Since \(U_{0}=0\) and \(U_{N}=1\), it is straightforward to show \[c_{1,2}=\pm\frac{1}{\rho_{1}^{N}-\rho_{2}^{N}}. \tag{81}\] Combining Eqs. (80) and (81) and noting \(U_{A}=\phi^{h}(x_{A})\) produces the result shown in Eq. (9), thus completing this proof. ## Appendix B The derivation of Eqs. (27) and (28) For a nodally exact solution, we must have \[\hat{\rho}_{1,2}^{A}=\exp(r_{1,2}\frac{x_{A}}{L}), \tag{82}\] where \(r_{1,2}\) are computed from the exact solution in Eq. (5) and \[\hat{\rho}_{1,2}=\frac{1+2i\hat{\beta}\pm\sqrt{\hat{\alpha}^{2}-3\hat{\beta}^{2}+6i\hat{\beta}}}{1-\hat{\alpha}-i\hat{\beta}}, \tag{83}\] which is the same as Eq. (10) but with the adjusted \(\hat{\alpha}\) and \(\hat{\beta}\) of the proposed augmented SUPG method from Eqs. (25) and (26). Since \(x_{A}/L=Ah/L\), the right hand side of Eq. (82) can be written as \[\exp(r_{1,2}\frac{x_{A}}{L})=\left(\exp(r_{1,2}\frac{h}{L})\right)^{A}. \tag{84}\] Thus, from Eqs. (82) and (84), we have \[\hat{\rho}_{1,2}=\exp(r_{1,2}\frac{h}{L}). \tag{85}\] Substituting for \(r_{1,2}\) using Eq. (5) and simplifying the result by using the fact that \(Ph/L=\alpha\) and \((Wh/L)^{2}=6\beta\) (see Eqs. (6), (7), (11), and (12)) produces \[\hat{\rho}_{1,2}=\exp(\alpha\pm\sqrt{\alpha^{2}+6i\beta}), \tag{86}\] or alternatively \[\begin{array}{l}\hat{\rho}_{1}+\hat{\rho}_{2}=2\exp(\alpha)\cosh(\gamma),\\ \hat{\rho}_{1}-\hat{\rho}_{2}=2\exp(\alpha)\sinh(\gamma),\end{array} \tag{87}\] where \(\gamma\) is defined in Eq. (29). Combining Eqs. (83) and (87) yields \[\frac{1+2i\hat{\beta}}{1-\hat{\alpha}-i\hat{\beta}}=\exp(\alpha)\cosh(\gamma), \tag{88}\] and \[\frac{\sqrt{\hat{\alpha}^{2}-3\hat{\beta}^{2}+6i\hat{\beta}}}{1-\hat{\alpha}-i\hat{\beta}}=\exp(\alpha)\sinh(\gamma). \tag{89}\] From Eq. (88), it is straightforward to show \[\hat{\alpha}=1-i\hat{\beta}-\frac{1+2i\hat{\beta}}{\exp(\alpha)\cosh(\gamma)}. \tag{90}\] Substituting for \(\hat{\alpha}\) from Eq. 
(90) into Eq. (89) and simplifying the result produces the following quadratic relationship for \(i\hat{\beta}\) \[\left[4\cosh(\alpha)+2\cosh(\gamma)\right](i\hat{\beta})^{2}+\left[4\cosh(\alpha)-\cosh(\gamma)\right]i\hat{\beta}+\cosh(\alpha)-\cosh(\gamma)=0. \tag{91}\] Equation (91) has two roots. One root is \(i\hat{\beta}=-1/2\), which is not admissible as it produces zero divided by zero in Eq. (88). It is the second root that, in combination with Eq. (90), produces Eqs. (27) and (28), thus completing the proof. ## Appendix C Derivation of Eq. (35) To arrive at Eq. (35) from Eq. (34) for regimes in which \(\beta\lessapprox 1\), we investigate the asymptotic behavior of \(\hat{\omega}\) at limits of \(\alpha\ll 1\) and \(\alpha\gg 1\). Let us first consider \(\alpha\gg 1\). Since in this limit \(\beta/\alpha^{2}\ll 1\), Eq. (29) can be expressed as \[\gamma=\alpha+3i\left(\frac{\beta}{\alpha}\right)+\left(\frac{9\beta^{2}}{2\alpha^{3}}\right)+{\cal O}(\alpha^{-5}). \tag{92}\] Since \(\exp(\alpha)\gg\exp(-\alpha)\), we have \[\cosh(\gamma)=\frac{1}{2}\exp(\alpha)\left[1+\left(\frac{3i\beta}{\alpha}\right)-\left(\frac{9\beta^{2}}{2\alpha^{2}}\right)-\left(\frac{9i\beta^{3}}{2\alpha^{3}}\right)+\left(\frac{9\beta^{2}}{2\alpha^{3}}\right)+{\cal O}(\alpha^{-4})\right]. \tag{93}\] Using this result along with the fact that \(\cosh(\alpha)\approx\sinh(\alpha)\approx\exp(\alpha)/2\) permits us to simplify Eq. (34) and obtain \[\hat{\omega}=\left(1+\frac{3i\beta}{2\alpha}-\frac{3\beta^{2}}{2\alpha^{2}}\right)\omega+{\cal O}(\alpha^{-2}),\ \ \mbox{for}\ \ \alpha\gg 1. \tag{94}\] In deriving Eq. (94), we dropped \(-3i\beta/(2\alpha^{2})\) in comparison to \(3i\beta/(2\alpha)\). The term \(-3\beta^{2}/(2\alpha^{2})\), however, was not dropped as it was the leading order in excess of \(1\) and it determines the in-phase contribution of the stabilization term at large \(\alpha\). Next, let us consider \(\alpha\ll 1\). By selecting \(\alpha\) such that \(\alpha\ll\beta\), we can write \[\gamma=\sqrt{6i\beta}\left(1+\frac{\alpha^{2}}{12i\beta}+{\cal O}(\alpha^{4})\right). \tag{95}\] Performing a Taylor series expansion of \(\cosh(\gamma)\) yields \[\cosh(\gamma)=1+3i\beta+\frac{\alpha^{2}}{2}-\frac{3\beta^{2}}{2}-\frac{3i\beta^{3}}{10}+{\cal O}(\beta^{4})+{\cal O}(\alpha^{4}). \tag{96}\] Note that the coefficients associated with the \(n^{\rm th}\) exponents of \(\beta\) scale with \(6^{n}/(2n)!\), and thus rapidly go to zero. Truncating the \(\beta\) series in Eq. (96) and plugging the result into Eq. (34) after expanding \(\cosh(\alpha)\) and \(\sinh(\alpha)\) produces \[\hat{\omega}=\left(1+\frac{i\beta}{2}-\frac{\beta^{2}}{10}\right)\omega+{\cal O}(\alpha^{2}),\ \ \mbox{for}\ \ \alpha\ll 1. \tag{97}\] To merge the two expansions in Eqs. (94) and (97) and approximate \(\hat{\omega}\) at intermediate values of \(\alpha\), we can start from Eq. (97) and modify the denominator of the second and third term by adding a factor that scales with \(\alpha\) and \(\alpha^{2}\), respectively. Our numerical experiment shows that the following formula, which matches the above asymptotic expansions, provides a good fit to the original expression in Eq. (34). \[\hat{\omega}=\left(1+\frac{3i\beta}{2\alpha\sqrt{1+9\alpha^{-2}}}+\frac{3(i\beta)^{2}}{2\alpha^{2}(1+15\alpha^{-2})}\right)\omega. \tag{98}\] Interestingly, the denominators of the second and third term in Eq. (98) can be expressed exactly and approximately, respectively, in terms of \(\tau\) from Eq. 
(16) (as argued below, the third term has a much smaller contribution than the first and second term, and hence this approximation has little effect on the results). Doing so yields \[\hat{\omega}=\left(1+\left(\frac{3i\beta a\tau}{\alpha h}\right)+\frac{1}{2}\left(\frac{3i\beta a\tau}{\alpha h}\right)^{2}\right)\omega. \tag{99}\] From Eqs. (11) and (12), \((3\beta a)/(\alpha h)=\omega\). Hence Eq. (99) can be further simplified to \[\hat{\omega}=\left(1+i\omega\tau+\frac{1}{2}(i\omega\tau)^{2}\right)\omega. \tag{100}\] From the above asymptotic expansion, it is evident that the third term on the right-hand side of Eq. (100) is much smaller than the second, as is the second term in comparison to the first term. To verify this, one can consider the product of \(\tau\) and \(\omega\) that scales with \(\beta/\alpha\) for large \(\alpha\) and with \(\beta\) for small values of \(\alpha\). Since \(\beta\lessapprox 1\), in either case \(\omega\tau\lessapprox 1\). Following this argument, one can include the higher exponents of \(\omega\tau\) in Eq. (100) with prefactors that quickly go to zero and write \[\hat{\omega}=\left(\sum_{n=0}^{\infty}\frac{1}{n!}(i\omega\tau)^{n}\right)\omega. \tag{101}\] The series in Eq. (101) is the Taylor expansion of \(\exp(i\omega\tau)\). Replacing it produces Eq. (35), thus completing this derivation. To further verify that this approximation is indeed a good approximation of the exact expression for \(\hat{\omega}\), the reader is referred to Figure 2. Lastly, we should note that even though Eq. (35) was derived for \(\beta\lessapprox 1\), it produces a reasonable approximation up to \(\beta<5\). Values of \(\beta\) larger than \(5\) have little relevance as those signify extremely oscillatory regimes on a very coarse grid that cannot be resolved even if the exact form of \(\hat{\omega}\) is employed in higher dimensions. In fact, in such scenarios, it is favorable to use the approximate \(\hat{\omega}\) rather than its exact form as Eq. (34) exhibits an erratic behavior that creates a numerical convergence issue. That is in contrast to Eq. (35) which behaves well, even at relatively large values of \(\beta\). ## Appendix D Derivation of Eq. (67) Our overall approach to solving the 2D convection-diffusion problem in the square-shaped domain (Eq. (66)) is to first homogenize the boundary condition on the top, then solve the resulting PDE through the method of separation of variables. Namely, we are seeking a solution with the form \[\phi(x,y)=\psi(x,y)+V(y), \tag{102}\] so that \(\psi(x,y)\) is governed by \[\begin{split} i\omega\psi+a\psi_{,x}&=\kappa(\psi_{,xx}+\psi_{,yy}),\\ \psi(x,0)&=\psi(x,L)=0,\\ \psi(L,y)&=-V(y),\\ \psi(0,y)&=1-V(y).\end{split} \tag{103}\] It follows from Eqs. (66), (102), and (103) that \(V(y)\) must satisfy \[i\omega V =\kappa V_{,yy},\] \[V(0) =0, \tag{104}\] \[V(L) =1.\] Equation (104) is a constant coefficient second-order ODE and its solution can be obtained through elementary means. It can be expressed as \[V(y)=C_{1}\sinh\left(\sqrt{\frac{i\omega}{\kappa}}\,y\right)+C_{2}\cosh\left(\sqrt{\frac{i\omega}{\kappa}}\,y\right). \tag{105}\] Applying the boundary conditions and noting \(W^{2}=\omega L^{2}/\kappa\) produces \[V(y)=\frac{\sinh\left(\sqrt{i}W\frac{y}{L}\right)}{\sinh(\sqrt{i}W)}. \tag{106}\] Having \(V(y)\), we can now solve Eq. (103) through the separation of variables. That is, we assume \(\psi(x,y)\) is separable and can be expressed as \[\psi(x,y)=X(x)Y(y). \tag{107}\] For Eq. 
(103) to have a solution of the form shown in Eq. (107), we must have \[\frac{i\omega}{\kappa}+\frac{a}{\kappa}\frac{X_{,x}}{X}-\frac{X_{,xx}}{X}= \frac{Y_{,yy}}{Y}=-\lambda^{2}, \tag{108}\] where \(\lambda^{2}\) are the eigenvalues of the \(y\)-ODE that is \[Y_{,yy}+\lambda^{2}Y =0, \tag{109}\] \[Y(0)=Y(L) =0.\] Note that zero and positive eigenvalues were not considered in Eq. (108) as they generate a trivial solution for \(Y(y)\). The solution to the eigenvalue problem in Eq. (109) is \[\lambda_{n} =\frac{n\pi}{L},\;n=1,2,3,..., \tag{110}\] \[Y_{n}(y) =\sin\Big{(}\frac{n\pi y}{L}\Big{)}, \tag{111}\] where we dropped the constants in front of the eigen solutions as they can be merged into those of the \(X\)-ODE later on. Having the eigenvalues \(\lambda\), we can solve the \(X\)-ODE from Eq. (108) that is \[\kappa X_{,xx}-aX_{,x}-\left(\kappa\lambda_{n}^{2}+i\omega\right)X=0. \tag{112}\] Equation (112) is another constant coefficient second-order ODE with a solution that can be expressed as \[X_{n}(x)=A_{n}\exp\Big{(}r_{-n}\frac{x}{L}\Big{)}+B_{n}\exp\Big{(}r_{+n}\frac {x}{L}\Big{)}, \tag{113}\] where \(r_{\pm n}\) are provided in Eq. (67). To put all pieces together, we must combine Eqs. (102), (106), (111), and (113) to obtain \[\phi(x,y)=\frac{\sinh\left(\sqrt{i}W\frac{y}{L}\right)}{\sinh(\sqrt{i}W)}+ \sum_{n=1}^{\infty}\Big{[}A_{n}\exp\Big{(}r_{-n}\frac{x}{L}\Big{)}+B_{n}\exp \Big{(}r_{+n}\frac{x}{L}\Big{)}\Big{]}\sin\Big{(}\frac{n\pi y}{L}\Big{)}\,, \tag{114}\] which is the same as the expression provided in Eq. (67). The last step of the process is to solve for \(A_{n}\) and \(B_{n}\) so that the remaining two boundary conditions are satisfied. More specifically, we must have \[\phi(L,y)=\frac{\sinh\left(\sqrt{i}W\frac{y}{L}\right)}{\sinh(\sqrt{i}W)}+\sum_{n =1}^{\infty}\left[A_{n}\exp\left(r_{-n}\right)+B_{n}\exp\left(r_{+n}\right) \right]\sin\left(\frac{n\pi y}{L}\right)=0. \tag{115}\] Defining constants \[b_{n}=A_{n}\exp\left(r_{-n}\right)+B_{n}\exp\left(r_{+n}\right), \tag{116}\] permits us to express Eq. (115) as \[\sum_{n=1}^{\infty}b_{n}\sin\left(\frac{n\pi y}{L}\right)=-\frac{\sinh\left( \sqrt{i}W\frac{y}{L}\right)}{\sinh(\sqrt{i}W)}, \tag{117}\] in which \(b_{n}\) can be computed through the Fourier transform and is \[\begin{split} b_{n}&=\frac{2}{L}\int_{0}^{L} \left[-\sin\left(\frac{n\pi y}{L}\right)\frac{\sinh\left(\sqrt{i}W\frac{y}{L} \right)}{\sinh(\sqrt{i}W)}\right]\mathrm{d}y,\\ &=\frac{2n\pi\cos(n\pi)}{iW^{2}+(n\pi)^{2}}.\end{split} \tag{118}\] Similarly, from the other boundary condition \[\phi(0,y)=\frac{\sinh\left(\sqrt{i}W\frac{y}{L}\right)}{\sinh(\sqrt{i}W)}+ \sum_{n=1}^{\infty}\left(A_{n}+B_{n}\right)\sin\left(\frac{n\pi y}{L}\right)=1. \tag{119}\] This time, defining \[c_{n}=A_{n}+B_{n}, \tag{120}\] produces \[\begin{split} c_{n}&=\frac{2}{L}\int_{0}^{L}\left[ \sin\left(\frac{n\pi y}{L}\right)-\sin\left(\frac{n\pi y}{L}\right)\frac{\sinh \left(\sqrt{i}W\frac{y}{L}\right)}{\sinh(\sqrt{i}W)}\right]\mathrm{d}y,\\ &=a_{n}+b_{n},\end{split} \tag{121}\] where \[a_{n}=\frac{2(1-\cos(n\pi))}{n\pi}. \tag{122}\] \(A_{n}\) and \(B_{n}\) are obtained using Eqs. (116), (120), and (121). The result is what is provided in Eq. (67). That completes this proof. 
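As a numerical companion to this appendix, the sketch below evaluates the truncated series of Eq. (114). Because Eq. (67) appears earlier in the text, the roots are recomputed here from Eq. (112), assuming the convention \(Ph/L=\alpha\) used in Appendix B (i.e. \(P=aL/(2\kappa)\)) and \(W^{2}=\omega L^{2}/\kappa\), which gives \(r_{\pm n}=P\pm\sqrt{P^{2}+(n\pi)^{2}+iW^{2}}\); that expression is a derived assumption, not a quotation of Eq. (67).

```python
# A sketch (not the authors' code) evaluating the truncated series of
# Eq. (114), with coefficients from Eqs. (116)-(122). The roots r_{+-n}
# are recomputed from Eq. (112) assuming P = aL/(2*kappa).
import numpy as np

def phi(x, y, P, W, L=1.0, nmax=100):
    """Truncated series solution of Eq. (114)."""
    out = np.sinh(np.sqrt(1j) * W * y / L) / np.sinh(np.sqrt(1j) * W)
    for n in range(1, nmax + 1):
        s = np.sqrt(P**2 + (n * np.pi) ** 2 + 1j * W**2)
        rp, rm = P + s, P - s
        b_n = 2 * n * np.pi * np.cos(n * np.pi) / (1j * W**2 + (n * np.pi) ** 2)
        a_n = 2 * (1 - np.cos(n * np.pi)) / (n * np.pi)   # Eq. (122)
        c_n = a_n + b_n                                    # Eq. (121)
        # Solve Eqs. (116) and (120) for A_n, B_n; both expressions are
        # scaled by exp(-rp) to avoid overflow at large n:
        A_n = (b_n * np.exp(-rp) - c_n) / (np.exp(rm - rp) - 1.0)
        B_n = (b_n * np.exp(-rp) - c_n * np.exp(rm - rp)) / (1.0 - np.exp(rm - rp))
        out = out + (A_n * np.exp(rm * x / L) + B_n * np.exp(rp * x / L)) \
            * np.sin(n * np.pi * y / L)
    return out

# The Dirichlet faces are recovered up to the series truncation error:
print(abs(phi(0.0, 0.5, P=10.0, W=10.0) - 1.0))  # should be small
print(abs(phi(1.0, 0.5, P=10.0, W=10.0)))        # should be small
```

By construction, \(A_{n}+B_{n}=c_{n}\) and \(A_{n}e^{r_{-n}}+B_{n}e^{r_{+n}}=b_{n}\) hold exactly in the scaled form above, so the two Dirichlet boundaries are satisfied independently of the assumed root expression; the interior values depend on it.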
This figure "ana.png" is available in "png" format from: [http://arxiv.org/ps/2305.12038v1](http://arxiv.org/ps/2305.12038v1) This figure "gal.png" is available in "png" format from: [http://arxiv.org/ps/2305.12038v1](http://arxiv.org/ps/2305.12038v1) This figure "present.png" is available in "png" format from: [http://arxiv.org/ps/2305.12038v1](http://arxiv.org/ps/2305.12038v1)
2302.01785
Orbital Evolution of Binaries in Circumbinary Disks
We present the to-date largest parameter space exploration of binaries in circumbinary disks (CBDs), deriving orbital evolution prescriptions for eccentric, unequal mass binaries from our suite of hydrodynamic simulations. In all cases, binary eccentricities evolve towards steady state values that increase with mass ratio, and saturate at an equilibrium eccentricity $e_{\rm b, eq} \sim 0.5$ in the large mass ratio regime, in line with resonant theory. For binaries accreting at their combined Eddington limit, a steady state eccentricity can be achieved within a few Megayears. Once at their steady state eccentricities, binaries with $q_{\rm b} \gtrsim 0.3$ evolve towards coalescence, while lower mass ratio systems expand due to CBD torques. We discuss implications for population studies of massive black hole binaries, protostars in binary systems, and post-common envelope binaries observed by ground-based gravitational wave detectors.
Magdalena Siwek, Rainer Weinberger, Lars Hernquist
2023-02-03T14:43:58Z
http://arxiv.org/abs/2302.01785v1
# Orbital Evolution of Binaries in Circumbinary Disks ###### Abstract We present the to-date largest parameter space exploration of binaries in circumbinary disks (CBDs), deriving orbital evolution prescriptions for eccentric, unequal mass binaries from our suite of hydrodynamic simulations. In all cases, binary eccentricities evolve towards steady state values that increase with mass ratio, and saturate at an equilibrium eccentricity \(e_{\rm b,eq}\sim 0.5\) in the large mass ratio regime, in line with resonant theory. For binaries accreting at their combined Eddington limit, a steady state eccentricity can be achieved within a few Megayears. Once at their steady state eccentricities, binaries with \(q_{\rm b}\gtrsim 0.3\) evolve towards coalescence, while lower mass ratio systems expand due to CBD torques. We discuss implications for population studies of massive black hole binaries, protostars in binary systems, and post-common envelope binaries observed by ground-based gravitational wave detectors. keywords: accretion, accretion disks, binaries, torques, hydrodynamics, transients ## 1 Introduction Circumbinary disks (CBDs) form and evolve in tandem with binaries in a range of astrophysical contexts: formation of stellar binaries in protoplanetary disks (e.g. Dutrey et al., 1994; Mathieu et al., 1997; Tofflemire et al., 2017; Czekala et al., 2021), massive black hole binaries (MBHBs) immersed in AGN disks (e.g., Begelman et al., 1980; Yu and Tremaine, 2002; Vasiliev et al., 2015; Gualandris et al., 2016), and remnants of common envelopes around progenitors of compact-object mergers (e.g., Kashi and Soker, 2011; Reichardt et al., 2019; see also the review by Roepke and De Marco, 2022, and for a comprehensive review on circumbinary accretion disks and their applications, see Lai and Munoz, 2022). A disk and binary interact gravitationally, resulting in: (i) eccentricity growth in the CBD and, in some cases, precession around the binary (however, see e.g. Miranda et al. (2016) and Siwek et al. (2023) for 'locked' disks), and (ii) evolution of binary parameters, such as mass ratio \(q_{\rm b}\) (e.g., Artymowicz, 1983; Bate et al., 2002; Farris et al., 2014; Gerosa et al., 2015; Siwek et al., 2023), semi-major axis \(a_{\rm b}\) and eccentricity \(e_{\rm b}\). The evolution of binary orbital parameters occurs due to the accretion of gas and momentum onto the binary, and the gravitational torques between binary and gas. The study of long-term orbital evolution in binaries with CBDs is critical for understanding the properties of binary populations in current and upcoming electromagnetic (EM) and gravitational wave (GW) observatories. In this work we focus on the rate of change of the semi-major axis (\(\dot{a}_{\rm b}\)) and the binary eccentricity (\(\dot{e}_{\rm b}\)) due to gas torques. Analytical and numerical calculations of gas torques acting on binaries in CBDs show that disks can extract angular momentum from binaries, reducing the semi-major axis and assisting coalescence (e.g., Pringle, 1991; Gould and Rix, 2000; Armitage and Natarajan, 2002; Armitage and Natarajan, 2005; MacFadyen and Milosavljevic, 2008; Cuadra et al., 2009; Haiman et al., 2009). Conversely, recent hydrodynamic simulations have found that the net torque acting on the binary can cause orbital expansion (e.g. 
Miranda et al., 2016; Moody et al., 2019; Munoz et al., 2019; Duffell et al., 2020; Munoz et al., 2020; D'Orazio and Duffell, 2021), calling into question whether CBDs contribute to coalescence, or prevent binaries from merging. However, a large parameter study including eccentric, unequal mass binaries has not yet been done, and is the focus of this work. Dynamical interaction with gas also changes the orbital eccentricity of the binary. Analytical calculations suggest that Lindblad resonances cause eccentricity excitation, while co-rotation resonances circularize the binary orbit (Goldreich and Tremaine, 1980; Artymowicz et al., 1991). Lubow and Artymowicz (1992) found that the balance between Lindblad and co-rotation resonances evolves binaries with small initial eccentricity and mass ratios above \(q_{\rm b}\gtrsim 0.2\) to orbital eccentricities close to \(e_{\rm b}\sim 0.5\). Similarly, more recent hydrodynamic simulations have found that binaries with equal mass ratios tend towards an 'equilibrium eccentricity' \(e_{\rm b,eq}\sim 0.45\) (Zrake et al., 2021; D'Orazio and Duffell, 2021). In our work we include the lower mass ratio regime, establishing whether an equilibrium eccentricity \(e_{\rm b,eq}\) exists for all binaries in the range \(0.1\leq q_{\rm b}\leq 1.0\), and the relationship between \(e_{\rm b,eq}\) and \(q_{\rm b}\). In Siwek et al. (2023) we provided a detailed analysis of the preferential accretion rates onto binaries of various mass ratios and eccentricities. In this follow-up work, we further expand the parameter space, including more values of \(e_{\rm b}\), and calculate the change in binary semi-major axis and eccentricity as a function of the binary parameters. We study the torques exerted by the gas in the CBD and the circumsingle disks (CSDs) on the binary, both through accretion and gravitational interaction, for a large parameter space covering binary mass ratio and eccentricity. We further include an analysis that explores the contribution of torques from the CBD region only, excluding the cavity. This paper is structured as follows. In Section 2 we present the initial conditions and numerical methods used to evolve our hydrodynamic simulations, including the binary orbital evolution calculations (Section 2.2). In Section 3 we present results from our simulations. Our main result is a study of gravitational and accretion torques as a function of \(e_{\rm b}\) and \(q_{\rm b}\), and resulting evolution of binary semi-major axis and eccentricity. In Section 4 we discuss our results in a broader astrophysical context. We outline implications for both EM and GW observations of binary populations, including binary stars and compact objects. ## 2 Numerical methods We carry out hydrodynamic simulations of binaries immersed in CBDs using the moving mesh code Arepo (Springel, 2010; Pakmor et al., 2016) in its Navier-Stokes version (Munoz et al., 2013), which employs a Voronoi tessellation to generate a moving grid around a set of discrete mesh-generating points. Similar to the simulation suite presented in Siwek et al. (2023), we initialize our simulations with a binary represented by two sink particles with a fixed mass ratio \(q_{\rm b}\equiv\frac{M_{\rm 2}}{M_{\rm 1}}\), where \(M_{\rm 2}\) and \(M_{\rm 1}\) are the secondary and primary binary component respectively. We scale the simulation by choosing the total binary mass \(M_{\rm b}=M_{1}+M_{2}\equiv 1\). 
Each binary component is represented by a sink particle with sink radius \(r_{\rm s}=0.03a_{\rm b}\), where \(a_{\rm b}\) is the binary semi-major axis. Within the sink region, we remove a fraction of gas from the gas cells at each time step. The fraction of gas removed from gas cells depends on the distance from the sink, and is defined by the dimensionless parameter \(\gamma=\gamma_{0}\times(1-r/r_{\rm s})^{2}\), where \(\gamma_{0}=0.5\) and \(r\) is the distance between the j-th sink particle and the i-th gas cell. The sink particles move on a fixed Keplerian orbit with semi-major axis \(a_{\rm b}\) and eccentricity \(e_{\rm b}\), which are both fixed in time. When measuring the binary orbital evolution in this work, we calculate the specific angular momentum and energy rates of change of the binary, and thus the inferred change in the orbital parameters (see equations 4 and 5 in section 2.2). We do not measure the orbital evolution directly from the simulation by using 'live' orbits. We choose this method (i) to avoid numerical errors due to orbit integration, and (ii) to allow the binary and disk to achieve a steady state before measuring the orbital evolution. The simulation setup is identical to Siwek et al. (2023). The circumbinary accretion disk is modeled as a finite, locally isothermal torus with an \(\alpha\)-viscosity, where \(\alpha=0.1\) is a constant. Throughout, we choose an aspect ratio \(h=0.1\). The binary-disk system is placed in a 2D computational box of size \(300a_{\rm b}\times 300a_{\rm b}\) with open boundary conditions, allowing the disk to viscously spread over an integration time of \(10\,000\) binary orbital timescales (\(P_{\rm b}\)). Since our numerical methods are identical to those previously presented, more details can be found in Siwek et al. (2023), with the addition of the orbital evolution calculations outlined in this work's section 2.2. ### Sink Radius Study In Section 3.5 we investigate the influence of sink sizes on the measured orbital evolution. We test sink radii in the range \(r_{\rm s}=[0.03\,a_{\rm b},0.01\,a_{\rm b},0.005\,a_{\rm b}]\), with the largest as our fiducial value. Since simulations with very small sink radii are prohibitively expensive, we do not evolve each simulation for \(10\,000\,P_{\rm b}\), as we do in our fiducial simulation suite. Instead, we take a snapshot from our fiducial simulation at \(t=3000\,P_{\rm b}\) for binaries with equal mass ratios and eccentricities \(e_{\rm b}=[0.0,0.2,0.4,0.6,0.8]\) and restart the simulation with a smaller sink radius. We do this to allow the CBD to viscously relax first, which reduces the required run-time of the simulation with smaller, and more computationally expensive, sink radii. We then evolve this simulation for an additional \(2000\,P_{\rm b}\), over which we measure the orbital evolution based on the new sink parameters. ### Numerical integration of gas torques We directly compute the specific angular momentum \(\delta l_{\rm b}\) and specific energy \(\delta\epsilon_{\rm b}\) deposited into our sinks in each timestep \(\delta t\): \[\delta l_{\rm b}=(r_{\rm b}\times f_{\rm ext})\,\delta t, \tag{1}\] \[\delta\epsilon_{\rm b}=(v_{\rm b}\cdot f_{\rm ext})\,\delta t, \tag{2}\] where \(r_{\rm b}=r_{1}-r_{2}\) and \(v_{\rm b}=v_{1}-v_{2}\) are the relative position and velocity of the binary, respectively. 
The external forces \(f_{\rm ext}\) acting on the sinks include gravitational (section 2.2.1) and accretion forces (section 2.2.2), \[f_{\rm ext}=(f_{\rm g,1}-f_{\rm g,2})+(f_{\rm acc,1}-f_{\rm acc,2}), \tag{3}\] where \(f_{\rm g,i}\) and \(f_{\rm acc,i}\) are defined in equations 6 and 7 respectively. Binary eccentricity \(e_{\rm b}\) and semi-major axis \(a_{\rm b}\) evolve due to the change in specific energy and angular momentum. We compute the rate of change of eccentricity \(\dot{e}_{\rm b}\) and semi-major axis \(\dot{a}_{\rm b}\) (see e.g. Munoz et al., 2019; D'Orazio and Duffell, 2021), \[\dot{e}_{\rm b}=\frac{1-e_{\rm b}^{2}}{2e_{\rm b}}\Big{[}2\,\frac{\dot{M}_{\rm b}}{M_{\rm b}}-\frac{\dot{\epsilon}_{\rm b}}{\epsilon_{\rm b}}-2\,\frac{\dot{l}_{\rm b}}{l_{\rm b}}\Big{]}, \tag{4}\] \[\frac{\dot{a}_{\rm b}}{a_{\rm b}}=\frac{\dot{M}_{\rm b}}{M_{\rm b}}-\frac{\dot{\epsilon}_{\rm b}}{\epsilon_{\rm b}}, \tag{5}\] where \(\epsilon_{\rm b}=-\mathcal{G}M_{\rm b}/(2a_{\rm b})\) and \(l_{\rm b}=\sqrt{\mathcal{G}M_{\rm b}a_{\rm b}(1-e_{\rm b}^{2})}\) are the specific orbital energy and angular momentum of the binary. The continuous rate of change of specific angular momentum \(\dot{l}_{\rm b}\) and energy \(\dot{\epsilon}_{\rm b}\) is calculated by taking the finite difference of equations 1 and 2. We time average the changes in specific energy and angular momentum to calculate the binary orbital evolution as in equations 4 and 5 over the simulation time of \(10\,000\,P_{\rm b}\). #### 2.2.1 Gravitational Torques Gravitational interaction with the surrounding gas cells results in a gravitational acceleration \(\mathbf{f}_{\rm g,i}\) of the i-th sink particle, \[\mathbf{f}_{\rm g,i}=-\mathcal{G}\sum_{\rm j}m_{\rm j}\frac{(\mathbf{r}_{\rm i}-\mathbf{r}_{\rm j})}{|\mathbf{r}_{\rm i}-\mathbf{r}_{\rm j}|^{3}}. \tag{6}\] Here, sink particle \(i\) is at a location denoted by \(\mathbf{r}_{\rm i}\), the position vector of the sink particle as measured from the simulation barycenter, \(m_{\rm j}\) is the mass and \(\mathbf{r}_{\rm j}\) the position of the j-th gas cell, and \(\mathcal{G}\) is the gravitational constant. #### 2.2.2 Accretion Torques Gas cells within the sink radius \(r_{\rm s}\) are drained by a factor \(\gamma\) in each timestep (see Siwek et al. (2023) for more details on our numerical methods). In addition to mass, linear momentum is accreted by the sink, resulting in an asymmetric accretion force onto the sink particle within each timestep (Roedig et al., 2012; Munoz et al., 2019). The accretion force during a timestep \(\delta t\) due to accretion of mass and momentum from a cell with index i takes the form, \[\mathbf{f}_{\rm acc,i}=\Big{(}\frac{\delta\mathbf{p}_{\rm i}-\delta m_{\rm i}\mathbf{v}_{\rm i}}{m_{\rm i}+\delta m_{\rm i}}\Big{)}/\delta t. \tag{7}\] Here \(\delta\mathbf{p}_{\rm i}\) and \(\delta m_{\rm i}\) are the momentum and mass accreted by sink particle i in the current timestep, while \(m_{\rm i}\) is the sink particle mass prior to the current timestep. \(\mathbf{v}_{\rm i}\) is the velocity vector of the sink particle, and \(\delta t\) is the current timestep of the simulation. Summing over all gas cells which are drained per timestep yields the specific force due to accretion acting on each sink per timestep. ## 3 Results ### Semi-major axis evolution We calculate the evolution of the binary semi-major axis as a function of \(q_{\rm b}\) and \(e_{\rm b}\) as in equation 5. We report the mean of \(\dot{a}_{\rm b}/a_{\rm b}\) over the last 7000 orbits, after the disk is viscously relaxed out to \(\gtrsim 10\,a_{\rm b}\). We show \(\dot{a}_{\rm b}/a_{\rm b}\) for each one of our 80 simulations, and plot the result as a function of mass ratio in the top panel of Figure 1. 
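(The conversion in Eqs. (4) and (5) is simple enough to sketch explicitly. The snippet below maps time-averaged rates \((\dot{M}_{\rm b},\dot{\epsilon}_{\rm b},\dot{l}_{\rm b})\) to \(\dot{a}_{\rm b}/a_{\rm b}\) and \(\dot{e}_{\rm b}\); the input rates are placeholders rather than simulation output, and the formulas follow the Keplerian relations quoted above.)

```python
# Sketch: converting measured accretion and torque rates into orbital
# evolution rates via Eqs. (4) and (5). Input values are placeholders.
import numpy as np

def orbital_rates(Mb, Mb_dot, eps, eps_dot, l, l_dot, eb):
    """Return (adot_over_a, edot) for a Keplerian binary with eb > 0.

    eps = -G*Mb/(2*ab) is the (negative) specific orbital energy and
    l = sqrt(G*Mb*ab*(1 - eb**2)) the specific angular momentum.
    """
    adot_over_a = Mb_dot / Mb - eps_dot / eps           # Eq. (5)
    edot = (1.0 - eb**2) / (2.0 * eb) * (
        2.0 * Mb_dot / Mb - eps_dot / eps - 2.0 * l_dot / l
    )                                                    # Eq. (4)
    return adot_over_a, edot

# Example with G = Mb = ab = 1 and eb = 0.4 (code units); the rates
# below stand in for time-averaged simulation measurements:
G, Mb, ab, eb = 1.0, 1.0, 1.0, 0.4
eps = -G * Mb / (2 * ab)
l = np.sqrt(G * Mb * ab * (1 - eb**2))
print(orbital_rates(Mb, 1e-5, eps, -2e-6, l, 3e-6, eb))
```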
For circular binaries, we find that the semi-major axis expands for all binaries with \(q_{\rm b}\gtrsim 0.2\) (previously shown in Munoz et al., 2020). However, in most binaries with non-zero eccentricities (up to \(e_{\rm b}=0.8\) in this work), we find that CBD torques lead to inspiral. We note some exceptions in low mass ratio, eccentric binaries (\(q_{\rm b}\lesssim 0.2\), \(0.2\lesssim e_{\rm b}\lesssim 0.6\)) and several high mass ratio binaries with \(e_{\rm b}=0.5\) and \(e_{\rm b}=0.6\), where \(\dot{a}_{\rm b}\) is just greater than 0. The fiducial inspiral/outspiral regimes are shown again in Figure 2 (left table), where we present numerical values of the orbital migration rate \(\dot{a}_{\rm b}\) for each simulation. Blue indicates negative values of \(\dot{a}_{\rm b}\), implying that the binary shrinks, while red indicates positive values of \(\dot{a}_{\rm b}\), implying that the binary expands. The majority (58 out of our 80 simulations) of binaries we tested exhibit negative migration rates, suggesting that CBD torques assist binary coalescence in the majority of the parameter space sampled here. Significant positive migration rates are mostly seen when: (i) the binary moves on a circular orbit, and (ii) the binary mass ratio is small. We note that the semi-major axis expansion in circular binaries originates in the cavity/CSD region. In the right-hand table of Figure 2, we show the semi-major axis evolution of binaries due to gas torques from cells at a distance \(r\geq a_{\rm b}\) from the binary barycenter. We calculate these gas torques by excising the cavity region \(r<a_{\rm b}\) at every timestep when calculating the gravitational forces acting between the binary and gas cells. When comparing the torques acting on circular binaries, in both tables, we find that torques from gas cells in the outer CBD lead to binary inspiral, as indicated by the blue colours in the left-most column in Figure 2 (right-hand table). In the absence of the contribution from the inner region (including the CSDs), torques from the CBD would give rise to binary inspiral, as has been traditionally argued in the literature (e.g., Pringle, 1991; Gould and Rix, 2000; Armitage and Natarajan, 2002). However, in circular binaries, the positive contribution from the CSDs outweighs this effect, leading to the now well-documented phenomenon of binary outspiral. The origin of the orbital expansion and hardening rates in eccentric binaries is less clear. In eccentric low mass ratio binaries, significant positive torques remain when removing the cavity region from the torque calculation, as seen in the right-hand table of Figure 2. This could indicate that torques from the CBD may contribute to binary expansion if the binary is eccentric. However, we note that the \(r<a_{\rm b}\) region we remove from the torque calculation may be too small to entirely exclude the cavity region in eccentric, low mass ratio binaries. This is because the cavity size increases with increasing eccentricity due to the changing locations of resonances (see, e.g., Artymowicz and Lubow (1994) for a study on gap sizes in CBDs). Streams and CSDs around the secondary may therefore not be excised entirely, and may still contribute to the orbital evolution calculation. The expansion rates shown in low mass ratio, highly eccentric binaries may therefore originate in the CSD of the secondary as well as the CBD. 
Nevertheless, in circular binaries and in eccentric equal mass binaries, the origin of the torques causing expansion is in the CSD, consistent with predictions by Goldreich and Tremaine (1980b), and recently shown with hydrodynamic simulations in Munoz et al. (2020). ### Eccentricity evolution Binary eccentricity evolves as a result of interaction with the CBD (e.g. Roedig et al., 2011; Miranda et al., 2016; Munoz et al., 2019; D'Orazio and Duffell, 2021), however the parameter space explored has been mostly limited to equal mass binaries. Here we show how the orbital eccentricity of eccentric, (mostly) unequal mass ratio binaries evolves in our simulations. The bottom panel of Figure 1 shows the rate of change of binary eccentricity \(\dot{e}_{\rm b}\) as a function of mass ratio, including contributions from both gravitational and accretion torques. Specific numerical values for each simulation are given in Figure 3. Table cells highlighted in red indicate eccentricity growth, while blue indicates eccentricity damping. We note that the rate of change of the orbital eccentricity \(\dot{e}_{\rm b}\) generally increases as a function of mass ratio, for a given binary eccentricity. In addition, in each mass ratio bin, a transition from eccentricity growth (\(\dot{e}_{\rm b}>0\)) to eccentricity damping (\(\dot{e}_{\rm b}<0\)) takes place. Figure 1: Semi-major axis (top) and eccentricity (bottom) evolution of binaries in our simulation suite, including gravitational and accretion forces. We find that most binaries with \(q_{\rm b}>0.2\) migrate outward, with the exception of a few cases with eccentricities \(e_{\rm b}=0.5\) and \(e_{\rm b}=0.6\). Eccentricity growth increases as a function of mass ratio, with all mass ratios experiencing regimes of circularization and eccentricity growth. Figure 2: Tables showing the semi-major axis rate of change based on gravitational and accretion torques (left) or gravitational torques from the outer CBD region (right; at a distance \(r_{\rm cell}>a_{\rm b}\) from barycenter), for all binaries in our parameter study. The colourmap is centered around 0, with negative \(\dot{a}_{\rm b}\) values in blue (‘inspiral’), and positive \(\dot{a}_{\rm b}\) values in red (‘outspiral’). This transition can be seen in Figure 3, where red cells indicate eccentricity growth, and blue cells indicate eccentricity damping. An off-white diagonal through the parameter space separates red from blue cells, and shows the transition between the two regimes: this transition indicates the existence of an equilibrium eccentricity for binaries of all mass ratios. This equilibrium eccentricity can be calculated for each simulation, and is defined as the eccentricity at which \(\dot{e}_{\rm b}\sim 0\), representing the steady state of the binary system. In Figure 4 we show the equilibrium eccentricity of each binary as a function of \(q_{\rm b}\), for 3 different torque calculations: our fiducial case including gravitational and accretion torques (g+a, blue line), a test case including gravitational torques only (g, red line), and a third case including only gravitational torques from the CBD at a distance \(r\geq a_{\rm b}\) from the barycenter (g\({}_{\rm r>a}\), green line). These three cases are shown to: (i) investigate the impact of accretion torques on orbital eccentricity evolution, and (ii) distinguish between torques acting on the binary from the CBD (outer region) and cavity (including CSDs and gas streams). 
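The extraction of \(e_{\rm b,eq}\) itself is simple to reproduce: given a column of measured \(\dot{e}_{\rm b}\) values at fixed \(q_{\rm b}\), one interpolates and locates the zero crossing. A minimal sketch follows; the grid values are illustrative placeholders, not the rates of Figure 3.

```python
# Sketch: locating the equilibrium eccentricity e_b,eq where de_b/dt
# changes sign, from a coarse grid of measured rates at fixed q_b.
# The rate values below are illustrative placeholders only.
import numpy as np
from scipy.interpolate import interp1d
from scipy.optimize import brentq

e_grid = np.array([0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.8])
edot_grid = np.array([0.8, 0.9, 0.7, 0.5, 0.2, -0.1, -0.5, -1.2]) * 1e-5

edot = interp1d(e_grid, edot_grid, kind="cubic")
# bracket the sign change between e_b = 0.4 and 0.6, then solve edot = 0:
e_eq = brentq(edot, 0.4, 0.6)
print(f"equilibrium eccentricity e_b,eq ~ {e_eq:.3f}")
```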
In our fiducial case (blue line), we find that low mass ratio binaries in our simulations tend towards low to moderate equilibrium eccentricities \(e_{\rm b,eq}\sim 0.2\). As the binary mass ratio grows, the equilibrium eccentricity increases at an approximately linear rate, until the growth of \(e_{\rm b,eq}\) with mass ratio saturates at \(q_{\rm b}\sim 0.6\), settling at around \(e_{\rm b,eq}\sim 0.5\). In binaries with equal mass ratio, the orbital eccentricity evolves towards a steady state value \(e_{\rm b,eq}\sim 0.48\), similar to recent results in the literature (compare with Munoz et al., 2019; Zrake et al., 2021; D'Orazio and Duffell, 2021). Our main result is that \(e_{\rm b,eq}\) grows with \(q_{\rm b}\), so that binaries with higher mass ratios evolve towards higher equilibrium eccentricities. We find this behaviour to be the same whether accretion torques are included in the calculation or not. The red line in Figure 4 represents the orbital eccentricity towards which binaries evolve when including only gravitational torques in our calculations. Aside from a small deviation near \(q_{\rm b}\sim 0.6\), the g+a (blue line) and g (red line) calculations yield nearly identical equilibrium eccentricities as a function of mass ratio. This implies that accretion torques, which act only in the near vicinity of the sink particles, play a subdominant role in the eccentricity evolution of binaries immersed in CBDs. Instead, the gravitational interaction with gas in the CBD, rather than the cavity region (including the CSDs), is likely more important. We investigate the origin of the eccentricity rate of change further by calculating \(\dot{e}_{\rm b}\) including only torques from gas in the CBD region, i.e. excising any cells within a radius \(r\leq a_{\rm b}\). By doing so, we exclude contributions from the cavity and CSDs. We find that the g\({}_{\rm r>a}\) calculation (green line in Figure 4) yields a similar result to the other two cases: at low mass ratio, \(e_{\rm b,eq}\) grows before flattening at a mass ratio \(q_{\rm b}\gtrsim 0.3\), towards an equilibrium eccentricity \(e_{\rm b,eq}\gtrsim 0.5\). Our results indicate that the equilibrium eccentricity of the binary is predominantly set by interactions with gas in the CBD, as opposed to the CSDs. While the CSDs play a crucial role in the evolution of the semi-major axis, they seem to be less important in the evolution of the binary eccentricity. This result suggests, in line with previous work by Artymowicz et al. (1991) and Lubow and Artymowicz (1992), that the eccentricity growth and damping are caused by resonant interactions between the binary and the CBD. We expand the discussion on this topic in section 4. ### Evolution timescales In Figure 5 we show the eccentricity evolution as a function of time, including 3 binaries with mass ratios \(q_{\rm b}=0.1\), 0.5, 1.0. The eccentricity evolution is calculated using semi-analytic evolution in post-processing: We initialize each binary with its mass ratio and initial eccentricity \(e_{\rm b,0}=0.1\), and evolve the eccentricity forward in time by interpolating \(\dot{e}_{\rm b}\) from Figure 1. Throughout, we assume that the binary is accreting at its Eddington limit, with a radiative efficiency \(\epsilon=0.1\). We find that the timescale to reach the equilibrium eccentricity to within a few percent is \(\lesssim 5\,\)Myr in all 3 cases shown. For reference, the Salpeter timescale at 10% radiative efficiency is \(t_{\rm S}\sim 45\,\)Myr. 
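The integration itself is straightforward to reproduce in post-processing. A minimal sketch follows, assuming the measured \(\dot{e}_{\rm b}\) is normalized per mass \(e\)-folding (units of \(\dot{M}_{\rm b}/M_{\rm b}\)) and using a placeholder rate function in place of the interpolated simulation data; only the Salpeter timescale \(t_{\rm S}\sim 45\,\)Myr is taken from the text.

```python
# Sketch: semi-analytic eccentricity evolution for an Eddington-limited
# binary. The rate function and its normalization per mass e-folding
# are placeholder assumptions standing in for the simulation data.
T_SALPETER_MYR = 45.0       # M_b / Mdot_Edd at 10% radiative efficiency

def edot_per_mass_efold(e):
    """Placeholder for de_b/d(ln M_b); its magnitude is chosen so the
    equilibrium is approached within a few Myr, as in Figure 5."""
    return 50.0 * (0.48 - e)

e, t, dt = 0.1, 0.0, 1e-3   # initial eccentricity; time step in Myr
while abs(e - 0.48) > 0.01:
    # Eddington-limited accretion: d(ln M_b)/dt = 1 / t_Salpeter
    e += edot_per_mass_efold(e) * dt / T_SALPETER_MYR
    t += dt
print(f"reached e_b ~ {e:.3f} after {t:.1f} Myr")
```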
### Comparison with Literature

Similar to Figure 1 in D'Orazio and Duffell (2021), we compare the orbital evolution of equal mass ratio binaries as a function of binary eccentricity to existing results in the literature, shown here in Figure 6. We show the rate of change of the semi-major axis \(\dot{a}_{\rm b}\) and binary eccentricity \(\dot{e}_{\rm b}\) as a function of the fixed binary eccentricity. Our simulations are represented by orange and purple stars (\(\dot{a}_{\rm b}/a_{\rm b}\) and \(\dot{e}_{\rm b}\) respectively), connected by dashed lines. The results are in good agreement with previous simulations of equal mass, eccentric binaries (e.g. Munoz et al., 2019; Zrake et al., 2021; D'Orazio & Duffell, 2021a), with some exceptions at low eccentricity (e.g. near \(e_{\rm b}\sim 0.1\)). Our simulations do not show a circularization regime at \(e_{\rm b}\lesssim 0.05\). Instead, equal mass ratio binaries grow more eccentric, even with only small initial eccentricities \(e_{\rm b}\gtrsim 0.01\). This difference could be due to the viscosity in our simulations: we use a viscosity model that varies with radius, adopting a constant \(\alpha=0.1\), while D'Orazio & Duffell (2021a) choose a constant coefficient of kinematic viscosity. The eccentricity growth and damping are governed by resonances in the CBD, and the strength to which each resonance is excited depends on the mass density of gas at the resonance location (Lubow & Artymowicz, 1992). Therefore, the viscosity model could play a role in the balancing of the resonances, affecting the eccentricity growth and damping slightly. This could explain the minor differences between our work and previous studies shown in Figure 6. All simulations agree that there is an "equilibrium eccentricity" in the range \(0.4\lesssim e_{\rm b}\lesssim 0.5\), towards which most binaries evolve. Despite some minor differences in the low eccentricity regime, the simulations agree qualitatively. We point out that the comparison in Figure 6 includes results from 3 different hydrodynamic codes: both this work and Munoz et al. (2019) use Arepo (Springel, 2010), while Zrake et al. (2021) is based on Mara3 (Zrake & MacFadyen, 2012) and D'Orazio & Duffell (2021a) on DISCO (Duffell, 2016). The qualitative agreement between different hydrodynamic codes indicates that both the value and the existence of an equilibrium eccentricity in equal mass ratio binaries are robust.

Figure 3: Table showing the binary eccentricity rate of change based on gravitational and accretion torques (left) or gravitational torques from the outer CBD region (right; at a distance \(r_{\rm cell}>a_{\rm b}\) from barycenter), for all binaries in our parameter study. The colourmap is centered around 0, with negative \(\dot{e}_{\rm b}\) values in blue ('circularization'), and positive \(\dot{e}_{\rm b}\) values in red ('eccentricity growth').

### Dependence on Sink Radii & Accretion Torques

Sink particles are commonly used to model accretion of gas in CBD simulations; however, the detailed sink prescriptions, including sink rate and sink radius, may change the outcome of the simulation (e.g., Dittmann & Ryan, 2021). As part of our parameter study, we test the effect of the sink radius on the measured orbital evolution of the binary. Binaries with high eccentricities approach small separations at pericenter, during which the CSDs surrounding the sink particles may be tidally disrupted.
Whether or not the innermost resolved region of the accretion disk around the sink particle is stripped depends on the size of the sink radius \(r_{\rm s}\) relative to the Eggleton-Roche radius (Eggleton, 1983), \[r_{\rm ER}=\frac{0.49\,q_{\rm b}^{2/3}}{0.6\,q_{\rm b}^{2/3}+\ln\left[1+q_{\rm b}^{1/3}\right]}\times r_{\rm b}, \tag{8}\] where \(r_{\rm b}=a_{\rm b}(1-e_{\rm b})\) is the binary separation at pericenter. In our fiducial simulations, the sink radius \(r_{\rm s}=0.03a_{\rm b}\) becomes comparable to \(r_{\rm ER}\) when \(e_{\rm b}=0.8\), and as a result, the accretion disks around sink particles on orbits with \(e_{\rm b}=0.8\) are likely to experience distortions due to tidal forces at pericentric approach. As CSDs are tidally compressed or stretched during pericentric approach, the orbital motion of the gas is no longer Keplerian, and consequently, the accretion onto the sink is no longer isotropic. Accreted gas may enter the sink region from preferred directions, depending on the tidal distortion of the CSD and the motion of the sink. Anisotropic accretion torques could then become important, as the innermost region of the accretion disk around the sink particle is no longer accreting isotropically. In extreme cases, if the CSD is entirely stripped, the sink particle moves through a 'headwind' when it directly encounters gas streams in the cavity region, leading to additional anisotropic accretion. As the tidal distortions in the CSD move from the outer edge of the disk inwards, the size of the sink likely determines the magnitude of the resulting anisotropic accretion torques.

Figure 4: The 'equilibrium eccentricity', \(e_{\rm b,eq}\), as a function of binary mass ratio, for our 3 methods of torque calculations. \(e_{\rm b,eq}\) grows approximately linearly as a function of mass ratio, saturating when \(q_{\rm b}\gtrsim 0.6\).

Figure 5: Eccentricity evolution as a function of time in Myr, starting at an initial eccentricity \(e_{\rm b,0}=0.1\) and evolving towards the equilibrium eccentricity \(e_{\rm b,eq}\) within \(\lesssim 8\) Myr in all cases shown. All binaries are accreting at the combined Eddington limit for both objects.

Here we present the results of a sink radius parameter study, determining the deviations in orbital evolution due to anisotropic accretion torques, given a range of sink radii. In Figure 7 we show a series of snapshots displaying surface density centered around the secondary (\(M_{2}\)) in an \(e_{\rm b}=0.8\), \(q_{\rm b}=0.1\) simulation during pericenter approach. We compare simulations of two different sink radii, from largest (top row, \(r_{\rm s}=0.03a_{\rm b}\); our fiducial case) to smallest (second row, \(r_{\rm s}=0.005a_{\rm b}\)). In our simulation with the largest sink radius, the tidal distortion of the CSD at pericenter approach eliminates any isotropic accretion disk pattern down to the sink radius. As a result, gas streams in the cavity interact with the stripped sink particle, imparting momentum directly onto the sink. The effect on the orbital evolution is shown as a time series of \(\dot{a}_{\rm b}/a_{\rm b}\) in the third row of Figure 7: the CSD distortion coincides with large discrepancies between measures of \(\dot{a}_{\rm b}/a_{\rm b}\) computed with gravity only (labeled as g, dashed line) and including anisotropic accretion forces (labeled as g+a, solid line). These large discrepancies can lead to qualitatively different interpretations of orbital evolution (e.g. 'outspiral' vs. 'inspiral', see also Figure 8).
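The stripping criterion of equation (8) is straightforward to evaluate numerically. The sketch below compares \(r_{\rm ER}\) at pericenter with the fiducial sink radius; the factor-of-two threshold used to flag 'comparable' sizes is an illustrative choice, not a value taken from the simulations.

```python
import numpy as np

def eggleton_roche_radius(q_b, a_b=1.0, e_b=0.0):
    """Eggleton (1983) Roche-lobe radius, equation (8), evaluated at the
    pericenter separation r_b = a_b * (1 - e_b)."""
    r_peri = a_b * (1.0 - e_b)
    q23 = q_b ** (2.0 / 3.0)
    return 0.49 * q23 / (0.6 * q23 + np.log(1.0 + q_b ** (1.0 / 3.0))) * r_peri

r_sink = 0.03  # fiducial sink radius, in units of a_b

for e_b in (0.0, 0.4, 0.8):
    r_er = eggleton_roche_radius(q_b=0.1, e_b=e_b)
    flag = "comparable -- CSD likely distorted" if r_sink > 0.5 * r_er else "well separated"
    print(f"e_b={e_b:.1f}: r_ER={r_er:.3f} a_b, r_s/r_ER={r_sink / r_er:.2f} ({flag})")
```

With these inputs the \(e_{\rm b}=0.8\) case is the only one flagged, consistent with the statement above that the fiducial sink becomes comparable to \(r_{\rm ER}\) at that eccentricity.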
In smaller sink radius simulations, we resolve the inner region of the CSD down to \(r_{\rm s}=0.005a_{\rm b}\). We find that the innermost region of the CSD continues to be bound to the sink, as shown in the second row of Figure 7. Since gas remains isotropically distributed around the sink, accretion is also likely to be isotropic rather than anisotropic. As a result, we find good agreement between the g and g+a calculations (bottom row in Figure 7), indicating negligible anisotropic accretion torques as expected. From the results in Figure 7 we have found that accretion torques play a non-negligible role in simulations of binaries with large enough eccentricity and sink radius. However, does this imply that accretion torques should be excluded from our fiducial simulations, to avoid numerical effects due to large sink particles? Given a small enough sink particle, towards which solution would our simulations converge? To answer this question, we expanded our sink radius study across equal mass binaries and eccentricities in the range \(e_{\rm b}=[0.0,0.2,0.4,0.5,0.6,0.8]\). We test sink radii in the range \(r_{\rm s}=[0.03a_{\rm b},\,0.01a_{\rm b},\,0.005a_{\rm b}]\), and check whether the small sink radius simulations converge with the g or g+a results of our fiducial simulations, or whether the convergence points towards a different result entirely. In Figure 8 we show the resulting convergence study of \(\dot{a}_{\rm b}\) and \(\dot{e}_{\rm b}\) as a function of sink radius \(r_{\rm s}\), comparing calculations that include only gravitational forces (denoted as "g" in the figure legend) with those including both gravitational and accretion forces (denoted as "g+a" in the figure legend). At the largest sink radius \(r_{\rm s}=0.030\,a_{\rm b}\), there is a significant deviation of the semi-major axis evolution when comparing the g and g+a calculations, as soon as significant binary eccentricity (\(e_{\rm b}\gtrsim 0.4\)) is reached. In line with Munoz et al. (2019), we find that anisotropic accretion torques are not negligible in highly eccentric binaries. However, if a smaller sink radius is chosen (\(r_{\rm s}\lesssim 0.010\,a_{\rm b}\)), the g and g+a calculations yield very similar results, due to negligible contributions from anisotropic accretion torques. Even though anisotropic accretion torques become negligible at sufficiently small sink radii, the small sink radius simulations converge with the g+a result from our fiducial simulations.

Figure 6: Literature comparison of hydrodynamic simulations of CBDs around equal mass binaries with varying eccentricities, measuring the rates of change of semi-major axis (purple markers/lines) and binary eccentricity (orange markers/lines). We include work from Munoz et al. (2019) (purple/orange crosses), a fitting function of eccentricity rate of change from Zrake et al. (2021) (faint orange line), and orbital evolution results from D'Orazio and Duffell (2021) (solid purple/orange lines). We find that both eccentricity and semi-major axis evolution mostly agree between different studies, with some qualitative disagreement at \(e_{\rm b}\sim 0.1\). All studies find similar equilibrium eccentricities for \(q_{\rm b}=1.0\) binaries in the range \(0.4\lesssim e_{\rm b,eq}\lesssim 0.5\).

We suggest that this convergence occurs since the linear and angular momentum deposited into the sink particles by gas streams can occur in two distinct ways, depending on sink radius: (i) when sink radii are large, the
CSDs can be entirely stripped in highly eccentric binaries, since \(r_{\rm s}\sim r_{\rm ER}\) (compare with equation 8). In this case, momentum from streams in the cavity is added to the sink particles directly through anisotropic accretion, requiring the addition of the anisotropic accretion torques in the calculation of the orbital evolution; (ii) in simulations with small sink radii, where the sink radius is smaller than \(r_{\rm ER}\) even in eccentric binaries, our simulations resolve the CSD region down to small scales. Here, momentum from streams in the cavity is added to the CSD instead of the sink particle directly, and the CSD acts as a 'buffer'. The momentum transport to the CSD, however, changes the gas morphology near the sink particles (e.g. by shifting the disk to 'lag' behind the sink), which is then reflected in the changing gravitational torques. Accretion torques are negligible, since the inner CSD is more symmetric, leading to isotropic accretion. This conservation of momentum argument explains why simulations with small sink radii converge with our fiducial g+a simulations. The convergence suggests that our fiducial results could be applied even to binary systems such as massive black hole binaries (MBHBs), where the scale of the black holes can be comparable to our smallest sink radius \(r_{\rm s}\sim 0.005a_{\rm b}\). We note that the binary eccentricity evolution itself is not sensitive to the numerical effects outlined above: \(e_{\rm b}\) is not affected by sink treatment or inclusion of accretion torques, and as such the equilibrium eccentricities presented here are numerically robust. This is because the binary eccentricity rate of change is dominated by the CBD itself, instead of the CSD region (as discussed in section 3.2).

Figure 7: Top panels: successive surface density snapshots centered around the secondary in an \(e_{\rm b}=0.8\), \(q_{\rm b}=0.1\) simulation with sink radii \(r_{\rm s}=0.03\,a_{\rm b}\) (top row) and \(r_{\rm s}=0.005\,a_{\rm b}\) (second row). We find that the largest sink particle loses its disk structure at pericenter, while the CSD around the smallest sink particle is never entirely stripped. The bottom two panels show a time series of semi-major axis rate of change in all three simulations, comparing the orbital evolution when including gravitational torques only (g in the legend, dashed lines), and including anisotropic accretion torques (g+a, solid lines). We find that g and g+a calculations converge in simulations with smaller sink radii.

## 4 Summary and Discussion

In this work we have investigated the orbital evolution of binaries immersed in circumbinary disks as a function of binary mass ratio and eccentricity. We have shown that the sign and magnitude of \(\dot{a}_{\rm b}\) and \(\dot{e}_{\rm b}\) depend on the orbital parameter combination \([e_{\rm b},q_{\rm b}]\) of each binary, as discussed in sections 3.1 and 3.2. Semi-major axis expansion is a phenomenon mostly seen in circular binaries, while eccentric binaries tend to inspiral due to gas torques. We provide numerical values of the semi-major axis evolution in Figure 2, with our fiducial calculation in the left table and our calculation including only outer CBD torques on the right-hand side.
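The steady-state prediction can be read off by combining the equilibrium eccentricities (Figure 4) with the sign of \(\dot{a}_{\rm b}\) in the Figure 2 tables, anticipating the discussion in section 4.1 below. A minimal lookup sketch follows; both tables here are coarse hypothetical stand-ins for the published values.

```python
import numpy as np

# Hypothetical coarse versions of the Figure 4 (equilibrium eccentricity) and
# Figure 2 (sign of da_b/dt) data; the actual values live in the paper's tables.
q_grid = np.array([0.1, 0.3, 0.5, 0.8, 1.0])
e_eq   = np.array([0.2, 0.35, 0.45, 0.5, 0.48])   # e_b,eq(q_b), cf. Figure 4
e_grid = np.array([0.0, 0.2, 0.4, 0.6, 0.8])

# Sign of da_b/dt on the (q, e) grid: +1 = outspiral, -1 = inspiral (hypothetical).
adot_sign = np.array([
    [+1, +1, -1, -1, -1],   # q_b = 0.1
    [+1, +1, -1, -1, -1],   # q_b = 0.3
    [+1, -1, -1, -1, -1],   # q_b = 0.5
    [+1, -1, -1, -1, -1],   # q_b = 0.8
    [+1, -1, -1, -1, -1],   # q_b = 1.0
])

for i, q in enumerate(q_grid):
    j = np.argmin(np.abs(e_grid - e_eq[i]))        # nearest tabulated eccentricity
    fate = "outspiral" if adot_sign[i, j] > 0 else "inspiral"
    print(f"q_b={q:.1f}: e_b,eq={e_eq[i]:.2f} -> steady-state {fate}")
```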
The binary eccentricity also evolves due to gas torques: binaries in the mass ratio range \(0.1\leq q_{\rm b}\leq 1.0\) all evolve towards fixed equilibrium eccentricities, which are predominantly determined by gravitational interaction with gas in the CBD (as opposed to the CSDs). In Figure 3, we provide numerical values of \(\dot{e}_{\rm b}\) for each binary, based on our fiducial torque calculation (g+a; left table), or on gravity-only calculations neglecting contributions from the cavity (g\({}_{\rm r>a}\); right table). By eye, it is easy to distinguish regions of eccentricity growth (red) and eccentricity damping (blue). The transition between those is a diagonal across the parameter space (off-white panels), terminating at \(e_{\rm b}\sim 0.5\). This diagonal represents the region of the parameter space towards which binaries evolve when interacting with CBDs. In Figure 4 we show the equilibrium eccentricities found in our simulations, which grow as a function of binary mass ratio.

### Steady State: Inspiral or Outspiral?

A key goal of our work was to predict the orbital evolution of binaries immersed in CBDs as a function of the 2-D parameter space \([e_{\rm b},\,q_{\rm b}]\). We found that interaction with a CBD drives binaries towards equilibrium eccentricities, which increase with binary mass ratio. Binaries reach a steady state with orbital parameters \(e_{\rm b,eq}\), \(q_{\rm b}\) within a few Myr (see Figure 5), which is likely within the lifetimes of AGN disks (1-100 Myr; e.g. Haiman and Hui, 2001; Martini and Weinberg, 2001; Yu and Tremaine, 2002; Khrykin et al., 2019), or protoplanetary disks (\(\lesssim 10\) Myr; Ronco et al., 2021). Once settled into their steady state eccentricities, do binaries evolve towards coalescence, or do their orbits expand? At \(q_{\rm b}=0.1\), we find that \(e_{\rm b,eq}\sim 0.2\). Referring to the table in Figure 2, we find that such a binary would expand due to CBD torques, and is therefore less likely to coalesce. However, at larger mass ratios, when \(q_{\rm b}\gtrsim 0.3\), all remaining equilibrium eccentricities fall into the inspiral regime (compare with the blue tiles in Figure 2). Our work thus predicts that in steady state, binaries with \(q_{\rm b}\lesssim 0.2\) expand, while binaries with \(q_{\rm b}\gtrsim 0.3\) are driven towards coalescence by CBD torques.

### Origin of Equilibrium Eccentricities

We have found that a non-zero equilibrium eccentricity exists for all the binaries in our simulations, and that this eccentricity increases as a function of mass ratio. This result is insensitive to the addition of accretion torques, and appears to be dominated by gas dynamics in the CBD rather than the CSDs. D'Orazio and Duffell (2021) suggest that the equilibrium eccentricity they find in equal mass binaries at \(e_{\rm b,eq}\sim 0.4\) is connected to the transition from CBD locking to disk precession in the same eccentricity range. In our simulation suite, we found that the equilibrium eccentricity is a function of mass ratio, and lower mass ratio binaries tend towards lower equilibrium eccentricities. However, at lower mass ratios, CBDs are locked in the entire parameter space tested (see Siwek et al., 2023), with no transition to precession taking place. Our result therefore suggests that disk locking/precession regimes are unrelated to the value of the equilibrium eccentricity. However, an alternative explanation related to resonant theory stems from earlier predictions by Artymowicz et al.
(1991); Lubow and Artymowicz (1992), and is discussed below.

Figure 8: Semi-major axis evolution rates as a function of \(e_{\rm b}\) and \(r_{\rm s}\). We show the largest sink radius in red (\(r_{\rm s}=0.030\,a_{\rm b}\)), the moderate value in orange (\(r_{\rm s}=0.010\,a_{\rm b}\)), and the smallest value in green (\(r_{\rm s}=0.005\,a_{\rm b}\)). In addition, we show results given each sink radius value including gravity only (g, dashed line), and including both gravity and accretion (g+a, solid line).

Our simulations find that there is an upper limit of the equilibrium eccentricity at \(e_{\rm b,eq}\sim 0.5\) for any mass ratio in the range \(0.1\lesssim q_{\rm b}\lesssim 1.0\). These results are in line with previous work by Lubow and Artymowicz (1992). They analyzed the eccentricity growth and damping due to resonant interactions between the binary and the CBD, and found that binaries with mass ratios above \(q_{\rm b}>0.2\) evolve towards an equilibrium eccentricity \(e_{\rm b,eq}\sim 0.5\). This closely mirrors the results found in this study (green line in Figure 4), with gas dynamics in the cavity (included in the blue and red lines in Figure 4) modifying the function \(e_{\rm b,eq}(q_{\rm b})\) only slightly. Our work expands on recent hydrodynamic simulations by Zrake et al. (2021) and D'Orazio and Duffell (2021), who both predicted equilibrium eccentricities for equal mass ratio binaries in the range \(0.4\lesssim e_{\rm b,eq}\lesssim 0.5\). The agreement between our simulations and theirs is encouraging and suggests that this result is robust to minor numerical differences in the simulations.

### Observations of Binary Populations

For population studies of binaries that spend part of their lifetimes evolving in tandem with CBDs, we suggest that there are "preferred" regions in the \([e_{\rm b},q_{\rm b}]\) parameter space. These preferred regions are found along a diagonal across the \([e_{\rm b},q_{\rm b}]\) parameter space (Figure 3), with low mass ratio systems along the lower end of eccentricities, and higher mass ratio systems near \(e_{\rm b}\sim 0.5\). As mass ratios of binaries grow (slowly) due to accretion, so do their steady state eccentricities, moving them up the \([e_{\rm b},q_{\rm b}]\) diagonal. Given a large sample of binaries, an excess of systems may be found at this diagonal due to circumbinary disk driven evolution. We discuss some examples below.

#### 4.3.1 MBHBs: PTAs and LISA

The eccentricity evolution of binaries due to interaction with accretion disks is of interest for GW and EM observations at all scales. In the case of MBHBs, orbital eccentricity may be detectable in continuous wave (CW) searches (Taylor et al., 2016) with Pulsar Timing Arrays (PTAs; Foster and Backer, 1990). At higher frequency, future space-based observatories such as the Laser Interferometer Space Antenna (LISA; Amaro-Seoane et al., 2017) may be able to characterize the orbital eccentricities of MBHBs and intermediate mass black hole binaries (IMBHBs) during inspiral. The residual effect of eccentricity excitation due to the CBD-driven evolution of MBHBs/IMBHBs may then be measurable (Zrake et al., 2021), and would shed light on the gas-driven evolution of MBHBs and IMBHBs detected in the GW emitting regime. In future work we will apply the eccentricity calculations presented here to non-equal mass binaries, and constrain the orbital eccentricities at which MBHBs and IMBHBs may enter the LISA/PTA regime.
#### 4.3.2 LIGO Progenitors

Circumbinary disks may form around the progenitors of compact object binaries following the late stages of common envelope evolution (CEE; Paczynski, 1976, see also Roepke and De Marco (2022) for a recent review). This is especially likely if the common envelope is not entirely ejected (e.g. Kashi and Soker, 2011), leaving a residual gas reservoir which could form a CBD. Dynamical interaction with the CBD may subsequently influence the orbital parameter evolution of the central binary, modifying properties of the growing observed population of merging compact-object binaries, such as those observed by the LIGO and Virgo gravitational-wave detectors (LIGO Scientific Collaboration et al., 2015; Acernese et al., 2015).

#### 4.3.3 Stellar Binaries

The Gaia Collaboration (Gaia Collaboration et al., 2016) recently published a large sample of stellar binaries, prompting the discovery of an excess of wide "twin" binaries (e.g., El-Badry et al., 2019). The formation of twin binaries is likely driven by circumbinary disks (e.g. El-Badry et al., 2019; Hwang et al., 2022), due to the excess accretion onto the secondary. We suggest that further evidence of CBDs in stellar binary formation and evolution could be obtained using our simulations: our new results connecting binary mass ratio and eccentricity provide an opportunity to test for circumbinary disk driven formation of stellar binaries by statistically comparing the mass ratio and eccentricity distributions in short-period binaries. As an example we point out the classical T Tauri star DQ Tau, which has a mass ratio close to 1 and an orbital eccentricity \(e_{\rm b}=0.556\) (Mathieu et al., 1997), in agreement with our predictions. Further analysis of the large data sets provided by Gaia can help establish the effect of circumbinary disk driven evolution on stellar binaries with more certainty.

### Caveats

We have made several simplifications and assumptions in our simulations, which should be taken into account when interpreting our results. (i) Throughout our simulations, we have modeled the viscosity using an \(\alpha\)-prescription and a locally isothermal equation of state, taking \(\alpha=0.1\) and the scale height of the disk \(h=0.1\). We caution that this choice of parameters may not apply to disks at all scales: AGN disks are typically assumed to be several orders of magnitude thinner (Shakura and Sunyaev, 1973), while stellar disks may have lower values of \(\alpha\sim 0.01\) (Hartmann et al., 1998), likely since the low ionization fraction suppresses the angular momentum transport (Gammie, 1996) due to magnetorotational instability-driven turbulence (Balbus and Hawley, 1991). (ii) In this work we have simulated coplanar disks and binaries, without modeling any 3D effects, such as inclination angles between the disk and binary. Future work should address the question of orbital evolution (in particular eccentricity evolution) of binaries interacting with inclined disks (however, note the inclined disk simulations around circular binaries by Moody et al., 2019).

## 5 Acknowledgements

MS thanks Steve Lubow, Morgan MacLeod and Ramesh Narayan for helpful discussions. This research was supported in part by the National Science Foundation under Grant No. NSF PHY-1748958. This work was also supported by the Black Hole Initiative at Harvard University, which is funded by grants from the John Templeton Foundation and the Gordon and Betty Moore Foundation.
RW is supported by the Natural Sciences and Engineering Research Council of Canada (NSERC), funding reference #CITA 490888-16.

## 6 Data Availability

The data underlying this article will be shared on reasonable request to the corresponding author.
2307.12899
Shape Theory
Shape theory was founded by K.~Borsuk 50 years ago. In essence, this is spectral homotopy theory; it occupies an important place in geometric topology. The article presents the basic concepts and the most important, in our opinion, results of shape theory. Unfortunately, many other interesting problems and results related to this theory could not be covered because of space limitations. The article contains an extensive bibliography for those who want to gain a more detailed and systematic insight into the issues considered in the survey.
Pavel S. Gevorgyan
2023-07-24T15:47:05Z
http://arxiv.org/abs/2307.12899v1
# Shape theory

###### Abstract.

Shape theory was founded by K. Borsuk 50 years ago. In essence, this is spectral homotopy theory; it occupies an important place in geometric topology. The article presents the basic concepts and the most important, in our opinion, results of shape theory. Unfortunately, many other interesting problems and results related to this theory could not be covered because of space limitations. The article contains an extensive bibliography for those who want to gain a more detailed and systematic insight into the issues considered in the survey.

**Keywords:** homotopy, inverse system, homotopy pro-category, shape category, shape functor, equivariant shape, \(Z\)-set, movability, stable space, shape retract, homotopy pro-group, shape group, shape dimension, cell-like map, \(Q\)-manifold.

To the blessed memory of my teacher Yurii Mikhailovich Smirnov

###### Contents

* 1 Introduction
* 2 Basic Constructions of Shape Theory
* 3 The Shape Classification of Spaces
* 4 Complement Theorems in Shape Theory
* 5 Movability and Other Shape Invariants
* 6 Stable Spaces and Shape Retracts
* 7 Whitehead's and Hurewicz' Theorems in Shape Theory
* 8 The Shape Dimension of Spaces and Maps
* 9 Embeddings in Shape Theory
* 10 Cell-Like Maps and Shape Theory

## 1. Introduction

Shape theory was initiated in 1968 by the renowned Polish mathematician K. Borsuk. In his opening speech [2] at the 1979 International Topology Conference, P. S. Aleksandrov identified three periods in the development of geometric topology\({}^{1}\) and mentioned shape theory as one of the most important research directions during the third period.

Footnote 1: According to Aleksandrov, the first of these periods was marked by outstanding works of Brouwer (1909–1913), Frechet (1907), and Hausdorff (1914). During the second period (1925–1943), homology and cohomology theory (including duality theorems) was developed, dimension theory was set up, and continuum theory was elaborated. The third period began with Borsuk’s work on retract theory and continued with the development of shape theory (see also [3]).

Shape theory is the spectral form of homotopy theory; it uses set-theoretic, geometric, and combinatorial-algebraic ideas and methods of topology. This theory is related to Aleksandrov-Cech homology and, therefore, to algebraic topology. It is also closely related to infinite-dimensional topology, in particular, to the theory of \(Q\)-manifolds. As is known, the fundamental notions of homotopy theory can be applied to spaces with good local structure (such as manifolds, polyhedra, CW-complexes, and ANRs). Many theorems of homotopy topology are valid for CW-complexes but not valid for compact metrizable spaces. An example is Whitehead's celebrated theorem, which asserts that _a map \(f\colon X\to Y\) between_ CW_-complexes is a homotopy equivalence if and only if it induces isomorphisms \(\pi_{n}(f)\colon\pi_{n}(X)\to\pi_{n}(Y)\), \(n=1,2,\dots\), of all homotopy groups_. This theorem is not generally true for compact metrizable spaces. Indeed, consider the so-called _Warsaw circle_, or _quasi-circle_, \(W\), which consists of the graph of the function \(y=\sin\frac{2\pi}{x}\), \(0<x\leqslant 1\), the interval \([-1,1]\) of the \(y\)-axis, and the arc joining the points \((0,0)\) and \((1,0)\) (see Fig. 1). It has bad local structure and is not a CW-complex.
Any continuous map of the \(n\)-sphere \(S^{n}\) to the Warsaw circle \(W\) is homotopic to a constant map, and hence all homotopy groups of the Warsaw circle are trivial. However, \(W\) does not have the homotopy type of a point. Thus, the constant map \(f\colon W\to\{*\}\) to a one-point space induces isomorphisms of the homotopy groups for all \(n\) but is not a homotopy equivalence. Homotopy theory is ideally suited for being applied to spaces with good local structure but not to arbitrary spaces. The Warsaw circle \(W\) and the usual circle \(S^{1}\) are not homotopy equivalent, although they look similar and share certain important properties. For example, each of them separates the plane into two parts. The reason why the Warsaw circle and the circle \(S^{1}\) are not homotopy equivalent is that there are not enough continuous maps from \(S^{1}\) to \(W\). The situation changes when continuous maps from \(S^{1}\) to arbitrary neighborhoods of the Warsaw circle \(W\) are considered. There are already sufficiently many such maps. This observation enabled Borsuk to propose a remarkable idea for addressing this "drawback" of homotopy theory for compact metrizable spaces and, thereby, to open up a new direction of geometric topology, shape theory [74]. Borsuk first reported his results at the 1967 Symposium on Infinite-Dimensional Topology, which was held in Baton Rouge, Louisiana, on March 27-April 1, 1967, and then at the International Symposium on Topology and Its Applications in Herceg Novi on August 25-31, 1968 (see [75]). After the appearance of Borsuk's pioneering papers [73]-[84], shape theory began to rapidly develop across the world. Warsaw, where Borsuk and his students worked, became the center of scientific activities and research on shape theory. In the United States the first studies on shape theory were performed by R. Fox, T. A. Chapman, J. Segal, and J. Keesling, and in Japan, by K. Morita and Y. Kodama. In Moscow research in shape theory was headed by Yu. M. Smirnov. In Zagreb a group of topologists working on this theory was formed; it was led by S. Mardesic. In Frankfurt shape theory was developed by F. Bauer and his students, and in Great Britain, by T. Porter; many other topologists worked on it all over the world. Borsuk's idea underlying shape theory is based on the fact that any compact metrizable space embeds in the Hilbert cube \(Q\) (as well as in any other absolute retract) and consists in considering, instead of continuous maps between compact metrizable spaces \(X\) and \(Y\), so-called fundamental sequences \((f_{n})\colon Q\to Q\) coordinated in a certain way with neighborhoods of \(X\) and \(Y\) (see Sec. 2). In 1972 Fox [168] extended shape theory to metrizable spaces by using Borsuk's method. To this end, he embedded an arbitrary metric space \(X\) in an absolute neighborhood retract \(M\) as a closed subspace and constructed shape morphisms by using a system of neighborhoods of the space \(X\) in \(M\). Importantly, all these neighborhoods are absolute neighborhood retracts for metrizable spaces. Borsuk's geometric method does not apply in the general case, because there are too few absolute neighborhood retracts in the category of all topological spaces. In 1970 Mardesic and Segal [287], [288] constructed shape theory for arbitrary compact Hausdorff spaces by means of inverse systems.
They essentially used the well-known theorem that _any compact Hausdorff space \(X\) is the limit of an inverse system \(\mathbf{X}=\{X_{\alpha},p_{\alpha\alpha^{\prime}},A\}\) of compact ANR spaces \(X_{\alpha}\)_. In 1975 Morita [308] extended shape theory to arbitrary topological spaces by the method of inverse ANR-systems. He succeeded thanks to Cech's functor \(\check{C}\); to each topological space \(X\) this functor assigns an inverse system \(\check{C}(X)\) of the nerves \(X_{\alpha}\) of normal locally finite open covers \(\alpha\) with the natural projections generated by the refinement of covers. Morita introduced the notion of the _associated inverse ANR-system_ and proved that, for any topological space \(X\), the inverse ANR-system \(\check{C}(X)\) is associated with \(X\). Moreover, all inverse ANR-systems associated with a given space \(X\) are isomorphic to each other and, in particular, to the system \(\check{C}(X)\). The spectral method for constructing shape theory has turned out to be universal. Thus, in [74], given a compact metrizable space \(X\), Borsuk essentially used an associated countable ANR-system of its neighborhoods which arises under an embedding of \(X\) in an absolute retract \(M\). Moreover, since the ANR-system obtained from the system of all neighborhoods of a closed subset \(X\) of an absolute neighborhood retract \(N\) in the class of metrizable spaces is associated with \(X\), we see that Fox [168] essentially considered inverse ANR-systems associated with metrizable spaces. Finally, if a compact Hausdorff space \(X\) is the limit of an inverse system \(\mathbf{X}=\{X_{\alpha},p_{\alpha\alpha^{\prime}},A\}\) of compact ANR-spaces \(X_{\alpha}\), i.e., \(X=\varprojlim\{X_{\alpha},p_{\alpha\alpha^{\prime}},A\}\), then the ANR-system \(\{X_{\alpha},\mathbf{p}_{\alpha\alpha^{\prime}},A\}\) in the homotopy category H-CW is associated with \(X\). This has enabled Mardesic and Segal [287], [288] to construct shape theory for arbitrary compact Hausdorff spaces. In the 1973 paper [278] Mardesic gave a short categorical definition of shape theory. The objects of the shape category Sh-Top are topological spaces, and morphisms \(F\colon X\to Y\) are maps which take each homotopy class \(\boldsymbol{\psi}\colon Y\to P\), where \(P\) is an ANR, to a homotopy class \(\boldsymbol{\varphi}\colon X\to P\) in such a way that, given any ANR \(P^{\prime}\) and any homotopy classes \(\boldsymbol{\psi^{\prime}}\colon Y\to P^{\prime}\) (taken to \(\boldsymbol{\varphi^{\prime}}\colon X\to P^{\prime}\)) and \(\mathbf{q}\colon P^{\prime}\to P\) satisfying the relation \(\mathbf{q}\boldsymbol{\psi^{\prime}}=\boldsymbol{\psi}\), we have \(\mathbf{q}\boldsymbol{\varphi^{\prime}}=\boldsymbol{\varphi}\). The shape functor \(S\colon\text{H-Top}\to\text{Sh-Top}\) satisfies the following two conditions: (i) \(S(X)=X\) for any topological space \(X\) and (ii) for any shape morphism \(F\colon X\to Q\) to an ANR \(Q\), there exists a unique homotopy class \(\mathbf{f}\colon X\to Q\) such that \(S(\mathbf{f})=F\). These conditions have the universality property, and it is natural to call them the _axioms of shape theory_. The first axiomatics of shape theory for the class of all compact metrizable spaces was proposed by Holsztynski [211]. In the general case, an axiomatic characterization of shape theory was given by Mardesic [278]. For compact metrizable spaces, all methods for constructing shape theory are equivalent to Borsuk's.
However, in the class of metrizable spaces, the shape theories of Borsuk [80], Fox [168], and Bauer [62] differ from each other. Thus, there exist different shape theories (likewise, in algebraic topology, there are different homology and cohomology theories). The shape category is weaker than the homotopy category. The former is not a generalization of the latter; it is rather a natural correction, or extension, of the homotopy category, because the shape functor \(S\colon\mathrm{H}\mathrm{-Top}\to\mathrm{Sh}\mathrm{-Top}\) restricts to an isomorphism on the subcategory \(\mathrm{H}\mathrm{-CW}\) of all spaces with the homotopy type of an ANR. Thus, for the class of spaces having the homotopy type of a polyhedron, which includes all CW-complexes and ANRs, shape theory coincides with homotopy theory. The Warsaw circle and the circle \(S^{1}\) are examples of compact metrizable spaces which have different homotopy types but are shape equivalent. In essence, shape theory is spectral homotopy theory, because it is based on the idea of replacing the singular by the approximative; in this respect, it is closely related to Aleksandrov-Cech spectral homology. Shape theory is intended for application to spaces with bad local structure, which arise increasingly often in very diverse areas of mathematics; they also result from applying many topological constructions, such as bundles, cell-like maps, fixed point sets, attractors of dynamical systems, spectra of linear operators, remainders of compactifications, boundaries of groups, etc. Ideas and problems closely related to shape theory had been known long before the publication of Borsuk's first papers [74] and [75], in which the shape category for compact metrizable spaces was constructed. As early as 1895 Poincare laid the foundations of algebraic topology and introduced the important notions of a chain, a cycle, a boundary, the Betti numbers, homologous cycles, etc. in his celebrated treatise _Analysis Situs_ [326]. Poincare also conjectured (but did not prove) that the Betti numbers are topological invariants. The first rigorous proof of this fact was given by Alexander in 1915 [39], which put Poincare's intuitive ideas on solid mathematical grounds. In 1922 Veblen [387] proved the topological invariance of singular homology. The application of singular homology is based on the consideration of a family of continuous maps from a polyhedron \(P\) to a topological space \(X\). Homotopy theory is based on the same idea, because each homotopy group \(\pi_{n}(X,*)\) is defined by using continuous maps from the \(n\)-sphere \(S^{n}\) to a pointed space \((X,*)\). However, this idea does not work for spaces with bad local structure. This is why shape theory is based on the dual idea of considering families of continuous maps from a topological space \(X\) to a polyhedron \(P\). This approach was first applied by Aleksandrov to define inverse systems of spaces [43], [44] and introduce the notion of the nerve of a cover of a topological space [45]. Thus, the introduction of Cech homology in various forms by Aleksandrov [43], [44], Vietoris [391], and Cech [97] can be justly regarded as the rudiment of shape theory. Shape theory is related to many fields of topology, and its ideas and results turn out to be important and useful for these fields. Because of space limitations, we could not cover all interesting problems and remarkable results of shape theory.
Nevertheless, we have tried to touch upon the most important points in the development of shape theory during the past five decades. The literature on shape theory is fairly extensive (see the bibliography at the end of this paper); we mention Borsuk's monograph [10], the books [289] by Mardesic and Segal and [148] by Dydak and Segal, and survey papers [24] by Smirnov, [85] by Borsuk and Dydak, and [271] and [283] by Mardesic.

## 2. Basic Constructions of Shape Theory

### The Shape Category of Compact Metrizable Spaces (Borsuk's Method)

The main idea of Borsuk's construction of the shape category of compact metrizable spaces is the replacement of the class of continuous maps by a larger class of morphisms and the introduction of the notion of a homotopy between such morphisms. Let \(X\) and \(Y\) be compact metrizable spaces contained in ARs \(M\) and \(N\), respectively. **Definition 1**.: A sequence of maps \(f_{n}\colon M\to N\), \(n\in\mathbb{N}\), is called a _fundamental sequence from \(X\) to \(Y\)_ and denoted by \((f_{n})_{MN}\colon X\to Y\) if, for any neighborhood \(V\) of the compact set \(Y\) in \(N\), there exists a neighborhood \(U\) of the compact set \(X\) in \(M\) and a positive integer \(n_{V}\in\mathbb{N}\) such that, first, \(f_{n}(U)\subset V\) for all \(n\geqslant n_{V}\) and, secondly, \[f_{n}|U\simeq f_{m}|U\] in \(V\) for all \(n,m\geqslant n_{V}\). The fundamental sequence \((1_{n})_{MM}\colon X\to X\), where \(1_{n}=1_{M}\colon M\to M\) is the identity map for all \(n\in\mathbb{N}\), is called the _identity fundamental sequence_. The composition of fundamental sequences \((f_{n})_{MN}\colon X\to Y\) and \((g_{n})_{NP}\colon Y\to Z\), where \(Z\) is a compact metrizable space contained in an AR \(P\), is defined by \[(g_{n})(f_{n})=(g_{n}f_{n}).\] Clearly, \((g_{n}f_{n})_{MP}\colon X\to Z\) is a fundamental sequence, and the composition operation is associative. The notion of homotopy naturally extends to fundamental sequences. **Definition 2**.: Two fundamental sequences \((f_{n})_{MN}\colon X\to Y\) and \((f_{n}^{\prime})_{MN}\colon X\to Y\) are said to be _homotopic_ if, for any neighborhood \(V\) of the compact set \(Y\) in \(N\), there exists a neighborhood \(U\) of the compact set \(X\) in \(M\) and a positive integer \(n_{V}\in\mathbb{N}\) such that \[f_{n}|U\simeq f_{n}^{\prime}|U\] in \(V\) for all \(n\geqslant n_{V}\). In this case, we write \((f_{n})\simeq(f_{n}^{\prime})\). This relation is reflexive, symmetric, and transitive. The equivalence class of a fundamental sequence \((f_{n})_{MN}\colon X\to Y\) is called a _fundamental class_ and denoted by \([(f_{n})]\). It is easy to see that if \((f_{n})\simeq(f_{n}^{\prime})\) and \((g_{n})\simeq(g_{n}^{\prime})\), then \((g_{n}f_{n})\simeq(g_{n}^{\prime}f_{n}^{\prime})\). Therefore, we can define the composition of fundamental classes \([(f_{n})]\) and \([(g_{n})]\) as \[[(g_{n})][(f_{n})]=[(g_{n})(f_{n})].\] Obviously, this composition operation is associative as well, and \[[(f_{n})][(1_{n})]=[(1_{n})][(f_{n})]=[(f_{n})].\] Thus, we obtain the so-called _fundamental category_, whose objects are all pairs \((X,M)\), where \(M\) is an AR and \(X\) is a compact metrizable subspace of \(M\), and whose morphisms are classes of fundamental sequences. Isomorphic objects in this category are said to be _fundamentally equivalent_.
In other words, pairs \((X,M)\) and \((Y,N)\) are fundamentally equivalent if there exist fundamental sequences \((f_{n})_{MN}\colon X\to Y\) and \((g_{n})_{NM}\colon Y\to X\) such that \([(g_{n})][(f_{n})]=[(1_{n})_{MM}]\) and \([(f_{n})][(g_{n})]=[(1_{n})_{NN}]\). We say that a fundamental sequence \((f_{n})_{MN}\colon X\to Y\) is _generated by a map_\(f\colon X\to Y\) if \(f_{n}|X=f\) for all \(n\in\mathbb{N}\). Clearly, any map \(f\colon X\to Y\) generates a fundamental sequence \((f_{n})_{MN}\colon X\to Y\). Indeed, since \(N\) is an AR, it follows that there exists an extension \(\tilde{f}\colon M\to N\) of the map \(f\colon X\to Y\). Setting \(f_{n}=\tilde{f}\) for each \(n=1,2,\dots\), we obtain a fundamental sequence generated by \(f\). The following theorem shows that the fundamental class of a fundamental sequence \((f_{n})_{MN}\colon X\to Y\) generated by a map \(f\colon X\to Y\) depends only on the homotopy class of this map. **Theorem 1**.: Suppose that fundamental sequences \((f_{n})_{MN}\colon X\to Y\) and \((f_{n}^{\prime})_{MN}\colon X\to Y\) are generated by maps \(f\colon X\to Y\) and \(f^{\prime}\colon X\to Y\), respectively, and \(f\simeq f^{\prime}\). Then \((f_{n})\simeq(f_{n}^{\prime})\). Proof.: Consider any neighborhood \(V\) of the compact set \(Y\) in \(N\). By Definition 1 there exists a neighborhood \(U_{1}\) of the compact set \(X\) in \(M\) and a positive integer \(n_{V}^{\prime}\in\mathbb{N}\) such that \(f_{n}|U_{1}\simeq f_{m}|U_{1}\) in \(V\) for all \(n,m\geqslant n_{V}^{\prime}\). Similarly, there exists a neighborhood \(U_{2}\) of the compact set \(X\) in \(M\) and a positive integer \(n^{\prime\prime}_{V}\in\mathbb{N}\) such that \(f^{\prime}_{n}|U_{2}\simeq f^{\prime}_{m}|U_{2}\) in \(V\) for all \(n,m\geqslant n^{\prime\prime}_{V}\). We set \(\widetilde{U}=U_{1}\cap U_{2}\) and \(n_{V}=\max\{n^{\prime}_{V},n^{\prime\prime}_{V}\}\). Clearly, \[f_{n}|\widetilde{U}\simeq f_{m}|\widetilde{U}\quad\text{and}\quad f^{\prime}_{ n}|\widetilde{U}\simeq f^{\prime}_{m}|\widetilde{U} \tag{1}\] in \(V\) for all \(n,m\geqslant n_{V}\). Now consider the maps \(f_{n_{V}},f^{\prime}_{n_{V}}\colon\widetilde{U}\to V\). By the assumption of the theorem, we have \(f_{n_{V}}|X=f\), \(f^{\prime}_{n_{V}}|X=f^{\prime}\), and \(f\simeq f^{\prime}\). Let \(F\colon X\times I\to Y\) be a homotopy between \(f\) and \(f^{\prime}\). Since \(V\) is an ANR, it follows that there exists a neighborhood \(U\) of \(X\) in \(M\) such that \(U\subset\widetilde{U}\) and a homotopy \(\widetilde{F}\colon U\times I\to V\) between \(f_{n_{V}}|U\) and \(f^{\prime}_{n_{V}}|U\) such that \(\widetilde{F}|X\times I=F\) (see [289], Theorem 8, p. 40). Thus, \[f_{n_{V}}|U\simeq f^{\prime}_{n_{V}}|U. \tag{2}\] It remains to note that \[f_{n}|U\simeq f^{\prime}_{n}|U\] in \(V\) for all \(n\geqslant n_{V}\); this follows from (1) and (2). **Corollary 1**.: Let \((f_{n})_{MN}\colon X\to Y\) and \((f^{\prime}_{n})_{MN}\colon X\to Y\) be fundamental sequences generated by a map \(f\colon X\to Y\). Then \((f_{n})\simeq(f^{\prime}_{n})\). This corollary implies, in particular, that all fundamental sequences \((i_{n})_{MM}\colon X\to X\) generated by the map \(1_{X}\colon X\to X\) are homotopic to the identity fundamental sequence \((1_{n})_{MM}\colon X\to X\). The following important theorem shows that the fundamental equivalence relation is absolute in the sense that it does not depend on the choice of the ARs in which metrizable compact spaces are embedded. 
**Theorem 2**.: Let \(X\subset M\cap M^{\prime}\). Then the pairs \((X,M)\) and \((X,M^{\prime})\) are fundamentally equivalent. Proof.: Consider fundamental sequences \((i_{n})_{MM^{\prime}}\colon X\to X\) and \((i_{n})_{M^{\prime}M}\colon X\to X\) generated by the identity map \(1_{X}\colon X\to X\). Obviously, the compositions \((i_{n})_{M^{\prime}M}(i_{n})_{MM^{\prime}}\) and \((i_{n})_{MM^{\prime}}(i_{n})_{M^{\prime}M}\) are generated by the identity map \(1_{X}\colon X\to X\) as well; hence, according to Corollary 1, they are homotopic to the identity fundamental sequences \((1_{n})_{MM}\) and \((1_{n})_{M^{\prime}M^{\prime}}\), respectively. Thus, \([(i_{n})_{M^{\prime}M}][(i_{n})_{MM^{\prime}}]=[(1_{n})_{MM}]\) and \([(i_{n})_{MM^{\prime}}][(i_{n})_{M^{\prime}M}]=[(1_{n})_{M^{\prime}M^{\prime}}]\), i.e., \((X,M)\) and \((X,M^{\prime})\) are fundamentally equivalent. **Remark 1**.: Since each metrizable compact space \(X\) is homeomorphic to a compact subset of the Hilbert cube \(Q\), in constructing the fundamental category, it suffices to consider pairs of the form \((X,Q)\). The category whose objects are compact metrizable spaces and whose morphisms are classes of fundamental sequences between them is called the _shape category_ of compact metrizable spaces. We denote it by Sh(CM). Since the fundamental equivalence relation is an equivalence relation, it follows that the class of all compact metrizable spaces splits into pairwise disjoint classes of spaces, which are called _shapes_. The shape containing a space \(X\) is called the _shape of the space_ \(X\) and denoted by \(\operatorname{sh}(X)\). We say that two compact spaces \(X\) and \(Y\) _are shape equivalent_, or _have the same shape_, and write \(\operatorname{sh}(X)=\operatorname{sh}(Y)\), if they are fundamentally equivalent.

### The Spectral Method for Constructing Shape Theory (the Mardesic-Morita Method [278], [308])

To define the shape category of arbitrary topological spaces, we need the pro-homotopy category pro-H-Top, which was introduced by Grothendieck in [196]. The objects of this category are inverse systems \(\mathbf{X}=\{X_{\alpha},\mathbf{p}_{\alpha\alpha^{\prime}},A\}\) in the homotopy category H-Top of topological spaces. The morphisms of the category pro-H-Top are defined in two steps. First, a morphism \((\mathbf{f}_{\beta},\varphi)\colon\mathbf{X}\to\mathbf{Y}=\{Y_{\beta},\mathbf{q}_{\beta\beta^{\prime}},B\}\) between inverse systems is defined as a pair consisting of a map \(\varphi\colon B\to A\) and homotopy classes \(\mathbf{f}_{\beta}\colon X_{\varphi(\beta)}\to Y_{\beta}\), \(\beta\in B\), satisfying the following condition: (*) for any \(\beta,\beta^{\prime}\in B\), \(\beta^{\prime}\geqslant\beta\), there exists an index \(\alpha\geqslant\varphi(\beta),\varphi(\beta^{\prime})\) such that \(\mathbf{f}_{\beta}\mathbf{p}_{\varphi(\beta)\alpha}=\mathbf{q}_{\beta\beta^{\prime}}\mathbf{f}_{\beta^{\prime}}\mathbf{p}_{\varphi(\beta^{\prime})\alpha}\), i.e., the corresponding diagram is commutative. Then, the notion of equivalent morphisms is introduced.
Morphisms \((\mathbf{f}_{\beta},\varphi)\) and \((\mathbf{g}_{\beta},\psi)\) from an inverse system \(\mathbf{X}=\{X_{\alpha},\mathbf{p}_{\alpha\alpha^{\prime}},A\}\) to an inverse system \(\mathbf{Y}=\{Y_{\beta},\mathbf{q}_{\beta\beta^{\prime}},B\}\) are said to be _equivalent_ if the following condition holds: (**) for any \(\beta\in B\), there exists an index \(\alpha\geqslant\varphi(\beta),\psi(\beta)\) such that \(\mathbf{f}_{\beta}\mathbf{p}_{\varphi(\beta)\alpha}=\mathbf{g}_{\beta}\mathbf{p}_{\psi(\beta)\alpha}\), i.e., the corresponding diagram is commutative. This relation is indeed an equivalence relation. Thus, the set of all morphisms from an inverse system \(\mathbf{X}=\{X_{\alpha},\mathbf{p}_{\alpha\alpha^{\prime}},A\}\) to an inverse system \(\mathbf{Y}=\{Y_{\beta},\mathbf{q}_{\beta\beta^{\prime}},B\}\) splits into equivalence classes \([(\mathbf{f}_{\beta},\varphi)]\); it is these classes which are morphisms \(\mathbf{f}\colon\mathbf{X}\to\mathbf{Y}\) of the pro-homotopy category pro-H-Top. The composition of two morphisms \(\mathbf{f}\colon\mathbf{X}\to\mathbf{Y}\) and \(\mathbf{g}\colon\mathbf{Y}\to\mathbf{Z}=\{Z_{\gamma},\mathbf{r}_{\gamma\gamma^{\prime}},\Gamma\}\) is defined by using their representatives \(\mathbf{f}=[(\mathbf{f}_{\beta},\varphi)]\) and \(\mathbf{g}=[(\mathbf{g}_{\gamma},\psi)]\) as \(\mathbf{g}\circ\mathbf{f}=[(\mathbf{g}_{\gamma}\circ\mathbf{f}_{\psi(\gamma)},\varphi\circ\psi)]\). **Remark 2**.: The morphisms between inverse systems \(\mathbf{X}=\{X_{\alpha},\mathbf{p}_{\alpha\alpha^{\prime}},A\}\) and \(\mathbf{Y}=\{Y_{\beta},\mathbf{q}_{\beta\beta^{\prime}},B\}\) in the pro-homotopy category pro-H-Top are described by Grothendieck's formula [196] \[\operatorname{Mor}(\mathbf{X},\mathbf{Y})=\varprojlim_{\beta}\varinjlim_{\alpha}\,[X_{\alpha},Y_{\beta}],\] where the \([X_{\alpha},Y_{\beta}]\) are homotopy classes of maps. In constructing shape theory, of special interest is the pro-homotopy category pro-H-CW of inverse systems in the full subcategory H-CW of the category H-Top. The following well-known result gives an important characterization of spaces in the category H-CW (see [289], Chap. I, Sec. 4.1, Theorem 1). **Lemma 1**.: For a topological space \(X\), the following conditions are equivalent: (a) \(X\) has the homotopy type of a CW-complex; (b) \(X\) has the homotopy type of a simplicial complex with the metric topology; (c) \(X\) has the homotopy type of an ANR (in the class of metrizable spaces). Taking into account this lemma, we can say that H-CW is the homotopy category of spaces having the homotopy type of an ANR. In what follows, we shall adhere to this approach, because in shape theory only properties of ANRs are used; we refer to inverse systems in the category pro-H-CW as _ANR-systems_. An important role in the construction of shape theory for the class of all topological spaces is played by the notion of an associated ANR-system, which was introduced by Morita in [308].
**Definition 3** (Morita [308]).: An inverse ANR-system \(\mathbf{X}=\{X_{\alpha},\mathbf{p}_{\alpha\alpha^{\prime}},A\}\) is said to be _associated with a topological space \(X\)_ if there exist homotopy classes \(\mathbf{p}_{\alpha}\colon X\to X_{\alpha}\), \(\alpha\in A\), satisfying the following conditions: (i) \(\mathbf{p}_{\alpha}=\mathbf{p}_{\alpha\alpha^{\prime}}\mathbf{p}_{\alpha^{\prime}}\) for all \(\alpha\leqslant\alpha^{\prime}\); (ii) for any ANR-space \(P\) and any homotopy class \(\mathbf{f}\colon X\to P\), there exists an index \(\alpha\in A\) and a homotopy class \(\mathbf{h}_{\alpha}\colon X_{\alpha}\to P\) such that \(\mathbf{h}_{\alpha}\mathbf{p}_{\alpha}=\mathbf{f}\); (iii) if \(\boldsymbol{\varphi}\mathbf{p}_{\alpha}=\boldsymbol{\psi}\mathbf{p}_{\alpha}\) for two homotopy classes \(\boldsymbol{\varphi},\boldsymbol{\psi}\colon X_{\alpha}\to P\), then there exists an index \(\alpha^{\prime}\geqslant\alpha\) such that \(\boldsymbol{\varphi}\mathbf{p}_{\alpha\alpha^{\prime}}=\boldsymbol{\psi}\mathbf{p}_{\alpha\alpha^{\prime}}\). **Theorem 3** (Morita [308]).: For any topological space \(X\), there exists an associated inverse ANR-system. Proof.: Let \(\mathcal{U}_{\alpha}\), \(\alpha\in A\), be all normal locally finite open covers of the space \(X\). If a cover \(\mathcal{U}_{\alpha^{\prime}}\) is a refinement of \(\mathcal{U}_{\alpha}\), then we write \(\alpha^{\prime}>\alpha\). Let \(X_{\alpha}\) be the nerve of \(\mathcal{U}_{\alpha}\); this is a simplicial complex with the weak topology. For each \(\alpha\in A\), consider a continuous map \(p_{\alpha}\colon X\to X_{\alpha}\) satisfying the condition \[p_{\alpha}^{-1}(St(u,X_{\alpha}))\subset U,\] where \(u\) is the vertex of \(X_{\alpha}\) corresponding to the element \(U\) of the cover \(\mathcal{U}_{\alpha}\). Such a map is said to be _canonical_. All canonical maps are homotopic to each other. If \(\alpha^{\prime}>\alpha\), then there exists a simplicial map \(p_{\alpha\alpha^{\prime}}\colon X_{\alpha^{\prime}}\to X_{\alpha}\) such that \(p_{\alpha\alpha^{\prime}}(v)=u\) implies \(V\subset U\), where \(u\) and \(v\) are the vertices of the simplicial complexes \(X_{\alpha}\) and \(X_{\alpha^{\prime}}\) corresponding to the elements \(U\) and \(V\) of the covers \(\mathcal{U}_{\alpha}\) and \(\mathcal{U}_{\alpha^{\prime}}\), respectively. The map \(p_{\alpha\alpha^{\prime}}\) is called a _canonical projection_. Any two canonical projections are homotopic to each other. Moreover, \[\mathbf{p}_{\alpha\alpha^{\prime}}\mathbf{p}_{\alpha^{\prime}}=\mathbf{p}_{\alpha}\] for all \(\alpha,\alpha^{\prime}\in A\), \(\alpha^{\prime}>\alpha\). The inverse ANR-system \(\{X_{\alpha},\mathbf{p}_{\alpha\alpha^{\prime}},A\}\) thus obtained is associated with the space \(X\). Inverse ANR-systems associated with different spaces can be constructed by different methods. For example, any compact Hausdorff space \(X\) is the limit of an inverse system \(\{X_{\alpha},p_{\alpha\alpha^{\prime}},A\}\) consisting of compact ANRs \(X_{\alpha}\): \(X=\varprojlim\{X_{\alpha},p_{\alpha\alpha^{\prime}},A\}\). It turns out that the inverse ANR-system \(\{X_{\alpha},\mathbf{p}_{\alpha\alpha^{\prime}},A\}\) is associated with \(X\). However, importantly, all inverse ANR-systems associated with the same space \(X\) are equivalent in the category pro-H-CW. Thus, each topological space \(X\) is assigned an inverse ANR-system \(S(X)\) in the category pro-H-CW.
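As a concrete aside, the nerve construction used in the proof of Theorem 3 is easy to illustrate combinatorially for a finite cover: a set of cover elements spans a simplex of the nerve exactly when they have a common point. The sketch below is a toy example under an assumption of our own choosing — a discretized circle covered by three overlapping arcs — so that the resulting nerve is the boundary of a 2-simplex, which has the homotopy type of \(S^{1}\).

```python
from itertools import combinations

def nerve(cover):
    """Nerve of a finite cover: vertices are cover elements (by index), and a
    tuple of indices spans a simplex iff the corresponding sets intersect."""
    sets = [frozenset(U) for U in cover]
    simplices = []
    for k in range(1, len(sets) + 1):
        for idx in combinations(range(len(sets)), k):
            if frozenset.intersection(*(sets[i] for i in idx)):
                simplices.append(idx)
    return simplices

# Three arcs covering a discretized circle {0,...,11}: consecutive overlaps
# only, so the nerve is a hollow triangle -- homotopy equivalent to S^1.
arcs = [{0, 1, 2, 3, 4}, {4, 5, 6, 7, 8}, {8, 9, 10, 11, 0}]
print(nerve(arcs))   # [(0,), (1,), (2,), (0, 1), (0, 2), (1, 2)]
```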
Moreover, this assignment can be uniquely extended in a natural way to morphisms of the category H-Top, i.e., each homotopy class \(\mathbf{f}\colon X\to Y\) can be assigned a morphism \(S(\mathbf{f})\colon S(X)\to S(Y)\) of the category pro-H-CW. Thereby, we obtain a functor S: H-Top \(\rightarrow\) pro-H-CW, which is called the _shape functor_. The shape functor \(S\) makes it possible to naturally define the _shape category_ Sh-Top for the class of all topological spaces. The objects of the category Sh-Top are those of the category H-Top (i.e., these are topological spaces), and the morphisms are those of the category pro-H-CW: \(\operatorname{Sh}(X,Y)=\operatorname{Mor}(S(X),S(Y))\). The shape classification of spaces is weaker than the homotopy classification, but on the class H-CW, both classifications coincide. In the class of all compact metrizable spaces, Mardesic-Morita shape theory coincides with Borsuk's. Shape theory can be constructed without employing associated inverse systems as follows (see Mardesic [278]). To each topological space \(X\) we assign a category \(W^{X}\) whose objects are homotopy classes \(\mathbf{f}\colon X\to P\), \(P\in\mathrm{H}\)-CW, and morphisms \(\mathbf{u}\colon\mathbf{f}\to\mathbf{f}^{\prime}\), where \(\mathbf{f}^{\prime}\colon X\to P^{\prime}\), \(P^{\prime}\in\mathrm{H}\)-CW, are homotopy classes \(\mathbf{u}\colon P\to P^{\prime}\) such that \(\mathbf{uf}=\mathbf{f}^{\prime}\). Note that the identity morphism \(1_{\mathbf{f}}\colon\mathbf{f}\to\mathbf{f}\) is the homotopy class \(\mathbf{1}_{P}\colon P\to P\), and the compositions of morphisms in the category \(W^{X}\) are the compositions of the corresponding homotopy classes in the category \(\mathrm{H}\)-CW. Now the shape morphisms \(F\colon X\to Y\) are defined as the covariant functors \(F\colon W^{Y}\to W^{X}\) satisfying the following two conditions: (i) if \(\mathbf{g}\in W^{Y}\), \(\mathbf{g}\colon Y\to P\), then \(F(\mathbf{g})=\mathbf{f}\in W^{X}\), where \(\mathbf{f}\colon X\to P\); (ii) if \(\mathbf{u}\colon\mathbf{g}\to\mathbf{g}^{\prime}\) is the morphism determined by a homotopy class \(\mathbf{u}\colon P\to P^{\prime}\), then the morphism \(F(\mathbf{u})\colon F(\mathbf{g})\to F(\mathbf{g}^{\prime})\) is determined by the same homotopy class \(\mathbf{u}\colon P\to P^{\prime}\). By using this definition of the shape category, it is easy to prove the following criterion for a continuous map to be a shape equivalence. **Theorem 4**.: A continuous map \(f\colon X\to Y\) is a shape equivalence if and only if, for any \(\mathrm{ANR}\) \(P\), the induced map \(f^{*}\colon[Y,P]\to[X,P]\) is a bijection. Here \([Y,P]\) denotes the set of homotopy classes \(\mathbf{g}\colon Y\to P\) and \(f^{*}\) is the map assigning the homotopy class \(f^{*}(\mathbf{g})=\mathbf{gf}\in[X,P]\) to each \(\mathbf{g}\colon Y\to P\).

### Equivariant Shape Theory

There are many ways to construct a shape category for \(G\)-spaces (continuous transformation groups with a fixed action of a group \(G\)). Smirnov [25], [26] constructed shape theory for the categories of metrizable \(G\)-spaces and compact \(G\)-spaces in the case of a compact acting group \(G\) by the Borsuk-Fox method [10], [168]. Equivariant shape theory for arbitrary \(G\)-spaces in the case of a finite group \(G\) was constructed independently by Pop [330] and Matumoto [293] and in the case of a compact group \(G\), by Antonyan and Mardesic [50] and Gevorgyan [186].
Cerin [99] constructed the shape category for arbitrary \(G\)-spaces in the case of any acting group \(G\) by using \(G\)-homotopy classes of families of set-valued maps. In [186] equivariant shape theory was constructed by a method based on the application of all invariant continuous pseudometrics on a given \(G\)-space.

Let \(X\) be any \(G\)-space, and let \(\mu\) be an invariant continuous pseudometric on \(X\). Consider the equivalence relation on \(X\) defined by setting \(x\sim x^{\prime}\) if and only if \(\mu(x,x^{\prime})=0\). We denote the corresponding quotient space \(X|_{\sim}\) by \(X_{\mu}\) and the equivalence class of an element \(x\in X\) by \([x]_{\mu}\). The quotient space \(X_{\mu}\) is a \(G\)-space with action \(g[x]_{\mu}=[gx]_{\mu}\), and \(\rho([x]_{\mu},[x^{\prime}]_{\mu})=\mu(x,x^{\prime})\) is an invariant metric on \(X_{\mu}\). Note also that the quotient map \(p_{\mu}\colon X\to X_{\mu}\) defined by \(p_{\mu}(x)=[x]_{\mu}\) is continuous and equivariant. Thus, the following assertion is valid.

**Proposition 1**.: The quotient space \(X_{\mu}\) is an invariant metrizable \(G\)-space, and the quotient map \(p_{\mu}\colon X\to X_{\mu}\) is an equivariant continuous map.

The following theorem makes it possible to construct equivariant shape theory (see Gevorgyan's paper [186]) for the class of all \(G\)-spaces in the case of a compact group \(G\) by the Mardesic-Morita method of inverse systems.

**Theorem 5**.: For any \(G\)-space \(X\), there exists an associated inverse system \(\{X_{\alpha},\mathbf{p}_{\alpha\alpha^{\prime}},A\}\) in the equivariant homotopy category \(\mathrm{H}\)-\(\mathrm{G}\)-\(\mathrm{ANR}\).

Proof.: Consider the family \(\mathcal{P}\) of all invariant continuous pseudometrics on the \(G\)-space \(X\). We introduce a natural order on this family by setting \(\mu\leqslant\mu^{\prime}\) if \(\mu(x,x^{\prime})\leqslant\mu^{\prime}(x,x^{\prime})\) for all \(x,x^{\prime}\in X\). Given any pseudometrics \(\mu,\mu^{\prime}\in\mathcal{P}\), \(\mu\leqslant\mu^{\prime}\), we define a map \(p_{\mu\mu^{\prime}}\colon X_{\mu^{\prime}}\to X_{\mu}\) by \(p_{\mu\mu^{\prime}}([x]_{\mu^{\prime}})=[x]_{\mu}\), where \([x]_{\mu^{\prime}}\in X_{\mu^{\prime}}\) and \([x]_{\mu}\in X_{\mu}\). This map is well-defined, because it does not depend on the choice of a representative in the class \([x]_{\mu^{\prime}}\). Indeed, let \([x]_{\mu^{\prime}}=[x^{\prime}]_{\mu^{\prime}}\), that is, \(\mu^{\prime}(x,x^{\prime})=0\). Then \(\mu(x,x^{\prime})\leqslant\mu^{\prime}(x,x^{\prime})=0\), whence \(\mu(x,x^{\prime})=0\). This means that \([x]_{\mu}=[x^{\prime}]_{\mu}\), that is, \(p_{\mu\mu^{\prime}}([x]_{\mu^{\prime}})=p_{\mu\mu^{\prime}}([x^{\prime}]_{\mu^{\prime}})\). It is easy to verify that the map \(p_{\mu\mu^{\prime}}\colon X_{\mu^{\prime}}\to X_{\mu}\) satisfies the conditions \(p_{\mu\mu^{\prime}}p_{\mu^{\prime}}=p_{\mu}\) and \(p_{\mu\mu^{\prime}}p_{\mu^{\prime}\mu^{\prime\prime}}=p_{\mu\mu^{\prime\prime}}\) for any \(\mu,\mu^{\prime},\mu^{\prime\prime}\in\mathcal{P}\), \(\mu\leqslant\mu^{\prime}\leqslant\mu^{\prime\prime}\). Thus, \(\{X_{\mu},p_{\mu\mu^{\prime}}\}\) is an inverse system of invariant metrizable \(G\)-spaces.
The \(G\)-space \(X_{\mu}\) is isometrically and equivariantly embedded in some normed \(G\)-space \(M(X_{\mu})\) as a closed subspace, and the equivariant map \(p_{\mu\mu^{\prime}}\colon X_{\mu^{\prime}}\to X_{\mu}\) extends to equivariant maps \(\bar{p}_{\mu\mu^{\prime}}\colon M(X_{\mu^{\prime}})\to M(X_{\mu})\), which satisfy the relation \(\bar{p}_{\mu\mu^{\prime}}\bar{p}_{\mu^{\prime}\mu^{\prime\prime}}=\bar{p}_{\mu\mu^{\prime\prime}}\) (see Gevorgyan's paper [182]). Thus, we obtain the inverse system \(\{M(X_{\mu}),\bar{p}_{\mu\mu^{\prime}}\}\) of normed \(G\)-spaces.

Let \(A\) be the set of all pairs \((\mu,U)\), where \(\mu\in\mathcal{P}\), and let \(U\) be an open invariant neighborhood of the metrizable \(G\)-space \(X_{\mu}\) in the normed \(G\)-space \(M(X_{\mu})\). On the set \(A\) we define a natural order by setting \(\alpha\leqslant\alpha^{\prime}\) for \(\alpha=(\mu,U)\) and \(\alpha^{\prime}=(\mu^{\prime},U^{\prime})\) if and only if \(\mu\leqslant\mu^{\prime}\) and \(\bar{p}_{\mu\mu^{\prime}}(U^{\prime})\subset U\). Now we set \(X_{\alpha}=U\) and define an equivariant map \(p_{\alpha\alpha^{\prime}}\colon X_{\alpha^{\prime}}\to X_{\alpha}\) by \(p_{\alpha\alpha^{\prime}}=\bar{p}_{\mu\mu^{\prime}}|_{U^{\prime}}\colon U^{\prime}\to U\). Note that if \(\mu=\mu^{\prime}\), then \(\bar{p}_{\mu\mu^{\prime}}=id\colon M(X_{\mu})\to M(X_{\mu})\); therefore, \(p_{\alpha\alpha^{\prime}}=i\colon U^{\prime}\hookrightarrow U\). Thus, we have constructed the inverse system \(\{X_{\alpha},\mathbf{p}_{\alpha\alpha^{\prime}},A\}\) in the category H-G-ANR. Let us prove that it is equivariantly associated with the \(G\)-space \(X\).

For each \(\alpha=(\mu,U)\in A\), we define an equivariant map \(p_{\alpha}\colon X\to X_{\alpha}=U\) by \(p_{\alpha}=ip_{\mu}\), where \(p_{\mu}\colon X\to X_{\mu}\) is the quotient map and \(i\colon X_{\mu}\to U\) is an embedding. It is easy to see that \(p_{\alpha}=p_{\alpha\alpha^{\prime}}p_{\alpha^{\prime}}\) if \(\alpha\leqslant\alpha^{\prime}\).

Now let \(f\colon X\to Q\) be a continuous equivariant map, where \(Q\) is any \(G\)-ANR. Consider an invariant metric \(\rho\) on the space \(Q\), whose existence follows from the compactness of the group \(G\). On \(X\) we define a continuous invariant pseudometric \(\mu\) by \(\mu(x,x^{\prime})=\rho(f(x),f(x^{\prime}))\). Note that the map \(\varphi\colon X_{\mu}\to Q\) given by \(\varphi([x]_{\mu})=f(x)\) is an equimorphism of the \(G\)-spaces \(X_{\mu}\) and \(f(X)\), and \(\varphi p_{\mu}=f\). Since \(X_{\mu}\) is a closed invariant subset of the normed \(G\)-space \(M(X_{\mu})\) and \(Q\) is a \(G\)-ANR, it follows that there exists an equivariant extension \(h\colon U\to Q\) of the map \(\varphi\colon X_{\mu}\to Q\), where \(U\) is an invariant neighborhood of \(X_{\mu}\) in \(M(X_{\mu})\). In other words, \(h|_{X_{\mu}}=\varphi\), or \(hi=\varphi\). Take any index \(\alpha=(\mu,U)\in A\). We have \(X_{\alpha}=U\), \(p_{\alpha}=ip_{\mu}\), and \(hp_{\alpha}=f\). Indeed, \(hp_{\alpha}=hip_{\mu}=\varphi p_{\mu}=f\).

Now suppose that \(h_{0},h_{1}\colon X_{\alpha}\to Q\) are equivariant maps to the G-ANR \(Q\) such that \(h_{0}p_{\alpha}\simeq_{G}h_{1}p_{\alpha}\). Since \(X_{\alpha}=U\) and \(p_{\alpha}=ip_{\mu}\), it follows that the equivariant maps \(h_{0}|_{X_{\mu}}\) and \(h_{1}|_{X_{\mu}}\) are equivariantly homotopic.
Hence there exists an invariant neighborhood \(V\subset U\) of the closed invariant subset \(X_{\mu}\) such that \(h_{0}|_{V}\) and \(h_{1}|_{V}\) are equivariantly homotopic as well. This means that \(h_{0}p_{\alpha\alpha^{\prime}}\simeq_{G}h_{1}p_{\alpha\alpha^{\prime}}\), where \(\alpha^{\prime}=(\mu,V)\). ## 3. The Shape Classification of Spaces The shape category, as any other category, generates a classification of all of its objects, that is, a shape classification of topological spaces. Of primary interest are classes of spaces in which the shape classification coincides with the homotopy classification. One of such classes is the class of ANRs. However, there are also other classes for which the shape classification coincides even with the topological one. An example is the class of all zero-dimensional spaces (see Godlewski's paper [195] and Mardesic and Segal's paper [288]). **Theorem 6**.: Zero-dimensional spaces \(X\) and \(Y\) have the same shape if and only if they are homeomorphic. **Corollary 2**.: There exist \(\aleph_{1}\) countable compact metrizable spaces with pairwise different shapes. Indeed, all countable compact metrizable sets are zero-dimensional, and Mazurkiewicz and Sierpinski [294] proved that there are \(\aleph_{1}\) different topological types of countable compact metrizable spaces. Corollary 2 leads to the conclusion that the number of different shapes of compact sets in \(\mathbf{R}^{1}\) is uncountable. Yet another class of spaces in which the shape classification coincides with the topological one is the class of all \(P\)-adic solenoids \(S_{P}\), where \(P=(p_{1},p_{2},\ldots)\) is a sequence of primes. Recall that a \(P\)-adic solenoid is defined as the limit of an inverse sequence \(\{X_{n},p_{nn+1},N\}\), where \(X_{n}=S^{1}\) and \(p_{nn+1}\) is a map of degree \(p_{n}\) for each \(n\in\mathbb{N}\). **Theorem 7** ([195], [288]).: Solenoids \(S_{P}\) and \(S_{Q}\) have the same shape if and only if they are homeomorphic. Godlewski [195] showed that the shape of any solenoid is completely determined by its first cohomology group, in much the same way that the shape of any plane continuum is determined by its first Betti number. Another interesting case in which shape theory gives nothing new is that of all compact connected Abelian groups. The shape morphisms between groups are in one-to-one correspondence with the continuous homomorphisms of these groups. Moreover, Keesling [233] proved the following theorem. **Theorem 8**.: Let \(X\) and \(Y\) be compact connected Abelian groups. Then \(\operatorname{sh}X=\operatorname{sh}Y\) if and only if \(X\) and \(Y\) are isomorphic (algebraically and topologically). It should be mentioned that this result turns out to be very useful for constructing various counterexamples in shape theory. The class of compact connected Abelian groups is substantially larger than the class of all solenoids. Moreover, the following theorem of Keesling is valid [230] (see also [149]). **Theorem 9**.: Let \(X\) be a \(T^{n}\)-like continuum, where \(T^{n}=S^{1}\times\ldots\times S^{1}\) is the \(n\)-torus. Then \(X\) has the shape of a compact connected Abelian topological group. However, if \(\Pi\) is the family of all compact connected Lie groups and \(X\) is a \(\Pi\)-like continuum, then the shape of \(X\) may be different from that of a compact connected topological group. The corresponding example was constructed in [230]. 
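Before passing to plane continua, we illustrate the role of the first cohomology group mentioned above with a concrete computation (a standard one, included for clarity rather than quoted from [195]). Čech cohomology is continuous on inverse limits of compact Hausdorff spaces, and a bonding map of degree \(p_{n}\) induces multiplication by \(p_{n}\) on \(H^{1}(S^{1})\cong\mathbb{Z}\); hence \[\check{H}^{1}(S_{P})\cong\varinjlim\Bigl(\mathbb{Z}\xrightarrow{\ p_{1}\ }\mathbb{Z}\xrightarrow{\ p_{2}\ }\mathbb{Z}\xrightarrow{\ p_{3}\ }\cdots\Bigr)\cong\Bigl\{\tfrac{m}{p_{1}p_{2}\cdots p_{n}}\colon m\in\mathbb{Z},\ n\geqslant 0\Bigr\}\subset\mathbb{Q}.\] For \(P=(2,2,2,\ldots)\) this is the group \(\mathbb{Z}[1/2]\) of dyadic rationals. Combining Godlewski's theorem with Theorem 7, two solenoids \(S_{P}\) and \(S_{Q}\) have the same shape, equivalently are homeomorphic, exactly when the corresponding subgroups of \(\mathbb{Q}\) are isomorphic.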
In the case of plane continua, shape depends only on the first Betti number of a continuum, i.e., on the number of domains into which the plane is separated by this continuum.

**Theorem 10** ([10]).: Two plane continua \(X\) and \(Y\) have the same shape if and only if their first Betti numbers coincide. In particular, \(\operatorname{sh}X\geqslant\operatorname{sh}Y\) if and only if the number of domains in \(\mathbf{R}^{2}\backslash X\) is greater than or equal to that of domains in \(\mathbf{R}^{2}\backslash Y\).

Therefore, for any two plane continua \(X\) and \(Y\), we have either \(\operatorname{sh}X=\operatorname{sh}Y\), \(\operatorname{sh}X<\operatorname{sh}Y\), or \(\operatorname{sh}X>\operatorname{sh}Y\). However, for continua in Euclidean 3-space \(\mathbf{R}^{3}\), the situation is different: there exist continua \(X,Y\subset\mathbf{R}^{3}\) such that \(\operatorname{sh}X\leqslant\operatorname{sh}Y\) and \(\operatorname{sh}X\geqslant\operatorname{sh}Y\) but \(\operatorname{sh}X\neq\operatorname{sh}Y\).

Theorem 10 implies the existence of countably many different shapes of plane continua. Representatives of these shapes are a point (the trivial shape), the wedge of \(n\) circles for every \(n\in\mathbb{N}\), and the wedge of infinitely many circles. This gives a complete shape classification of plane continua. It is easy to show that the family of all different shapes of compact sets in the plane has cardinality \(2^{\aleph_{0}}\). Godlewski [194] proved that in \(\mathbf{R}^{3}\) there exist \(2^{\aleph_{0}}\) continua with pairwise different shapes (see also [288]). In fact, these different shapes can be found among solenoids. Complete shape classifications of \(\Pi\)-like continua for various classes \(\Pi\) can be found in [288], [354], [355], [356], [201], [202], [204], [230], [149], and [403].

## 4. Complement Theorems in Shape Theory

Shape theory is a very useful tool for solving classical problems of infinite-dimensional and geometric topology. This has become quite clear after deep results of Chapman [105], [106], Geoghegan and Summerhill [180], Edwards [160], West [407], and other authors, which have demonstrated the effectiveness of shape theory methods in geometric topology. On the other hand, methods of infinite-dimensional topology are very useful for studying the shapes of metrizable compact sets. As far as we know, in homotopy theory, these methods were first applied by Borsuk [81]. He proved a theorem about the homotopy type of a quotient space of Euclidean space \(\mathbf{R}^{n}\) by using Klee's theorem [240] on extending homeomorphisms of compact sets in a Hilbert space to the entire Hilbert space. These methods were also applied by Henderson [209].

A special role in the application of methods of infinite-dimensional topology to shape theory is played by the so-called complement theorems, which answer the following question in various situations: When are the complements \(M\backslash X\) and \(M\backslash Y\) of compact sets \(X\) and \(Y\) embedded in an ambient space \(M\) in a special way homeomorphic? One of the first results of this kind was Borsuk's theorem [10] that if \(X\) and \(Y\) are plane continua, then the complements \(\mathbf{R}^{2}\backslash X\) and \(\mathbf{R}^{2}\backslash Y\) are homeomorphic if and only if \(\operatorname{sh}(X)=\operatorname{sh}(Y)\). Remarkable results in this direction were also obtained by Chapman [105], [106].
In essence, Chapman's theorems assert that, under certain constraints on the embedding of two compact spaces \(X\) and \(Y\) in the Hilbert cube \(Q\), these spaces have the same shape if and only if their complements are homeomorphic. Therefore, if \(X\) and \(Y\) are absolute neighborhood retracts, then they have the same homotopy type if and only if their complements are homeomorphic.

We represent the Hilbert cube \(Q\) in the form of the product \(\prod\limits_{n=1}^{\infty}I_{n}\), where \(I_{n}=[0,1]\). The set \(s=\prod\limits_{n=1}^{\infty}\mathring{I}_{n}\), where \(\mathring{I}_{n}=(0,1)\), is called the _pseudointerior_ of the Hilbert cube \(Q\). Recall that a compact subset \(X\) of the Hilbert cube \(Q\) is called a _\(Z\)-set_ if the identity map \(1_{Q}\) can be approximated arbitrarily closely by continuous maps from \(Q\) to \(Q\backslash X\), i.e., for any \(\varepsilon>0\), there exists a continuous map \(f\colon Q\to Q\backslash X\) such that \(d(f,1_{Q})<\varepsilon\). This notion was introduced by Anderson [46] and is one of the fundamental notions of the theory of \(Q\)-manifolds [104]. Note that any compact metrizable space can be embedded in the Hilbert cube as a \(Z\)-set. Compact sets lying in the pseudointerior \(s\) of the Hilbert cube \(Q\) form an important class of \(Z\)-sets. The following theorem shows that any \(Z\)-set in the Hilbert cube \(Q\) can be mapped to the pseudointerior \(s\) by a homeomorphism \(h\colon Q\to Q\).

**Theorem 11**.: Let \(X\subset Q\) be a \(Z\)-set. Then, for any \(\varepsilon>0\), there exists a homeomorphism \(h\colon Q\to Q\) such that \(h(X)\subset s\) and \(d(h,1_{Q})<\varepsilon\).

This theorem, which was proved by Anderson [46], is exceptionally important, because it reduces studying \(Z\)-sets to the theory of compact sets in the pseudointerior \(s\). Chapman [105] proved that the shape of any \(Z\)-set (in particular, of any compact set in \(s\)) depends only on the topological type of its complement. To be more precise, the following remarkable _complement theorem_ is valid (see [105]).

**Theorem 12**.: Let \(X\) and \(Y\) be any \(Z\)-sets in the Hilbert cube \(Q\). Then they have the same shape if and only if their complements \(Q\backslash X\) and \(Q\backslash Y\) are homeomorphic. If \(X\) and \(Y\) are absolute neighborhood retracts, then they have the same homotopy type if and only if their complements are homeomorphic.

To prove his theorem, Chapman applied deep results on \(Q\)-manifolds (see [46], [48], [49], and [415]). Note that a similar assertion for subsets of the Hilbert space \(l_{2}\) is false, because, according to one of Anderson's theorems [47], \(l_{2}\) is homeomorphic to \(l_{2}\backslash X\) for any compact set \(X\subset l_{2}\).

Chapman [106] also proved the first finite-dimensional analogue of Theorem 12 about complements. This theorem has attracted the attention of many experts in geometric topology, who proved new finite-dimensional complement theorems. We mention papers [180] by Geoghegan and Summerhill, [390] by Venema, [218] by Ivansic, Sher, and Venema, and [314] by Mrozik. In all these papers, it was assumed that compact metrizable spaces \(X\) and \(Y\) are appropriately embedded in \({\bf R}^{n}\) and satisfy certain dimension conditions, and it was proved that \(X\) and \(Y\) have the same shape if and only if their complements \({\bf R}^{n}\backslash X\) and \({\bf R}^{n}\backslash Y\) are homeomorphic.
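Before turning to these finite-dimensional theorems, it may help to verify the \(Z\)-set condition in the simplest case (an elementary check added for illustration; it is not part of Chapman's argument). Every face of the Hilbert cube is a \(Z\)-set: with the standard metric, \[Q=\prod_{n=1}^{\infty}[0,1],\qquad d(x,y)=\sum_{n=1}^{\infty}2^{-n}|x_{n}-y_{n}|,\qquad X=\{x\in Q\colon x_{1}=0\},\] the map \(f_{\varepsilon}\colon Q\to Q\backslash X\) given by \(f_{\varepsilon}(x)=(\max(x_{1},\varepsilon),x_{2},x_{3},\ldots)\) is continuous, avoids \(X\), and satisfies \(d(f_{\varepsilon},1_{Q})\leqslant\varepsilon/2\). Note that such a face is disjoint from the pseudointerior \(s\), so Theorem 11 is genuinely needed to move \(Z\)-sets into \(s\).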
**Theorem 13** (Chapman [106]).: Let \(X\) and \(Y\) be compact metrizable spaces of dimension \(\leqslant k\). Then the following assertions hold: (a) for any \(n\geqslant 2k+2\), there exist embeddings \(i\colon X\to{\bf R}^{n}\) and \(j\colon Y\to{\bf R}^{n}\) such that if \({\rm sh}(X)={\rm sh}(Y)\), then \({\bf R}^{n}\backslash i(X)\cong{\bf R}^{n}\backslash j(Y)\); (b) for any \(n\geqslant 3k+3\), there exist embeddings \(i\colon X\to{\bf R}^{n}\) and \(j\colon Y\to{\bf R}^{n}\) such that if \({\bf R}^{n}\backslash i(X)\cong{\bf R}^{n}\backslash j(Y)\), then \({\rm sh}(X)={\rm sh}(Y)\).

To state and prove their complement theorem, Geoghegan and Summerhill [180] introduced the notions of an \(\varepsilon\)-push and a strong \(Z_{k}\)-set. Let \(X\) be a closed subset of \({\bf R}^{n}\). A homeomorphism \(h\colon{\bf R}^{n}\to{\bf R}^{n}\) is called an \(\varepsilon\)_-push_ of the pair \(({\bf R}^{n},X)\) if there exists an isotopy \(H\colon{\bf R}^{n}\times I\to{\bf R}^{n}\) such that \(d(x,H_{t}(x))<\varepsilon\) for all \(x\in{\bf R}^{n}\) and \(t\in I\) (this isotopy is called an \(\varepsilon\)-isotopy), \(H_{0}=1\), \(H_{1}=h\), and \(H_{t}(x)=x\) for all \(t\in I\) and all \(x\) satisfying the condition \({\rm dist}(x,X)\geqslant\varepsilon\). A closed subset \(X\) of \({\bf R}^{n}\) is called a _strong \(Z_{k}\)-set_ \((k\geqslant 0)\) if, for each compact polyhedron \(P\) in \({\bf R}^{n}\) of dimension \(\dim P\leqslant k+1\) and any \(\varepsilon>0\), there exists an \(\varepsilon\)-push \(h\) of the pair \(({\bf R}^{n},X)\) such that \(P\cap h(X)=\emptyset\).

**Theorem 14** (Geoghegan and Summerhill [180]).: Let \(X\) and \(Y\) be compact metrizable spaces that are strong \(Z_{n-k-2}\)-sets in \({\bf R}^{n}\) \((k\geqslant 0,n\geqslant 2k+2)\). Then the following conditions are equivalent: (i) \({\rm sh}(X)={\rm sh}(Y)\); (ii) the pairs \(({\bf R}^{n}/X,\{X\})\) and \(({\bf R}^{n}/Y,\{Y\})\) are homeomorphic; (iii) the triples \(({\bf R}^{n}/X,({\bf R}^{n}/X)\backslash\{X\},\{X\})\) and \(({\bf R}^{n}/Y,({\bf R}^{n}/Y)\backslash\{Y\},\{Y\})\) have the same homotopy type; (iv) \({\bf R}^{n}\backslash X\cong{\bf R}^{n}\backslash Y\).

In relation to this theorem, it is important that _any metrizable compact set \(X\) of dimension \(\dim X\leqslant k\) can be embedded in \({\bf R}^{n}\) \((n\geqslant 2k+1)\) as a strong \(Z_{n-k-2}\)-set_. Note that the property of being a strong \(Z_{k}\)-set is not homotopy invariant. However, homotopy-invariant properties often imply the property of being a strong \(Z_{n-k-2}\)-set. This has enabled Geoghegan and Summerhill [180] to prove the following complement theorem, which is more convenient for applications.

**Theorem 15**.: Let \(X\) and \(Y\) be nonempty compact sets in \({\bf R}^{n}\) such that their complements \({\bf R}^{n}\backslash X\) and \({\bf R}^{n}\backslash Y\) are uniformly locally 1-connected and \(\max\{\dim X,\dim Y\}\leqslant k\), \(\max\{2k+2,5\}\leqslant n\). Then \({\rm sh}(X)={\rm sh}(Y)\) if and only if \({\bf R}^{n}\backslash X\cong{\bf R}^{n}\backslash Y\).

Recall that a metric space \(X\) is said to be _uniformly locally \(k\)-connected_ (written as \(X\in ULC^{k}\)) if, for any \(\varepsilon>0\), there exists a \(\delta>0\) such that any map \(f\colon S^{k}\to X\) satisfying the condition \({\rm diam}\,f(S^{k})<\delta\) is homotopic to a constant map in some set \(V\subset X\) of diameter \({\rm diam}\,V<\varepsilon\).
**Corollary 3**.: Let \(X\) and \(Y\) be closed \(k\)-dimensional submanifolds of \(\mathbf{R}^{n}\), where \(\max\{2k+2,5\}\leqslant n\). Then \(X\) is homotopy equivalent to \(Y\) if and only if \(\mathbf{R}^{n}\backslash X\cong\mathbf{R}^{n}\backslash Y\).

The condition \(ULC^{1}\) in Theorem 15 cannot be omitted, because the complement of Blankinship's wild arc [72] in \(\mathbf{R}^{n}\), \(n\geqslant 3\), is not simply connected. It is also important that the codimension be sufficiently large. Henderson [209] constructed an example of non-homotopy equivalent two-dimensional compact polyhedra with homeomorphic complements in \(\mathbf{R}^{3}\).

In [112] the so-called _small loops condition_ (SLC) was introduced. A compact space \(X\subset\mathbf{R}^{n}\) satisfies the SLC if, for any neighborhood \(U\) of \(X\) there exists a smaller neighborhood \(V\) (\(X\subset V\subset U\)) and a number \(\varepsilon>0\) such that any loop in \(V\backslash X\) of diameter \(<\varepsilon\) is homotopic to zero in \(U\backslash X\). This notion is a generalization of McMillan's _cellularity criterion_ (CC) [297], which characterizes cellular embeddings in manifolds. The CC is obtained from the SLC at \(\varepsilon=\infty\). Using results of [112], Hollingsworth and Rushing [210] were able to prove the following generalization of Theorem 15.

**Theorem 16**.: Let \(X,Y\subset\mathbf{R}^{n}\) be compact sets satisfying the SLC, and let \(\max\{\dim X,\)\(\dim Y\}\leqslant k\), where \(\max\{2k+2,5\}\leqslant n\). Then \(\operatorname{sh}(X)=\operatorname{sh}(Y)\) if and only if \(\mathbf{R}^{n}\backslash X\cong\mathbf{R}^{n}\backslash Y\).

This theorem has two interesting corollaries.

**Corollary 4**.: Let \(X,Y\subset\mathbf{R}^{n}\) be compact ANRs satisfying the SLC, and let \(\max\{\dim X,\)\(\dim Y\}\leqslant k\), where \(\max\{2k+2,5\}\leqslant n\). Then \(X\) and \(Y\) have the same homotopy type if and only if \(\mathbf{R}^{n}\backslash X\cong\mathbf{R}^{n}\backslash Y\).

**Corollary 5**.: Suppose that compact spaces \(X,Y\subset\mathbf{R}^{n}\) are homeomorphic and satisfy the SLC. If \(\dim X=\dim Y\leqslant k\) and \(\max\{2k+2,5\}\leqslant n\), then \(\mathbf{R}^{n}\backslash X\cong\mathbf{R}^{n}\backslash Y\).

Venema [390] has succeeded in replacing the dimension \(\dim\) in Theorem 16 by the shape dimension \(\operatorname{sd}\). For this purpose, he defined the _inessential loop condition_ (ILC). A compact subset \(X\) of a manifold \(M\) satisfies the ILC if, for any neighborhood \(U\) of \(X\) in \(M\), there exists a smaller neighborhood \(V\), \(X\subset V\subset U\), such that any loop in \(V\backslash X\) which is inessential (homotopic to zero) in \(V\) is also inessential in \(U\backslash X\). Note that the ILC implies the SLC. If \(X\subset M^{n}\) and \(\dim X\leqslant n-2\), then these two conditions are equivalent.

**Theorem 17** (Venema [390]).: Suppose that compact sets \(X,Y\subset\mathbf{R}^{n}\) satisfy the ILC, \(\max\{\operatorname{sd}X,\operatorname{sd}Y\}\leqslant k\), and \(\max\{2k+2,5\}\leqslant n\). Then \(\operatorname{sh}(X)=\operatorname{sh}(Y)\) if and only if \(\mathbf{R}^{n}\backslash X\cong\mathbf{R}^{n}\backslash Y\).

In the case where \(\dim X\leqslant n-3\) and \(Y\) is a finite polyhedron of dimension \(\leqslant k\) such that \(2k+2\leqslant n\), this theorem was proved in [112]. The most general complement theorem was proved by Ivansic, Sher, and Venema [218].
**Theorem 18** ([218]).: Suppose that compact sets \(X,Y\subset\mathbf{R}^{n}\) are shape \(r\)-connected and satisfy the ILC. Suppose also that \(\max\{\operatorname{sd}X,\operatorname{sd}Y\}\leqslant k\) and \(n\geqslant\max\{2k+2-r,5\}\). If \(n\geqslant k+3\), then \(\operatorname{sh}(X)=\operatorname{sh}(Y)\) implies \(\mathbf{R}^{n}\backslash X\cong\mathbf{R}^{n}\backslash Y\). The converse is true if \(n\geqslant k+4\). This theorem generalizes almost all preceding results. Complement theorems in various categories and in more general situations were proved by Mrozik [314]. ## 5. Movability and Other Shape Invariants The notion of movability for compact metrizable spaces was introduced by Borsuk [78]. **Definition 4**.: A compact metrizable space \(X\) embedded in an AR \(M\) is said to be _movable_ if, given any neighborhood \(U\) of \(X\), there exists a neighborhood \(U^{\prime}\subset U\) of \(X\) such that, for any neighborhood \(U^{\prime\prime}\subset U\) of \(X\), there exists a homotopy \(H\colon U^{\prime}\times I\to U\) satisfying the conditions \(H(x,0)=x\) and \(H(x,1)\in U^{\prime\prime}\) for all \(x\in U^{\prime}\). This definition does not depend on the choice of the AR \(M\) and the embedding of \(X\) in \(M\). For arbitrary topological spaces, the notion of movability was defined by Mardesic and Segal [287] in terms of ANR-systems. **Definition 5**.: An inverse ANR-system \(\mathbf{X}=\{X_{\alpha},\mathbf{p}_{\alpha\alpha^{\prime}},A\}\) is said to be _movable_ if the following condition holds: (M) given any \(\alpha\in A\), there is an \(\alpha^{\prime}\in A\), \(\alpha^{\prime}\geqslant\alpha\), such that, for any \(\alpha^{\prime\prime}\in A\), \(\alpha^{\prime\prime}\geqslant\alpha\), there exists a homotopy class \(\mathbf{r}^{\alpha^{\prime}\alpha^{\prime\prime}}\colon X_{\alpha^{\prime}} \to X_{\alpha^{\prime\prime}}\) satisfying the condition \(\mathbf{p}_{\alpha\alpha^{\prime}}=\mathbf{p}_{\alpha\alpha^{\prime\prime}} \circ\mathbf{r}^{\alpha^{\prime}\alpha^{\prime\prime}}\). An inverse ANR-system \(\mathbf{X}=\{X_{\alpha},\mathbf{p}_{\alpha\alpha^{\prime}},A\}\) is _uniformly movable_ if (UM) given any \(\alpha\in A\), there exists an \(\alpha^{\prime}\in A\), \(\alpha^{\prime}\geqslant\alpha\), and a morphism \(\mathbf{r}\colon X_{\alpha^{\prime}}\to\mathbf{X}\) in the category pro-H-CW such that \(\mathbf{p}_{\alpha}\circ\mathbf{r}=\mathbf{p}_{\alpha\alpha^{\prime}}\), where \(\mathbf{p}_{\alpha}\colon\mathbf{X}\to X_{\alpha}\) is the morphism in pro-H-CW generated by the identity homotopy class \(\mathbf{1}_{X_{\alpha}}\). A topological space \(X\) is said to be _(uniformly) movable_ if there exists an associated (uniformly) movable ANR-system \(\{X_{\alpha},\mathbf{p}_{\alpha\alpha^{\prime}},A\}\). If a space \(X\) is (uniformly) movable, then all ANR-systems associated with it are (uniformly) movable, i.e., the notion of (uniform) movability depends only on the space \(X\) itself. In the case of compact metrizable spaces, Definitions 4 and 5 are equivalent. The following theorem shows that the notion of a movable space can be defined without resorting to associated inverse systems. 
**Theorem 19** (Gevorgyan [14]).: A topological space \(X\) is movable if and only if the following condition holds: (*) given any homotopy class \(\mathbf{f}\colon X\to Q\), where \(Q\in\operatorname{ANR}\), there are homotopy classes \(\mathbf{f}^{\prime}\colon X\to Q^{\prime}\) and \(\boldsymbol{\eta}\colon Q^{\prime}\to Q\), where \(Q^{\prime}\in\operatorname{ANR}\) and \(\mathbf{f}=\boldsymbol{\eta}\mathbf{f}^{\prime}\), such that, for any homotopy classes \(\mathbf{f}^{\prime\prime}\colon X\to Q^{\prime\prime}\) and \(\boldsymbol{\eta}^{\prime}\colon Q^{\prime\prime}\to Q\), where \(Q^{\prime\prime}\in\operatorname{ANR}\) and \(\mathbf{f}=\boldsymbol{\eta}^{\prime}\mathbf{f}^{\prime\prime}\), there exists a homotopy class \(\boldsymbol{\eta}^{\prime\prime}\colon Q^{\prime}\to Q^{\prime\prime}\) satisfying the condition \(\boldsymbol{\eta}=\boldsymbol{\eta}^{\prime}\boldsymbol{\eta}^{\prime\prime}\) (see Diagram 1, which displays the factorizations \(\mathbf{f}=\boldsymbol{\eta}\mathbf{f}^{\prime}=\boldsymbol{\eta}^{\prime}\mathbf{f}^{\prime\prime}\) and \(\boldsymbol{\eta}=\boldsymbol{\eta}^{\prime}\boldsymbol{\eta}^{\prime\prime}\)).

Typical examples of non-movable continua are solenoids [10]. This follows from the non-movability of the first homology pro-group of any solenoid. The point is that movability is preserved under functorial passages. Therefore, if a space \(X\) is movable, then so are its homotopy pro-groups \(\text{pro-}\pi_{n}(X,*)\) and homology pro-groups \(\text{pro-}H_{n}(X)\).

Movability, which can be defined in any pro-category (see [312]), is important because the passage to the limit in movable inverse systems can be performed without losing the algebraic information about the system. In the case of a shape morphism \(F\colon(X,*)\to(Y,*)\) of movable compact metrizable spaces, the induced homomorphisms \(\check{\pi}_{n}(F)\colon\check{\pi}_{n}(X,*)\to\check{\pi}_{n}(Y,*)\) and \(\text{pro-}\pi_{n}(F)\colon\text{pro-}\pi_{n}(X,*)\to\text{pro-}\pi_{n}(Y,*)\) of the shape groups and pro-groups are, or are not, isomorphisms simultaneously (see Theorem 50). Therefore, in the case of movable spaces, some theorems remain valid under the replacement of homotopy pro-groups by shape groups. This is the case with the shape versions of Whitehead's theorem (see Theorem 49), Hurewicz' theorem (see Theorem 57), and so on.

In the category pro-Set movability has a simple characterization: an inverse system \(\mathbf{X}=\{X_{\alpha},p_{\alpha\alpha^{\prime}},A\}\in\text{pro-Set}\) is movable if and only if the following Mittag-Leffler (ML) condition holds: _for any \(\alpha\in A\), there exists an \(\alpha^{\prime}\geqslant\alpha\) such that \(p_{\alpha\alpha^{\prime}}(X_{\alpha^{\prime}})=p_{\alpha\alpha^{\prime\prime}}(X_{\alpha^{\prime\prime}})\) for any \(\alpha^{\prime\prime}\geqslant\alpha^{\prime}\)._ All movable pro-groups satisfy the Mittag-Leffler condition. However, the converse is not true (see [289]).

The family of all movable compact spaces is _shape bounded_; to be more precise, the following theorem of Spiez is valid [366].
**Theorem 20**.: There exists a movable compact space \(X_{0}\) such that \(\text{sh}(X)\leqslant\text{sh}(X_{0})\) for any movable compact space \(X\).

Such a compact space \(X_{0}\) is called a _majorant_ of the family of all movable compact spaces. This theorem implies that the family of all compact sets in the plane is shape bounded, or has a majorant. This is not true for the family of all compact sets in \(\mathbf{R}^{3}\). This follows from a result of Borsuk and Holsztynski [86], according to which there does not exist a compact space whose shape is larger than that of any solenoid.

In equivariant shape theory, the counterpart of Theorem 20 is false, i.e., _the class of all \(G\)-movable compact spaces has no majorants._ However, the following theorem is valid.

**Theorem 21** (Gevorgyan [12]).: Let \(G\) be a second-countable compact group. Then, in any class of weakly shape comparable \(G\)-movable compact spaces, there exists a majorant.

The _weak shape comparability_ of \(G\)-spaces \(X\) and \(Y\) means that there exist \(G\)-shape morphisms both from \(X\) to \(Y\) and from \(Y\) to \(X\). The proof of the last theorem is based on McCord's construction [296] of a universal compact space and on the equivariant counterpart of Brown's theorem [12]. The first results on equivariant movability were obtained by the author in [12]-[15] and [185], where, in particular, the following theorems were proved.

**Theorem 22**.: Let \(G\) be a compact Lie group, and let \(X\) be a \(G\)-movable metrizable \(G\)-space. Then \(X\) is \(H\)-movable for any closed subgroup \(H\) of \(G\).

The \(G\)-movability of a metrizable \(G\)-space also implies the movability of the \(H\)-fixed point set for any closed subgroup \(H\) of \(G\). In particular, _the \(G\)-movability of a metrizable \(G\)-space \(X\) implies its movability_. The converse is not true even if the acting group \(G\) is the cyclic group \(\mathbb{Z}_{2}\) (see [185, Example 5.1]).

**Theorem 23**.: Let \(G\) be a compact group, and let \(X\) be a metrizable \(G\)-space. If \(X\) is \(G\)-movable, then, for any closed normal subgroup \(H\) of \(G\), the \(H\)-orbit space \(X|_{H}\) is \(G\)-movable as well.

In particular, the \(G\)-movability of a \(G\)-space \(X\) implies the movability of the orbit space \(X|_{G}\). The converse is not true. However, if a compact Lie group acts freely on a metrizable space \(X\), then \(G\)-movability is equivalent to movability. The assumptions that the acting group \(G\) is a Lie group and that the action is free are both essential.

The movability of topological groups was studied by Keesling [229], [233], [236] and by Kozlowski and Segal [254]. In particular, Keesling [229] proved the following theorem.

**Theorem 24**.: A connected compact Abelian group is movable if and only if it is locally connected.

For non-Abelian groups, this theorem does not hold (see [229]). The equivariant movability of topological groups was studied by the author in [183]. We mention the following result.

**Theorem 25** (Gevorgyan [183]).: A second-countable compact group \(G\) is Lie if and only if it is \(G\)-movable.

This theorem gives, in particular, new examples of movable but equivariantly non-movable \(G\)-spaces (see [183]). _Movable shape morphisms_ were defined and studied by Gevorgyan and Pop [187].
A morphism \((\mathbf{f},\varphi)\colon\{X_{\alpha},\mathbf{p}_{\alpha\alpha^{\prime}},A\}\to\{Y_{\beta},\mathbf{q}_{\beta\beta^{\prime}},B\}\) in the category pro-H-CW is said to be _movable_ if, given any \(\beta\in B\), there is an \(\alpha\in A\), \(\alpha\geqslant\varphi(\beta)\), such that, for any \(\beta^{\prime}\geqslant\beta\), there exists a homotopy class \(\mathbf{u}\colon X_{\alpha}\to Y_{\beta^{\prime}}\) satisfying the condition \(\mathbf{f}_{\beta}\mathbf{p}_{\varphi(\beta)\alpha}=\mathbf{q}_{\beta\beta^{\prime}}\mathbf{u}\). The notion of a movable shape morphism agrees with that of a movable space in the sense that _a space \(X\) is movable if and only if so is the identity map \(1_{X}\)_. Movable morphisms are important, in particular, because the movability assumption on a space can sometimes be replaced by the weaker assumption that a shape morphism \(F\colon X\to Y\) is movable; see, e.g., Whitehead's Theorem 53 for movable morphisms \(F\colon X\to Y\). Movable shape morphisms were also introduced and studied (by different methods and for different purposes) by other authors [158], [100], [416], [417].

In [10] Borsuk introduced the notion of \(n\)-movability, which is a shape invariant, too. A metrizable compact space \(X\) lying in an AR-space \(M\) is said to be _\(n\)-movable_ if, given any neighborhood \(U\) of \(X\) in \(M\), there exists a neighborhood \(U^{\prime}\subset U\) of \(X\) such that, for any neighborhood \(U^{\prime\prime}\subset U\) of \(X\), any metrizable compact space \(K\) of dimension \(\dim K\leqslant n\), and any map \(f\colon K\to U^{\prime}\), there exists a map \(g\colon K\to U^{\prime\prime}\) homotopic to \(f\) in \(U\). For arbitrary spaces, this notion is introduced by using inverse systems.

**Definition 6**.: An inverse ANR-system \(\{X_{\alpha},\mathbf{p}_{\alpha\alpha^{\prime}},A\}\) is said to be _\(n\)-movable_ if, given any \(\alpha\in A\), there is an \(\alpha^{\prime}\in A\), \(\alpha^{\prime}\geqslant\alpha\), such that, for any \(\alpha^{\prime\prime}\in A\), \(\alpha^{\prime\prime}\geqslant\alpha\), and any homotopy class \(\mathbf{h}\colon P\to X_{\alpha^{\prime}}\), where \(P\) is an ANR of dimension \(\dim P\leqslant n\), there exists a homotopy class \(\mathbf{r}\colon P\to X_{\alpha^{\prime\prime}}\) satisfying the condition \(\mathbf{p}_{\alpha\alpha^{\prime}}\mathbf{h}=\mathbf{p}_{\alpha\alpha^{\prime\prime}}\mathbf{r}\).

A topological space \(X\) is _\(n\)-movable_ if there exists an associated \(n\)-movable ANR-system \(\{X_{\alpha},\mathbf{p}_{\alpha\alpha^{\prime}},A\}\). It is easy to see that \((n+1)\)-movability always implies \(n\)-movability. It is also clear that the movability of a space \(X\) implies its \(n\)-movability for any \(n\). If \(\dim X\leqslant n\), then the converse is also true: the \(n\)-movability of a space \(X\) implies its movability (in the case \(\operatorname{sd}X\leqslant n\), this was proved in [8]). The equivariant counterpart of this statement has been proved in the case of a finite acting group \(G\): any equivariantly \(n\)-movable \(G\)-compact set of dimension \(\dim X\leqslant n\) is equivariantly movable (see Gevorgyan [15]). Therefore, a finite-dimensional \(G\)-compact space is equivariantly movable if and only if it is equivariantly \(n\)-movable for all \(n\). In the general case, the \(n\)-movability does not imply movability: the Kahn compact space [221] is not movable, but it is \(n\)-movable for all \(n\) (see [8], [255]).
Any \(LC^{n-1}\) paracompact space is \(n\)-movable (see [253]). The particularly important case of \(1\)-movability was studied in detail by Dydak [125], McMillan [298], Krasinkiewicz [257], [259], [258], and other authors. If a metrizable continuum \((X,x)\) is \(1\)-movable for a point \(x\in X\), then it is also \(1\)-movable for any other point of \(X\). Moreover, if a metrizable continuum \((X,x)\) is \(1\)-movable and \(\operatorname{sh}(X)=\operatorname{sh}(Y)\), then \(\operatorname{sh}(X,x)=\operatorname{sh}(Y,y)\) for any \(y\in Y\) (Dydak [125]). There exists a nontrivial example of a metrizable continuum \(X\) such that it is \(1\)-movable but \((X,*)\) is not movable (Dydak [144]). A continuous image of a \(1\)-movable metrizable continuum is \(1\)-movable. The following important criterion for the \(1\)-movability of a metrizable continuum is due to Krasinkiewicz [258].

**Theorem 26**.: A metrizable continuum \((X,*)\) is \(1\)-movable if and only if its homotopy pro-group pro-\(\pi_{1}(X,*)\) satisfies the Mittag-Leffler condition.

**Theorem 27**.: A metrizable continuum \((X,*)\) is \(1\)-movable if and only if it has the shape of a locally connected continuum.

It follows from these theorems that a solenoid \(X\) does not have the shape of a locally connected continuum, because the pro-group pro-\(\pi_{1}(X)\) does not satisfy the Mittag-Leffler condition.

Many results of shape theory are easy to transfer from the case of pointed spaces to that of nonpointed spaces. However, the reverse transfer of results is not always easy or even possible (see [176], [289]). For example, the movability of a pointed space \((X,*)\) readily implies that of \(X\), but the question of whether the converse is true is difficult and has not been answered so far. A partial answer is given by the following theorem.

**Theorem 28** (Krasinkiewicz [257]).: Let \((X,*)\) be a \(1\)-movable metrizable continuum. Then the movability of \(X\) implies that of \((X,*)\).

The notion of \(n\)-movability has initiated _\(n\)-shape theory_, which was developed by Chigogidze in [27]. This theory is an important tool in the study of \(k\)-dimensional Menger manifolds, which was begun by Bestvina in [67]. There are several more modifications of the notions of movability and \(n\)-movability (see papers [4] by Bogatyi, [241] by Kodama, [281] by Mardesic, and [312] by Moszynska; we also mention papers [61] by Bauer, [134] by Dydak, [254] by Kozlowski and Segal, [368] by Spiez, and [401] by Watanabe). The notion of the _\(n\)-movability of shape morphisms_ was introduced and studied by Gevorgyan and Pop [188].

Yet another important shape invariant is _strong movability_. This notion has played an important role, primarily in the study of the stability of topological spaces and absolute neighborhood shape retracts.
A space \(X\) is said to be _strongly movable_ if there exists an associated _strongly movable ANR-system_ \(\{X_{\alpha},\mathbf{p}_{\alpha\alpha^{\prime}},A\}\), which means that, given any \(\alpha\in A\), there is an \(\alpha^{\prime}\geqslant\alpha\) such that, for any \(\alpha^{\prime\prime}\geqslant\alpha\), there exists an \(\alpha^{*}\geqslant\alpha^{\prime},\alpha^{\prime\prime}\) and a homotopy class \(\mathbf{r}^{\alpha^{\prime}\alpha^{\prime\prime}}\colon X_{\alpha^{\prime}}\to X_{\alpha^{\prime\prime}}\) satisfying the conditions \(\mathbf{p}_{\alpha\alpha^{\prime\prime}}\mathbf{r}^{\alpha^{\prime}\alpha^{\prime\prime}}=\mathbf{p}_{\alpha\alpha^{\prime}}\) and \(\mathbf{r}^{\alpha^{\prime}\alpha^{\prime\prime}}\mathbf{p}_{\alpha^{\prime}\alpha^{*}}=\mathbf{p}_{\alpha^{\prime\prime}\alpha^{*}}\).

Obviously, strong movability implies movability. Strong movability is preserved under shape domination. For a connected space \((X,*)\), strong movability is equivalent to stability (Dydak [137], Watanabe [401]).

The notion of movability was extended to and studied in more general cases by Segal [353], Shostak [31], Gevorgyan [14], [184], Gevorgyan and Pop [187], [188], [190], and Avakyan and Gevorgyan [55]. Avakyan and Gevorgyan [55] (see also the author's papers [14] and [184]) introduced the notion of a _movable category_ and proved criteria for the movability and strong movability of a topological space.

**Definition 7**.: Let \(K\) and \(L\) be any categories, and let \(\Phi\colon K\to L\) be any covariant functor. The category \(K\) is said to be _movable with respect to the category \(L\) and the functor \(\Phi\colon K\to L\)_ if, given any object \(X\in\operatorname{Ob}(K)\), there is an object \(Y\in\operatorname{Ob}(K)\) and a morphism \(f\in\operatorname{Mor}_{K}(Y,X)\) such that, for any object \(Z\in\operatorname{Ob}(K)\) and any morphism \(g\in\operatorname{Mor}_{K}(Z,X)\), there exists a morphism \(h\in\operatorname{Mor}_{L}(\Phi(Y),\Phi(Z))\) satisfying the condition \(\Phi(g)h=\Phi(f)\).

If \(K\) is a full subcategory of the category \(L\) and \(\Phi\colon K\hookrightarrow L\) is the embedding functor, then the subcategory \(K\) is said to be _movable with respect to the category \(L\)_. If \(K=L\) and \(\Phi=1_{K}\), then the category \(K\) is called _strongly movable_.

**Theorem 29** (Avakyan and Gevorgyan [55]).: A topological space \(X\) is movable if and only if the category \(W^{X}\) is movable with respect to the category \(\operatorname{H-CW}\) and the forgetful functor \(\Omega\colon W^{X}\to\operatorname{H-CW}\).

The _forgetful functor_ \(\Omega\colon W^{X}\to\operatorname{H-CW}\) takes each object \(f\colon X\to Q\) to the object \(Q\in\operatorname{H-CW}\) and each morphism \(\eta\colon(f\colon X\to Q)\to(f^{\prime}\colon X\to Q^{\prime})\), \(\eta\circ f=f^{\prime}\), of the category \(W^{X}\) to a morphism \(\eta\colon Q\to Q^{\prime}\) of the category \(\operatorname{H-CW}\).

**Theorem 30** (Avakyan and Gevorgyan [55]).: A topological space \(X\) is strongly movable if and only if so is the category \(W^{X}\).

Similar theorems for _uniformly movable_ spaces and categories were proved by Gevorgyan and Pop [190].
**Definition 8**.: A category \(K\) is said to be _uniformly movable_ if, given any object \(X\in K\), there exists an object \(M(X)\in K\) and a morphism \(m_{X}\colon M(X)\to X\) satisfying the following conditions: (i) for any object \(Y\in K\) and any morphism \(p\colon Y\to X\) of the category \(K\), there exists a morphism \(u(p)\colon M(X)\to Y\) such that \(pu(p)=m_{X}\); (ii) for any objects \(Y,Z\in K\) and morphisms \(p\colon Y\to X\), \(q\colon Z\to X\), and \(r\colon Z\to Y\) of \(K\) satisfying the condition \(pr=q\), the relation \(ru(q)=u(p)\) holds.

**Theorem 31** (Gevorgyan and Pop [190]).: A topological space \(X\) is uniformly movable if and only if so is the category \(W^{X}\).

## 6. Stable Spaces and Shape Retracts

A topological space \(X\) is said to be _stable_ if it has the shape of an ANR. Stability in the shape category was first considered by Porter [337] and systematically studied by Demers [120], Edwards and Geoghegan [153], [154], [155], Dydak [125], [137], [143], Geoghegan and Lacher [179], Porter [335], [336], and other authors. Stability is a shape invariant. The notions of stability and pointed stability are equivalent, i.e., the following theorem is valid (see Dydak's paper [143] and Geoghegan's paper [174]).

**Theorem 32**.: A pointed space \((X,*)\) is stable if and only if so is \(X\).

**Theorem 33**.: If a connected pointed space \((X,*)\) is shape dominated by a pointed ANR-space \((P,*)\), then \((X,*)\) is stable.

This theorem was independently proved by Demers [120] and by Edwards and Geoghegan [153], [154], who used different methods. The following theorem of Dydak [137] shows that, for connected pointed spaces, the notions of stability and strong movability coincide (see also Watanabe's paper [401]).

**Theorem 34**.: A connected space \((X,*)\) is stable if and only if it is strongly movable.

If a pointed space \((X,*)\) is stable, then the pro-groups pro-\(\pi_{k}(X,*)\) are isomorphic to the shape groups \(\check{\pi}_{k}(X,*)\). Therefore, the stability of \((X,*)\) implies that of the pro-groups pro-\(\pi_{k}(X,*)\). The converse is true for finite-dimensional connected spaces. Thus, stability admits the following algebraic characterization.

**Theorem 35**.: Let \((X,*)\) be a connected space of finite shape dimension \(\operatorname{sd}X\). Then \((X,*)\) is stable if and only if so are all homotopy pro-groups pro-\(\pi_{k}(X,*)\).

This theorem was first proved by Edwards and Geoghegan [154], [155], who used fairly sophisticated abstract tools. Subsequently, elementary proofs of this theorem were found by Dydak [143] and Geoghegan [174].

Let \(X\) be a subspace of a topological space \(Y\). A shape morphism \(R\colon Y\to X\) is called a _shape retraction_ if \(Ri=1_{X}\), where \(i\colon X\to Y\) is a shape embedding. The space \(X\) is then called a _shape retract_ of \(Y\). A subspace \(X\subset Y\) is called a _neighborhood shape retract_ of \(Y\) if \(X\) is a shape retract of some neighborhood \(U\) of \(X\) in \(Y\). Absolute (neighborhood) shape retracts in shape theory are defined by analogy with the corresponding notions for the class of metrizable spaces.
We say that a metrizable space \(X\) is an _absolute (neighborhood) shape retract_ in the class of metrizable spaces and write \(X\in\operatorname{ASR}\) (respectively, \(X\in\operatorname{ANSR}\)) if, for any closed embedding of \(X\) in a metrizable space \(Y\), the space \(X\) is a (neighborhood) shape retract of \(Y\) (see papers [10] by Borsuk, [277] and [281] by Mardesic, [352] by Segal, and [31] by Shostak). It is easy to see that these notions are shape invariants. The notion of an absolute (neighborhood) shape retract in the class of compact metrizable spaces coincides with that of a _fundamental absolute (neighborhood) retract_, abbreviated as FAR (respectively, FANR), which was introduced by Borsuk in [10].

One of the first questions arising in shape theory is as follows: When does a space have the shape of a point? It turns out that this property characterizes absolute shape retracts.

**Theorem 36** (Mardesic [277]).: A metrizable space \(X\) is an ASR if and only if \(\operatorname{sh}(X)=0\).

For metrizable compact spaces, this theorem was proved by Borsuk [73], and in the class of all \(p\)-paracompact spaces, by Shostak [31]. For arbitrary topological spaces, Theorem 36 is not true (Godlewski [193]). The triviality of the shape of a space \(X\) can also be characterized in the language of associated ANR-systems as follows.

**Theorem 37**.: Let \(\{X_{\alpha},\mathbf{p}_{\alpha\alpha^{\prime}},A\}\) be an ANR-system associated with a space \(X\). Then \(X\) has trivial shape if and only if, for each \(\alpha\in A\), there exists an \(\alpha^{\prime}\geqslant\alpha\) such that the projection \(p_{\alpha\alpha^{\prime}}\) is homotopic to a constant map.

In the case of compact Hausdorff spaces, this theorem was proved by Mardesic [277]. Of interest is also the following theorem (see Dydak's paper [143]).

**Theorem 38**.: A space \(X\) is an absolute shape retract if and only if \(\operatorname{sd}X<\infty\) and \(X\) is shape \(n\)-connected for all \(n\).

**Theorem 39**.: A space \(X\) is an absolute shape retract if and only if it is movable and shape \(n\)-connected for all \(n\).

Recall that a space \(X\) is said to be _shape \(n\)-connected_ if pro-\(\pi_{k}(X,*)=0\) for all \(k\leqslant n\). For compact metrizable spaces, the last theorem was proved by Bogatyi [7] and Borsuk [78].

The notion of an ASR is remarkable in many respects. In particular, on this notion the definition of a cell-like map is based, which makes it possible to answer the natural question of under what assumptions a map \(f\colon X\to Y\) generates a shape isomorphism (see Theorems 83 and 84).

The class of ANSRs is a natural extension of the class of ANRs, and the global properties of ANSRs resemble to a certain extent those of ANRs. Clearly, each compact space having the shape of an ANR is an ANSR. Borsuk [79] asked the converse question: Is it true that any compact metrizable ANSR has the shape of a compact metrizable ANR? Edwards and Geoghegan [153] gave a negative answer to this question. They constructed an example of a two-dimensional metrizable continuum \(X\) which is an ANSR but does not have the shape of a finite complex. This continuum \(X\) cannot have the shape of a compact ANR, because, as is well known (West [406]), any compact metrizable ANR has the homotopy type (and hence the shape) of a finite complex. Nevertheless, any plane compact ANSR has the shape of a compact ANR.
Indeed, a compact set \(X\) in the plane is an ANSR if and only if its Betti numbers \(p_{0}(X)\) and \(p_{1}(X)\) are finite. Therefore, the shape of a plane compact ANSR \(X\) equals the shape of a finite union of pairwise disjoint planar graphs, and such unions are compact ANRs. Since one-dimensional ANSRs have the shape of a compact set in the plane (see [383]), we can say that Borsuk's problem has a positive solution for one-dimensional compact ANSRs. As shown by the example of Edwards and Geoghegan mentioned above, for two-dimensional compact ANSRs, the solution is negative. The ANSRs are characterized by the following theorem due to Mardesic and Segal [289].

**Theorem 40**.: A metrizable space \(X\) is an ANSR if and only if it is shape dominated by an ANR \(P\).

Importantly, in the case of compact metrizable spaces, the shape dominating ANR \(P\) can be chosen compact in this theorem.

**Theorem 41**.: A compact metrizable space \(X\) is an ANSR if and only if it is shape dominated by a compact ANR \(P\).

Theorem 40 is also valid in the pointed case.

**Theorem 42**.: A metrizable space \((X,*)\) is an ANSR if and only if it is shape dominated by an ANR \((P,*)\).

Theorems 32, 33, and 42 imply the following important property of pointed ANSRs (see [289]).

**Theorem 43**.: A connected metrizable space \((X,*)\) is an ANSR if and only if \((X,*)\) is stable, i.e., has the shape of an ANR \((P,*)\).

For compact metrizable spaces, this theorem was proved by Edwards and Geoghegan [154], [155]. In Theorem 43 an ANR \((P,*)\) cannot be replaced by a compact ANR even in the case of a metrizable continuum. Indeed, the above-mentioned two-dimensional metrizable continuum constructed by Edwards and Geoghegan [153] is an ANSR but does not have the shape of a compact ANR. The question of when a compact metrizable space \((X,*)\) has the shape of a compact ANR has been answered by using tools of algebraic \(K\)-theory. To be more precise, a metrizable continuum \((X,*)\) has the shape of a finite complex precisely when the so-called Wall obstruction [393] \(\sigma(X,*)\), which takes values in the reduced Grothendieck group \(\widetilde{K}^{0}(\check{\pi}_{1}(X,*))\), is trivial (see [153]).

Theorems 35 and 43 imply the following algebraic characterization of ANSRs.

**Theorem 44**.: Let \((X,*)\) be a connected space of finite shape dimension \(\operatorname{sd}X\). Then \((X,*)\) is an ANSR if and only if all homotopy pro-groups pro-\(\pi_{k}(X,*)\) are stable.

The following important result was obtained by Hastings and Heller [206], [207].

**Theorem 45**.: Let \(X\) be a connected ANSR. Then \((X,*)\) is a pointed ANSR.

The proof of this theorem became possible after the splitting problem for homotopy idempotents was solved. Recall that a map \(f\colon X\to X\) is called a _homotopy idempotent_ if \(f^{2}\simeq f\). A map \(f\colon X\to X\) is said to _split_ in the category H-CW if there exists a CW-complex \(P\) and maps \(u\colon P\to X\) and \(v\colon X\to P\) such that \(v\circ u\simeq 1_{P}\) and \(u\circ v\simeq f\).

**Theorem 46**.: Any homotopy idempotent \(f\colon P\to P\) of a finite-dimensional CW-complex \(P\) splits.

In the case of infinite-dimensional CW-complexes, this statement is false. Dydak [125] and Minc, and also Freyd and Heller [172], independently proved the existence of a group \(G\) and a homomorphism \(\phi\colon G\to G\) which induces a nonsplitting homotopy idempotent \(f\colon K(G,1)\to K(G,1)\) of the infinite-dimensional Eilenberg-MacLane complex \(K(G,1)\).
Moreover, the group \(G\) is universal in the sense that if \(f^{\prime}\colon X\to X\) is a nonsplitting homotopy idempotent of an infinite-dimensional CW-complex \(X\), then there exists an injection \(G\to\pi_{1}(X)\) equivariant with respect to \(f_{*}\) and \(f^{\prime}_{*}\). However, it is well known that any homotopy idempotent \(f\colon(X,*)\to(X,*)\) of a pointed connected CW-complex splits (see papers [87] by Brown, [153] by Edwards and Geoghegan, and [171] by Freyd). Note that, in the nonpointed case, the problem of splitting homotopy idempotents arises in various areas of topology. In homotopy theory this problem is closely related to Brown's theorem on the representability of half-exact functors (see papers [172] by Freyd and Heller and [208] by Heller), and in shape theory, to the study of ANSRs.

It follows from Theorems 32 and 45 that Theorem 43 remains valid in the nonpointed case.

**Theorem 47**.: A connected metrizable space \(X\) is an ANSR if and only if \(X\) is stable.

## 7. Whitehead's and Hurewicz' Theorems in Shape Theory

It was mentioned in the introduction that Whitehead's classical theorem is not valid for arbitrary spaces instead of CW-complexes. Extending Whitehead's theorem to more general spaces is one of the objectives of shape theory. Let \((X,*)\) be a pointed space, and let \((\mathbf{X},\star)=((X_{\alpha},*),\mathbf{p}_{\alpha\alpha^{\prime}},A)\) be an inverse ANR-system associated with it. Applying the functors \(\pi_{n}\), \(n=0,1,\dots\), to this system, we obtain the _homotopy pro-groups_ of the pointed space \((X,*)\): \[\text{pro-}\pi_{n}(X,*)=(\pi_{n}(X_{\alpha},*),\mathbf{p}_{\alpha\alpha^{\prime}\#},A).\] The homotopy pro-groups pro-\(\pi_{n}(X,*)\) are an analogue of the homotopy groups \(\pi_{n}(X,*)\) in shape theory. Recall that any shape morphism \(F\colon(X,*)\to(Y,*)\) is determined by a morphism \(\mathbf{f}\colon(\mathbf{X},\star)\to(\mathbf{Y},\star)\) in the category pro-H-CW; this suggests the natural definition of a _morphism \(\text{pro-}\pi_{n}(F)\colon\text{pro-}\pi_{n}(X,*)\to\text{pro-}\pi_{n}(Y,*)\) of homotopy pro-groups_. It is easy to show that pro-\(\pi_{n}\) is a covariant functor from the shape category \(\text{Sh-Top}_{*}\) to the category \(\text{pro-Set}_{*}\) for \(n=0\) and to the category \(\text{pro-Grp}_{*}\) for \(n=1,2,\dots\). In particular, \(\text{pro-}\pi_{n}(X,*)\) is a shape invariant.

The inverse limits of the homotopy pro-groups \(\text{pro-}\pi_{n}(X,*)\) are called the _shape groups_ of the pointed space \((X,*)\) and denoted by \(\check{\pi}_{n}(X,*)\): \[\check{\pi}_{n}(X,*)=\lim_{\longleftarrow}(\pi_{n}(X_{\alpha},*),p_{\alpha\alpha^{\prime}\#},A).\] If \(F\colon(X,*)\to(Y,*)\) is a shape morphism, then the passage to the limit yields a homomorphism \(\check{\pi}_{n}(F)\colon\check{\pi}_{n}(X,*)\to\check{\pi}_{n}(Y,*)\) of shape groups. Therefore, \(\check{\pi}_{n}\) is a covariant functor from the category \(\text{Sh-Top}_{*}\) to the category \(\text{Set}_{*}\) (for \(n=0\)) or to the category \(\text{Grp}_{*}\) (for \(n=1,2,\dots\)).

The _homology pro-group_ pro-\(H_{n}(X)\) is defined in a similar way: \[\text{pro-}H_{n}(X)=(H_{n}(X_{\alpha}),H_{n}(p_{\alpha\alpha^{\prime}}),A).\] Passing to the limit in this system, we obtain the well-known _Cech homology group_ \(\check{H}_{n}(X)\) of the space \(X\). It should be mentioned that, for spaces with good local structure, such as ANRs, the homotopy (homology) pro-groups can be replaced by shape groups (Cech homology groups).
However, in the general case, part of the information is lost under the passage to the limit. Therefore, the homotopy or homology pro-groups contain more information about the space \(X\) than the shape or Cech homology groups. The following theorem of Morita [309] is the most general version of Whitehead's theorem in shape theory. **Theorem 48** (shape version of Whitehead's theorem, Morita [309]).: A shape morphism \(F\colon(X,*)\to(Y,*)\) of finite-dimensional (in the sense of the shape dimension sd) connected topological spaces is a shape equivalence if and only if the induced homomorphisms \(\text{pro-}\pi_{n}(F)\colon\text{pro-}\pi_{n}(X,*)\to\text{pro-}\pi_{n}(Y,*)\) of homotopy pro-groups are isomorphisms for all \(n\). Unlike Whitehead's classical theorem, this theorem is valid for arbitrary spaces, and it deals with homotopy pro-groups rather than homotopy groups. However, it involves a new constraint, namely, the finite dimensionality of the spaces \(X\) and \(Y\). This constraint cannot be removed. The corresponding example was constructed in [122] by using the Kahn metrizable continuum [221], which is defined as follows. First, a space \(X_{0}\) is defined as the CW-complex obtained by attaching a \((2p+1)\)-cell to the sphere \(S^{2p}\) by a map of degree \(p\), where \(p\) is a prime. Then, for each \(n\geqslant 0\), a space \(X_{n+1}\) is constructed by induction as the \((2p-2)\)-fold suspension of \(X_{n}\): \[X_{n+1}=\Sigma^{2p-2}(X_{n}).\] Between these spaces maps \(f_{n}\colon X_{n}\to X_{n-1}\), \(n>1\), are defined, also by induction, as \[f_{n}=\Sigma^{2p-2}(f_{n-1}),\] beginning with a map \(f_{1}\colon X_{1}\to X_{0}\) constructed in a special way. The Kahn compact space \(K\) is the limit of the inverse sequence \[X_{0}\xleftarrow{f_{1}}X_{1}\xleftarrow{f_{2}}X_{2}\xleftarrow\cdots\] Since \(X_{n}=\Sigma^{n(2p-2)}(X_{0})\) and \(X_{0}\) is \((2p-1)\)-connected, the space \(X_{n}\) is \((n(2p-2)+2p-1)\)-connected; hence \(\pi_{k}(X_{n},*)=0\) whenever \(n(2p-2)\geqslant k\), and it follows that \(\text{pro-}\pi_{k}(K,*)=0\) for all \(k\). However, \(K\) does not have the shape of a point, because the composition \[f_{1}\circ f_{2}\circ\ldots\circ f_{n}\colon X_{n}\to X_{0}\] is essential (i.e., not homotopic to a constant) for any \(n\) (see Adams' papers [36] and [37]). This fact follows also from Toda's results [379]. The first Whitehead-type theorem in shape theory was proved by Moszynska [311]. She proved Theorem 48 for metrizable continua \((X,*)\) and \((Y,*)\). Mardesic [274] proved this theorem in the case where \((X,*)\) and \((Y,*)\) are compact and \((Y,*)\) is metrizable, and in the case where \((X,*)\) and \((Y,*)\) are any spaces and the shape morphism \(F\colon(X,*)\to(Y,*)\) is induced by a continuous map. For movable spaces, the shape analogue of Whitehead's theorem (Theorem 48) takes a simpler form [311], [231], [141], [143]. **Theorem 49**.: Let \((X,*)\) and \((Y,*)\) be movable metrizable continua of finite shape dimension. Then a shape morphism \(F\colon(X,*)\to(Y,*)\) is a shape equivalence if and only if all induced homomorphisms \(\check{\pi}_{n}(F)\colon\check{\pi}_{n}(X,*)\to\check{\pi}_{n}(Y,*)\) of shape groups are isomorphisms. This theorem is a consequence of Theorem 48 and the following result of independent interest [231], [143]. **Theorem 50**.: Let \(F\colon(X,*)\to(Y,*)\) be a shape morphism of movable metrizable continua.
If, for some \(n\), the induced homomorphism \(\check{\pi}_{n}(F)\colon\check{\pi}_{n}(X,*)\to\check{\pi}_{n}(Y,*)\) is an isomorphism (epimorphism) of shape groups, then \(\text{pro-}\pi_{n}(F)\colon\text{pro-}\pi_{n}(X,*)\to\text{pro-}\pi_{n}(Y,*)\) is an isomorphism (epimorphism) of pro-groups. The first "infinite-dimensional" Whitehead-type theorems were proved by Edwards and Geoghegan in [152] and generalized in [137] and [141]. The most general results are the following two theorems of Dydak [143]. **Theorem 51**.: Let \((X,*)\) and \((Y,*)\) be connected spaces. Suppose that \((X,*)\) is movable and \(F\colon(X,*)\to(Y,*)\) is a shape domination. If \(\text{pro-}\pi_{n}(F)\colon\text{pro-}\pi_{n}(X,*)\to\text{pro-}\pi_{n}(Y,*)\) is an isomorphism for every \(n\), then \(F\) is a shape equivalence. **Theorem 52**.: Let \(F\colon(X,*)\to(Y,*)\) be a shape morphism of connected spaces. Suppose that one of the following two conditions holds: (i) \(\operatorname{sd}X<\infty\) and \((Y,*)\) is movable; (ii) \((X,*)\) is movable and \(\operatorname{sd}Y<\infty\). If \(\text{pro-}\pi_{n}(F)\colon\text{pro-}\pi_{n}(X,*)\to\text{pro-}\pi_{n}(Y,*)\) is an isomorphism for every \(n\), then \(F\) is a shape equivalence. In particular, if \((X,*)\) is movable and all pro-groups \(\text{pro-}\pi_{n}(X,*)\) are trivial, then \(\operatorname{sh}(X,*)=0\). In the last theorem the finite-dimensionality of the spaces cannot be replaced by their movability. Draper and Keesling [122] constructed a map \(f\colon(X,*)\to(Y,*)\) of movable metrizable continua which induces isomorphisms of all homotopy pro-groups but is not a shape equivalence. A similar example was also constructed by Kozlowski and Segal [252]. A shape analogue of Whitehead's theorem for movable morphisms was obtained by Gevorgyan and Pop [187]. **Theorem 53**.: Let \((X,*)\) and \((Y,*)\) be connected spaces. Suppose that \(\operatorname{sd}X<\infty\) and \(F\colon(X,*)\to(Y,*)\) is a movable shape morphism which is a shape domination. If \(\text{pro-}\pi_{n}(F)\colon\text{pro-}\pi_{n}(X,*)\to\text{pro-}\pi_{n}(Y,*)\) is an isomorphism for every \(n\), then \(F\) is a shape equivalence. Whitehead's theorem also has homology versions (see [275], [309], [341]). The key role in their proofs is played by the following result (see [309], [341]). **Theorem 54**.: Let \(F\colon(X,*)\to(Y,*)\) be a shape morphism. If \[\text{pro-}\pi_{0}(X,*)=\text{pro-}\pi_{1}(X,*)=\text{pro-}\pi_{0}(Y,*)=\text{pro-}\pi_{1}(Y,*)=0,\] then the following two conditions are equivalent for each \(n\geqslant 2\): (i) \(\text{pro-}\pi_{k}(F)\) is an isomorphism for \(k<n\) and an epimorphism for \(k=n\); (ii) \(\text{pro-}H_{k}(F)\) is an isomorphism for \(k<n\) and an epimorphism for \(k=n\). The well-known _Hurewicz homomorphism_ \(\phi_{n}\colon\pi_{n}(X,*)\to H_{n}(X)\) naturally generates a morphism \(\text{pro-}\phi_{n}\colon\text{pro-}\pi_{n}(X,*)\to\text{pro-}H_{n}(X)\). The passage to the limit also yields the _spectral Hurewicz homomorphism_ \(\check{\phi}_{n}\colon\check{\pi}_{n}(X,*)\to\check{H}_{n}(X)\).
Hurewicz' classical theorem asserts that if \((X,*)\) is an \((n-1)\)-connected space, i.e., \(\pi_{k}(X,*)=0\) for all \(0\leqslant k\leqslant n-1\), then, for any \(n\geqslant 2\), the following conditions hold: (i) \(H_{k}(X)=0\), \(1\leqslant k\leqslant n-1\); (ii) \(\phi_{n}\colon\pi_{n}(X,*)\to H_{n}(X)\) is an isomorphism; (iii) \(\phi_{n+1}\colon\pi_{n+1}(X,*)\to H_{n+1}(X)\) is an epimorphism. For \(n=1\), (iv) \(\phi_{1}\colon\pi_{1}(X,*)\to H_{1}(X)\) is an epimorphism. This theorem was transferred to shape theory in various forms by many authors. The following version of Hurewicz' theorem for the pro-category can be found in [289]. **Theorem 55**.: Let \((X,*)\) be a shape \((n-1)\)-connected space, i.e., \(\text{pro-}\pi_{k}(X,*)=0\), \(k\leqslant n-1\). Then, for all \(n\geqslant 2\), the following conditions hold: (i) pro-\(H_{k}(X)=0\), \(1\leqslant k\leqslant n-1\); (ii) pro-\(\phi_{n}\colon\text{pro-}\pi_{n}(X,*)\to\text{pro-}H_{n}(X)\) is an isomorphism of pro-groups; (iii) pro-\(\phi_{n+1}\colon\text{pro-}\pi_{n+1}(X,*)\to\text{pro-}H_{n+1}(X)\) is an epimorphism of pro-groups. For \(n=1\), (iv) pro-\(\phi_{1}\colon\text{pro-}\pi_{1}(X,*)\to\text{pro-}H_{1}(X)\) is an epimorphism. Somewhat different statements of this theorem were presented in [54], [291], and [341]. Kuperberg [261] proved the following theorem, which is a special case of a more general result due to Artin and Mazur [54]. **Theorem 56**.: If a compact metrizable space \((X,*)\) is shape \((n-1)\)-connected for \(n\geqslant 2\), then the spectral Hurewicz homomorphism \(\check{\phi}_{n}\colon\check{\pi}_{n}(X,*)\to\check{H}_{n}(X)\) is an isomorphism. For movable compact metrizable spaces \((X,*)\), the assumption that \((X,*)\) is shape \((n-1)\)-connected can be replaced by the weaker assumption \(\check{\pi}_{k}(X,*)=0\), \(k\leqslant n-1\) (see [261]). **Theorem 57**.: Let \((X,*)\) be a movable compact metrizable space such that \(\check{\pi}_{k}(X,*)=0\) for all \(k\leqslant n-1\), \(n\geqslant 2\). Then \(\check{\phi}_{n}\colon\check{\pi}_{n}(X,*)\to\check{H}_{n}(X)\) is an isomorphism. The movability assumption in this theorem is essential. Indeed, let \(X\) be the \(2\)-fold suspension of the \(3\)-adic solenoid. The space \(X\) is connected, and \(\check{\pi}_{k}(X,*)=0\) for all \(k\leqslant 3\). However, \(\check{\pi}_{4}(X,*)\) is not isomorphic to \(\check{H}_{4}(X)\), because \(\check{\pi}_{4}(X,*)\neq 0\), while \(\check{H}_{4}(X)=0\) (see [260]). The metrizability assumption in Theorem 57 is essential as well. Keesling [235] proved that, for any \(n\geqslant 2\), there exists a nonmetrizable movable continuum \((X,*)\) such that \(\check{\pi}_{k}(X,*)=0\) for all \(k\leqslant n-1\) but \(\check{\pi}_{n}(X,*)\) is not isomorphic to \(\check{H}_{n}(X)\). Theorem 56 was proved for pairs \((X,A)\) in papers [331] by Porter, [291] by Mardesic and Ungar, and [384] by Ungar (see also [309] and [396]). A Hurewicz theorem involving Steenrod homology is due to Kodama and Koyama [244]. It should be mentioned that the first spectral Hurewicz theorem was proved by Christie [111], but only in strong shape theory. Subsequently, similar theorems involving Steenrod homology were proved in [22], [244], and [340].

## 8. The Shape Dimension of Spaces and Maps

The dimension of topological spaces is not a shape invariant. For example, a contractible space, which may have arbitrarily large dimension, has the shape of a point, which is zero-dimensional.
This leads to the necessity of introducing a numerical shape invariant playing the same role in shape theory as dimension plays in topology. Such an invariant, called _fundamental dimension_, was introduced by Borsuk [75], who defined it first for compact metrizable spaces. **Definition 9**.: The _fundamental dimension_ of a compact metrizable space \(X\), denoted by \(\operatorname{Fd}X\), is defined as the least dimension \(\dim Y\) of a compact metrizable space \(Y\) such that \(\operatorname{sh}Y\geqslant\operatorname{sh}X\): \[\operatorname{Fd}X=\min\{\dim Y:\operatorname{sh}Y\geqslant\operatorname{sh}X\}.\] The fundamental dimension of compact metrizable spaces was thoroughly studied by Novak [316, 317, 318, 319, 321, 322, 323, 324, 325, 326, 327] and Spiez [367, 369]. For arbitrary topological spaces, Dydak [142] introduced the notion of _deformation dimension_; its definition in a slightly modified form is as follows. **Definition 10**.: A topological space \(X\) has _deformation dimension_ \(\operatorname{ddim}X\leqslant n\) if any map \(f\colon X\to P\) to an ANR \(P\) can be homotopically factored through an ANR \(P^{\prime}\) of dimension \(\dim P^{\prime}\leqslant n\), i.e., \(f\simeq h\circ g\), where \(g\colon X\to P^{\prime}\) and \(h\colon P^{\prime}\to P\) are some maps. Mardesic and Segal [289] defined _shape dimension_ as follows. **Definition 11**.: A topological space \(X\) has _shape dimension_ \(\operatorname{sd}X\leqslant n\) if there exists an associated inverse ANR-system \(\mathbf{X}=\{X_{\alpha},\mathbf{p}_{\alpha\alpha^{\prime}},A\}\) such that each \(X_{\alpha}\), \(\alpha\in A\), is homotopy dominated by an ANR of dimension \(\leqslant n\). If \(\operatorname{sd}X\leqslant n\) and \(n\) is the least such number, then we say that \(X\) has _shape dimension_ \(n\) and write \(\operatorname{sd}X=n\). If \(\operatorname{sd}X\leqslant n\) for no integer \(n\geqslant 0\), then we say that \(X\) has _infinite shape dimension_ and write \(\operatorname{sd}X=\infty\). The following theorem is valid [142]. **Theorem 58**.: A topological space \(X\) has shape dimension \(\operatorname{sd}X\leqslant n\) if and only if there exists an associated inverse ANR-system \(\mathbf{X}=\{X_{\alpha},\mathbf{p}_{\alpha\alpha^{\prime}},A\}\) such that \(\dim X_{\alpha}\leqslant n\) for all \(\alpha\in A\). Proof.: Let \(\operatorname{sd}X\leqslant n\). Consider an associated inverse ANR-system \(\mathbf{X}=\{X_{\alpha},\mathbf{p}_{\alpha\alpha^{\prime}},A\}\) satisfying the condition in Definition 11. This means that, for each space \(X_{\alpha}\), there exists an ANR \(Y_{\alpha}\) of dimension \(\dim Y_{\alpha}\leqslant n\) and homotopy classes \(\mathbf{f}_{\alpha}\colon X_{\alpha}\to Y_{\alpha}\) and \(\mathbf{g}_{\alpha}\colon Y_{\alpha}\to X_{\alpha}\) such that \[\mathbf{g}_{\alpha}\mathbf{f}_{\alpha}=\mathbf{1}_{X_{\alpha}}. \tag{3}\] For any \(\alpha,\alpha^{\prime}\in A\), \(\alpha\leqslant\alpha^{\prime}\), we define a homotopy class \(\mathbf{q}_{\alpha\alpha^{\prime}}\colon Y_{\alpha^{\prime}}\to Y_{\alpha}\) by \[\mathbf{q}_{\alpha\alpha^{\prime}}=\mathbf{f}_{\alpha}\mathbf{p}_{\alpha\alpha^{\prime}}\mathbf{g}_{\alpha^{\prime}}. \tag{4}\] Note that if \(\alpha\leqslant\alpha^{\prime}\leqslant\alpha^{\prime\prime}\), then \[\mathbf{q}_{\alpha\alpha^{\prime}}\mathbf{q}_{\alpha^{\prime}\alpha^{\prime\prime}}=\mathbf{q}_{\alpha\alpha^{\prime\prime}}.
\tag{5}\] Indeed, according to (3) and (4), we have \[\mathbf{q}_{\alpha\alpha^{\prime}}\mathbf{q}_{\alpha^{\prime}\alpha^{\prime\prime}}=\mathbf{f}_{\alpha}\mathbf{p}_{\alpha\alpha^{\prime}}\mathbf{g}_{\alpha^{\prime}}\mathbf{f}_{\alpha^{\prime}}\mathbf{p}_{\alpha^{\prime}\alpha^{\prime\prime}}\mathbf{g}_{\alpha^{\prime\prime}}=\mathbf{f}_{\alpha}\mathbf{p}_{\alpha\alpha^{\prime}}\mathbf{1}_{X_{\alpha^{\prime}}}\mathbf{p}_{\alpha^{\prime}\alpha^{\prime\prime}}\mathbf{g}_{\alpha^{\prime\prime}}=\mathbf{f}_{\alpha}\mathbf{p}_{\alpha\alpha^{\prime\prime}}\mathbf{g}_{\alpha^{\prime\prime}}=\mathbf{q}_{\alpha\alpha^{\prime\prime}}.\] Thus, we have constructed an inverse ANR-system \(\mathbf{Y}=\{Y_{\alpha},\mathbf{q}_{\alpha\alpha^{\prime}},A\}\), where \(\dim Y_{\alpha}\leqslant n\) for all \(\alpha\in A\). Let us prove that this inverse system is associated with the space \(X\) (see Definition 3). Let \(\alpha\in A\) be any index. Consider the homotopy class \(\mathbf{q}_{\alpha}\colon X\to Y_{\alpha}\) defined by \[\mathbf{q}_{\alpha}=\mathbf{f}_{\alpha}\mathbf{p}_{\alpha}. \tag{6}\] This homotopy class satisfies the condition \[\mathbf{q}_{\alpha\alpha^{\prime}}\mathbf{q}_{\alpha^{\prime}}=\mathbf{q}_{\alpha}, \tag{7}\] where \(\alpha\leqslant\alpha^{\prime}\). Indeed, by virtue of (3), (4), and (6) we have \[\mathbf{q}_{\alpha\alpha^{\prime}}\mathbf{q}_{\alpha^{\prime}}=\mathbf{f}_{\alpha}\mathbf{p}_{\alpha\alpha^{\prime}}\mathbf{g}_{\alpha^{\prime}}\mathbf{f}_{\alpha^{\prime}}\mathbf{p}_{\alpha^{\prime}}=\mathbf{f}_{\alpha}\mathbf{p}_{\alpha\alpha^{\prime}}\mathbf{1}_{X_{\alpha^{\prime}}}\mathbf{p}_{\alpha^{\prime}}=\mathbf{f}_{\alpha}\mathbf{p}_{\alpha}=\mathbf{q}_{\alpha}.\] Now, let us verify condition (ii) in Definition 3. Let \(P\) be any ANR, and let \(\mathbf{f}\colon X\to P\) be any homotopy class. Then there exist an index \(\alpha\in A\) and a homotopy class \(\mathbf{h}_{\alpha}\colon X_{\alpha}\to P\) such that \[\mathbf{h}_{\alpha}\mathbf{p}_{\alpha}=\mathbf{f}. \tag{8}\] Consider \(\tilde{\mathbf{h}}_{\alpha}\colon Y_{\alpha}\to P\) defined by \[\tilde{\mathbf{h}}_{\alpha}=\mathbf{h}_{\alpha}\mathbf{g}_{\alpha}. \tag{9}\] We claim that \(\tilde{\mathbf{h}}_{\alpha}\mathbf{q}_{\alpha}=\mathbf{f}\). Indeed, taking into account (3), (6), (8), and (9), we obtain \[\tilde{\mathbf{h}}_{\alpha}\mathbf{q}_{\alpha}=\mathbf{h}_{\alpha}\mathbf{g}_{\alpha}\mathbf{f}_{\alpha}\mathbf{p}_{\alpha}=\mathbf{h}_{\alpha}\mathbf{1}_{X_{\alpha}}\mathbf{p}_{\alpha}=\mathbf{h}_{\alpha}\mathbf{p}_{\alpha}=\mathbf{f}.\] It remains to verify condition (iii) in Definition 3. Suppose that \(\boldsymbol{\varphi},\boldsymbol{\psi}\colon Y_{\alpha}\to P\) are homotopy classes such that \[\boldsymbol{\varphi}\mathbf{q}_{\alpha}=\boldsymbol{\psi}\mathbf{q}_{\alpha}. \tag{10}\] By virtue of (6) this relation implies \[\boldsymbol{\varphi}\mathbf{f}_{\alpha}\mathbf{p}_{\alpha}=\boldsymbol{\psi}\mathbf{f}_{\alpha}\mathbf{p}_{\alpha}. \tag{11}\] Hence there exists an \(\alpha^{\prime}\in A\), \(\alpha^{\prime}\geqslant\alpha\), for which \[\boldsymbol{\varphi}\mathbf{f}_{\alpha}\mathbf{p}_{\alpha\alpha^{\prime}}=\boldsymbol{\psi}\mathbf{f}_{\alpha}\mathbf{p}_{\alpha\alpha^{\prime}}.
\tag{12}\] Relations (4) and (12) directly imply \[\boldsymbol{\varphi}\mathbf{q}_{\alpha\alpha^{\prime}}=\boldsymbol{\varphi}\mathbf{f}_{\alpha}\mathbf{p}_{\alpha\alpha^{\prime}}\mathbf{g}_{\alpha^{\prime}}=\boldsymbol{\psi}\mathbf{f}_{\alpha}\mathbf{p}_{\alpha\alpha^{\prime}}\mathbf{g}_{\alpha^{\prime}}=\boldsymbol{\psi}\mathbf{q}_{\alpha\alpha^{\prime}}.\] The sufficiency part of the theorem is obvious. The following theorem [289] shows that, for any topological space, the notions of deformation dimension and shape dimension are equivalent. **Theorem 59**.: Let \(X\) be a topological space. Then \(\operatorname{sd}X\leqslant n\) if and only if \(\operatorname{ddim}X\leqslant n\). Proof.: The implication \(\operatorname{sd}X\leqslant n\implies\operatorname{ddim}X\leqslant n\) easily follows from Theorem 58. Let us prove the implication \(\operatorname{ddim}X\leqslant n\implies\operatorname{sd}X\leqslant n\). Consider an inverse ANR-system \(\{X_{\alpha},\mathbf{p}_{\alpha\alpha^{\prime}},A\}\) associated with \(X\). In view of the CW-approximation theorem, we can assume without loss of generality that all \(X_{\alpha}\) are CW-complexes and all \(p_{\alpha\alpha^{\prime}}\) are cellular maps. Since \(\operatorname{ddim}X\leqslant n\), it follows that, for each map \(p_{\alpha}\colon X\to X_{\alpha}\), there exist a CW-complex \(P_{\alpha}\) with \(\dim P_{\alpha}\leqslant n\) and maps \(u_{\alpha}\colon X\to P_{\alpha}\) and \(v_{\alpha}\colon P_{\alpha}\to X_{\alpha}\) such that \(p_{\alpha}\simeq v_{\alpha}\circ u_{\alpha}\). In view of the CW-approximation theorem, we can also assume that \(v_{\alpha}\colon P_{\alpha}\to X_{\alpha}\) is a cellular map, i.e., that it maps \(P_{\alpha}\) into the \(n\)-skeleton \(X_{\alpha}^{n}\subset X_{\alpha}\). Now, consider the inverse system \(\{X_{\alpha}^{n},\mathbf{q}_{\alpha\alpha^{\prime}},A\}\), where each \(X_{\alpha}^{n}\) is the \(n\)-skeleton of the CW-complex \(X_{\alpha}\) and \(q_{\alpha\alpha^{\prime}}=p_{\alpha\alpha^{\prime}}|X_{\alpha^{\prime}}^{n}\). For each \(\alpha\in A\), we define a map \(q_{\alpha}\colon X\to X_{\alpha}^{n}\) by \(q_{\alpha}=v_{\alpha}u_{\alpha}\). It is easy to show that this inverse ANR-system is associated with the space \(X\). To complete the proof, it remains to apply Theorem 58. The following theorem is a corollary of Theorem 58. **Theorem 60**.: For any topological space \(X\), \(\operatorname{sd}X\leqslant\dim X\). Proof.: Let \(\dim X\leqslant n\), and let \(\{X_{\alpha},\mathbf{p}_{\alpha\alpha^{\prime}},A\}\) be an associated inverse system consisting of CW-complexes \(X_{\alpha}\). In view of the CW-approximation theorem, we can assume that all \(p_{\alpha\alpha^{\prime}}\) are cellular maps, i.e., \(p_{\alpha\alpha^{\prime}}(X_{\alpha^{\prime}}^{n})\subset X_{\alpha}^{n}\), where \(X_{\alpha}^{n}\) is the \(n\)-skeleton of the CW-complex \(X_{\alpha}\). Consider the inverse system \(\{X_{\alpha}^{n},\mathbf{q}_{\alpha\alpha^{\prime}},A\}\), where \(q_{\alpha\alpha^{\prime}}=p_{\alpha\alpha^{\prime}}|X_{\alpha^{\prime}}^{n}\). Since \(\dim X\leqslant n\), it follows that there exists a map \(q_{\alpha}\colon X\to X_{\alpha}^{n}\) such that \(i\circ q_{\alpha}\simeq p_{\alpha}\), where \(i\colon X_{\alpha}^{n}\to X_{\alpha}\) is the inclusion map.
It is easy to prove that this system with limit projections \(q_{\alpha}\colon X\to X_{\alpha}^{n}\), \(\alpha\in A\), is associated with the space \(X\). Now, applying Theorem 58, we obtain \(\operatorname{sd}X\leqslant n\). **Theorem 61**.: If \(X\) and \(Y\) are any spaces and \(\operatorname{sh}X\leqslant\operatorname{sh}Y\), then \(\operatorname{sd}X\leqslant\operatorname{sd}Y\). Proof.: Suppose that \(\operatorname{sh}X\leqslant\operatorname{sh}Y\) and \(\operatorname{sd}Y\leqslant n\). Consider any map \(f\colon X\to P\) to an ANR \(P\). By assumption there exist shape morphisms \(F\colon X\to Y\) and \(G\colon Y\to X\) such that \(G\circ F=1_{X}\). To the map \(f\colon X\to P\) the shape morphism \(G\) assigns a map \(G(f)\colon Y\to P\). Since \(\operatorname{sd}Y\leqslant n\), it follows that there exist an ANR \(P^{\prime}\) with \(\dim P^{\prime}\leqslant n\) and maps \(u\colon Y\to P^{\prime}\) and \(v\colon P^{\prime}\to P\) such that \(v\circ u\simeq G(f)\). But then \(v\circ F(u)\simeq F(G(f))=f\). Therefore, according to Theorem 59, we have \(\operatorname{sd}X\leqslant n\). **Corollary 6**.: If \(\operatorname{sh}X=\operatorname{sh}Y\), then \(\operatorname{sd}X=\operatorname{sd}Y\). Thus, the dimension \(\operatorname{sd}\) is a shape invariant. It follows from Corollary 6 that the shape dimension of a contractible space equals \(0\). Thus, the shape dimension of the disk equals \(0\). On the other hand, the shape dimension of the circle \(S^{1}\) equals \(1\). Therefore, since \(S^{1}\) is a subspace of the disk, the shape dimension \(\operatorname{sd}X\) is not a monotone function of the space \(X\). However, as Theorem 61 shows, it is a monotone function of the shape \(\operatorname{sh}X\). **Theorem 62** (Holsztynski, see [322]).: Let \(X\) be a compact metrizable space with \(\operatorname{sd}X=n\). Then there exists a compact metrizable space \(Y\) with \(\dim Y=n\) such that \(\operatorname{sh}X=\operatorname{sh}Y\). **Corollary 7**.: Let \(X\) be a compact metrizable space. Then \(\operatorname{sd}X\leqslant n\) if and only if there exists a compact metrizable space \(Y\) such that \(\operatorname{sh}X\leqslant\operatorname{sh}Y\) and \(\dim Y\leqslant n\). It immediately follows from this corollary that \(\operatorname{sd}X=\operatorname{Fd}X\) for any compact metrizable space \(X\), i.e., Borsuk's notion of fundamental dimension (see Definition 9) coincides with Mardesic's shape dimension (see Definition 11). **Remark 3**.: Theorem 62 shows that the condition \(\operatorname{sh}Y\geqslant\operatorname{sh}X\) in the definition of fundamental dimension (Definition 9) can be replaced by \(\operatorname{sh}Y=\operatorname{sh}X\). The following theorem is due to Borsuk [75]. **Theorem 63**.: Let \(X\) and \(Y\) be compact metrizable spaces. Then \[\operatorname{sd}(X\times Y)\leqslant\operatorname{sd}X+\operatorname{sd}Y. \tag{13}\] Novak [318] showed that there exist compact connected polyhedra \(P\) and \(Q\) with \(\operatorname{sd}P=m\) and \(\operatorname{sd}Q=n\), where \(m,n\geqslant 3\), such that \(\operatorname{sd}(P\times Q)=\max\{m,n\}\). It follows that inequality (13) cannot generally be replaced by the equality \[\operatorname{sd}(X\times Y)=\operatorname{sd}X+\operatorname{sd}Y. \tag{14}\] Spiez [367], [369] proved that there exists a metrizable continuum \(X\) of shape dimension \(\operatorname{sd}X=2\) such that \[\operatorname{sd}(X\times S^{n})<\operatorname{sd}X+n.\] In particular, for \(n=1\), we obtain \[\operatorname{sd}(X\times S^{1})=\operatorname{sd}X=2.
\tag{15}\] Moreover, if a continuum \(X\) satisfies condition (15), then \[\operatorname{sd}(X\times Y)=\operatorname{sd}(S^{1}\times Y) \tag{16}\] for any continuum \(Y\) with \(\operatorname{sd}Y>0\). Nevertheless, the following theorem of Novak is valid [323]. **Theorem 64**.: Let \(X\) and \(Y\) be compact metrizable spaces such that \(\operatorname{sd}X\neq 2\), \(\dim Y=n\), and the cohomology group \(H^{n}(Y,G)\) is nontrivial for any nontrivial Abelian group \(G\). Then equality (14) holds. Keesling [230] studied the shape dimension of topological groups; in particular, he proved the following theorem. **Theorem 65**.: If \(X\) is a connected compact group, then \(\operatorname{sd}X=\dim X\). The shape dimension of the Stone-Cech compactification \(\beta X\) was studied in [234]. We mention the following result. **Theorem 66**.: Let \(X\) be a Lindelof space, and let \(K\subset\beta X\backslash X\) be a continuum in the Stone-Cech remainder of \(X\). Then \(\operatorname{sd}K=\dim K\). In relation to this theorem, we give the following interesting result of Winslow [414]. **Theorem 67**.: The Stone-Cech remainder \(\beta\mathbf{R}^{3}\backslash\mathbf{R}^{3}\) contains \(\mathfrak{c}=2^{\aleph_{0}}\) continua of different shapes and \(2^{\mathfrak{c}}\) continua of different topological types. The notion of the shape dimension of shape morphisms and continuous maps between topological spaces was introduced in [189]. We say that the _dimension_ \(\dim(\mathbf{f}_{\beta},\varphi)\) of a morphism \((\mathbf{f}_{\beta},\varphi)\colon\mathbf{X}=(X_{\alpha},\mathbf{p}_{\alpha\alpha^{\prime}},A)\to\mathbf{Y}=(Y_{\beta},\mathbf{q}_{\beta\beta^{\prime}},B)\) of inverse ANR-systems is \(\leqslant n\) if, for any \(\beta\in B\), there exists an \(\alpha\geqslant\varphi(\beta)\) such that the map \(f_{\beta\alpha}:=f_{\beta}p_{\varphi(\beta)\alpha}\colon X_{\alpha}\to Y_{\beta}\) can be homotopically factored through an ANR \(P\) of dimension \(\dim P\leqslant n\), i.e., there exist maps \(u\colon X_{\alpha}\to P\) and \(v\colon P\to Y_{\beta}\) such that \(f_{\beta\alpha}\simeq v\circ u\). This property is preserved under the passage to equivalent morphisms of inverse systems. Therefore, we can define the _dimension of morphisms in the category pro-H-CW_ and the _shape dimension_ of a shape morphism. We say that the _shape dimension_ \(\operatorname{sd}F\) of a shape morphism \(F\colon X\to Y\) is \(\leqslant n\) if the corresponding morphism \(\mathbf{f}\colon\mathbf{X}\to\mathbf{Y}\) in the category pro-H-CW has dimension \(\dim\mathbf{f}\leqslant n\). For a continuous map \(f\colon X\to Y\), the condition \(\operatorname{sd}f\leqslant n\) means that \(\operatorname{sd}S(f)\leqslant n\). **Theorem 68**.: A space \(X\) has shape dimension \(\operatorname{sd}X\leqslant n\) if and only if the identity map \(1_{X}\) has shape dimension \(\operatorname{sd}1_{X}\leqslant n\). **Theorem 69**.: Let \(F\colon X\to Y\) be a shape morphism. Then \(\operatorname{sd}F\leqslant\min(\operatorname{sd}X,\operatorname{sd}Y)\). In particular, for a continuous map \(f\colon X\to Y\), we have \(\operatorname{sd}f\leqslant\min(\operatorname{sd}X,\operatorname{sd}Y)\). It follows from Theorem 69 that maps between shape finite-dimensional spaces have finite shape dimension.
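A minimal illustration (an elementary check against the definition above, given as a sketch): the inequality of Theorem 69 can be strict. If a map \(f\colon X\to Y\) is null-homotopic, then every composition \(f_{\beta\alpha}\) is homotopic to a constant map and therefore factors through a one-point ANR, so that \[\operatorname{sd}f=0.\] For example, a constant map \(f\colon S^{1}\to S^{1}\) has \(\operatorname{sd}f=0<1=\min(\operatorname{sd}S^{1},\operatorname{sd}S^{1})\).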
There arises the natural question of whether the image of a space under a surjective map of finite shape dimension can be shape infinite-dimensional. In [189] this question was answered in the affirmative: an example of a shape finite-dimensional surjective map \(f\colon X\to Y\) between shape infinite-dimensional spaces \(X\) and \(Y\) was constructed. **Theorem 70**.: Let \(F\colon X\to Y\) be a shape equivalence of topological spaces with \(\operatorname{sd}F=n\). Then \(\operatorname{sd}X=\operatorname{sd}Y=n\). The following theorem gives an important characterization of the shape dimension of a map \(f\colon X\to Y\). **Theorem 71**.: A map \(f\colon X\to Y\) has shape dimension \(\operatorname{sd}f\leqslant n\) if and only if, given any map \(h\colon Y\to Q\) to an ANR-space \(Q\), the composition \(h\circ f\) can be homotopically factored through an ANR \(P\) of dimension \(\dim P\leqslant n\), i.e., there exist maps \(u\colon X\to P\) and \(v\colon P\to Q\) such that \(h\circ f\simeq v\circ u\). **Theorem 72**.: Let \(F\colon X\to Y\) be a shape morphism of dimension \(\operatorname{sd}F\leqslant n\). Then, for any Abelian group \(G\) and any \(k>n\), the induced homomorphisms \(\check{F}_{k}\colon\check{H}_{k}(X;G)\to\check{H}_{k}(Y;G)\) are trivial.

## 9. Embeddings in Shape Theory

The notion of a _shape embedding_ is defined by analogy with topological embeddings and embeddings up to homotopy type. We say that a compact metrizable space \(X\) is _shape embedded_ in a space \(Y\) if there exists a compact metrizable subspace \(X^{\prime}\) of \(Y\) for which \(\operatorname{sh}X=\operatorname{sh}X^{\prime}\). First, we recall a classical embedding theorem for finite-dimensional compact metrizable spaces (see [1]). **Theorem 73** (Menger-Nobeling-Pontryagin).: Any \(n\)-dimensional compact metrizable space \(X\) can be embedded in the Euclidean space \(\mathbf{R}^{2n+1}\). Much effort has been devoted to reducing the dimension of the ambient Euclidean space \(\mathbf{R}^{2n+1}\). Flores [167] proved that this cannot be done without additional assumptions: for each \(n\), he constructed an example of a compact \(n\)-dimensional polyhedron that cannot be embedded in \(\mathbf{R}^{2n}\). However, any smooth \(n\)-manifold embeds smoothly in \(\mathbf{R}^{2n}\) (see Whitney's paper [413]). Below we recall the well-known theorem of Stallings on homotopy embeddings of finite CW-complexes, which has turned out to be very useful in the study of shape embeddings. A simple proof of Stallings' theorem can be found in Dranishnikov and Repovs' paper [121]. **Theorem 74** (Stallings [370]).: Any finite \(n\)-dimensional (\(n>0\)) CW-complex \(K\) is embedded in \(\mathbf{R}^{2n}\) up to homotopy type, i.e., there exists a polyhedron \(M\subset\mathbf{R}^{2n}\) having the homotopy type of the CW-complex \(K\). In this theorem the dimension of \(\mathbf{R}^{2n}\) cannot be reduced (at least for \(n=2^{k}\), \(k\geqslant 1\)), because it is known [325] that the real projective space \(\mathbf{P}^{n}\) cannot be embedded in \(\mathbf{R}^{2n-1}\) up to homotopy type for such \(n\). In shape theory, the following natural question arises (it was asked by Borsuk in [10]). **Problem 1**.: Is it true that any \(n\)-dimensional metrizable continuum is shape embedded in \(\mathbf{R}^{2n}\)? Clearly, the answer is "no" already for \(n=1\), because solenoids, which are one-dimensional continua, cannot be shape embedded in \(\mathbf{R}^{2}\).
Indeed, solenoids are not movable, while all compact sets in the plane are movable. Borsuk [10] proved a more general theorem, according to which a one-dimensional metrizable continuum \(X\) is shape embedded in \(\mathbf{R}^{2}\) if and only if \(X\) is movable. Problem 1 also has a negative solution in the case of metrizable continua \(X\) of shape dimension \(\operatorname{sd}X=n\). For each \(k>1\), Duvall and Husch [124] constructed a metrizable continuum \(X\) of shape dimension \(\operatorname{sd}X=n=2^{k}\) which is not shape embedded in \(\mathbf{R}^{2n}\). Nevertheless, under certain additional constraints, Problem 1 has a positive solution. The first results in this direction were obtained by Ivansic [216]. **Theorem 75**.: Let \(X\) be a pointed \(1\)-movable metrizable continuum with \(\operatorname{sd}X=n\geqslant 3\). Then \(X\) is shape embedded in \(\mathbf{R}^{2n}\). This theorem is a corollary of the following more general theorem (see Husch and Ivansic's paper [212]). **Theorem 76**.: Let \(X\) be a shape \(r\)-connected \((r+1)\)-movable metrizable continuum with \(\operatorname{sd}X=n\geqslant 3\). Then \(X\) is shape embedded in \(\mathbf{R}^{2n-r}\). In the last theorem, the \((r+1)\)-movability assumption can be removed at the expense of increasing the dimension of \(\mathbf{R}^{2n-r}\) by \(1\) (Husch and Ivansic [212]). **Theorem 77**.: Let \(X\) be a shape \(r\)-connected metrizable continuum with \(\operatorname{sd}X=n\) and \(n-r\geqslant 2\). Then \(X\) is shape embedded in \(\mathbf{R}^{2n-r+1}\). Of interest is also the following problem, which was posed by Borsuk in [10]. **Problem 2**.: Let \(X\) and \(Y\) be compact metrizable spaces such that \(\operatorname{sh}X\leqslant\operatorname{sh}Y\) and \(Y\subset\mathbf{R}^{n}\). Is it true that \(X\) is shape embedded in \(\mathbf{R}^{n}\)? Husch and Ivansic [213] obtained a positive solution of this problem under fairly strong constraints. **Theorem 78**.: Let \(X\) and \(Y\) be metrizable continua such that \(Y\subset\mathbf{R}^{n}\), \(\dim Y=k\), \(X\) has the shape of a finite complex, and \(\operatorname{sh}X\leqslant\operatorname{sh}Y\). If \(3k<2(n-1)\) and \(k\geqslant 3\), then \(X\) is shape embedded in \(\mathbf{R}^{n}\). An example constructed by Kadlof [219] shows that, in general, Problem 2 has a negative answer. To be more precise, there exists a compact connected two-dimensional CW-complex which is homotopy dominated by a compact connected polyhedron \(Y\subset\mathbf{R}^{3}\) but is not shape embedded in \(\mathbf{R}^{3}\). Recall that a space \(X\) is said to be \(\Pi\)_-similar_, where \(\Pi\) is a family of finite polyhedra, if each open finite cover of \(X\) has an open refinement whose nerve is homeomorphic to a polyhedron \(P\) in \(\Pi\). If \(\Pi=\{T^{n},n\geqslant 0\}\), where each \(T^{n}=S^{1}\times\ldots\times S^{1}\) is the \(n\)-torus, then \(X\) is said to be \(T^{n}\)_-similar_. Any \(\Pi\)-similar continuum is the limit of an inverse system consisting of elements of the family \(\Pi\). Questions concerning shape embeddings of \(T^{n}\)-similar continua in Euclidean spaces were studied by Keesling and Wilson [239]. They proved the following theorems. **Theorem 79**.: An arbitrary \(n\)-dimensional compact connected Abelian topological group \(A\) is (topologically) embedded in \(\mathbf{R}^{n+2}\). **Theorem 80**.: Let \(X\) be a \(T^{n}\)-similar continuum. Then \(X\) is shape embedded in \(\mathbf{R}^{n+2}\).
The proofs of these theorems essentially use McCord's theorem [295] on an embedding of the limit of an inverse system in \(\mathbf{R}^{n}\) and a theorem of Keesling [230] (see Theorem 9) characterizing \(T^{n}\)-similar continua. In [239] it was also proved that any \(T^{n}\)-similar continuum can be embedded in \(\mathbf{R}^{2n}\). For \(I^{n}\)-similar continua, this was proved by Isbell [214].

## 10. Cell-Like Maps and Shape Theory

Cell-like maps (or, briefly, CE-maps) play a very important role in geometric topology. They were defined by Armentrout [52] in terms of \(UV\)-properties and had already been extensively studied by the time shape theory was discovered. Here we discuss those aspects of cell-like maps which are of particular interest for shape theory. Information about other questions can be found in the fairly extensive literature on CE-maps, which includes, in particular, the survey papers [264] by Lacher, [161] by Edwards, [175] by Geoghegan, [342] by Rubin, and [18] by Dranishnikov and Shchepin. **Definition 12**.: A map \(f\colon X\to Y\) is said to be _cell-like_ if all fibers \(f^{-1}(y)\), \(y\in Y\), have trivial shape, i.e., \(\operatorname{sh}(f^{-1}(y))=0\). If \(X\), \(Y\), and \(Z\) are metrizable ANRs and \(f\colon X\to Y\) and \(g\colon Y\to Z\) are cell-like maps, then the composition \(g\circ f\colon X\to Z\) is cell-like as well. Therefore, the compact metrizable ANRs and cell-like maps form a category. This is not the case for compact sets that are not ANRs (see Taylor's paper [377]). The following version of Smale's theorem in shape theory was proved by Dydak [142] (see also papers [262] by Kuperberg, [243] by Kodama, and [6] by Bogatyi). **Theorem 81**.: Let \(X\) and \(Y\) be compact metrizable sets, and let \(f\colon X\to Y\) be a cell-like map. Then the induced homomorphisms \[\text{pro-}\pi_{n}(f)\colon\text{pro-}\pi_{n}(X,*)\to\text{pro-}\pi_{n}(Y,*)\] of homotopy pro-groups are isomorphisms for all \(n\). The following theorem is a homotopy characterization of cell-like maps. **Theorem 82** (Lacher [264]).: A map \(f\colon X\to Y\) between compact ANRs \(X\) and \(Y\) is cell-like if and only if, given any open set \(V\subset Y\), the map \(f|f^{-1}(V)\colon f^{-1}(V)\to V\) is a homotopy equivalence. This theorem directly implies the following assertion. **Theorem 83**.: Any cell-like map \(f\colon X\to Y\) between compact ANRs \(X\) and \(Y\) is a homotopy equivalence. However, cell-like maps between arbitrary compact spaces (not necessarily absolute neighborhood retracts) do not have this property: they are not necessarily shape equivalences. Taylor [377] constructed a cell-like map from the Kahn compact space \(X\) [221] to the Hilbert cube \(Q\) which is not a shape equivalence, because the shape of the Kahn compact space \(X\) is not trivial. Taylor's construction was based on Adams' result in [37]. Keesling [224] used Taylor's example to construct a cell-like map of the Hilbert cube \(Q\) to a non-movable compact metrizable space \(Y\). Kozlowski and Segal [252] and Dydak [142] constructed examples of cell-like maps between movable compact metrizable spaces which are not shape equivalences either. Developing Keesling's idea [224], van Mill [386] constructed a cell-like map \(f\colon Q\to X\), where \(X\) is not movable and all fibers \(f^{-1}(x)\), \(x\in X\), are homeomorphic to \(Q\).
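Before turning to positive results, it is useful to keep in mind an elementary example where everything works (a standard one, given here as a sketch): collapsing an arc in the two-dimensional sphere. Let \(A\subset S^{2}\) be an arc, and let \(q\colon S^{2}\to S^{2}/A\) be the quotient map. Every fiber \(q^{-1}(y)\) is either a one-point set or the arc \(A\); both are contractible and hence have trivial shape, so \(q\) is cell-like. Since every arc in \(S^{2}\) is tame, the quotient \(S^{2}/A\) is homeomorphic to \(S^{2}\); thus both spaces are compact ANRs, and \(q\) is a homotopy equivalence by Theorem 83.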
Nevertheless, in some cases, cell-like maps between compact metrizable spaces are shape equivalences (see papers [363] by Sher, [6] and [7] by Bogatyi, and [243] by Kodama). **Theorem 84**.: A cell-like map \(f\colon X\to Y\) between finite-dimensional compact metrizable spaces \(X\) and \(Y\) is a shape equivalence. Indeed, by virtue of the shape version of Smale's theorem (Theorem 81), the map \(f\) induces isomorphisms of all homotopy pro-groups, and since \(\dim X,\dim Y<\infty\), it follows by Whitehead's theorem in shape theory (Theorem 48) that \(f\) is a shape equivalence. Both the Kahn compact space and the Hilbert cube are infinite-dimensional; therefore, Taylor's example mentioned above shows that the finite-dimensionality assumption on the compact metrizable spaces in the last theorem is essential. Thus, if \(X\) and \(Y\) have finite dimension or are ANRs, then any cell-like map \(f\colon X\to Y\) is a shape equivalence. An important strengthening of the notion of a cell-like map is the notion of a _hereditary shape equivalence_, which was introduced by Kozlowski in [250] and [251]. **Definition 13**.: A surjective map \(f\colon X\to Y\) between compact metrizable spaces is called a _hereditary shape equivalence_ if, for any closed set \(B\subset Y\), the map \(f|f^{-1}(B)\colon f^{-1}(B)\to B\) is a shape equivalence. Obviously, any hereditary shape equivalence \(f\colon X\to Y\) is a cell-like map. However, the converse is false: the map of the Kahn compact space onto the Hilbert cube constructed by Taylor in [377] is cell-like, but it is not a shape equivalence and hence not a hereditary shape equivalence. Nevertheless, there exists a large class of cell-like maps that are hereditary shape equivalences. The following two theorems are due to Kozlowski [250] (see also [148]). **Theorem 85**.: Any cell-like map \(f\colon X\to Y\) between compact ANRs is a hereditary shape equivalence. **Theorem 86**.: Let \(f\colon X\to Y\) be a cell-like map between compact metrizable spaces. If \(\dim Y<\infty\), then \(f\) is a hereditary shape equivalence. Kozlowski [250] proved that hereditary shape equivalences have the following important property. **Theorem 87**.: Let \(f\colon X\to Y\) be a hereditary shape equivalence of compact metrizable spaces. If \(X\) is an ANR, then so is \(Y\). In this theorem the condition that the map \(f\colon X\to Y\) is a hereditary shape equivalence cannot be replaced by the weaker condition that it is a cell-like map. Indeed, the cell-like map constructed by Keesling in [224] maps the Hilbert cube \(Q\) to a non-movable compact metrizable space \(Y\) which is not an ANR. Thus, the image of a compact ANR under a cell-like map is not necessarily a compact ANR. The following theorem of West [408] shows that all compact metrizable ANRs are images of \(Q\)-manifolds under cell-like maps. **Theorem 88**.: For any compact metrizable ANR \(X\), there exist a \(Q\)-manifold \(M\) and a cell-like map \(f\colon M\to X\). This theorem solves Borsuk's problem [77] on whether every compact ANR has the homotopy type of a finite CW-complex, because any \(Q\)-manifold has the homotopy type of a finite CW-complex (Chapman [103]) and cell-like maps between compact ANRs are homotopy equivalences (see Theorem 83). As we have already mentioned in relation to Theorem 87, the image of a compact ANR under a cell-like map is not necessarily a compact ANR.
The following question arises: _Is it true that the image of any finite-dimensional compact ANR under a cell-like map is a compact ANR?_ It follows from results of Kozlowski [250] that this question is equivalent to the following one: _Is it true that the image of any finite-dimensional compact ANR under a cell-like map is finite-dimensional?_ All these questions are closely related to the _dimension-raising problem for cell-like maps_: (i) Is it true that the image of any finite-dimensional compact metrizable space under a cell-like map is a finite-dimensional compact space? In this connection we mention that cell-like maps of compact metrizable spaces do not raise cohomological dimension, and maps of compact metrizable spaces which are hereditary shape equivalences do not raise the dimension dim (Kozlowski [250]). In shape theory, problem (i) can also be formulated in the following equivalent form (see [250]): _Let \(X\) be a finite-dimensional compact metrizable space, and let \(f\colon X\to Y\) be a cell-like map. Is it true that \(f\) is a shape equivalence?_ Problem (i) of raising dimension by cell-like maps has turned out to be difficult and instructive; many topologists worked on it over a long period. Edwards [159] and Walsh [395] proved that a negative answer to question (i) is equivalent to a positive solution of the following well-known problem of Aleksandrov [40], [41]: (ii) Does there exist an infinite-dimensional compact metrizable space of finite cohomological dimension? Aleksandrov posed this problem after proving the following fundamental theorem of homological dimension theory. **Theorem 89** (Aleksandrov [40]).: The dimension of any finite-dimensional compact metrizable space coincides with its cohomological dimension. Problem (ii) essentially asks whether Theorem 89 remains valid without the assumption that the given compact metrizable space is finite-dimensional. In 1988 Dranishnikov [16], [17] solved Aleksandrov's problem by constructing an example of an infinite-dimensional compact space \(X\) of finite cohomological dimension \(\operatorname{c-dim}_{\mathbb{Z}}X\leqslant 3\).
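For the reader's orientation, we recall one standard formulation of cohomological dimension (a reminder, stated as a sketch in terms of Cech cohomology): for \(n\geqslant 1\) and an Abelian group \(G\), \[\operatorname{c-dim}_{G}X\leqslant n\iff\check{H}^{n+1}(X,A;G)=0\ \text{for every closed subset}\ A\subset X,\] which is equivalent to saying that every map \(A\to K(G,n)\) defined on a closed subset \(A\subset X\) extends to a map \(X\to K(G,n)\). In this language, Theorem 89 says that \(\dim X=\operatorname{c-dim}_{\mathbb{Z}}X\) for finite-dimensional compact metrizable spaces, while Dranishnikov's compactum satisfies \(\operatorname{c-dim}_{\mathbb{Z}}X\leqslant 3\) together with \(\dim X=\infty\).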